Compare commits

..

130 Commits

Author SHA1 Message Date
04c2731349 bump version to 0.8.15-1 2020-09-10 09:26:16 +02:00
5656888cc9 verify: fix done count
We need to filter out benchmark group earlier
2020-09-10 09:06:33 +02:00
5fdc5a6f3d verify: skip benchmark directory 2020-09-10 08:44:18 +02:00
61d7b5013c add benchmark flag to backup creation for proper cleanup when running a benchmark
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2020-09-10 08:25:24 +02:00
871181d984 mount: fix mount subcommand
fixes the error, "manifest does not contain
file 'X.pxar'", that occurs when trying to mount
a pxar archive with 'proxmox-backup-client mount':

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-10 07:21:16 +02:00
02939e178d ui: only mark backup encrypted if there are any files
if we have a stale backup without a manifest, we do not count
the remaining files in the backup dir anymore, but this means
we now have to check here if there are really any encrypted files

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-10 07:18:51 +02:00
3be308b949 improve server->client tcp performance for high latency links
similar to the other fix, if we do not set the buffer size manually,
we get better performance for high latency connections

restore benchmark from f.gruenbichler:

no delay, without patch: ~50MB/s
no delay, with patch: ~50MB/s
25ms delay, without patch: ~11MB/s
25ms delay, with patch: ~50MB/s

my own restore benchmark:

no delay, without patch: ~1.5GiB/s
no delay, with patch: ~1.5GiB/s
25ms delay, without patch: 30MiB/s
25ms delay, with patch: ~950MiB/s

for some more details about those benchmarks see
https://lists.proxmox.com/pipermail/pbs-devel/2020-September/000600.html

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-10 07:15:25 +02:00
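A minimal Rust sketch of the buffer-size idea behind this commit and the related fix #2983 below (illustrative only, not the actual proxmox-backup code; assumes tokio 1.x's TcpSocket API): the improvement comes from not calling the buffer-size setters at all, so the kernel's TCP autotuning can grow the window on high-latency links.

use tokio::net::{TcpListener, TcpSocket};

fn make_listener(pin_buffers: bool) -> std::io::Result<TcpListener> {
    let socket = TcpSocket::new_v4()?;
    if pin_buffers {
        // fixing the buffer sizes caps the TCP window, which is what hurt
        // throughput on high-latency links
        socket.set_recv_buffer_size(64 * 1024)?;
        socket.set_send_buffer_size(64 * 1024)?;
    }
    // leaving the sizes at their defaults keeps kernel autotuning active
    socket.bind("0.0.0.0:8007".parse().unwrap())?; // 8007: the backup proxy's public port
    socket.listen(1024)
}
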
83088644da fix #2983: improve tcp performance
by leaving the buffer sizes on default, we get much better tcp performance
for high latency links

throughput is still impacted by latency, but much less so when
leaving the sizes at default.
the disadvantage is slightly higher memory usage of the server
(details below)

my local benchmarks (proxmox-backup-client benchmark):

pbs client:
PVE Host
Epyc 7351P (16core/32thread)
64GB Memory

pbs server:
VM on Host
1 Socket, 4 Cores (Host CPU type)
4GB Memory

average of 3 runs, rounded to MB/s
                    | no delay |     1ms |     5ms |     10ms |    25ms |
without this patch  |  230MB/s |  55MB/s |  13MB/s |    7MB/s |   3MB/s |
with this patch     |  293MB/s | 293MB/s | 249MB/s |  241MB/s | 104MB/s |

memory usage (resident memory) of proxmox-backup-proxy:

                    | peak during benchmarks | after benchmarks |
without this patch  |                  144MB |            100MB |
with this patch     |                  145MB |            130MB |

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-10 07:15:12 +02:00
14db8b52dc src/backup/chunk_store.rs: use ? instead of unwrap 2020-09-10 06:37:37 +02:00
597427afaf clean up .bad file handling in sweep_unused_chunks
Code cleanup, no functional change intended.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-10 06:31:22 +02:00
3cddfb29be backup: ensure no fixed index writers are left over either
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-10 06:29:38 +02:00
e15b76369a buildsys: upload client packages also to PMG repo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 16:48:31 +02:00
d7c1251435 ui: calendar event: disable matchFieldWidth for picker
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 16:48:31 +02:00
ea3ce82a74 ui: calendar event: enable more complex examples again
now that they (should) work.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 16:48:31 +02:00
092378ba92 Change "data store" to "datastore" throughout docs
Before, there were mixed usages of "data store" and
"datastore" throughout the docs.
This improves consistency in the docs by using only
"datastore" throughout.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 13:12:01 +02:00
068e526862 backup: touch all chunks, even if they exist
We need to update the atime of chunk files if they already exist,
otherwise a concurrently running GC could sweep them away.

This is protected with ChunkStore.mutex, so the fstat/unlink does not
race with touching.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:51:03 +02:00
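A minimal sketch of the "touch" described above (not the actual ChunkStore code; the struct layout and the use of the filetime crate are assumptions): update the chunk's atime while holding the store mutex, so a concurrent GC cannot decide the chunk is unused and sweep it.

use std::path::Path;
use std::sync::Mutex;

use filetime::FileTime;

struct ChunkStore {
    mutex: Mutex<()>,
}

impl ChunkStore {
    fn touch_chunk(&self, chunk_path: &Path) -> std::io::Result<()> {
        // GC takes the same lock around its fstat/unlink, so touching cannot race it
        let _guard = self.mutex.lock().unwrap();
        // only the atime matters for the garbage collector's "still in use" check
        filetime::set_file_atime(chunk_path, FileTime::now())
    }
}
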
a9767cf7de gc: remove .bad files on garbage collect
The iterator of get_chunk_iterator is extended with a third parameter
indicating whether the current file is a chunk (false) or a .bad file
(true).

Count their sizes to the total of removed bytes, since it also frees
disk space.

.bad files are only deleted if the corresponding chunk exists, i.e. has
been rewritten. Otherwise we might delete data only marked bad because
of transient errors.

While at it, also clean up and use nix::unistd::unlinkat instead of
unsafe libc calls.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:43:13 +02:00
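A minimal sketch of the .bad cleanup rule described above (function and parameters are illustrative, not the real sweep_unused_chunks code; the commit itself names nix::unistd::unlinkat): only unlink the .bad file when the chunk exists again, and count its size towards the removed bytes.

use std::os::unix::io::RawFd;

use nix::unistd::{unlinkat, UnlinkatFlags};

fn remove_bad_file(
    dirfd: RawFd,
    bad_name: &str,
    file_size: u64,
    chunk_was_rewritten: bool,
    removed_bytes: &mut u64,
) -> nix::Result<()> {
    if !chunk_was_rewritten {
        // the error may have been transient; keep the .bad copy as a last resort
        return Ok(());
    }
    unlinkat(Some(dirfd), bad_name, UnlinkatFlags::NoRemoveDir)?;
    *removed_bytes += file_size; // .bad files free disk space too
    Ok(())
}
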
aadcc2815c cleanup rename_corrupted_chunk: avoid duplicate format macro 2020-09-08 12:29:53 +02:00
0f3b7efa84 verify: rename corrupted chunks with .bad extension
This ensures that following backups will always upload the chunk,
thereby replacing it with a correct version again.

Format for renaming is <digest>.<counter>.bad where <counter> is used if
a chunk is found to be bad again before a GC cleans it up.

Care has been taken to deliberately only rename a chunk in conditions
where it is guaranteed to be an error in the chunk itself. Otherwise a
broken index file could lead to an unwanted mass-rename of chunks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:20:57 +02:00
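A minimal sketch of the renaming scheme described above (standalone helper, not the actual verify code): try <digest>.0.bad, <digest>.1.bad, and so on, so an earlier .bad copy is not overwritten before GC removes it.

use std::path::PathBuf;

fn rename_corrupted_chunk(chunk_path: PathBuf) -> std::io::Result<()> {
    for counter in 0..9 {
        let mut bad_path = chunk_path.clone();
        bad_path.set_extension(format!("{}.bad", counter));
        if !bad_path.exists() {
            // the next backup re-uploads the chunk under its original name
            return std::fs::rename(&chunk_path, &bad_path);
        }
    }
    Err(std::io::Error::new(
        std::io::ErrorKind::Other,
        "too many .bad copies of this chunk",
    ))
}
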
7c77e2f94a verify: fix log units
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:10:19 +02:00
abd4c4cb8c ui: add translation support
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 08:54:33 +02:00
09f12d1cf3 tools: rename extract_auth_cookie to extract_cookie
It does nothing specific to authentication..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 08:54:33 +02:00
1db4cfb308 tools/systemd/time: add tests for multivalue fields
we did this wrong earlier, so it makes sense to add regression tests

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-08 07:09:43 +02:00
a4c1143664 server/worker_task: fix upid_read_status
a range from high to low in Rust results in an empty range
(see std::ops::Range documentation)
so we need to generate the range from 0..data.len() and then reverse it

also, the task log contains a newline at the end, so we have to remove
that (should it exist)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-08 07:06:22 +02:00
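The pitfall this commit fixes, as a self-contained Rust example (illustrative, not the actual upid_read_status code): a range written from high to low yields nothing, so the buffer has to be scanned with an ascending range that is then reversed.

fn last_line_start(data: &[u8]) -> usize {
    // (data.len() - 1..0) would iterate over nothing at all
    for i in (0..data.len()).rev() {
        if data[i] == b'\n' {
            return i + 1;
        }
    }
    0
}

fn main() {
    assert!((5..0).next().is_none()); // the empty-range pitfall
    let data = b"first line\nsecond line";
    assert_eq!(&data[last_line_start(data)..], b"second line");
}
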
0623674f44 Edit section "Network Management"
Following changes made:
    * Remove empty column "method6" from network list output,
      so table fits in console code-block
    * Walk through a bond, rather than a bridge, as it may be a more
      common setup case

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:30:42 +02:00
2dd58db792 PVE integration: Add note about hiding password
Add a note to section "Proxmox VE integration" explaining
how to avoid passing password as plain text when using the
pvesm command.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:19:20 +02:00
e11cfb93c0 change order of "Image Archives" and "File Archives"
Change the order of the "Image Archives" and "File
Archives" subsections, so that they match the order
which they are introduced in, in the section "Backup
Content" (minor readability improvement).

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:19:09 +02:00
bc0608955e Sync Jobs: add screenshots and explanation
Add screenshots of sync jobs panel in web interface
and explain how to carry out related tasks from it.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:18:50 +02:00
36be19218e Network Config: Add screenshots and explanation
Add screenshots for network configuration and explain
how to carry out related tasks using the web interface.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:18:15 +02:00
9fa39a46ba User Management: Add screenshots and explanation
Add screenshots for user management section in web
interface and explain how to carry out relevant tasks
using it.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:18:01 +02:00
ff30b912a0 Datastore Config: add screenshots and explanation
Add screenshots from the datastore section of the
web interface and explain how to carry out tasks using
the web interface.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:17:50 +02:00
b0c10a88a3 Disk Management: Add screenshots and explanation
This adds screenshots from the web interface for the
sections related to disk management and adds explanation
of how to carry out tasks using the web interface.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:12:54 +02:00
ccbe6547a7 Add screenshots of web interface
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:12:17 +02:00
32afd60336 src/tools/systemd/time.rs: derive Clone 2020-09-07 12:37:08 +02:00
02e47b8d6e SYSTEMD_CALENDAR_EVENT_SCHEMA: fix wrong schema description 2020-09-07 09:07:55 +02:00
44055cac4d tools/systemd/time: enable dates for calendarevents
this implements parsing and calculating calendarevents that have a
basic date component (year-mon-day) with the usual syntax options
(*, ranges, lists)

and some special events:
monthly
yearly/annually (like systemd)
quarterly
semiannually,semi-annually (like systemd)

includes some regression tests

the ~ syntax for days (the last x days of the month) is not yet
implemented

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:36:29 +02:00
1dfc09cb6b tools/systemd/time: fix signed conversion
instead of using 'as' and silently converting wrong,
use the TryInto trait and raise an error if we cannot convert

this should only happen if we have a negative year,
but this is expected (we do not want schedules from before the year 0)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:35:38 +02:00
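A minimal, self-contained example of the conversion pattern named above (the function is illustrative, not the actual systemd/time code): try_into() turns an out-of-range value into an error instead of silently wrapping the way `as` does.

use std::convert::TryInto;

fn to_tm_year(year: i64) -> Result<i32, String> {
    // `(year - 1900) as i32` would silently wrap; try_into() reports the problem
    (year - 1900)
        .try_into()
        .map_err(|_| format!("year {} out of range", year))
}

fn main() {
    assert_eq!(to_tm_year(2020), Ok(120));
    assert!(to_tm_year(i64::MAX - 2000).is_err());
}
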
48c56024aa tools/systemd/tm_editor: add setter/getter for months/years/days
add_* are modeled after add_days

subtract one for set_mon to have a consistent interface for all fields
(i.e. getter/setter return/expect the 'real' number, not the ones
in the tm struct)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:34:27 +02:00
cf103266b3 tools/systemd/tm_editor: move conversion of the year into getter and setter
the tm struct contains the year - 1900 but we added that

if we want to use the libc normalization correctly, the tm struct
must have the correct year in it, else the computations for timezones,
etc. fail

instead add a getter that adds the years and a setter that subtracts it again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:34:04 +02:00
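A minimal sketch of the getter/setter convention from the two tm_editor commits above (TmEditor is the real type's name, but this layout is an assumption): callers always see the real year and a 1-based month, while the wrapped libc tm keeps year - 1900 and a 0-based month.

struct TmEditor {
    t: libc::tm,
}

impl TmEditor {
    fn year(&self) -> i32 {
        self.t.tm_year + 1900 // tm stores years since 1900
    }
    fn set_year(&mut self, year: i32) {
        self.t.tm_year = year - 1900;
    }
    fn mon(&self) -> i32 {
        self.t.tm_mon + 1 // tm stores months as 0..=11
    }
    fn set_mon(&mut self, mon: i32) {
        self.t.tm_mon = mon - 1;
    }
}
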
d5cf8f606c tools/systemd/time: fix selection for multiple options
if we give multiple options/ranges for a value, e.g.
2,4,8
we always choose the biggest, instead of the smallest that is next

this happens because in DateTimeValue::find_next(value)
'next' can be set multiple times and we set it when the new
value was *bigger* than the last found 'next' value, when in reality
we have to choose the *smallest* next we can find

reverse the comparison operator to fix this

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:33:42 +02:00
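The selection bug described above, as a self-contained example (standalone function, not the real DateTimeValue::find_next): among the allowed values, the next match must be the smallest one that is still greater than the current value.

fn find_next(options: &[u32], value: u32) -> Option<u32> {
    let mut next: Option<u32> = None;
    for &opt in options {
        if opt > value {
            match next {
                // the buggy version kept the candidate when it was *bigger*,
                // so it always ended up with the largest option
                Some(n) if opt >= n => {}
                _ => next = Some(opt),
            }
        }
    }
    next
}

fn main() {
    // for "2,4,8" and a current value of 3, the next match must be 4, not 8
    assert_eq!(find_next(&[2, 4, 8], 3), Some(4));
    assert_eq!(find_next(&[2, 4, 8], 8), None);
}
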
ce7ab28cfa tools/systemd/parse_time: error out on invalid ranges
if the range is reverse (bigger..smaller) we will never find a value,
so error out during parsing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:28:48 +02:00
07ca6f6e66 tools/systemd/tm_editor: remove reset_time from add_days and document it
we never passed 'false' to it anyway so remove it
(we can add it again if we should ever need it)

also remove the adding of wday (gets normalized anyway)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:28:24 +02:00
15ec790a40 tools/systemd/time: convert the resulting timestamp into an option
we want to use dates for the calendarspec, and with that there are some
impossible combinations that cannot be detected during parsing
(e.g. some datetimes do not exist in some timezones, and the timezone
can change after setting the schedule)

so finding no timestamp is not an error anymore but a valid result

we omit logging in that case (since it is not an error anymore)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:28:05 +02:00
cb73b2d69c tools/systemd/time: move continue out of the if/else
will be called anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:27:20 +02:00
c931c87173 tools/systemd/time: let libc normalize time for us
mktime/gmtime can normalize time and can even handle special timezone
cases, like the fact that the time 2:30 on specific day/timezone combos
does not exist

we have to convert the signature of all functions that use
normalize_time since mktime/gmtime can return an EOVERFLOW
but if this happens there is no way we can find a good time anyway

since normalize_time will always set wday according to the rest of the
time, remove set_wday

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:26:40 +02:00
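A minimal sketch of the "let libc normalize time" idea (assuming UTC and the glibc timegm() extension via the libc crate; the real code also covers local time and maps failure to EOVERFLOW): hand libc a tm with out-of-range fields and it carries the overflow into the other fields, including the correct tm_wday.

fn normalize_utc(t: &mut libc::tm) -> Result<i64, &'static str> {
    // timegm() rewrites `t` in place to its normalized form and returns the epoch
    let epoch = unsafe { libc::timegm(t) };
    if epoch == -1 {
        return Err("time normalization failed (overflow)");
    }
    Ok(epoch as i64)
}
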
28a0a9343c tools/systemd/tm_editor: remove TMChanges optimization
while it was correct, there was no measurable speed gain
(a benchmark yielded 2.8 ms for a spec that did not find a timestamp either way)
so remove it for simpler code

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:26:04 +02:00
56b666458c server/worker_task: fix 'unknown' status for some big task logs
when trying to parse the task status, we seek 8k from the end
which may be into the middle of a line, so the datetime parsing
can fail (when the log message contains ': ')

This patch does a fast search for the last line, and avoids the
'lines' iterator.
2020-09-04 10:41:13 +02:00
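A rough sketch of the "fast search for the last line" approach (std-only, not the actual worker_task code): read at most the trailing 8 KiB and keep only the text after the final newline, so a seek that lands mid-line cannot confuse the status parser.

use std::io::{Read, Seek, SeekFrom};

fn last_line_of(path: &std::path::Path) -> std::io::Result<String> {
    let mut file = std::fs::File::open(path)?;
    let len = file.metadata()?.len();
    // only look at the tail of (potentially huge) task logs
    file.seek(SeekFrom::Start(len.saturating_sub(8 * 1024)))?;
    let mut buf = Vec::new();
    file.read_to_end(&mut buf)?;
    let tail = String::from_utf8_lossy(&buf);
    // the last complete line always sits after the final '\n', even if the
    // seek position landed in the middle of an earlier line
    Ok(tail
        .trim_end_matches('\n')
        .rsplit('\n')
        .next()
        .unwrap_or("")
        .to_string())
}
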
cd6ddb5a69 depend on proxmox 0.3.5 2020-09-04 08:11:53 +02:00
ecd55041a2 fix #2978: allow non-root to view datastore usage
for datastores where the requesting user has read or write permissions,
since the API method itself filters by that already. this is the same
permission setting and filtering that the datastore list API endpoint
does.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-04 06:18:20 +02:00
e7e8e6d5f7 online help: use a phony target and regenerate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 14:41:03 +02:00
49df8ac115 docs: add prototype sphinx extension for online help
goes through the sections in the documents and creates the
OnlineHelpInfo.js file from the explicitly defined section labels which
are used in the js files with 'onlineHelp' variable.
2020-09-02 14:38:27 +02:00
7397f4a390 bump version to 0.8.14-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 10:41:42 +02:00
8317873c06 gc: improve percentage done logs 2020-09-02 10:04:18 +02:00
deef63699e verify: also fail on server shutdown 2020-09-02 09:50:17 +02:00
c6e07769e9 ui: datastore content: eslint fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 09:30:57 +02:00
423df9b1f4 ui: datastore: show more granular verify state
Allows differentiating between the following situations:
* some snapshots in a group were not verified
* how many snapshots failed to verify in a group
* all snapshots verified but last verification task was over 30 days
  ago

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 09:30:57 +02:00
c879e5af11 ui: datastore: mark row invalid if last snapshot verification failed
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 09:12:05 +02:00
63d9aca96f verify: log progress 2020-09-02 07:43:28 +02:00
c3b1da9e41 datastore content: search: set emptytext to searched columns
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-01 18:30:54 +02:00
46388e6aef datastore content: reduce count column width
Using 75 as width we can display up to 9999999 which would allow
displaying over 19 years of snapshots done each minute, so quite
enough for the common cases.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-01 18:28:14 +02:00
484d439a7c datastore content: reload after verify
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-01 18:27:30 +02:00
ab6615134c d/postinst: always fixup termproxy user id and for all users
Anyone with a PAM account and Sys.Console access could have started a
termproxy session, adapt the regex.

Always test for broken entries and run the sed expression to make sure
eventually all occurrences of the broken syntax are fixed.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-01 18:02:11 +02:00
b1149ebb36 ui: DataStoreContent.js: fix wrong comma
should be semicolon

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-01 15:33:55 +02:00
1bfdae7933 ui: DataStoreContent: improve encrypted column
do not count files where we do not have any information

such files exist in the backup dir, but are not in the manifest
so we cannot use those files for determining if the backups are
encrypted or not

this marks encrypted/signed backups with unencrypted client.log.blob files as
encrypted/signed (respectively) instead of 'Mixed'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-01 15:33:55 +02:00
4f09d31085 src/backup/verify.rs: use global hashes (instead of per group)
This makes verify more predictable.
2020-09-01 13:33:04 +02:00
58d73ddb1d src/backup/data_blob.rs: avoid useless &, data is already a reference 2020-09-01 12:56:25 +02:00
6b809ff59b src/backup/verify.rs: use separate thread to load data 2020-09-01 12:56:25 +02:00
afe08d2755 debian/control: fix versions 2020-09-01 10:19:40 +02:00
a7bc5d4eaf depend on proxmox 0.3.4 2020-08-28 06:32:33 +02:00
97cd0a2a6d bump version to 0.8.13-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-27 16:15:31 +02:00
49a92084a9 gc: use human readable units for summary
and avoid the "percentage done: X %" phrase

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-27 16:06:35 +02:00
9bdeecaee4 bump pxar dep to 0.6.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-27 12:16:21 +02:00
843880f008 bin/backup-proxy: assert that daemon runs as backup user/group
Because if not, the backups it creates have bogus permissions and may
seem like they got broken once the daemon is started again with the
correct user/group.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 10:30:15 +02:00
a6ed5e1273 backup: add BACKUP_GROUP_NAME const and backup_group helper
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 10:27:47 +02:00
74f94d0678 bin/backup-proxy: remove outdated perl comments
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 10:27:47 +02:00
946c3e8a81 bin/backup-proxy: return error directly in main
anyhow makes this a nice error message, similar to the manual
wrapping used.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 10:27:47 +02:00
7b212c1f79 ui: datastore content: show last verify result from a snapshot
Double-click on the verify grid-cell of a specific snapshot (not the
group) opens the relevant task log.

The date of the last verify is shown as tool-tip.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 07:36:16 +02:00
3b2046d263 save last verify result in snapshot manifest
Save the state ("ok" or "failed") and the UPID of the respective
verify task. With this we can easily allow to open the relevant task
log and show when the last verify happened.

As we already load the manifest when listing the snapshots, just add
it there directly.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-26 07:35:13 +02:00
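A minimal sketch of the data this commit stores (struct and field names are assumptions; the content follows the commit message): the verify outcome and the verify task's UPID, serialized into the snapshot manifest so the GUI can later open the matching task log.

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct SnapshotVerifyState {
    /// "ok" or "failed"
    state: String,
    /// UPID of the verify task, so the GUI can open its log on double-click
    upid: String,
}
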
1ffe030123 various typo fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-25 18:52:31 +02:00
5255e641fa SnapshotListItem: add comment field also to schema
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-25 16:24:36 +02:00
c86b6f40d7 tools/format: implement from u64 for HumanByte helper type
Could be problematic for systems where usize is 32 bit, but we do not
really support those.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-25 14:18:49 +02:00
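A minimal sketch of what the conversion adds (HumanByte's internal layout here is an assumption): a From<u64> impl so byte counts can be passed with .into(); the usize cast inside is the part that could truncate on a 32-bit system, as the commit message notes.

struct HumanByte {
    b: usize,
}

impl From<u64> for HumanByte {
    fn from(v: u64) -> Self {
        // fine on 64-bit targets; on 32-bit usize this cast could truncate
        HumanByte { b: v as usize }
    }
}

fn main() {
    let size: HumanByte = 2_991_284_224u64.into();
    assert_eq!(size.b, 2_991_284_224);
}
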
5a718dce17 api datastore: fix typo in error message
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-25 14:16:40 +02:00
1b32750644 update d/control for pxar 0.5.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-25 12:37:11 +02:00
5aa103c3c3 bump pxar dep to 0.5.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-25 12:37:11 +02:00
fd3f690104 Add section "Garbage Collection"
Add the section "Garbage Collection" to section "Backup Server
Management". This briefly explains the "garbage-collection"
subcommand of "proxmox-backup-manager"

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-25 09:38:03 +02:00
24b638bd9f Add section "Network Management"
Add the section "Network Management", which explains the
"network" subcommand of "proxmox-backup-manager"

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-25 09:37:41 +02:00
9624c5eecb add note about TLS benchmark test. 2020-08-25 09:36:12 +02:00
503dd339a8 Add further explanation to benchmarking
Adds a note explaining the percentages shown in the output
of the benchmark

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-25 09:33:23 +02:00
36ea5df444 administration-guide.rst: remove debug output from code examples 2020-08-25 09:29:52 +02:00
dce9dd6f70 Add section "Disk Management"
Add the section "Disk Management" to the admin guide, explaining
the use of the "disk" subcommand of "proxmox-backup-manager"

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-25 09:27:48 +02:00
88e28e15e4 debian/control: update for new pxar 0.4 dependency 2020-08-25 09:09:37 +02:00
399e48a1ed bump version to 0.8.12-1 2020-08-25 08:57:12 +02:00
7ae571e7cb verify: speedup - only verify chunks once
We need to do the check before we load the chunk.
2020-08-25 08:52:24 +02:00
4264c5023b verify: sort backup groups 2020-08-25 08:38:47 +02:00
82b7adf90b bump pxar dep to 0.4.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-24 11:56:01 +02:00
71c4a3138f docs: fix PBS wiki link
rst/sphinx and comments are a PITA...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-21 11:09:41 +02:00
52991f239f bump version to 0.8.11-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-19 19:20:22 +02:00
3435f5491b Fix typo in program output
Change "comptation" -> "computation"

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-19 09:06:27 +02:00
aafe8609e5 d/postinst: fixup userid for older termproxy tasks
At the time when we can fix this up the new (and possibly an old)
server daemon process is running, so use the flock CLI tool from
util-linux to ensure we do the same locking as the server and thus we
avoid a race condition.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-08-19 07:26:58 +02:00
a8d69fcf05 Add "Benchmarking" section
This adds the "Benchmarking" section which discusses
the proxmox-backup-client benchmark subcommand.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-18 14:24:08 +02:00
1e68497c03 Add section describing acl tool
This adds a section on how to use the acl subcommand
to manage user access control

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-18 14:24:08 +02:00
74fc844787 Correct erroneous instructions and add clarity
This patch changes the following:
- Provide extra clarity to instruction and information where
  appropriate.
- Fix examples and content that would lead to erroneous behavior
  in a command.
- Insert section about installing on Debian into a caution block

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-18 14:24:08 +02:00
4cda7603c4 minor language and formatting fixup
this fixes minor grammatical errors throughout the pbs docs
and rewords certain sections for improved readability.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-08-18 14:24:08 +02:00
11e1e27a42 turn UPID into an API type
It's a string-type.
Implement Serialize via Display, Deserialize via FromStr and
add an API_SCHEMA so that it can be used as a type within
the #[api] macro.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-18 11:54:30 +02:00
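A minimal sketch of the "Serialize via Display, Deserialize via FromStr" pattern named above (the UPID internals are collapsed to a plain string here; the real type parses its components and validates the format): this is what makes the type usable as a plain string in an API schema.

use std::fmt;
use std::str::FromStr;

use serde::{de, Deserialize, Deserializer, Serialize, Serializer};

struct Upid(String); // stand-in for the real parsed UPID fields

impl fmt::Display for Upid {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

impl FromStr for Upid {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Ok(Upid(s.to_string())) // the real parser validates the UPID format here
    }
}

impl Serialize for Upid {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        serializer.collect_str(self) // serialize through Display
    }
}

impl<'de> Deserialize<'de> for Upid {
    fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
        let s = String::deserialize(deserializer)?;
        s.parse().map_err(de::Error::custom) // deserialize through FromStr
    }
}
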
4ea831bfa1 style fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-18 08:50:14 +02:00
c1d7d708d4 remove map_struct helper
if we ever need this it should be marked as unsafe!

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-17 11:53:02 +02:00
3fa2b983c1 add methods to allocate a DynamicIndexHeader
to avoid `map_struct` which is actually unsafe because it
does not verify alignment constraints at all

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-17 11:50:32 +02:00
a1e9c05738 api2/node/services: turn service api calls into workers
to be in line with pve/pmg and be able to show the progress in the gui

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 12:37:17 +02:00
934deeff2d fix #2904: zpool status: parse vdevs with state but without statistics
some vdevs (e.g. spares) have a 'state' (e.g. AVAIL), but
not statistics like READ/WRITE/etc.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 11:41:32 +02:00
c162df60c8 zfs status: add test with spares
this will fail for now, fixed in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 11:41:32 +02:00
98161fddb5 cleanup last patch 2020-08-14 07:30:05 +02:00
be614c625f api2/node/../disks/directory: added DELETE endpoint for removal of mount-units
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-08-14 07:06:10 +02:00
87c4cb7419 Fix #2926: parse_iface_attributes: always break on non-{attribute, comment} token
There is no requirement to have at least
a blank line, attribute or comment in between two
interface definitions, e.g.
iface lo inet loopback
iface lo inet6 loopback

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2020-08-14 06:57:07 +02:00
93bb51fe7e config/jobstate: replace Job:load with create_state_file
it really is not necessary, since the only time we are interested in
loading the state from the file is when we list it, and there
we use JobState::load directly to avoid the lock

we still need to create the file on syncjob creation though, so
that we have the correct time for the schedule

to do this we add a new create_state_file that overwrites it on creation
of a syncjob

for safety, we subtract 30 seconds from the in-memory state in case
the statefile is missing

since we call create_state_file from proxmox-backup-api,
we have to chown the lock file after creating it to the backup user,
else the sync job scheduling cannot acquire the lock

also we remove the lock file on statefile removal

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 06:38:02 +02:00
713b66b6ed cleanup: replace id from do_sync_job with info from job
we already have it inside the job itself

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 06:36:43 +02:00
77bd2a469c cleanup: merge endtime into TaskState
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-14 06:36:19 +02:00
97af919530 ui: syncjob: make some columns smaller
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:51:47 +02:00
c91602316b ui: syncjob: improve task text rendering
to also have the correct icons for warnings and unknown tasks

the text here is now "ERROR: ...", so leave the 'Error' out

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:51:35 +02:00
a13573c24a syncjob: use do_sync_job also for scheduled sync jobs
and determine the last runtime with the jobstate

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:51:20 +02:00
02543a5c7f api2/pull: extend do_sync_job to also handle schedule and jobstate
so that we can log if it was triggered by a schedule, and write to a jobstate file.
It also now correctly polls the abort_future of the worker, so that
users can stop a sync

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:49:28 +02:00
42b68f72e6 api/{pull, sync}: refactor to do_sync_job
and move the pull parameters into the worker, so that the task log
contains the error if there is one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:40:52 +02:00
664d8a2765 api2/admin/sync: use JobState for faster access to state info
and delete the statefile again on syncjob removal

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:40:00 +02:00
e6263c2662 config: add JobState helper
this is intended to be a generic helper to (de)serialize job states
(e.g., sync, verify, and so on)

writes a json file into '/var/lib/proxmox-backup/jobstates/TYPE-ID.json'

the api creates the directory with the correct permissions, like
the rrd directory

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:36:10 +02:00
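A minimal sketch of the helper's storage side (the struct fields are assumptions; the path layout is taken from the commit message): serialize the job state as JSON under /var/lib/proxmox-backup/jobstates/TYPE-ID.json.

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct JobState {
    last_run_upid: Option<String>,
    last_run_endtime: Option<i64>,
}

fn state_file_path(jobtype: &str, id: &str) -> std::path::PathBuf {
    format!("/var/lib/proxmox-backup/jobstates/{}-{}.json", jobtype, id).into()
}

fn save_state(jobtype: &str, id: &str, state: &JobState) -> Result<(), Box<dyn std::error::Error>> {
    let data = serde_json::to_vec_pretty(state)?;
    std::fs::write(state_file_path(jobtype, id), data)?;
    Ok(())
}
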
ae197dda23 server/worker_task: let upid_read_status also return the endtime
the endtime should be the timestamp of the last log line
or if there is no log at all, the starttime

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:35:44 +02:00
4c116bafb8 server: change status of a task from a string to an enum
representing a state via an enum makes more sense in this case
we also implement FromStr and Display to make it easy to convert from/to
a string

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-08-13 11:35:19 +02:00
df30017ff8 remove unused import
rustc doesn't warn about this kind of import, however,
clippy does

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-13 09:05:15 +02:00
3f3ae19d63 formatting fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-12 14:30:03 +02:00
72dc68323c replace and remove old ticket functions
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-12 14:28:21 +02:00
593f917742 introduce Ticket struct
and add tests and compatibility tests

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-08-12 14:28:21 +02:00
639419b049 worker_task: new_thread() - remove unused tokio channel 2020-08-12 08:43:09 +02:00
83 changed files with 2740 additions and 902 deletions

Cargo.toml

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "0.8.10"
version = "0.8.15"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"
@ -39,11 +39,11 @@ pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pathpatterns = "0.1.2"
proxmox = { version = "0.3.3", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox = { version = "0.3.5", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0"
pxar = { version = "0.3.0", features = [ "tokio-io", "futures-io" ] }
pxar = { version = "0.6.0", features = [ "tokio-io", "futures-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io", "futures-io" ] }
regex = "1.2"
rustyline = "6"

Makefile

@ -150,4 +150,4 @@ upload: ${SERVER_DEB} ${CLIENT_DEB} ${DOC_DEB}
# check if working directory is clean
git diff --exit-code --stat && git diff --exit-code --stat --staged
tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve" --dist buster
tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve,pmg" --dist buster

debian/changelog

@ -1,3 +1,84 @@
rust-proxmox-backup (0.8.15-1) unstable; urgency=medium
* verify: skip benchmark directory
* add benchmark flag to backup creation for proper cleanup when running
a benchmark
* mount: fix mount subcommand
* ui: only mark backup encrypted if there are any files
* fix #2983: improve tcp performance
* improve ui and docs
* verify: rename corrupted chunks with .bad extension
* gc: remove .bad files on garbage collect
* ui: add translation support
* server/worker_task: fix upid_read_status
* tools/systemd/time: enable dates for calendarevents
* server/worker_task: fix 'unknown' status for some big task logs
-- Proxmox Support Team <support@proxmox.com> Thu, 10 Sep 2020 09:25:59 +0200
rust-proxmox-backup (0.8.14-1) unstable; urgency=medium
* verify speed up: use separate IO thread, use datastore-wide cache (instead
of per group)
* ui: datastore content: improve encrypted column
* ui: datastore content: show more granular verify state, especially for
backup group rows
* verify: log progress in percent
-- Proxmox Support Team <support@proxmox.com> Wed, 02 Sep 2020 09:36:47 +0200
rust-proxmox-backup (0.8.13-1) unstable; urgency=medium
* improve and add to documentation
* save last verify result in snapshot manifest and show it in the GUI
* gc: use human readable units for summary in task log
-- Proxmox Support Team <support@proxmox.com> Thu, 27 Aug 2020 16:12:07 +0200
rust-proxmox-backup (0.8.12-1) unstable; urgency=medium
* verify: speedup - only verify chunks once
* verify: sort backup groups
* bump pxar dep to 0.4.0
-- Proxmox Support Team <support@proxmox.com> Tue, 25 Aug 2020 08:55:52 +0200
rust-proxmox-backup (0.8.11-1) unstable; urgency=medium
* improve sync jobs, allow to stop them and better logging
* fix #2926: make network interfaces parser more flexible
* fix #2904: zpool status: parse also those vdevs without READ/WRITE/...
statistics
* api2/node/services: turn service api calls into workers
* docs: add sections describing ACL related commands and describing
benchmarking
* docs: general grammar, wording and typo improvements
-- Proxmox Support Team <support@proxmox.com> Wed, 19 Aug 2020 19:20:03 +0200
rust-proxmox-backup (0.8.10-1) unstable; urgency=medium
* ui: acl: add improved permission selector
@ -391,4 +472,3 @@ proxmox-backup (0.1-1) unstable; urgency=medium
* first try
-- Proxmox Support Team <support@proxmox.com> Fri, 30 Nov 2018 13:03:28 +0100

debian/control

@ -34,14 +34,14 @@ Build-Depends: debhelper (>= 11),
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.3+api-macro-dev (>= 0.3.3-~~),
librust-proxmox-0.3+default-dev (>= 0.3.3-~~),
librust-proxmox-0.3+sortable-macro-dev (>= 0.3.3-~~),
librust-proxmox-0.3+websocket-dev (>= 0.3.3-~~),
librust-proxmox-0.3+api-macro-dev (>= 0.3.5-~~),
librust-proxmox-0.3+default-dev (>= 0.3.5-~~),
librust-proxmox-0.3+sortable-macro-dev (>= 0.3.5-~~),
librust-proxmox-0.3+websocket-dev (>= 0.3.5-~~),
librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.3+default-dev,
librust-pxar-0.3+futures-io-dev,
librust-pxar-0.3+tokio-io-dev,
librust-pxar-0.6+default-dev,
librust-pxar-0.6+futures-io-dev,
librust-pxar-0.6+tokio-io-dev,
librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-6+default-dev,
librust-serde-1+default-dev,
@ -103,6 +103,7 @@ Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
lvm2,
pbs-i18n,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.2-4),

debian/control.in

@ -4,6 +4,7 @@ Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
lvm2,
pbs-i18n,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.2-4),

debian/postinst

@ -14,6 +14,12 @@ case "$1" in
_dh_action=start
fi
deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active
fi
;;
abort-upgrade|abort-remove|abort-deconfigure)

docs/Makefile

@ -28,7 +28,6 @@ COMPILEDIR := ../target/debug
SPHINXOPTS += -t devbuild
endif
# Sphinx internal variables.
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) .
@ -68,6 +67,12 @@ proxmox-backup-manager.1: proxmox-backup-manager/man1.rst proxmox-backup-manage
proxmox-backup-proxy.1: proxmox-backup-proxy/man1.rst proxmox-backup-proxy/description.rst
rst2man $< >$@
.PHONY: onlinehelpinfo
onlinehelpinfo:
@echo "Generating OnlineHelpInfo.js..."
$(SPHINXBUILD) -b proxmox-scanrefs $(ALLSPHINXOPTS) $(BUILDDIR)/scanrefs
@echo "Build finished. OnlineHelpInfo.js is in $(BUILDDIR)/scanrefs."
.PHONY: html
html: ${GENERATED_SYNOPSIS}
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html


@ -0,0 +1,133 @@
#!/usr/bin/env python3
# debugging stuff
from pprint import pprint
from typing import cast
import json
import re
import os
import io
from docutils import nodes
from sphinx.builders import Builder
from sphinx.util import logging
logger = logging.getLogger(__name__)
# refs are added in the following manner before the title of a section (note underscore and newline before title):
# .. _my-label:
#
# Section to ref
# --------------
#
#
# then referred to like (note missing underscore):
# "see :ref:`my-label`"
#
# the benefit of using this is if a label is explicitly set for a section,
# we can refer to it with this anchor #my-label in the html,
# even if the section name changes.
#
# see https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-ref
def scan_extjs_files(wwwdir="../www"): # a bit rough i know, but we can optimize later
    js_files = []
    used_anchors = []
    logger.info("scanning extjs files for onlineHelp definitions")
    for root, dirs, files in os.walk("{}".format(wwwdir)):
        #print(root, dirs, files)
        for filename in files:
            if filename.endswith('.js'):
                js_files.append(os.path.join(root, filename))
    for js_file in js_files:
        fd = open(js_file).read()
        match = re.search("onlineHelp:\s*[\'\"](.*?)[\'\"]", fd) # match object is tuple
        if match:
            anchor = match.groups()[0]
            anchor = re.sub('_', '-', anchor) # normalize labels
            logger.info("found onlineHelp: {} in {}".format(anchor, js_file))
            used_anchors.append(anchor)

    return used_anchors


def setup(app):
    logger.info('Mapping reference labels...')
    app.add_builder(ReflabelMapper)
    return {
        'version': '0.1',
        'parallel_read_safe': True,
        'parallel_write_safe': True,
    }


class ReflabelMapper(Builder):
    name = 'proxmox-scanrefs'

    def init(self):
        self.docnames = []
        self.env.online_help = {}
        self.env.online_help['pbs_documentation_index'] = {
            'link': '/docs/index.html',
            'title': 'Proxmox Backup Server Documentation Index',
        }
        self.env.used_anchors = scan_extjs_files()

        if not os.path.isdir(self.outdir):
            os.mkdir(self.outdir)

        self.output_filename = os.path.join(self.outdir, 'OnlineHelpInfo.js')
        self.output = io.open(self.output_filename, 'w', encoding='UTF-8')

    def write_doc(self, docname, doctree):
        for node in doctree.traverse(nodes.section):
            #pprint(vars(node))
            if hasattr(node, 'expect_referenced_by_id') and len(node['ids']) > 1: # explicit labels
                filename = self.env.doc2path(docname)
                filename_html = re.sub('.rst', '.html', filename)
                labelid = node['ids'][1] # [0] is predefined by sphinx, we need [1] for explicit ones
                title = cast(nodes.title, node[0])
                logger.info('traversing section {}'.format(title.astext()))
                ref_name = getattr(title, 'rawsource', title.astext())
                self.env.online_help[labelid] = {'link': '', 'title': ''}
                self.env.online_help[labelid]['link'] = "/docs/" + os.path.basename(filename_html) + "#{}".format(labelid)
                self.env.online_help[labelid]['title'] = ref_name
        return

    def get_outdated_docs(self):
        return 'all documents'

    def prepare_writing(self, docnames):
        return

    def get_target_uri(self, docname, typ=None):
        return ''

    def validate_anchors(self):
        #pprint(self.env.online_help)
        to_remove = []
        for anchor in self.env.used_anchors:
            if anchor not in self.env.online_help:
                logger.info("[-] anchor {} is missing from onlinehelp!".format(anchor))
        for anchor in self.env.online_help:
            if anchor not in self.env.used_anchors and anchor != 'pbs_documentation_index':
                logger.info("[*] anchor {} not used! deleting...".format(anchor))
                to_remove.append(anchor)
        for anchor in to_remove:
            self.env.online_help.pop(anchor, None)
        return

    def finish(self):
        # generate OnlineHelpInfo.js output
        self.validate_anchors()
        self.output.write("const proxmoxOnlineHelpInfo = ")
        self.output.write(json.dumps(self.env.online_help, indent=2))
        self.output.write(";\n")
        self.output.close()
        return

docs/administration-guide.rst

@ -24,6 +24,13 @@ good deduplication rates for file archives.
The Proxmox Backup Server supports both strategies.
Image Archives: ``<name>.img``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is used for virtual machine images and other large binary
data. Content is split into fixed-sized chunks.
File Archives: ``<name>.pxar``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -34,13 +41,6 @@ the :ref:`pxar-format`, split into variable-sized chunks. The format
is optimized to achieve good deduplication rates.
Image Archives: ``<name>.img``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is used for virtual machine images and other large binary
data. Content is split into fixed-sized chunks.
Binary Data (BLOBs)
^^^^^^^^^^^^^^^^^^^
@ -146,12 +146,109 @@ when setting up the backup server.
filesystem configuration from being supported for a datastore. For example,
``ext3`` as a whole or ``ext4`` with the ``dir_nlink`` feature manually disabled.
Disk Management
~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-disks.png
:width: 230
:align: right
:alt: List of disks
Proxmox Backup Server comes with a set of disk utilities, which are
accessed using the ``disk`` subcommand. This subcommand allows you to initialize
disks, create various filesystems, and get information about the disks.
To view the disks connected to the system, navigate to **Administration ->
Disks** in the web interface or use the ``list`` subcommand of
``disk``:
.. code-block:: console
# proxmox-backup-manager disk list
┌──────┬────────┬─────┬───────────┬─────────────┬───────────────┬─────────┬────────┐
│ name │ used │ gpt │ disk-type │ size │ model │ wearout │ status │
╞══════╪════════╪═════╪═══════════╪═════════════╪═══════════════╪═════════╪════════╡
│ sda │ lvm │ 1 │ hdd │ 34359738368 │ QEMU_HARDDISK │ - │ passed │
├──────┼────────┼─────┼───────────┼─────────────┼───────────────┼─────────┼────────┤
│ sdb │ unused │ 1 │ hdd │ 68719476736 │ QEMU_HARDDISK │ - │ passed │
├──────┼────────┼─────┼───────────┼─────────────┼───────────────┼─────────┼────────┤
│ sdc │ unused │ 1 │ hdd │ 68719476736 │ QEMU_HARDDISK │ - │ passed │
└──────┴────────┴─────┴───────────┴─────────────┴───────────────┴─────────┴────────┘
To initialize a disk with a new GPT, use the ``initialize`` subcommand:
.. code-block:: console
# proxmox-backup-manager disk initialize sdX
.. image:: images/screenshots/pbs-gui-disks-dir-create.png
:width: 230
:align: right
:alt: Create a directory
You can create an ``ext4`` or ``xfs`` filesystem on a disk using ``fs
create``, or by navigating to **Administration -> Disks -> Directory** in the
web interface and creating one from there. The following command creates an
``ext4`` filesystem and passes the ``--add-datastore`` parameter, in order to
automatically create a datastore on the disk (in this case ``sdd``). This will
create a datastore at the location ``/mnt/datastore/store1``:
|
.. code-block:: console
# proxmox-backup-manager disk fs create store1 --disk sdd --filesystem ext4 --add-datastore true
create datastore 'store1' on disk sdd
Percentage done: 1
...
Percentage done: 99
TASK OK
.. image:: images/screenshots/pbs-gui-disks-zfs-create.png
:width: 230
:align: right
:alt: Create a directory
You can also create a ``zpool`` with various raid levels from **Administration
-> Disks -> Zpool** in the web interface, or by using ``zpool create``. The command
below creates a mirrored ``zpool`` using two disks (``sdb`` & ``sdc``) and
mounts it on the root directory (default):
|
.. code-block:: console
# proxmox-backup-manager disk zpool create zpool1 --devices sdb,sdc --raidlevel mirror
create Mirror zpool 'zpool1' on devices 'sdb,sdc'
# "zpool" "create" "-o" "ashift=12" "zpool1" "mirror" "sdb" "sdc"
TASK OK
.. note::
You can also pass the ``--add-datastore`` parameter here, to automatically
create a datastore from the disk.
You can use ``disk fs list`` and ``disk zpool list`` to keep track of your
filesystems and zpools respectively.
If a disk supports S.M.A.R.T. capability, and you have this enabled, you can
display S.M.A.R.T. attributes from the web interface or by using the command:
.. code-block:: console
# proxmox-backup-manager disk smart-attributes sdX
Datastore Configuration
~~~~~~~~~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-datastore.png
:width: 230
:align: right
:alt: Datastore Overview
You can configure multiple datastores. Minimum one datastore needs to be
configured. The datastore is identified by a simple `name` and points to a
configured. The datastore is identified by a simple *name* and points to a
directory on the filesystem. Each datastore also has associated retention
settings of how many backup snapshots for each interval of ``hourly``,
``daily``, ``weekly``, ``monthly``, ``yearly`` as well as a time-independent
@ -159,13 +256,35 @@ number of backups to keep in that store. :ref:`Pruning <pruning>` and
:ref:`garbage collection <garbage-collection>` can also be configured to run
periodically based on a configured :term:`schedule` per datastore.
The following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1`
Creating a Datastore
^^^^^^^^^^^^^^^^^^^^
.. image:: images/screenshots/pbs-gui-datastore-create-general.png
:width: 230
:align: right
:alt: Create a datastore
You can create a new datastore from the web GUI, by navigating to **Datastore** in
the menu tree and clicking **Create**. Here:
* *Name* refers to the name of the datastore
* *Backing Path* is the path to the directory upon which you want to create the
datastore
* *GC Schedule* refers to the time and intervals at which garbage collection
runs
* *Prune Schedule* refers to the frequency at which pruning takes place
* *Prune Options* set the amount of backups which you would like to keep (see :ref:`Pruning <pruning>`).
Alternatively you can create a new datastore from the command line. The
following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1`
.. code-block:: console
# proxmox-backup-manager datastore create store1 /backup/disk1/store1
To list existing datastores run:
Managing Datastores
^^^^^^^^^^^^^^^^^^^
To list existing datastores from the command line run:
.. code-block:: console
@ -176,13 +295,15 @@ To list existing datastores run:
│ store1 │ /backup/disk1/store1 │ This is my default storage. │
└────────┴──────────────────────┴─────────────────────────────┘
You can change settings of a datastore, for example to set a prune and garbage
collection schedule or retention settings using ``update`` subcommand and view
a datastore with the ``show`` subcommand:
You can change the garbage collection and prune settings of a datastore, by
editing the datastore from the GUI or by using the ``update`` subcommand. For
example, the below command changes the garbage collection schedule using the
``update`` subcommand and prints the properties of the datastore with the
``show`` subcommand:
.. code-block:: console
# proxmox-backup-manager datastore update store1 --keep-last 7 --prune-schedule daily --gc-schedule 'Tue 04:27'
# proxmox-backup-manager datastore update store1 --gc-schedule 'Tue 04:27'
# proxmox-backup-manager datastore show store1
┌────────────────┬─────────────────────────────┐
│ Name │ Value │
@ -260,6 +381,11 @@ directories will store the chunked data after a backup operation has been execut
User Management
~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-user-management.png
:width: 230
:align: right
:alt: User management
Proxmox Backup Server supports several authentication realms, and you need to
choose the realm when you add a new user. Possible realms are:
@ -284,19 +410,22 @@ users:
│ root@pam │ 1 │ │ │ │ │ Superuser │
└─────────────┴────────┴────────┴───────────┴──────────┴────────────────┴────────────────────┘
.. image:: images/screenshots/pbs-gui-user-management-add-user.png
:width: 230
:align: right
:alt: Add a new user
The superuser has full administration rights on everything, so you
normally want to add other users with less privileges:
normally want to add other users with less privileges. You can create a new
user with the ``user create`` subcommand or through the web interface, under
**Configuration -> User Management**. The ``create`` subcommand lets you specify
many options like ``--email`` or ``--password``. You can update or change any
user properties using the ``update`` subcommand later (**Edit** in the GUI):
.. code-block:: console
# proxmox-backup-manager user create john@pbs --email john@example.com
The create command lets you specify many options like ``--email`` or
``--password``. You can update or change any of them using the
update command later:
.. code-block:: console
# proxmox-backup-manager user update john@pbs --firstname John --lastname Smith
# proxmox-backup-manager user update john@pbs --comment "An example user."
@ -344,10 +473,10 @@ following roles exist:
Disable Access - nothing is allowed.
**Admin**
The Administrator can do anything.
Can do anything.
**Audit**
An Auditor can view things, but is not allowed to change settings.
Can view things, but is not allowed to change settings.
**DatastoreAdmin**
Can do anything on datastores.
@ -356,10 +485,10 @@ following roles exist:
Can view datastore settings and list content. But
is not allowed to read the actual data.
**DataStoreReader**
**DatastoreReader**
Can Inspect datastore content and can do restores.
**DataStoreBackup**
**DatastoreBackup**
Can backup and restore owned backups.
**DatastorePowerUser**
@ -374,24 +503,151 @@ following roles exist:
**RemoteSyncOperator**
Is allowed to read data from a remote.
:width: 230
:align: right
:alt: Add permissions for user
You can manage datastore permissions from **Configuration -> Permissions** in
the web interface. Likewise, you can use the ``acl`` subcommand to manage and
monitor user permissions from the command line. For example, the command below
will add the user ``john@pbs`` as a **DatastoreAdmin** for the datastore
``store1``, located at ``/backup/disk1/store1``:
.. code-block:: console
# proxmox-backup-manager acl update /datastore/store1 DatastoreAdmin --userid john@pbs
You can monitor the roles of each user using the following command:
.. code-block:: console
# proxmox-backup-manager acl list
┌──────────┬──────────────────┬───────────┬────────────────┐
│ ugid │ path │ propagate │ roleid │
╞══════════╪══════════════════╪═══════════╪════════════════╡
│ john@pbs │ /datastore/disk1 │ 1 │ DatastoreAdmin │
└──────────┴──────────────────┴───────────┴────────────────┘
A single user can be assigned multiple permission sets for different datastores.
.. Note::
Naming convention is important here. For datastores on the host,
you must use the convention ``/datastore/{storename}``. For example, to set
permissions for a datastore mounted at ``/mnt/backup/disk4/store2``, you would use
``/datastore/store2`` for the path. For remote stores, use the convention
``/remote/{remote}/{storename}``, where ``{remote}`` signifies the name of the
remote (see `Remote` below) and ``{storename}`` is the name of the datastore on
the remote.
Network Management
~~~~~~~~~~~~~~~~~~
Proxmox Backup Server provides both a web interface and a command line tool for
network configuration. You can find the configuration options in the web
interface under the **Network Interfaces** section of the **Configuration** menu
tree item. The command line tool is accessed via the ``network`` subcommand.
These interfaces allow you to carry out some basic network management tasks,
such as adding, configuring, and removing network interfaces.
.. note:: Any changes made to the network configuration are not
applied, until you click on **Apply Configuration** or enter the ``network
reload`` command. This allows you to make many changes at once. It also allows
you to ensure that your changes are correct before applying them, as making a
mistake here can render the server inaccessible over the network.
To get a list of available interfaces, use the following command:
.. code-block:: console
# proxmox-backup-manager network list
┌───────┬────────┬───────────┬────────┬─────────────┬──────────────┬──────────────┐
│ name │ type │ autostart │ method │ address │ gateway │ ports/slaves │
╞═══════╪════════╪═══════════╪════════╪═════════════╪══════════════╪══════════════╡
│ bond0 │ bond │ 1 │ static │ x.x.x.x/x │ x.x.x.x │ ens18 ens19 │
├───────┼────────┼───────────┼────────┼─────────────┼──────────────┼──────────────┤
│ ens18 │ eth │ 1 │ manual │ │ │ │
├───────┼────────┼───────────┼────────┼─────────────┼──────────────┼──────────────┤
│ ens19 │ eth │ 1 │ manual │ │ │ │
└───────┴────────┴───────────┴────────┴─────────────┴──────────────┴──────────────┘
.. image:: images/screenshots/pbs-gui-network-create-bond.png
:width: 230
:align: right
:alt: Add a network interface
To add a new network interface, use the ``create`` subcommand with the relevant
parameters. The following command shows a template for creating the bond shown
in the list above:
.. code-block:: console
# proxmox-backup-manager network create bond0 --type bond --bond_mode active-backup --slaves ens18,ens19 --autostart true --cidr x.x.x.x/x --gateway x.x.x.x
You can make changes to the configuration of a network interface with the
``update`` subcommand:
.. code-block:: console
# proxmox-backup-manager network update bond0 --cidr y.y.y.y/y
You can also remove a network interface:
.. code-block:: console
# proxmox-backup-manager network remove bond0
The pending changes for the network configuration file will appear at the bottom of the
web interface. You can also view these changes, by using the command:
.. code-block:: console
# proxmox-backup-manager network changes
If you would like to cancel all changes at this point, you can either click on
the **Revert** button or use the following command:
.. code-block:: console
# proxmox-backup-manager network revert
If you are happy with the changes and would like to write them into the
configuration file, select **Apply Configuration**. The corresponding command
is:
.. code-block:: console
# proxmox-backup-manager network reload
You can also configure DNS settings, from the **DNS** section
of **Configuration** or by using the ``dns`` subcommand of
``proxmox-backup-manager``.
:term:`Remote`
~~~~~~~~~~~~~~
A remote refers to a separate Proxmox Backup Server installation and a user on that
installation, from which you can `sync` datastores to a local datastore with a
`Sync Job`.
`Sync Job`. You can configure remotes in the web interface, under **Configuration
-> Remotes**. Alternatively, you can use the ``remote`` subcommand.
.. image:: images/screenshots/pbs-gui-remote-add.png
.. image:: images/screenshots/pbs-gui-permissions-add.png
:width: 230
:align: right
:alt: Add a remote
To add a remote, you need its hostname or ip, a userid and password on the
remote, and its certificate fingerprint. To get the fingerprint, use the
``proxmox-backup-manager cert info`` command on the remote.
``proxmox-backup-manager cert info`` command on the remote, or navigate to
**Dashboard** in the remote's web interface and select **Show Fingerprint**.
.. code-block:: console
# proxmox-backup-manager cert info |grep Fingerprint
Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
Using the information specified above, add the remote with:
Using the information specified above, you can add a remote from the **Remotes**
configuration panel, or by using the command:
.. code-block:: console
@ -415,10 +671,16 @@ Use the ``list``, ``show``, ``update``, ``remove`` subcommands of
Sync Jobs
~~~~~~~~~
Sync jobs are configured to pull the contents of a datastore on a `Remote` to a
local datastore. You can either start the sync job manually on the GUI or
provide it with a :term:`schedule` to run regularly. The
``proxmox-backup-manager sync-job`` command is used to manage sync jobs:
.. image:: images/screenshots/pbs-gui-syncjob-add.png
:width: 230
:align: right
:alt: Add a sync job
Sync jobs are configured to pull the contents of a datastore on a **Remote** to a
local datastore. You can either start a sync job manually on the GUI or
provide it with a :term:`schedule` to run regularly. You can manage sync jobs
under **Configuration -> Sync Jobs** in the web interface, or using the
``proxmox-backup-manager sync-job`` command:
.. code-block:: console
@ -433,6 +695,15 @@ provide it with a :term:`schedule` to run regularly. The
# proxmox-backup-manager sync-job remove pbs2-local
Garbage Collection
~~~~~~~~~~~~~~~~~~
You can monitor and run :ref:`garbage collection <garbage-collection>` on the
Proxmox Backup Server using the ``garbage-collection`` subcommand of
``proxmox-backup-manager``. You can use the ``start`` subcommand to manually start
garbage collection on an entire datastore, and the ``status`` subcommand to view
details of the last garbage collection run on a datastore.
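For example, for a datastore named ``store1`` (the name here is just a placeholder),
this could look like:
.. code-block:: console
# proxmox-backup-manager garbage-collection start store1
# proxmox-backup-manager garbage-collection status store1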
Backup Client usage
-------------------
@ -543,7 +814,9 @@ This will prompt you for a password and then uploads a file archive named
The ``--repository`` option can get quite long and is used by all
commands. You can avoid having to enter this value by setting the
environment variable ``PBS_REPOSITORY``. Note that if you would like this to remain set
over multiple sessions, you should instead add the below line to your
``.bashrc`` file.
.. code-block:: console
@ -578,7 +851,7 @@ Excluding files/folders from a backup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sometimes it is desired to exclude certain files or folders from a backup archive.
To tell the Proxmox Backup client when and how to ignore files and directories,
place a text file called ``.pxarexclude`` in the filesystem hierarchy.
Whenever the backup client encounters such a file in a directory, it interprets
each line as glob match patterns for files and directories that are to be excluded
@ -775,7 +1048,9 @@ To set up a master key:
backed up. It can happen, for example, that you back up an entire system, using
a key on that system. If the system then becomes inaccessible for any reason
and needs to be restored, this will not be possible, as the encryption key will be
lost along with the broken system. In preparation for the worst case scenario,
you should consider keeping a paper copy of this key locked away in
a safe place.
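One way to prepare such a copy (shown here as a sketch; the subcommand may not be
available in older client versions) is the ``paperkey`` helper of
``proxmox-backup-client``, which renders the key in a printable form:
.. code-block:: console
# proxmox-backup-client key paperkey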
Restoring Data
~~~~~~~~~~~~~~
@ -818,7 +1093,7 @@ backup.
# proxmox-backup-client restore host/elsa/2019-12-03T09:35:01Z root.pxar /target/path/
To get the contents of any archive, you can restore the ``index.json`` file in the
repository to the target path '-'. This will dump the contents to the standard output.
.. code-block:: console
@ -900,8 +1175,8 @@ file archive as a read-only filesystem to a mountpoint on your host.
.. code-block:: console
# proxmox-backup-client mount host/backup-client/2020-01-29T11:29:22Z root.pxar /mnt/mountpoint
# ls /mnt/mountpoint
bin dev home lib32 libx32 media opt root sbin sys usr
boot etc lib lib64 lost+found mnt proc run srv tmp var
@ -916,7 +1191,7 @@ To unmount the filesystem use the ``umount`` command on the mountpoint:
.. code-block:: console
# umount /mnt/mountpoint
Login and Logout
~~~~~~~~~~~~~~~~
@ -959,8 +1234,8 @@ command:
snapshot. They will be inaccessible and unrecoverable.
Although manual removal is sometimes required, the ``prune``
command is normally used to systematically delete older backups. Prune lets
you specify which backup snapshots you want to keep. The
following retention options are available:
@ -1035,7 +1310,7 @@ Garbage Collection
~~~~~~~~~~~~~~~~~~
The ``prune`` command removes only the backup index files, not the data
from the datastore. This task is left to the garbage collection
command. It is recommended to carry out garbage collection on a regular basis.
The garbage collection works in two phases. In the first phase, all
@ -1080,6 +1355,42 @@ unused data blocks are removed.
.. todo:: howto run garbage-collection at regular intervalls (cron)
Benchmarking
~~~~~~~~~~~~
The backup client also comes with a benchmarking tool. This tool measures
various metrics relating to compression and encryption speeds. You can run a
benchmark using the ``benchmark`` subcommand of ``proxmox-backup-client``:
.. code-block:: console
# proxmox-backup-client benchmark
Uploaded 656 chunks in 5 seconds.
Time per request: 7659 microseconds.
TLS speed: 547.60 MB/s
SHA256 speed: 585.76 MB/s
Compression speed: 1923.96 MB/s
Decompress speed: 7885.24 MB/s
AES256/GCM speed: 3974.03 MB/s
┌───────────────────────────────────┬─────────────────────┐
│ Name │ Value │
╞═══════════════════════════════════╪═════════════════════╡
│ TLS (maximal backup upload speed) │ 547.60 MB/s (93%) │
├───────────────────────────────────┼─────────────────────┤
│ SHA256 checksum computation speed │ 585.76 MB/s (28%) │
├───────────────────────────────────┼─────────────────────┤
│ ZStd level 1 compression speed │ 1923.96 MB/s (89%) │
├───────────────────────────────────┼─────────────────────┤
│ ZStd level 1 decompression speed │ 7885.24 MB/s (98%) │
├───────────────────────────────────┼─────────────────────┤
│ AES256 GCM encryption speed │ 3974.03 MB/s (104%) │
└───────────────────────────────────┴─────────────────────┘
.. note:: The percentages given in the output table correspond to a
comparison against a Ryzen 7 2700X. The TLS test connects to the
local host, so there is no network involved.
You can also pass the ``--output-format`` parameter to output stats in ``json``,
rather than the default table format.
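For example, to get machine-readable output:
.. code-block:: console
# proxmox-backup-client benchmark --output-format json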
.. _pve-integration:
@ -1096,6 +1407,10 @@ as ``user1@pbs``.
# pvesm add pbs store2 --server localhost --datastore store2
# pvesm set store2 --username user1@pbs --password <secret>
.. note:: If you would rather not pass your password as plain text, you can pass
the ``--password`` parameter, without any arguments. This will cause the
program to prompt you for a password upon entering the command.
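For example, reusing the storage name from above, the following variant will ask
for the password interactively instead of taking it on the command line:
.. code-block:: console
# pvesm set store2 --username user1@pbs --password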
If your backup server uses a self signed certificate, you need to add
the certificate fingerprint to the configuration. You can get the
fingerprint by running the following command on the backup server:

View File

@ -18,9 +18,12 @@
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
# sys.path.insert(0, os.path.abspath('.'))
# custom extensions
sys.path.append(os.path.abspath("./_ext"))
# -- Implement custom formatter for code-blocks ---------------------------
#
# * use smaller font
@ -46,7 +49,7 @@ PygmentsBridge.latex_formatter = CustomLatexFormatter
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo", "proxmox-scanrefs"]
todo_link_only = True

View File

@ -13,7 +13,8 @@
.. _Proxmox: https://www.proxmox.com
.. _Proxmox Community Forum: https://forum.proxmox.com
.. _Proxmox Virtual Environment: https://www.proxmox.com/proxmox-ve
// FIXME
.. _Proxmox Backup: https://pbs.proxmox.com/wiki/index.php/Main_Page
.. _PBS Development List: https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
.. _reStructuredText: https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html
.. _Rust: https://www.rust-lang.org/

(11 new screenshot images added as binary files; content not shown)

View File

@ -19,9 +19,9 @@ for various management tasks such as disk management.
The disk image (ISO file) provided by Proxmox includes a complete Debian system
("buster" for version 1.x) as well as all necessary packages for the `Proxmox Backup`_ server.
The installer will guide you through the setup process and allow
you to partition the local disk(s), apply basic system configurations
(e.g. timezone, language, network), and install all required packages.
The provided ISO will get you started in just a few minutes, and is the
recommended method for new and existing users.
@ -36,11 +36,11 @@ It includes the following:
* The `Proxmox Backup`_ server installer, which partitions the local
disk(s) with ext4, ext3, xfs or ZFS, and installs the operating
system
* Complete operating system (Debian Linux, 64-bit)
* Our Linux kernel with ZFS support
* Complete tool-set to administer backups and all necessary resources
@ -54,7 +54,7 @@ Install `Proxmox Backup`_ server on Debian
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxmox ships as a set of Debian packages which can be installed on top of a
standard Debian installation. After configuring the
:ref:`sysadmin_package_repositories`, you need to run:
.. code-block:: console
@ -76,12 +76,11 @@ does, please use the following:
This will install all required packages, the Proxmox kernel with ZFS_
support, and a set of common and useful packages.
.. caution:: Installing `Proxmox Backup`_ on top of an existing Debian_
installation looks easy, but it assumes that the base system and local
storage have been set up correctly. In general this is not trivial, especially
when LVM_ or ZFS_ is used. The network configuration is completely up to you
as well.
.. note:: You can access the webinterface of the Proxmox Backup Server with
your web browser, using HTTPS on port 8007. For example at
@ -103,9 +102,9 @@ After configuring the
server to store backups. Should the hypervisor server fail, you can
still access the backups.
.. note::
You can access the webinterface of the Proxmox Backup Server with your web
browser, using HTTPS on port 8007. For example at ``https://<ip-or-dns-name>:8007``
Client installation
-------------------

View File

@ -22,7 +22,7 @@ Architecture
------------
Proxmox Backup Server uses a `client-server model`_. The server stores the
backup data and provides an API to create and manage datastores. With the
API, it's also possible to manage disks and other server-side resources.
The backup client uses this API to access the backed up data. With the command
@ -143,6 +143,7 @@ Mailing Lists
Proxmox Backup Server is fully open-source and contributions are welcome! Here
is the primary communication channel for developers:
:Mailing list for developers: `PBS Development List`_
Bug Tracker

View File

@ -1,3 +1,6 @@
.. _chapter-zfs:
ZFS on Linux
------------

View File

@ -3,8 +3,8 @@
Debian Package Repositories
---------------------------
All Debian based systems use APT_ as a package management tool. The lists of
repositories are defined in ``/etc/apt/sources.list`` and the ``.list`` files found
in the ``/etc/apt/sources.d/`` directory. Updates can be installed directly
with the ``apt`` command line tool, or via the GUI.
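For example, installing pending updates from the command line could look like this
(a sketch of the usual Debian workflow):
.. code-block:: console
# apt update
# apt dist-upgrade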
@ -26,11 +26,10 @@ update``.
.. FIXME for 7.0: change security update suite to bullseye-security
In addition, you need a package repository from Proxmox to get Proxmox Backup updates.
During the Proxmox Backup beta phase, only one repository (pbstest) will be
available. Once released, an Enterprise repository for production use and a
no-subscription repository will be provided.
SecureApt
@ -39,8 +38,8 @@ SecureApt
The `Release` files in the repositories are signed with GnuPG. APT is using
these signatures to verify that all packages are from a trusted source.
If you install Proxmox Backup Server from an official ISO image, the
verification key is already installed.
If you install Proxmox Backup Server on top of Debian, download and install the
key with the following commands:
@ -136,17 +135,17 @@ During the public beta, there is a repository called ``pbstest``. This one
contains the latest packages and is heavily used by developers to test new
features.
.. .. warning:: the ``pbstest`` repository should (as the name implies)
only be used to test new features or bug fixes.
You can access this repository by adding the following line to
``/etc/apt/sources.list``:
.. code-block:: sources.list
:caption: sources.list entry for ``pbstest``
deb http://download.proxmox.com/debian/pbs buster pbstest
If you installed Proxmox Backup Server from the official beta ISO, you should
have this repository already configured in
``/etc/apt/sources.list.d/pbstest-beta.list``

View File

@ -9,7 +9,7 @@ which caters to a similar use-case.
The ``.pxar`` format is adapted to fulfill the specific needs of the Proxmox
Backup Server, for example, efficient storage of hardlinks.
The format is designed to reduce storage space needed on the server by achieving
a high level of deduplication.
Creating an Archive
^^^^^^^^^^^^^^^^^^^
@ -29,7 +29,7 @@ This will create a new archive called ``archive.pxar`` with the contents of the
By default, ``pxar`` will skip certain mountpoints and will not follow device
boundaries. This design decision is based on the primary use case of creating
archives for backups. It makes sense to not back up the contents of certain
temporary or system specific files.
To alter this behavior and follow device boundaries, use the
``--all-file-systems`` flag.
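For example (the archive and source names are placeholders):
.. code-block:: console
# pxar create backup.pxar /path/to/source --all-file-systems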
@ -66,7 +66,7 @@ All the glob patterns are relative to the ``source`` directory.
previous ones. Permutations of the same patterns lead to different results.
``pxar`` will store the list of glob match patterns passed as parameters via the
command line, in a file called ``.pxarexclude-cli`` at the root of
the archive.
If a file with this name is already present in the source folder during archive
creation, this file is not included in the archive and the file containing the
@ -85,23 +85,23 @@ The behavior is the same as described in :ref:`creating-backups`.
Extracting an Archive
^^^^^^^^^^^^^^^^^^^^^
An existing archive, ``archive.pxar``, is extracted to a ``target`` directory
with the following command:
.. code-block:: console
# pxar extract archive.pxar /path/to/target
If no target is provided, the content of the archive is extracted to the current
working directory.
In order to restore only parts of an archive, single files, and/or folders,
it is possible to pass the corresponding glob match patterns as additional
parameters or to use the patterns stored in a file:
.. code-block:: console
# pxar extract etc.pxar /restore/target/etc --pattern '**/*.conf'
The above example restores all ``.conf`` files encountered in any of the
sub-folders in the archive ``etc.pxar`` to the target ``/restore/target/etc``.

View File

@ -7,8 +7,7 @@ use proxmox::api::router::{Router, SubdirMap};
use proxmox::{sortable, identity};
use proxmox::{http_err, list_subdirs_api_method};
use crate::tools;
use crate::tools::ticket::*;
use crate::tools::ticket::{self, Empty, Ticket};
use crate::auth_helpers::*;
use crate::api2::types::*;
@ -35,27 +34,31 @@ fn authenticate_user(
bail!("user account disabled or expired.");
}
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
if password.starts_with("PBS:") {
if let Ok((_age, Some(ticket_username))) = tools::ticket::verify_rsa_ticket(public_auth_key(), "PBS", password, None, -300, ticket_lifetime) {
if *userid == ticket_username {
if let Ok(ticket_userid) = Ticket::<Userid>::parse(password)
.and_then(|ticket| ticket.verify(public_auth_key(), "PBS", None))
{
if *userid == ticket_userid {
return Ok(true);
} else {
bail!("ticket login failed - wrong userid");
}
bail!("ticket login failed - wrong userid");
}
} else if password.starts_with("PBSTERM:") {
if path.is_none() || privs.is_none() || port.is_none() {
bail!("cannot check termnal ticket without path, priv and port");
}
let path = path.unwrap();
let privilege_name = privs.unwrap();
let port = port.unwrap();
let path = path.ok_or_else(|| format_err!("missing path for termproxy ticket"))?;
let privilege_name = privs
.ok_or_else(|| format_err!("missing privilege name for termproxy ticket"))?;
let port = port.ok_or_else(|| format_err!("missing port for termproxy ticket"))?;
if let Ok((_age, _data)) =
tools::ticket::verify_term_ticket(public_auth_key(), &userid, &path, port, password)
if let Ok(Empty) = Ticket::parse(password)
.and_then(|ticket| ticket.verify(
public_auth_key(),
ticket::TERM_PREFIX,
Some(&ticket::term_aad(userid, &path, port)),
))
{
for (name, privilege) in PRIVILEGES {
if *name == privilege_name {
@ -138,7 +141,7 @@ fn create_ticket(
) -> Result<Value, Error> {
match authenticate_user(&username, &password, path, privs, port) {
Ok(true) => {
let ticket = assemble_rsa_ticket(private_auth_key(), "PBS", Some(&username), None)?;
let ticket = Ticket::new("PBS", &username)?.sign(private_auth_key(), None)?;
let token = assemble_csrf_prevention_token(csrf_secret(), &username);

View File

@ -1,6 +1,7 @@
use std::collections::{HashSet, HashMap};
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;
use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error};
use futures::*;
@ -361,7 +362,7 @@ pub fn list_snapshots (
let mut size = None;
let (comment, files) = match get_all_snapshot_files(&datastore, &info) {
let (comment, verification, files) = match get_all_snapshot_files(&datastore, &info) {
Ok((manifest, files)) => {
size = Some(files.iter().map(|x| x.size.unwrap_or(0)).sum());
// extract the first line from notes
@ -370,11 +371,21 @@ pub fn list_snapshots (
.and_then(|notes| notes.lines().next())
.map(String::from);
(comment, files)
let verify = manifest.unprotected["verify_state"].clone();
let verify: Option<SnapshotVerifyState> = match serde_json::from_value(verify) {
Ok(verify) => verify,
Err(err) => {
eprintln!("error parsing verification state : '{}'", err);
None
}
};
(comment, verify, files)
},
Err(err) => {
eprintln!("error during snapshot file listing: '{}'", err);
(
None,
None,
info
.files
@ -394,6 +405,7 @@ pub fn list_snapshots (
backup_id: group.backup_id().to_string(),
backup_time: info.backup_dir.backup_time().timestamp(),
comment,
verification,
files,
size,
owner: Some(owner),
@ -489,7 +501,7 @@ pub fn verify(
(None, None, None) => {
worker_id = store.clone();
}
_ => bail!("parameters do not spefify a backup group or snapshot"),
_ => bail!("parameters do not specify a backup group or snapshot"),
}
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
@ -501,25 +513,34 @@ pub fn verify(
userid,
to_stdout,
move |worker| {
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
let failed_dirs = if let Some(backup_dir) = backup_dir {
let mut verified_chunks = HashSet::with_capacity(1024*16);
let mut corrupt_chunks = HashSet::with_capacity(64);
let mut res = Vec::new();
if !verify_backup_dir(&datastore, &backup_dir, &mut verified_chunks, &mut corrupt_chunks, &worker)? {
if !verify_backup_dir(datastore, &backup_dir, verified_chunks, corrupt_chunks, worker.clone())? {
res.push(backup_dir.to_string());
}
res
} else if let Some(backup_group) = backup_group {
verify_backup_group(&datastore, &backup_group, &worker)?
let (_count, failed_dirs) = verify_backup_group(
datastore,
&backup_group,
verified_chunks,
corrupt_chunks,
None,
worker.clone(),
)?;
failed_dirs
} else {
verify_all_backups(&datastore, &worker)?
verify_all_backups(datastore, worker.clone())?
};
if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:");
for dir in failed_dirs {
worker.log(format!("\t{}", dir));
}
bail!("verfication failed - please check the log for details");
bail!("verification failed - please check the log for details");
}
Ok(())
},
@ -1218,7 +1239,7 @@ fn catalog(
pub const API_METHOD_PXAR_FILE_DOWNLOAD: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&pxar_file_download),
&ObjectSchema::new(
"Download single file from pxar file of a bacup snapshot. Only works if it's not encrypted.",
"Download single file from pxar file of a backup snapshot. Only works if it's not encrypted.",
&sorted!([
("store", false, &DATASTORE_SCHEMA),
("backup-type", false, &BACKUP_TYPE_SCHEMA),

View File

@ -1,6 +1,4 @@
use std::collections::HashMap;
use anyhow::{Error};
use anyhow::{format_err, Error};
use serde_json::Value;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
@ -8,9 +6,10 @@ use proxmox::api::router::SubdirMap;
use proxmox::{list_subdirs_api_method, sortable};
use crate::api2::types::*;
use crate::api2::pull::{get_pull_parameters};
use crate::api2::pull::do_sync_job;
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
use crate::server::{self, TaskListInfo, WorkerTask};
use crate::server::UPID;
use crate::config::jobstate::{Job, JobState};
use crate::tools::systemd::time::{
parse_calendar_event, compute_next_event};
@ -34,38 +33,32 @@ pub fn list_sync_jobs(
let mut list: Vec<SyncJobStatus> = config.convert_to_typed_array("sync")?;
let mut last_tasks: HashMap<String, &TaskListInfo> = HashMap::new();
let tasks = server::read_task_list()?;
for info in tasks.iter() {
let worker_id = match &info.upid.worker_id {
Some(id) => id,
_ => { continue; },
};
if let Some(last) = last_tasks.get(worker_id) {
if last.upid.starttime < info.upid.starttime {
last_tasks.insert(worker_id.to_string(), &info);
}
} else {
last_tasks.insert(worker_id.to_string(), &info);
}
}
for job in &mut list {
let mut last = 0;
if let Some(task) = last_tasks.get(&job.id) {
job.last_run_upid = Some(task.upid_str.clone());
if let Some((endtime, status)) = &task.state {
job.last_run_state = Some(String::from(status));
job.last_run_endtime = Some(*endtime);
last = *endtime;
}
}
let last_state = JobState::load("syncjob", &job.id)
.map_err(|err| format_err!("could not open statefile for {}: {}", &job.id, err))?;
let (upid, endtime, state, starttime) = match last_state {
JobState::Created { time } => (None, None, None, time),
JobState::Started { upid } => {
let parsed_upid: UPID = upid.parse()?;
(Some(upid), None, None, parsed_upid.starttime)
},
JobState::Finished { upid, state } => {
let parsed_upid: UPID = upid.parse()?;
(Some(upid), Some(state.endtime()), Some(state.to_string()), parsed_upid.starttime)
},
};
job.last_run_upid = upid;
job.last_run_state = state;
job.last_run_endtime = endtime;
let last = job.last_run_endtime.unwrap_or_else(|| starttime);
job.next_run = (|| -> Option<i64> {
let schedule = job.schedule.as_ref()?;
let event = parse_calendar_event(&schedule).ok()?;
compute_next_event(&event, last, false).ok()
// ignore errors
compute_next_event(&event, last, false).unwrap_or_else(|_| None)
})();
}
@ -84,7 +77,7 @@ pub fn list_sync_jobs(
}
)]
/// Runs the sync jobs manually.
async fn run_sync_job(
fn run_sync_job(
id: String,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
@ -95,26 +88,9 @@ async fn run_sync_job(
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let delete = sync_job.remove_vanished.unwrap_or(true);
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
let job = Job::new("syncjob", &id)?;
let upid_str = WorkerTask::spawn("syncjob", Some(id.clone()), userid, false, move |worker| async move {
worker.log(format!("sync job '{}' start", &id));
crate::client::pull::pull_store(
&worker,
&client,
&src_repo,
tgt_store.clone(),
delete,
Userid::backup_userid().clone(),
).await?;
worker.log(format!("sync job '{}' end", &id));
Ok(())
})?;
let upid_str = do_sync_job(job, sync_job, &userid, None)?;
Ok(upid_str)
}

View File

@ -38,6 +38,7 @@ pub const API_METHOD_UPGRADE_BACKUP: ApiMethod = ApiMethod::new(
("backup-id", false, &BACKUP_ID_SCHEMA),
("backup-time", false, &BACKUP_TIME_SCHEMA),
("debug", true, &BooleanSchema::new("Enable verbose debug logging.").schema()),
("benchmark", true, &BooleanSchema::new("Job is a benchmark (do not keep data).").schema()),
]),
)
).access(
@ -56,6 +57,7 @@ fn upgrade_to_backup_protocol(
async move {
let debug = param["debug"].as_bool().unwrap_or(false);
let benchmark = param["benchmark"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
@ -90,11 +92,24 @@ async move {
let backup_group = BackupGroup::new(backup_type, backup_id);
let worker_type = if backup_type == "host" && backup_id == "benchmark" {
if !benchmark {
bail!("unable to run benchmark without --benchmark flags");
}
"benchmark"
} else {
if benchmark {
bail!("benchmark flags is only allowed on 'host/benchmark'");
}
"backup"
};
// lock backup group to only allow one backup per group at a time
let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?;
// permission check
if owner != userid { // only the owner is allowed to create additional snapshots
if owner != userid && worker_type != "benchmark" {
// only the owner is allowed to create additional snapshots
bail!("backup owner check failed ({} != {})", userid, owner);
}
@ -116,14 +131,15 @@ async move {
let (path, is_new, _snap_guard) = datastore.create_locked_backup_dir(&backup_dir)?;
if !is_new { bail!("backup directory already exists."); }
WorkerTask::spawn("backup", Some(worker_id), userid.clone(), true, move |worker| {
WorkerTask::spawn(worker_type, Some(worker_id), userid.clone(), true, move |worker| {
let mut env = BackupEnvironment::new(
env_type, userid, worker.clone(), datastore, backup_dir);
env.debug = debug;
env.last_backup = last_backup;
env.log(format!("starting new backup on datastore '{}': {:?}", store, path));
env.log(format!("starting new {} on datastore '{}': {:?}", worker_type, store, path));
let service = H2Service::new(env.clone(), worker.clone(), &BACKUP_API_ROUTER, debug);
@ -160,7 +176,11 @@ async move {
req = req_fut => req,
abrt = abort_future => abrt,
};
if benchmark {
env.log("benchmark finished successfully");
env.remove_backup()?;
return Ok(());
}
match (res, env.ensure_finished()) {
(Ok(_), Ok(())) => {
env.log("backup finished successfully");

View File

@ -457,11 +457,11 @@ impl BackupEnvironment {
/// Mark backup as finished
pub fn finish_backup(&self) -> Result<(), Error> {
let mut state = self.state.lock().unwrap();
// test if all writer are correctly closed
state.ensure_unfinished()?;
if state.dynamic_writers.len() != 0 {
// test if all writer are correctly closed
if state.dynamic_writers.len() != 0 || state.fixed_writers.len() != 0 {
bail!("found open index writer - unable to finish backup");
}

View File

@ -83,6 +83,8 @@ pub fn create_sync_job(param: Value) -> Result<(), Error> {
sync::save_config(&config)?;
crate::config::jobstate::create_state_file("syncjob", &sync_job.id)?;
Ok(())
}
@ -264,6 +266,8 @@ pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error>
sync::save_config(&config)?;
crate::config::jobstate::remove_state_file("syncjob", &id)?;
Ok(())
}

View File

@ -22,6 +22,7 @@ use crate::api2::types::*;
use crate::config::acl::PRIV_SYS_CONSOLE;
use crate::server::WorkerTask;
use crate::tools;
use crate::tools::ticket::{self, Empty, Ticket};
pub mod disks;
pub mod dns;
@ -105,12 +106,11 @@ async fn termproxy(
let listener = TcpListener::bind("localhost:0")?;
let port = listener.local_addr()?.port();
let ticket = tools::ticket::assemble_term_ticket(
crate::auth_helpers::private_auth_key(),
&userid,
&path,
port,
)?;
let ticket = Ticket::new(ticket::TERM_PREFIX, &Empty)?
.sign(
crate::auth_helpers::private_auth_key(),
Some(&ticket::term_aad(&userid, &path, port)),
)?;
let mut command = Vec::new();
match cmd.as_ref().map(|x| x.as_str()) {
@ -273,17 +273,16 @@ fn upgrade_to_websocket(
) -> ApiResponseFuture {
async move {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let ticket = tools::required_string_param(&param, "vncticket")?.to_owned();
let ticket = tools::required_string_param(&param, "vncticket")?;
let port: u16 = tools::required_integer_param(&param, "port")? as u16;
// will be checked again by termproxy
tools::ticket::verify_term_ticket(
crate::auth_helpers::public_auth_key(),
&userid,
&"/system",
port,
&ticket,
)?;
Ticket::<Empty>::parse(ticket)?
.verify(
crate::auth_helpers::public_auth_key(),
ticket::TERM_PREFIX,
Some(&ticket::term_aad(&userid, "/system", port)),
)?;
let (ws, response) = WebSocket::new(parts.headers)?;

View File

@ -16,6 +16,7 @@ use crate::tools::systemd::{self, types::*};
use crate::server::WorkerTask;
use crate::api2::types::*;
use crate::config::datastore::DataStoreConfig;
#[api(
properties: {
@ -175,9 +176,69 @@ pub fn create_datastore_disk(
Ok(upid_str)
}
#[api(
protected: true,
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
name: {
schema: DATASTORE_SCHEMA,
},
}
},
access: {
permission: &Permission::Privilege(&["system", "disks"], PRIV_SYS_MODIFY, false),
},
)]
/// Remove a Filesystem mounted under '/mnt/datastore/<name>'.
pub fn delete_datastore_disk(name: String) -> Result<(), Error> {
let path = format!("/mnt/datastore/{}", name);
// path of datastore cannot be changed
let (config, _) = crate::config::datastore::config()?;
let datastores: Vec<DataStoreConfig> = config.convert_to_typed_array("datastore")?;
let conflicting_datastore: Option<DataStoreConfig> = datastores.into_iter()
.filter(|ds| ds.path == path)
.next();
if let Some(conflicting_datastore) = conflicting_datastore {
bail!("Can't remove '{}' since it's required by datastore '{}'",
conflicting_datastore.path, conflicting_datastore.name);
}
// disable systemd mount-unit
let mut mount_unit_name = systemd::escape_unit(&path, true);
mount_unit_name.push_str(".mount");
systemd::disable_unit(&mount_unit_name)?;
// delete .mount-file
let mount_unit_path = format!("/etc/systemd/system/{}", mount_unit_name);
let full_path = std::path::Path::new(&mount_unit_path);
log::info!("removing systemd mount unit {:?}", full_path);
std::fs::remove_file(&full_path)?;
// try to unmount, if that fails tell the user to reboot or unmount manually
let mut command = std::process::Command::new("umount");
command.arg(&path);
match crate::tools::run_command(command, None) {
Err(_) => bail!(
"Could not umount '{}' since it is busy. It will stay mounted \
until the next reboot or until unmounted manually!",
path
),
Ok(_) => Ok(())
}
}
const ITEM_ROUTER: Router = Router::new()
.delete(&API_METHOD_DELETE_DATASTORE_DISK);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_DATASTORE_MOUNTS)
.post(&API_METHOD_CREATE_DATASTORE_DISK);
.post(&API_METHOD_CREATE_DATASTORE_DISK)
.match_all("name", &ITEM_ROUTER);
fn create_datastore_mount_unit(

View File

@ -4,12 +4,13 @@ use anyhow::{bail, Error};
use serde_json::{json, Value};
use proxmox::{sortable, identity, list_subdirs_api_method};
use proxmox::api::{api, Router, Permission};
use proxmox::api::{api, Router, Permission, RpcEnvironment};
use proxmox::api::router::SubdirMap;
use proxmox::api::schema::*;
use crate::api2::types::*;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::server::WorkerTask;
static SERVICE_NAME_LIST: [&str; 7] = [
"proxmox-backup",
@ -181,31 +182,43 @@ fn get_service_state(
Ok(json_service_state(&service, status))
}
fn run_service_command(service: &str, cmd: &str) -> Result<Value, Error> {
fn run_service_command(service: &str, cmd: &str, userid: Userid) -> Result<Value, Error> {
// fixme: run background worker (fork_worker) ???
let workerid = format!("srv{}", &cmd);
let cmd = match cmd {
"start"|"stop"|"restart"=> cmd,
"reload" => "try-reload-or-restart", // some services do not implement reload
"start"|"stop"|"restart"=> cmd.to_string(),
"reload" => "try-reload-or-restart".to_string(), // some services do not implement reload
_ => bail!("unknown service command '{}'", cmd),
};
let service = service.to_string();
if service == "proxmox-backup" && cmd == "stop" {
bail!("invalid service cmd '{} {}' cannot stop essential service!", service, cmd);
}
let upid = WorkerTask::new_thread(
&workerid,
Some(service.clone()),
userid,
false,
move |_worker| {
let real_service_name = real_service_name(service);
if service == "proxmox-backup" && cmd == "stop" {
bail!("invalid service cmd '{} {}' cannot stop essential service!", service, cmd);
}
let status = Command::new("systemctl")
.args(&[cmd, real_service_name])
.status()?;
let real_service_name = real_service_name(&service);
if !status.success() {
bail!("systemctl {} failed with {}", cmd, status);
}
let status = Command::new("systemctl")
.args(&[&cmd, real_service_name])
.status()?;
Ok(Value::Null)
if !status.success() {
bail!("systemctl {} failed with {}", cmd, status);
}
Ok(())
}
)?;
Ok(upid.into())
}
#[api(
@ -228,11 +241,14 @@ fn run_service_command(service: &str, cmd: &str) -> Result<Value, Error> {
fn start_service(
service: String,
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
log::info!("starting service {}", service);
run_service_command(&service, "start")
run_service_command(&service, "start", userid)
}
#[api(
@ -255,11 +271,14 @@ fn start_service(
fn stop_service(
service: String,
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
log::info!("stopping service {}", service);
run_service_command(&service, "stop")
run_service_command(&service, "stop", userid)
}
#[api(
@ -282,15 +301,18 @@ fn stop_service(
fn restart_service(
service: String,
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
log::info!("re-starting service {}", service);
if &service == "proxmox-backup-proxy" {
// special case, avoid aborting running tasks
run_service_command(&service, "reload")
run_service_command(&service, "reload", userid)
} else {
run_service_command(&service, "restart")
run_service_command(&service, "restart", userid)
}
}
@ -314,11 +336,14 @@ fn restart_service(
fn reload_service(
service: String,
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
log::info!("reloading service {}", service);
run_service_command(&service, "reload")
run_service_command(&service, "reload", userid)
}

View File

@ -10,7 +10,7 @@ use proxmox::{identity, list_subdirs_api_method, sortable};
use crate::tools;
use crate::api2::types::*;
use crate::server::{self, UPID};
use crate::server::{self, UPID, TaskState};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
@ -105,9 +105,9 @@ async fn get_task_status(
if crate::server::worker_is_active(&upid).await? {
result["status"] = Value::from("running");
} else {
let exitstatus = crate::server::upid_read_status(&upid).unwrap_or(String::from("unknown"));
let exitstatus = crate::server::upid_read_status(&upid).unwrap_or(TaskState::Unknown { endtime: 0 });
result["status"] = Value::from("stopped");
result["exitstatus"] = Value::from(exitstatus);
result["exitstatus"] = Value::from(exitstatus.to_string());
};
Ok(result)
@ -352,8 +352,9 @@ pub fn list_tasks(
if let Some(ref state) = info.state {
if running { continue; }
if errors && state.1 == "OK" {
continue;
match state {
crate::server::TaskState::OK { .. } if errors => continue,
_ => {},
}
}

View File

@ -2,6 +2,7 @@
use std::sync::{Arc};
use anyhow::{format_err, Error};
use futures::{select, future::FutureExt};
use proxmox::api::api;
use proxmox::api::{ApiMethod, Router, RpcEnvironment, Permission};
@ -12,6 +13,8 @@ use crate::client::{HttpClient, HttpClientOptions, BackupRepository, pull::pull_
use crate::api2::types::*;
use crate::config::{
remote,
sync::SyncJobConfig,
jobstate::Job,
acl::{PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ},
cached_user_info::CachedUserInfo,
};
@ -62,6 +65,68 @@ pub async fn get_pull_parameters(
Ok((client, src_repo, tgt_store))
}
pub fn do_sync_job(
mut job: Job,
sync_job: SyncJobConfig,
userid: &Userid,
schedule: Option<String>,
) -> Result<String, Error> {
let job_id = job.jobname().to_string();
let worker_type = job.jobtype().to_string();
let upid_str = WorkerTask::spawn(
&worker_type,
Some(job.jobname().to_string()),
userid.clone(),
false,
move |worker| async move {
job.start(&worker.upid().to_string())?;
let worker2 = worker.clone();
let worker_future = async move {
let delete = sync_job.remove_vanished.unwrap_or(true);
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
worker.log(format!("Starting datastore sync job '{}'", job_id));
if let Some(event_str) = schedule {
worker.log(format!("task triggered by schedule '{}'", event_str));
}
worker.log(format!("Sync datastore '{}' from '{}/{}'",
sync_job.store, sync_job.remote, sync_job.remote_store));
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, Userid::backup_userid().clone()).await?;
worker.log(format!("sync job '{}' end", &job_id));
Ok(())
};
let mut abort_future = worker2.abort_future().map(|_| Err(format_err!("sync aborted")));
let res = select!{
worker = worker_future.fuse() => worker,
abort = abort_future => abort,
};
let status = worker2.create_state(&res);
match job.finish(status) {
Ok(_) => {},
Err(err) => {
eprintln!("could not finish job state: {}", err);
}
}
res
})?;
Ok(upid_str)
}
#[api(
input: {
properties: {

View File

@ -74,6 +74,9 @@ use crate::config::acl::{
},
},
},
access: {
permission: &Permission::Anybody,
},
)]
/// List Datastore usages and estimates
fn datastore_status(

View File

@ -6,6 +6,7 @@ use proxmox::const_regex;
use proxmox::{IPRE, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::CryptMode;
use crate::server::UPID;
#[macro_use]
mod macros;
@ -379,6 +380,25 @@ pub struct GroupListItem {
pub owner: Option<Userid>,
}
#[api(
properties: {
upid: {
schema: UPID_SCHEMA
},
state: {
type: String
},
},
)]
#[derive(Serialize, Deserialize)]
/// Task properties.
pub struct SnapshotVerifyState {
/// UPID of the verify task
pub upid: UPID,
/// State of the verification. "failed" or "ok"
pub state: String,
}
#[api(
properties: {
"backup-type": {
@ -390,6 +410,14 @@ pub struct GroupListItem {
"backup-time": {
schema: BACKUP_TIME_SCHEMA,
},
comment: {
schema: SINGLE_LINE_COMMENT_SCHEMA,
optional: true,
},
verification: {
type: SnapshotVerifyState,
optional: true,
},
files: {
items: {
schema: BACKUP_ARCHIVE_NAME_SCHEMA
@ -411,6 +439,9 @@ pub struct SnapshotListItem {
/// The first line from manifest "notes"
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
/// The result of the last run verify task
#[serde(skip_serializing_if="Option::is_none")]
pub verification: Option<SnapshotVerifyState>,
/// List of contained archive files.
pub files: Vec<BackupContent>,
/// Overall snapshot size (sum of all archive sizes).
@ -528,6 +559,8 @@ pub struct GarbageCollectionStatus {
pub pending_bytes: u64,
/// Number of pending chunks (pending removal - kept for safety).
pub pending_chunks: usize,
/// Number of chunks marked as .bad by verify that have been removed by GC.
pub removed_bad: usize,
}
impl Default for GarbageCollectionStatus {
@ -542,6 +575,7 @@ impl Default for GarbageCollectionStatus {
removed_chunks: 0,
pending_bytes: 0,
pending_chunks: 0,
removed_bad: 0,
}
}
}
@ -595,7 +629,7 @@ impl From<crate::server::TaskListInfo> for TaskListItem {
fn from(info: crate::server::TaskListInfo) -> Self {
let (endtime, status) = info
.state
.map_or_else(|| (None, None), |(a,b)| (Some(a), Some(b)));
.map_or_else(|| (None, None), |a| (Some(a.endtime()), Some(a.to_string())));
TaskListItem {
upid: info.upid_str,

View File

@ -9,7 +9,7 @@
//! with `String`, meaning you can only make references to it.
//! * [`Realm`]: an owned realm (`String` equivalent).
//! * [`RealmRef`]: a borrowed realm (`str` equivalent).
//! * [`Userid`]: an owned user id (`"user@realm"`). Note that this does not have a separate
//! borrowed type.
//!
//! Note that `Username`s are not unique, therefore they do not implement `Eq` and cannot be

View File

@ -120,6 +120,8 @@ macro_rules! PROXMOX_BACKUP_READER_PROTOCOL_ID_V1 {
/// Unix system user used by proxmox-backup-proxy
pub const BACKUP_USER_NAME: &str = "backup";
/// Unix system group used by proxmox-backup-proxy
pub const BACKUP_GROUP_NAME: &str = "backup";
/// Return User info for the 'backup' user (``getpwnam_r(3)``)
pub fn backup_user() -> Result<nix::unistd::User, Error> {
@ -129,6 +131,14 @@ pub fn backup_user() -> Result<nix::unistd::User, Error> {
}
}
/// Return Group info for the 'backup' group (``getgrnam(3)``)
pub fn backup_group() -> Result<nix::unistd::Group, Error> {
match nix::unistd::Group::from_name(BACKUP_GROUP_NAME)? {
Some(group) => Ok(group),
None => bail!("Unable to lookup backup user."),
}
}
mod file_formats;
pub use file_formats::*;

View File

@ -45,6 +45,31 @@ pub struct BackupGroup {
backup_id: String,
}
impl std::cmp::Ord for BackupGroup {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
let type_order = self.backup_type.cmp(&other.backup_type);
if type_order != std::cmp::Ordering::Equal {
return type_order;
}
// try to compare IDs numerically
let id_self = self.backup_id.parse::<u64>();
let id_other = other.backup_id.parse::<u64>();
match (id_self, id_other) {
(Ok(id_self), Ok(id_other)) => id_self.cmp(&id_other),
(Ok(_), Err(_)) => std::cmp::Ordering::Less,
(Err(_), Ok(_)) => std::cmp::Ordering::Greater,
_ => self.backup_id.cmp(&other.backup_id),
}
}
}
impl std::cmp::PartialOrd for BackupGroup {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl BackupGroup {
pub fn new<T: Into<String>, U: Into<String>>(backup_type: T, backup_id: U) -> Self {

View File

@ -104,7 +104,7 @@ impl ChunkStore {
}
let percentage = (i*100)/(64*1024);
if percentage != last_percentage {
eprintln!("Percentage done: {}", percentage);
eprintln!("{}%", percentage);
last_percentage = percentage;
}
}
@ -187,7 +187,7 @@ impl ChunkStore {
pub fn get_chunk_iterator(
&self,
) -> Result<
impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize)> + std::iter::FusedIterator,
impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize, bool)> + std::iter::FusedIterator,
Error
> {
use nix::dir::Dir;
@ -219,19 +219,21 @@ impl ChunkStore {
Some(Ok(entry)) => {
// skip files if they're not a hash
let bytes = entry.file_name().to_bytes();
if bytes.len() != 64 {
if bytes.len() != 64 && bytes.len() != 64 + ".0.bad".len() {
continue;
}
if !bytes.iter().all(u8::is_ascii_hexdigit) {
if !bytes.iter().take(64).all(u8::is_ascii_hexdigit) {
continue;
}
return Some((Ok(entry), percentage));
let bad = bytes.ends_with(".bad".as_bytes());
return Some((Ok(entry), percentage, bad));
}
Some(Err(err)) => {
// stop after first error
done = true;
// and pass the error through:
return Some((Err(err), percentage));
return Some((Err(err), percentage, false));
}
None => (), // open next directory
}
@ -261,7 +263,7 @@ impl ChunkStore {
// other errors are fatal, so end our iteration
done = true;
// and pass the error through:
return Some((Err(format_err!("unable to read subdir '{}' - {}", subdir, err)), percentage));
return Some((Err(format_err!("unable to read subdir '{}' - {}", subdir, err)), percentage, false));
}
}
}
@ -280,6 +282,7 @@ impl ChunkStore {
worker: &WorkerTask,
) -> Result<(), Error> {
use nix::sys::stat::fstatat;
use nix::unistd::{unlinkat, UnlinkatFlags};
let mut min_atime = phase1_start_time - 3600*24; // at least 24h (see mount option relatime)
@ -292,10 +295,10 @@ impl ChunkStore {
let mut last_percentage = 0;
let mut chunk_count = 0;
for (entry, percentage) in self.get_chunk_iterator()? {
for (entry, percentage, bad) in self.get_chunk_iterator()? {
if last_percentage != percentage {
last_percentage = percentage;
worker.log(format!("percentage done: {}, chunk count: {}", percentage, chunk_count));
worker.log(format!("percentage done: phase2 {}% (processed {} chunks)", percentage, chunk_count));
}
worker.fail_on_abort()?;
@ -321,14 +324,47 @@ impl ChunkStore {
let lock = self.mutex.lock();
if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) {
if stat.st_atime < min_atime {
if bad {
// filename validity checked in iterator
let orig_filename = std::ffi::CString::new(&filename.to_bytes()[..64])?;
match fstatat(
dirfd,
orig_filename.as_c_str(),
nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW)
{
Ok(_) => {
match unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
Err(err) =>
worker.warn(format!(
"unlinking corrupt chunk {:?} failed on store '{}' - {}",
filename,
self.name,
err,
)),
Ok(_) => {
status.removed_bad += 1;
status.removed_bytes += stat.st_size as u64;
}
}
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
// chunk hasn't been rewritten yet, keep .bad file
},
Err(err) => {
// some other error, warn user and keep .bad file around too
worker.warn(format!(
"error during stat on '{:?}' - {}",
orig_filename,
err,
));
}
}
} else if stat.st_atime < min_atime {
//let age = now - stat.st_atime;
//println!("UNLINK {} {:?}", age/(3600*24), filename);
let res = unsafe { libc::unlinkat(dirfd, filename.as_ptr(), 0) };
if res != 0 {
let err = nix::Error::last();
if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
bail!(
"unlink chunk {:?} failed on store '{}' - {}",
"unlinking chunk {:?} failed on store '{}' - {}",
filename,
self.name,
err,
@ -366,6 +402,7 @@ impl ChunkStore {
if let Ok(metadata) = std::fs::metadata(&chunk_path) {
if metadata.is_file() {
self.touch_chunk(digest)?;
return Ok((true, metadata.len()));
} else {
bail!("Got unexpected file type on store '{}' for chunk {}", self.name, digest_str);

View File

@ -304,7 +304,7 @@ impl DataBlob {
let digest = match config {
Some(config) => config.compute_digest(data),
None => openssl::sha::sha256(&data),
None => openssl::sha::sha256(data),
};
if &digest != expected_digest {
bail!("detected chunk with wrong digest.");

View File

@ -21,6 +21,7 @@ use super::{DataBlob, ArchiveType, archive_type};
use crate::config::datastore;
use crate::server::WorkerTask;
use crate::tools;
use crate::tools::format::HumanByte;
use crate::tools::fs::{lock_dir_noblock, DirLockGuard};
use crate::api2::types::{GarbageCollectionStatus, Userid};
@ -84,7 +85,7 @@ impl DataStore {
pub fn get_chunk_iterator(
&self,
) -> Result<
impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize)>,
impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize, bool)>,
Error
> {
self.chunk_store.get_chunk_iterator()
@ -299,7 +300,7 @@ impl DataStore {
/// And set the owner to 'userid'. If the group already exists, it returns the
/// current owner (instead of setting the owner).
///
/// This also acquires an exclusive lock on the directory and returns the lock guard.
pub fn create_locked_backup_group(
&self,
backup_group: &BackupGroup,
@ -429,6 +430,12 @@ impl DataStore {
let image_list = self.list_images()?;
let image_count = image_list.len();
let mut done = 0;
let mut last_percentage: usize = 0;
for path in image_list {
worker.fail_on_abort()?;
@ -443,6 +450,14 @@ impl DataStore {
self.index_mark_used_chunks(index, &path, status, worker)?;
}
}
done += 1;
let percentage = done*100/image_count;
if percentage > last_percentage {
worker.log(format!("percentage done: phase1 {}% ({} of {} index files)",
percentage, done, image_count));
last_percentage = percentage;
}
}
Ok(())
@ -462,9 +477,8 @@ impl DataStore {
let _exclusive_lock = self.chunk_store.try_exclusive_lock()?;
let now = unsafe { libc::time(std::ptr::null_mut()) };
let oldest_writer = self.chunk_store.oldest_writer().unwrap_or(now);
let phase1_start_time = unsafe { libc::time(std::ptr::null_mut()) };
let oldest_writer = self.chunk_store.oldest_writer().unwrap_or(phase1_start_time);
let mut gc_status = GarbageCollectionStatus::default();
gc_status.upid = Some(worker.to_string());
@ -474,26 +488,29 @@ impl DataStore {
self.mark_used_chunks(&mut gc_status, &worker)?;
worker.log("Start GC phase2 (sweep unused chunks)");
self.chunk_store.sweep_unused_chunks(oldest_writer, now, &mut gc_status, &worker)?;
self.chunk_store.sweep_unused_chunks(oldest_writer, phase1_start_time, &mut gc_status, &worker)?;
worker.log(&format!("Removed bytes: {}", gc_status.removed_bytes));
worker.log(&format!("Removed garbage: {}", HumanByte::from(gc_status.removed_bytes)));
worker.log(&format!("Removed chunks: {}", gc_status.removed_chunks));
if gc_status.pending_bytes > 0 {
worker.log(&format!("Pending removals: {} bytes ({} chunks)", gc_status.pending_bytes, gc_status.pending_chunks));
worker.log(&format!("Pending removals: {} (in {} chunks)", HumanByte::from(gc_status.pending_bytes), gc_status.pending_chunks));
}
if gc_status.removed_bad > 0 {
worker.log(&format!("Removed bad files: {}", gc_status.removed_bad));
}
worker.log(&format!("Original data bytes: {}", gc_status.index_data_bytes));
worker.log(&format!("Original data usage: {}", HumanByte::from(gc_status.index_data_bytes)));
if gc_status.index_data_bytes > 0 {
let comp_per = (gc_status.disk_bytes*100)/gc_status.index_data_bytes;
worker.log(&format!("Disk bytes: {} ({} %)", gc_status.disk_bytes, comp_per));
let comp_per = (gc_status.disk_bytes as f64 * 100.)/gc_status.index_data_bytes as f64;
worker.log(&format!("On-Disk usage: {} ({:.2}%)", HumanByte::from(gc_status.disk_bytes), comp_per));
}
worker.log(&format!("Disk chunks: {}", gc_status.disk_chunks));
worker.log(&format!("On-Disk chunks: {}", gc_status.disk_chunks));
if gc_status.disk_chunks > 0 {
let avg_chunk = gc_status.disk_bytes/(gc_status.disk_chunks as u64);
worker.log(&format!("Average chunk size: {}", avg_chunk));
worker.log(&format!("Average chunk size: {}", HumanByte::from(avg_chunk)));
}
*self.last_gc_status.lock().unwrap() = gc_status;

View File

@ -11,7 +11,6 @@ use anyhow::{bail, format_err, Error};
use proxmox::tools::io::ReadExt;
use proxmox::tools::uuid::Uuid;
use proxmox::tools::vec;
use proxmox::tools::mmap::Mmap;
use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
@ -41,6 +40,24 @@ proxmox::static_assert_size!(DynamicIndexHeader, 4096);
// pub data: DynamicIndexHeaderData,
// }
impl DynamicIndexHeader {
/// Convenience method to allocate a zero-initialized header struct.
pub fn zeroed() -> Box<Self> {
unsafe {
Box::from_raw(std::alloc::alloc_zeroed(std::alloc::Layout::new::<Self>()) as *mut Self)
}
}
pub fn as_bytes(&self) -> &[u8] {
unsafe {
std::slice::from_raw_parts(
self as *const Self as *const u8,
std::mem::size_of::<Self>(),
)
}
}
}
#[derive(Clone, Debug)]
#[repr(C)]
pub struct DynamicEntry {
@ -489,27 +506,16 @@ impl DynamicIndexWriter {
let mut writer = BufWriter::with_capacity(1024 * 1024, file);
let header_size = std::mem::size_of::<DynamicIndexHeader>();
// todo: use static assertion when available in rust
if header_size != 4096 {
panic!("got unexpected header size");
}
let ctime = epoch_now_u64()?;
let uuid = Uuid::generate();
let mut buffer = vec::zeroed(header_size);
let header = crate::tools::map_struct_mut::<DynamicIndexHeader>(&mut buffer)?;
let mut header = DynamicIndexHeader::zeroed();
header.magic = super::DYNAMIC_SIZED_CHUNK_INDEX_1_0;
header.ctime = u64::to_le(ctime);
header.uuid = *uuid.as_bytes();
header.index_csum = [0u8; 32];
writer.write_all(&buffer)?;
// header.index_csum = [0u8; 32];
writer.write_all(header.as_bytes())?;
let csum = Some(openssl::sha::Sha256::new());

View File

@ -145,7 +145,7 @@ impl BackupManifest {
Ok(())
}
// Generate canonical json
fn to_canonical_json(value: &Value) -> Result<Vec<u8>, Error> {
let mut data = Vec::new();
Self::write_canonical_json(value, &mut data)?;

View File

@ -1,16 +1,20 @@
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use std::sync::atomic::{Ordering, AtomicUsize};
use std::time::Instant;
use anyhow::{bail, Error};
use anyhow::{bail, format_err, Error};
use crate::server::WorkerTask;
use crate::api2::types::*;
use super::{
DataStore, BackupGroup, BackupDir, BackupInfo, IndexFile,
DataStore, DataBlob, BackupGroup, BackupDir, BackupInfo, IndexFile,
CryptMode,
FileInfo, ArchiveType, archive_type,
};
fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
fn verify_blob(datastore: Arc<DataStore>, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
let blob = datastore.load_blob(backup_dir, &info.filename)?;
@ -35,38 +39,125 @@ fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -
}
}
fn rename_corrupted_chunk(
datastore: Arc<DataStore>,
digest: &[u8;32],
worker: Arc<WorkerTask>,
) {
let (path, digest_str) = datastore.chunk_path(digest);
let mut counter = 0;
let mut new_path = path.clone();
loop {
new_path.set_file_name(format!("{}.{}.bad", digest_str, counter));
if new_path.exists() && counter < 9 { counter += 1; } else { break; }
}
match std::fs::rename(&path, &new_path) {
Ok(_) => {
worker.log(format!("corrupted chunk renamed to {:?}", &new_path));
},
Err(err) => {
match err.kind() {
std::io::ErrorKind::NotFound => { /* ignored */ },
_ => worker.log(format!("could not rename corrupted chunk {:?} - {}", &path, err))
}
}
};
}
// We use a separate thread to read/load chunks, so that we can do
// load and verify in parallel to increase performance.
fn chunk_reader_thread(
datastore: Arc<DataStore>,
index: Box<dyn IndexFile + Send>,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
errors: Arc<AtomicUsize>,
worker: Arc<WorkerTask>,
) -> std::sync::mpsc::Receiver<(DataBlob, [u8;32], u64)> {
let (sender, receiver) = std::sync::mpsc::sync_channel(3); // buffer up to 3 chunks
std::thread::spawn(move|| {
for pos in 0..index.index_count() {
let info = index.chunk_info(pos).unwrap();
let size = info.range.end - info.range.start;
if verified_chunks.lock().unwrap().contains(&info.digest) {
continue; // already verified
}
if corrupt_chunks.lock().unwrap().contains(&info.digest) {
let digest_str = proxmox::tools::digest_to_hex(&info.digest);
worker.log(format!("chunk {} was marked as corrupt", digest_str));
errors.fetch_add(1, Ordering::SeqCst);
continue;
}
match datastore.load_chunk(&info.digest) {
Err(err) => {
corrupt_chunks.lock().unwrap().insert(info.digest);
worker.log(format!("can't verify chunk, load failed - {}", err));
errors.fetch_add(1, Ordering::SeqCst);
rename_corrupted_chunk(datastore.clone(), &info.digest, worker.clone());
continue;
}
Ok(chunk) => {
if sender.send((chunk, info.digest, size)).is_err() {
break; // receiver gone - simply stop
}
}
}
}
});
receiver
}
fn verify_index_chunks(
datastore: &DataStore,
index: Box<dyn IndexFile>,
verified_chunks: &mut HashSet<[u8;32]>,
corrupt_chunks: &mut HashSet<[u8; 32]>,
datastore: Arc<DataStore>,
index: Box<dyn IndexFile + Send>,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
crypt_mode: CryptMode,
worker: &WorkerTask,
worker: Arc<WorkerTask>,
) -> Result<(), Error> {
let mut errors = 0;
for pos in 0..index.index_count() {
let errors = Arc::new(AtomicUsize::new(0));
let start_time = Instant::now();
let chunk_channel = chunk_reader_thread(
datastore.clone(),
index,
verified_chunks.clone(),
corrupt_chunks.clone(),
errors.clone(),
worker.clone(),
);
let mut read_bytes = 0;
let mut decoded_bytes = 0;
loop {
worker.fail_on_abort()?;
crate::tools::fail_on_shutdown()?;
let info = index.chunk_info(pos).unwrap();
let size = info.range.end - info.range.start;
let chunk = match datastore.load_chunk(&info.digest) {
Err(err) => {
corrupt_chunks.insert(info.digest);
worker.log(format!("can't verify chunk, load failed - {}", err));
errors += 1;
continue;
},
Ok(chunk) => chunk,
let (chunk, digest, size) = match chunk_channel.recv() {
Ok(tuple) => tuple,
Err(std::sync::mpsc::RecvError) => break,
};
read_bytes += chunk.raw_size();
decoded_bytes += size;
let chunk_crypt_mode = match chunk.crypt_mode() {
Err(err) => {
corrupt_chunks.insert(info.digest);
corrupt_chunks.lock().unwrap().insert(digest);
worker.log(format!("can't verify chunk, unknown CryptMode - {}", err));
errors += 1;
errors.fetch_add(1, Ordering::SeqCst);
continue;
},
Ok(mode) => mode,
@ -78,27 +169,33 @@ fn verify_index_chunks(
chunk_crypt_mode,
crypt_mode
));
errors += 1;
errors.fetch_add(1, Ordering::SeqCst);
}
if !verified_chunks.contains(&info.digest) {
if !corrupt_chunks.contains(&info.digest) {
if let Err(err) = chunk.verify_unencrypted(size as usize, &info.digest) {
corrupt_chunks.insert(info.digest);
worker.log(format!("{}", err));
errors += 1;
} else {
verified_chunks.insert(info.digest);
}
} else {
let digest_str = proxmox::tools::digest_to_hex(&info.digest);
worker.log(format!("chunk {} was marked as corrupt", digest_str));
errors += 1;
}
if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
corrupt_chunks.lock().unwrap().insert(digest);
worker.log(format!("{}", err));
errors.fetch_add(1, Ordering::SeqCst);
rename_corrupted_chunk(datastore.clone(), &digest, worker.clone());
} else {
verified_chunks.lock().unwrap().insert(digest);
}
}
if errors > 0 {
let elapsed = start_time.elapsed().as_secs_f64();
let read_bytes_mib = (read_bytes as f64)/(1024.0*1024.0);
let decoded_bytes_mib = (decoded_bytes as f64)/(1024.0*1024.0);
let read_speed = read_bytes_mib/elapsed;
let decode_speed = decoded_bytes_mib/elapsed;
let error_count = errors.load(Ordering::SeqCst);
worker.log(format!(" verified {:.2}/{:.2} MiB in {:.2} seconds, speed {:.2}/{:.2} MiB/s ({} errors)",
read_bytes_mib, decoded_bytes_mib, elapsed, read_speed, decode_speed, error_count));
if errors.load(Ordering::SeqCst) > 0 {
bail!("chunks could not be verified");
}
@ -106,12 +203,12 @@ fn verify_index_chunks(
}
fn verify_fixed_index(
datastore: &DataStore,
datastore: Arc<DataStore>,
backup_dir: &BackupDir,
info: &FileInfo,
verified_chunks: &mut HashSet<[u8;32]>,
corrupt_chunks: &mut HashSet<[u8;32]>,
worker: &WorkerTask,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: Arc<WorkerTask>,
) -> Result<(), Error> {
let mut path = backup_dir.relative_path();
@ -132,12 +229,12 @@ fn verify_fixed_index(
}
fn verify_dynamic_index(
datastore: &DataStore,
datastore: Arc<DataStore>,
backup_dir: &BackupDir,
info: &FileInfo,
verified_chunks: &mut HashSet<[u8;32]>,
corrupt_chunks: &mut HashSet<[u8;32]>,
worker: &WorkerTask,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: Arc<WorkerTask>,
) -> Result<(), Error> {
let mut path = backup_dir.relative_path();
@ -167,14 +264,14 @@ fn verify_dynamic_index(
/// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted
pub fn verify_backup_dir(
datastore: &DataStore,
datastore: Arc<DataStore>,
backup_dir: &BackupDir,
verified_chunks: &mut HashSet<[u8;32]>,
corrupt_chunks: &mut HashSet<[u8;32]>,
worker: &WorkerTask
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: Arc<WorkerTask>
) -> Result<bool, Error> {
let manifest = match datastore.load_manifest(&backup_dir) {
let mut manifest = match datastore.load_manifest(&backup_dir) {
Ok((manifest, _)) => manifest,
Err(err) => {
worker.log(format!("verify {}:{} - manifest load error: {}", datastore.name(), backup_dir, err));
@ -186,40 +283,53 @@ pub fn verify_backup_dir(
let mut error_count = 0;
let mut verify_result = "ok";
for info in manifest.files() {
let result = proxmox::try_block!({
worker.log(format!(" check {}", info.filename));
match archive_type(&info.filename)? {
ArchiveType::FixedIndex =>
verify_fixed_index(
&datastore,
datastore.clone(),
&backup_dir,
info,
verified_chunks,
corrupt_chunks,
worker
verified_chunks.clone(),
corrupt_chunks.clone(),
worker.clone(),
),
ArchiveType::DynamicIndex =>
verify_dynamic_index(
&datastore,
datastore.clone(),
&backup_dir,
info,
verified_chunks,
corrupt_chunks,
worker
verified_chunks.clone(),
corrupt_chunks.clone(),
worker.clone(),
),
ArchiveType::Blob => verify_blob(&datastore, &backup_dir, info),
ArchiveType::Blob => verify_blob(datastore.clone(), &backup_dir, info),
}
});
worker.fail_on_abort()?;
crate::tools::fail_on_shutdown()?;
if let Err(err) = result {
worker.log(format!("verify {}:{}/{} failed: {}", datastore.name(), backup_dir, info.filename, err));
error_count += 1;
verify_result = "failed";
}
}
let verify_state = SnapshotVerifyState {
state: verify_result.to_string(),
upid: worker.upid().clone(),
};
manifest.unprotected["verify_state"] = serde_json::to_value(verify_state)?;
datastore.store_manifest(&backup_dir, serde_json::to_value(manifest)?)
.map_err(|err| format_err!("unable to store manifest blob - {}", err))?;
Ok(error_count == 0)
}
@ -228,32 +338,45 @@ pub fn verify_backup_dir(
/// Errors are logged to the worker log.
///
/// Returns
/// - Ok(failed_dirs) where failed_dirs had verification errors
/// - Ok((count, failed_dirs)) where failed_dirs had verification errors
/// - Err(_) if task was aborted
pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &WorkerTask) -> Result<Vec<String>, Error> {
pub fn verify_backup_group(
datastore: Arc<DataStore>,
group: &BackupGroup,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
progress: Option<(usize, usize)>, // (done, snapshot_count)
worker: Arc<WorkerTask>,
) -> Result<(usize, Vec<String>), Error> {
let mut errors = Vec::new();
let mut list = match group.list_backups(&datastore.base_path()) {
Ok(list) => list,
Err(err) => {
worker.log(format!("verify group {}:{} - unable to list backups: {}", datastore.name(), group, err));
return Ok(errors);
return Ok((0, errors));
}
};
worker.log(format!("verify group {}:{}", datastore.name(), group));
let mut verified_chunks = HashSet::with_capacity(1024*16); // start with 16384 chunks (up to 65GB)
let mut corrupt_chunks = HashSet::with_capacity(64); // start with 64 chunks since we assume there are few corrupt ones
let (done, snapshot_count) = progress.unwrap_or((0, list.len()));
let mut count = 0;
BackupInfo::sort_list(&mut list, false); // newest first
for info in list {
if !verify_backup_dir(datastore, &info.backup_dir, &mut verified_chunks, &mut corrupt_chunks, worker)?{
count += 1;
if !verify_backup_dir(datastore.clone(), &info.backup_dir, verified_chunks.clone(), corrupt_chunks.clone(), worker.clone())?{
errors.push(info.backup_dir.to_string());
}
if snapshot_count != 0 {
let pos = done + count;
let percentage = ((pos as f64) * 100.0)/(snapshot_count as f64);
worker.log(format!("percentage done: {:.2}% ({} of {} snapshots)", percentage, pos, snapshot_count));
}
}
Ok(errors)
Ok((count, errors))
}
/// Verify all backups inside a datastore
@ -263,23 +386,49 @@ pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &
/// Returns
/// - Ok(failed_dirs) where failed_dirs had verification errors
/// - Err(_) if task was aborted
pub fn verify_all_backups(datastore: &DataStore, worker: &WorkerTask) -> Result<Vec<String>, Error> {
pub fn verify_all_backups(datastore: Arc<DataStore>, worker: Arc<WorkerTask>) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
let list = match BackupGroup::list_groups(&datastore.base_path()) {
Ok(list) => list,
let mut list = match BackupGroup::list_groups(&datastore.base_path()) {
Ok(list) => list
.into_iter()
.filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
.collect::<Vec<BackupGroup>>(),
Err(err) => {
worker.log(format!("verify datastore {} - unable to list backups: {}", datastore.name(), err));
return Ok(errors);
}
};
worker.log(format!("verify datastore {}", datastore.name()));
list.sort_unstable();
let mut snapshot_count = 0;
for group in list.iter() {
snapshot_count += group.list_backups(&datastore.base_path())?.len();
}
// start with 16384 chunks (up to 65GB)
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
// start with 64 chunks since we assume there are few corrupt ones
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
worker.log(format!("verify datastore {} ({} snapshots)", datastore.name(), snapshot_count));
let mut done = 0;
for group in list {
let mut group_errors = verify_backup_group(datastore, &group, worker)?;
let (count, mut group_errors) = verify_backup_group(
datastore.clone(),
&group,
verified_chunks.clone(),
corrupt_chunks.clone(),
Some((done, snapshot_count)),
worker.clone(),
)?;
errors.append(&mut group_errors);
done += count;
}
Ok(errors)
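The chunk_reader_thread introduced above is the core of this change: chunk loading runs on its own thread and hands results to the verifier over a bounded std::sync::mpsc::sync_channel, so reading and hashing overlap while at most a handful of chunks sit in memory. A simplified, self-contained sketch of that producer/consumer pattern (the load and verify steps are stand-ins, not the datastore API):

use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // buffer up to 3 items in flight, like the chunk reader above
    let (sender, receiver) = sync_channel::<Vec<u8>>(3);

    // producer: load data on a separate thread
    let loader = thread::spawn(move || {
        for i in 0..10u8 {
            let chunk = vec![i; 16]; // stands in for datastore.load_chunk()
            if sender.send(chunk).is_err() {
                break; // receiver gone - simply stop
            }
        }
    });

    // consumer: verify on the current thread while the loader keeps reading
    while let Ok(chunk) = receiver.recv() {
        let checksum: u32 = chunk.iter().map(|b| *b as u32).sum();
        println!("verified {} bytes (checksum {})", chunk.len(), checksum);
    }

    loader.join().unwrap();
}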

View File

@ -37,6 +37,7 @@ async fn run() -> Result<(), Error> {
config::update_self_signed_cert(false)?;
proxmox_backup::rrd::create_rrdb_dir()?;
proxmox_backup::config::jobstate::create_jobstate_dir()?;
if let Err(err) = generate_auth_key() {
bail!("unable to generate auth key - {}", err);

View File

@ -1026,6 +1026,7 @@ async fn create_backup(
&backup_id,
backup_time,
verbose,
false
).await?;
let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await {

View File

@ -9,7 +9,7 @@ use proxmox_backup::tools;
use proxmox_backup::config;
use proxmox_backup::api2::{self, types::* };
use proxmox_backup::client::*;
use proxmox_backup::tools::ticket::*;
use proxmox_backup::tools::ticket::Ticket;
use proxmox_backup::auth_helpers::*;
mod proxmox_backup_manager;
@ -59,12 +59,8 @@ fn connect() -> Result<HttpClient, Error> {
.verify_cert(false); // not required for connection to localhost
let client = if uid.is_root() {
let ticket = assemble_rsa_ticket(
private_auth_key(),
"PBS",
Some(Userid::root_userid()),
None,
)?;
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
options = options.password(Some(ticket));
HttpClient::new("localhost", Userid::root_userid(), options)?
} else {

View File

@ -18,13 +18,21 @@ use proxmox_backup::server::{ApiConfig, rest::*};
use proxmox_backup::auth_helpers::*;
use proxmox_backup::tools::disks::{ DiskManage, zfs_pool_stats };
fn main() {
use proxmox_backup::api2::pull::do_sync_job;
fn main() -> Result<(), Error> {
proxmox_backup::tools::setup_safe_path_env();
if let Err(err) = proxmox_backup::tools::runtime::main(run()) {
eprintln!("Error: {}", err);
std::process::exit(-1);
let backup_uid = proxmox_backup::backup::backup_user()?.uid;
let backup_gid = proxmox_backup::backup::backup_group()?.gid;
let running_uid = nix::unistd::Uid::effective();
let running_gid = nix::unistd::Gid::effective();
if running_uid != backup_uid || running_gid != backup_gid {
bail!("proxy not running as backup user or group (got uid {} gid {})", running_uid, running_gid);
}
proxmox_backup::tools::runtime::main(run())
}
async fn run() -> Result<(), Error> {
@ -41,15 +49,11 @@ async fn run() -> Result<(), Error> {
let mut config = ApiConfig::new(
buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PUBLIC)?;
// add default dirs which includes jquery and bootstrap
// my $base = '/usr/share/libpve-http-server-perl';
// add_dirs($self->{dirs}, '/css/' => "$base/css/");
// add_dirs($self->{dirs}, '/js/' => "$base/js/");
// add_dirs($self->{dirs}, '/fonts/' => "$base/fonts/");
config.add_alias("novnc", "/usr/share/novnc-pve");
config.add_alias("extjs", "/usr/share/javascript/extjs");
config.add_alias("fontawesome", "/usr/share/fonts-font-awesome");
config.add_alias("xtermjs", "/usr/share/pve-xtermjs");
config.add_alias("locale", "/usr/share/pbs-i18n");
config.add_alias("widgettoolkit", "/usr/share/javascript/proxmox-widget-toolkit");
config.add_alias("css", "/usr/share/javascript/proxmox-backup/css");
config.add_alias("docs", "/usr/share/doc/proxmox-backup/html");
@ -83,8 +87,6 @@ async fn run() -> Result<(), Error> {
let acceptor = Arc::clone(&acceptor);
async move {
sock.set_nodelay(true).unwrap();
sock.set_send_buffer_size(1024*1024).unwrap();
sock.set_recv_buffer_size(1024*1024).unwrap();
Ok(tokio_openssl::accept(&acceptor, sock)
.await
.ok() // handshake errors aren't fatal, so return None to filter
@ -298,7 +300,8 @@ async fn schedule_datastore_garbage_collection() {
};
let next = match compute_next_event(&event, last, false) {
Ok(next) => next,
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
@ -409,7 +412,8 @@ async fn schedule_datastore_prune() {
};
let next = match compute_next_event(&event, last, false) {
Ok(next) => next,
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
@ -472,10 +476,7 @@ async fn schedule_datastore_prune() {
async fn schedule_datastore_sync_jobs() {
use proxmox_backup::{
backup::DataStore,
client::{ HttpClient, HttpClientOptions, BackupRepository, pull::pull_store },
server::{ WorkerTask },
config::{ sync::{self, SyncJobConfig}, remote::{self, Remote} },
config::{ sync::{self, SyncJobConfig}, jobstate::{self, Job} },
tools::systemd::time::{ parse_calendar_event, compute_next_event },
};
@ -487,14 +488,6 @@ async fn schedule_datastore_sync_jobs() {
Ok((config, _digest)) => config,
};
let remote_config = match remote::config() {
Err(err) => {
eprintln!("unable to read remote config - {}", err);
return;
}
Ok((config, _digest)) => config,
};
for (job_id, (_, job_config)) in config.sections {
let job_config: SyncJobConfig = match serde_json::from_value(job_config) {
Ok(c) => c,
@ -519,22 +512,17 @@ async fn schedule_datastore_sync_jobs() {
let worker_type = "syncjob";
let last = match lookup_last_worker(worker_type, &job_id) {
Ok(Some(upid)) => {
if proxmox_backup::server::worker_is_active_local(&upid) {
continue;
}
upid.starttime
},
Ok(None) => 0,
let last = match jobstate::last_run_time(worker_type, &job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("lookup_last_job_start failed: {}", err);
eprintln!("could not get last run time of {} {}: {}", worker_type, job_id, err);
continue;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(next) => next,
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
@ -550,57 +538,15 @@ async fn schedule_datastore_sync_jobs() {
};
if next > now { continue; }
let job_id2 = job_id.clone();
let tgt_store = match DataStore::lookup_datastore(&job_config.store) {
Ok(datastore) => datastore,
Err(err) => {
eprintln!("lookup_datastore '{}' failed - {}", job_config.store, err);
continue;
}
};
let remote: Remote = match remote_config.lookup("remote", &job_config.remote) {
Ok(remote) => remote,
Err(err) => {
eprintln!("remote_config lookup failed: {}", err);
continue;
}
let job = match Job::new(worker_type, &job_id) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let userid = Userid::backup_userid().clone();
let delete = job_config.remove_vanished.unwrap_or(true);
if let Err(err) = WorkerTask::spawn(
worker_type,
Some(job_id.clone()),
userid.clone(),
false,
move |worker| async move {
worker.log(format!("Starting datastore sync job '{}'", job_id));
worker.log(format!("task triggered by schedule '{}'", event_str));
worker.log(format!("Sync datastore '{}' from '{}/{}'",
job_config.store, job_config.remote, job_config.remote_store));
let options = HttpClientOptions::new()
.password(Some(remote.password.clone()))
.fingerprint(remote.fingerprint.clone());
let client = HttpClient::new(&remote.host, &remote.userid, options)?;
let _auth_info = client.login() // make sure we can auth
.await
.map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), job_config.remote_store);
pull_store(&worker, &client, &src_repo, tgt_store, delete, userid).await?;
Ok(())
}
) {
eprintln!("unable to start datastore sync job {} - {}", job_id2, err);
if let Err(err) = do_sync_job(job, job_config, &userid, Some(event_str)) {
eprintln!("unable to start datastore sync job {} - {}", &job_id, err);
}
}
}

View File

@ -68,7 +68,7 @@ struct Speed {
struct BenchmarkResult {
/// TLS upload speed
tls: Speed,
/// SHA256 checksum comptation speed
/// SHA256 checksum computation speed
sha256: Speed,
/// ZStd level 1 compression speed
compress: Speed,
@ -187,7 +187,7 @@ fn render_result(
.header("TLS (maximal backup upload speed)")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("sha256")
.header("SHA256 checksum comptation speed")
.header("SHA256 checksum computation speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("compress")
.header("ZStd level 1 compression speed")
@ -226,6 +226,7 @@ async fn test_upload_speed(
"benchmark",
backup_time,
false,
true
).await?;
if verbose { eprintln!("Start TLS speed test"); }

View File

@ -141,7 +141,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let (manifest, _) = client.download_manifest().await?;
let file_info = manifest.lookup_file_info(&archive_name)?;
let file_info = manifest.lookup_file_info(&server_archive_name)?;
if server_archive_name.ends_with(".didx") {
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;

View File

@ -239,7 +239,7 @@ pub fn zpool_commands() -> CommandLineInterface {
.insert("create",
CliCommand::new(&API_METHOD_CREATE_ZPOOL)
.arg_param(&["name"])
.completion_cb("devices", complete_disk_name) // fixme: comlete the list
.completion_cb("devices", complete_disk_name) // fixme: complete the list
);
cmd_def.into()

View File

@ -53,6 +53,7 @@ impl BackupWriter {
backup_id: &str,
backup_time: DateTime<Utc>,
debug: bool,
benchmark: bool
) -> Result<Arc<BackupWriter>, Error> {
let param = json!({
@ -60,7 +61,8 @@ impl BackupWriter {
"backup-id": backup_id,
"backup-time": backup_time.timestamp(),
"store": datastore,
"debug": debug
"debug": debug,
"benchmark": benchmark
});
let req = HttpClient::request_builder(
@ -629,7 +631,7 @@ impl BackupWriter {
})
}
/// Upload speed test - prints result ot stderr
/// Upload speed test - prints result to stderr
pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> {
let mut data = vec![];

View File

@ -292,7 +292,6 @@ impl HttpClient {
let mut httpc = hyper::client::HttpConnector::new();
httpc.set_nodelay(true); // important for h2 download performance!
httpc.set_recv_buffer_size(Some(1024*1024)); //important for h2 download performance!
httpc.enforce_http(false); // we want https...
let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());

View File

@ -18,6 +18,7 @@ use crate::buildcfg;
pub mod acl;
pub mod cached_user_info;
pub mod datastore;
pub mod jobstate;
pub mod network;
pub mod remote;
pub mod sync;

263
src/config/jobstate.rs Normal file
View File

@ -0,0 +1,263 @@
//! Generic JobState handling
//!
//! A 'Job' can have 3 states
//! - Created, when a schedule was created but never executed
//! - Started, when a job is running right now
//! - Finished, when a job was running in the past
//!
//! and is identified by 2 values: jobtype and jobname (e.g. 'syncjob' and 'myfirstsyncjob')
//!
//! This module provides 2 helper structs to handle those conditions:
//! 'Job' which handles locking and writing to a file
//! 'JobState' which is the actual state
//!
//! an example usage would be
//! ```no_run
//! # use anyhow::{bail, Error};
//! # use proxmox_backup::server::TaskState;
//! # use proxmox_backup::config::jobstate::*;
//! # fn some_code() -> TaskState { TaskState::OK { endtime: 0 } }
//! # fn code() -> Result<(), Error> {
//! // locks the correct file under /var/lib
//! // or fails if someone else holds the lock
//! let mut job = match Job::new("jobtype", "jobname") {
//! Ok(job) => job,
//! Err(err) => bail!("could not lock jobstate"),
//! };
//!
//! // job holds the lock, we can start it
//! job.start("someupid")?;
//! // do something
//! let task_state = some_code();
//! job.finish(task_state)?;
//!
//! // release the lock
//! drop(job);
//! # Ok(())
//! # }
//!
//! ```
use std::fs::File;
use std::path::{Path, PathBuf};
use std::time::Duration;
use anyhow::{bail, format_err, Error};
use proxmox::tools::fs::{
create_path, file_read_optional_string, open_file_locked, replace_file, CreateOptions,
};
use serde::{Deserialize, Serialize};
use crate::server::{upid_read_status, worker_is_active_local, TaskState, UPID};
use crate::tools::epoch_now_u64;
#[serde(rename_all = "kebab-case")]
#[derive(Serialize, Deserialize)]
/// Represents the State of a specific Job
pub enum JobState {
/// A job was created at 'time', but never started/finished
Created { time: i64 },
/// The Job was last started in 'upid',
Started { upid: String },
/// The Job was last started in 'upid', which finished with 'state'
Finished { upid: String, state: TaskState },
}
/// Represents a Job and holds the correct lock
pub struct Job {
jobtype: String,
jobname: String,
/// The State of the job
pub state: JobState,
_lock: File,
}
const JOB_STATE_BASEDIR: &str = "/var/lib/proxmox-backup/jobstates";
/// Create the jobstate dir with correct permissions
pub fn create_jobstate_dir() -> Result<(), Error> {
let backup_user = crate::backup::backup_user()?;
let opts = CreateOptions::new()
.owner(backup_user.uid)
.group(backup_user.gid);
create_path(JOB_STATE_BASEDIR, None, Some(opts))
.map_err(|err: Error| format_err!("unable to create jobstate dir - {}", err))?;
Ok(())
}
fn get_path(jobtype: &str, jobname: &str) -> PathBuf {
let mut path = PathBuf::from(JOB_STATE_BASEDIR);
path.push(format!("{}-{}.json", jobtype, jobname));
path
}
fn get_lock<P>(path: P) -> Result<File, Error>
where
P: AsRef<Path>,
{
let mut path = path.as_ref().to_path_buf();
path.set_extension("lck");
let lock = open_file_locked(&path, Duration::new(10, 0))?;
let backup_user = crate::backup::backup_user()?;
nix::unistd::chown(&path, Some(backup_user.uid), Some(backup_user.gid))?;
Ok(lock)
}
/// Removes the statefile of a job; this is useful if we delete a job
pub fn remove_state_file(jobtype: &str, jobname: &str) -> Result<(), Error> {
let mut path = get_path(jobtype, jobname);
let _lock = get_lock(&path)?;
std::fs::remove_file(&path).map_err(|err| {
format_err!(
"cannot remove statefile for {} - {}: {}",
jobtype,
jobname,
err
)
})?;
path.set_extension("lck");
// ignore errors
let _ = std::fs::remove_file(&path).map_err(|err| {
format_err!(
"cannot remove lockfile for {} - {}: {}",
jobtype,
jobname,
err
)
});
Ok(())
}
/// Creates the statefile with the state 'Created'
/// overwrites if it exists already
pub fn create_state_file(jobtype: &str, jobname: &str) -> Result<(), Error> {
let mut job = Job::new(jobtype, jobname)?;
job.write_state()
}
/// Returns the last run time of a job by reading the statefile
/// Note that this is not locked
pub fn last_run_time(jobtype: &str, jobname: &str) -> Result<i64, Error> {
match JobState::load(jobtype, jobname)? {
JobState::Created { time } => Ok(time),
JobState::Started { upid } | JobState::Finished { upid, .. } => {
let upid: UPID = upid
.parse()
.map_err(|err| format_err!("could not parse upid from state: {}", err))?;
Ok(upid.starttime)
}
}
}
impl JobState {
/// Loads and deserializes the jobstate from type and name.
/// When the loaded state indicates a started UPID,
/// we check if it has already stopped and return the correct state.
///
/// This does not update the state in the file.
pub fn load(jobtype: &str, jobname: &str) -> Result<Self, Error> {
if let Some(state) = file_read_optional_string(get_path(jobtype, jobname))? {
match serde_json::from_str(&state)? {
JobState::Started { upid } => {
let parsed: UPID = upid
.parse()
.map_err(|err| format_err!("error parsing upid: {}", err))?;
if !worker_is_active_local(&parsed) {
let state = upid_read_status(&parsed)
.map_err(|err| format_err!("error reading upid log status: {}", err))?;
Ok(JobState::Finished { upid, state })
} else {
Ok(JobState::Started { upid })
}
}
other => Ok(other),
}
} else {
Ok(JobState::Created {
time: epoch_now_u64()? as i64 - 30,
})
}
}
}
impl Job {
/// Creates a new instance of a job with the correct lock held
/// (it will be held until the job is dropped again).
///
/// This does not load the state from the file, to do that,
/// 'load' must be called
pub fn new(jobtype: &str, jobname: &str) -> Result<Self, Error> {
let path = get_path(jobtype, jobname);
let _lock = get_lock(&path)?;
Ok(Self {
jobtype: jobtype.to_string(),
jobname: jobname.to_string(),
state: JobState::Created {
time: epoch_now_u64()? as i64,
},
_lock,
})
}
/// Start the job and update the statefile accordingly
/// Fails if the job was already started
pub fn start(&mut self, upid: &str) -> Result<(), Error> {
match self.state {
JobState::Started { .. } => {
bail!("cannot start job that is started!");
}
_ => {}
}
self.state = JobState::Started {
upid: upid.to_string(),
};
self.write_state()
}
/// Finish the job and update the statefile accordingly with the given taskstate
/// Fails if the job was not yet started
pub fn finish(&mut self, state: TaskState) -> Result<(), Error> {
let upid = match &self.state {
JobState::Created { .. } => bail!("cannot finish when not started"),
JobState::Started { upid } => upid,
JobState::Finished { upid, .. } => upid,
}
.to_string();
self.state = JobState::Finished { upid, state };
self.write_state()
}
pub fn jobtype(&self) -> &str {
&self.jobtype
}
pub fn jobname(&self) -> &str {
&self.jobname
}
fn write_state(&mut self) -> Result<(), Error> {
let serialized = serde_json::to_string(&self.state)?;
let path = get_path(&self.jobtype, &self.jobname);
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0644);
// set the correct owner/group/permissions while saving file
// owner(rw) = backup, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(backup_user.uid)
.group(backup_user.gid);
replace_file(path, serialized.as_bytes(), options)
}
}
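For reference, write_state stores the state as plain JSON via serde's default externally tagged enum encoding with the kebab-case variant names declared above, so a freshly started job ends up roughly as {"started":{"upid":"UPID:..."}} and a never-run one as {"created":{"time":1599721525}} (UPID and timestamp here are invented for illustration).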

View File

@ -600,4 +600,101 @@ mod test {
Ok(())
}
#[test]
fn test_network_config_parser_no_blank_1() -> Result<(), Error> {
let input = "auto lo\n\
iface lo inet loopback\n\
iface lo inet6 loopback\n\
auto ens18\n\
iface ens18 inet static\n\
\taddress 192.168.20.144/20\n\
\tgateway 192.168.16.1\n\
# comment\n\
iface ens20 inet static\n\
\taddress 192.168.20.145/20\n\
iface ens21 inet manual\n\
iface ens22 inet manual\n";
let mut parser = NetworkParser::new(&input.as_bytes()[..]);
let config = parser.parse_interfaces(None)?;
let output = String::try_from(config)?;
let expected = "auto lo\n\
iface lo inet loopback\n\
\n\
iface lo inet6 loopback\n\
\n\
auto ens18\n\
iface ens18 inet static\n\
\taddress 192.168.20.144/20\n\
\tgateway 192.168.16.1\n\
#comment\n\
\n\
iface ens20 inet static\n\
\taddress 192.168.20.145/20\n\
\n\
iface ens21 inet manual\n\
\n\
iface ens22 inet manual\n\
\n";
assert_eq!(output, expected);
Ok(())
}
#[test]
fn test_network_config_parser_no_blank_2() -> Result<(), Error> {
// Adapted from bug 2926
let input = "### Hetzner Online GmbH installimage\n\
\n\
source /etc/network/interfaces.d/*\n\
\n\
auto lo\n\
iface lo inet loopback\n\
iface lo inet6 loopback\n\
\n\
auto enp4s0\n\
iface enp4s0 inet static\n\
\taddress 10.10.10.10/24\n\
\tgateway 10.10.10.1\n\
\t# route 10.10.20.10/24 via 10.10.20.1\n\
\tup route add -net 10.10.20.10 netmask 255.255.255.0 gw 10.10.20.1 dev enp4s0\n\
\n\
iface enp4s0 inet6 static\n\
\taddress fe80::5496:35ff:fe99:5a6a/64\n\
\tgateway fe80::1\n";
let mut parser = NetworkParser::new(&input.as_bytes()[..]);
let config = parser.parse_interfaces(None)?;
let output = String::try_from(config)?;
let expected = "### Hetzner Online GmbH installimage\n\
\n\
source /etc/network/interfaces.d/*\n\
\n\
auto lo\n\
iface lo inet loopback\n\
\n\
iface lo inet6 loopback\n\
\n\
auto enp4s0\n\
iface enp4s0 inet static\n\
\taddress 10.10.10.10/24\n\
\tgateway 10.10.10.1\n\
\t# route 10.10.20.10/24 via 10.10.20.1\n\
\tup route add -net 10.10.20.10 netmask 255.255.255.0 gw 10.10.20.1 dev enp4s0\n\
\n\
iface enp4s0 inet6 static\n\
\taddress fe80::5496:35ff:fe99:5a6a/64\n\
\tgateway fe80::1\n\
\n";
assert_eq!(output, expected);
Ok(())
}
}

View File

@ -210,9 +210,7 @@ impl <R: BufRead> NetworkParser<R> {
self.eat(Token::Newline)?;
continue;
}
Token::Newline => break,
Token::EOF => break,
unexpected => bail!("unexpected token {:?} (expected iface attribute)", unexpected),
_ => break,
}
match self.peek()? {

View File

@ -29,6 +29,7 @@ use super::ApiConfig;
use crate::auth_helpers::*;
use crate::api2::types::Userid;
use crate::tools;
use crate::tools::ticket::Ticket;
use crate::config::cached_user_info::CachedUserInfo;
extern "C" { fn tzset(); }
@ -312,7 +313,13 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
Ok(resp)
}
fn get_index(userid: Option<Userid>, token: Option<String>, api: &Arc<ApiConfig>, parts: Parts) -> Response<Body> {
fn get_index(
userid: Option<Userid>,
token: Option<String>,
language: Option<String>,
api: &Arc<ApiConfig>,
parts: Parts,
) -> Response<Body> {
let nodename = proxmox::tools::nodename();
let userid = userid.as_ref().map(|u| u.as_str()).unwrap_or("");
@ -332,10 +339,18 @@ fn get_index(userid: Option<Userid>, token: Option<String>, api: &Arc<ApiConfig>
}
}
let mut lang = String::from("");
if let Some(language) = language {
if Path::new(&format!("/usr/share/pbs-i18n/pbs-lang-{}.js", language)).exists() {
lang = language;
}
}
let data = json!({
"NodeName": nodename,
"UserName": userid,
"CSRFPreventionToken": token,
"language": lang,
"debug": debug,
});
@ -440,12 +455,14 @@ async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body
}
}
fn extract_auth_data(headers: &http::HeaderMap) -> (Option<String>, Option<String>) {
fn extract_auth_data(headers: &http::HeaderMap) -> (Option<String>, Option<String>, Option<String>) {
let mut ticket = None;
let mut language = None;
if let Some(raw_cookie) = headers.get("COOKIE") {
if let Ok(cookie) = raw_cookie.to_str() {
ticket = tools::extract_auth_cookie(cookie, "PBSAuthCookie");
ticket = tools::extract_cookie(cookie, "PBSAuthCookie");
language = tools::extract_cookie(cookie, "PBSLangCookie");
}
}
@ -454,7 +471,7 @@ fn extract_auth_data(headers: &http::HeaderMap) -> (Option<String>, Option<Strin
_ => None,
};
(ticket, token)
(ticket, token, language)
}
fn check_auth(
@ -463,17 +480,11 @@ fn check_auth(
token: &Option<String>,
user_info: &CachedUserInfo,
) -> Result<Userid, Error> {
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
let userid = match ticket {
Some(ticket) => match tools::ticket::verify_rsa_ticket(public_auth_key(), "PBS", &ticket, None, -300, ticket_lifetime) {
Ok((_age, Some(userid))) => userid,
Ok((_, None)) => bail!("ticket without username."),
Err(err) => return Err(err),
}
None => bail!("missing ticket"),
};
let ticket = ticket.as_ref().map(String::as_str);
let userid: Userid = Ticket::parse(&ticket.ok_or_else(|| format_err!("missing ticket"))?)?
.verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?;
if !user_info.is_active_user(&userid) {
bail!("user account disabled or expired.");
@ -531,7 +542,7 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
) {
// explicitly allow those calls without auth
} else {
let (ticket, token) = extract_auth_data(&parts.headers);
let (ticket, token, _) = extract_auth_data(&parts.headers);
match check_auth(&method, &ticket, &token, &user_info) {
Ok(userid) => rpcenv.set_user(Some(userid.to_string())),
Err(err) => {
@ -578,20 +589,20 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
}
if comp_len == 0 {
let (ticket, token) = extract_auth_data(&parts.headers);
let (ticket, token, language) = extract_auth_data(&parts.headers);
if ticket != None {
match check_auth(&method, &ticket, &token, &user_info) {
Ok(userid) => {
let new_token = assemble_csrf_prevention_token(csrf_secret(), &userid);
return Ok(get_index(Some(userid), Some(new_token), &api, parts));
return Ok(get_index(Some(userid), Some(new_token), language, &api, parts));
}
_ => {
tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await;
return Ok(get_index(None, None, &api, parts));
return Ok(get_index(None, None, language, &api, parts));
}
}
} else {
return Ok(get_index(None, None, &api, parts));
return Ok(get_index(None, None, language, &api, parts));
}
} else {
let filename = api.find_alias(&components);

View File

@ -2,9 +2,9 @@ use std::sync::atomic::{AtomicUsize, Ordering};
use anyhow::{bail, Error};
use chrono::Local;
use lazy_static::lazy_static;
use regex::Regex;
use proxmox::api::schema::{ApiStringFormat, Schema, StringSchema};
use proxmox::const_regex;
use proxmox::sys::linux::procfs;
use crate::api2::types::Userid;
@ -20,6 +20,7 @@ use crate::api2::types::Userid;
/// ```
/// Please note that we use tokio, so a single thread can run multiple
/// tasks.
// #[api] - manually implemented API type
#[derive(Debug, Clone)]
pub struct UPID {
/// The Unix PID
@ -40,7 +41,26 @@ pub struct UPID {
pub node: String,
}
proxmox::forward_serialize_to_display!(UPID);
proxmox::forward_deserialize_to_from_str!(UPID);
const_regex! {
pub PROXMOX_UPID_REGEX = concat!(
r"^UPID:(?P<node>[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?):(?P<pid>[0-9A-Fa-f]{8}):",
r"(?P<pstart>[0-9A-Fa-f]{8,9}):(?P<task_id>[0-9A-Fa-f]{8,16}):(?P<starttime>[0-9A-Fa-f]{8}):",
r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<userid>[^:\s]+):$"
);
}
pub const PROXMOX_UPID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_UPID_REGEX);
impl UPID {
pub const API_SCHEMA: Schema = StringSchema::new("Unique Process/Task Identifier")
.min_length("UPID:N:12345678:12345678:12345678:::".len())
.max_length(128) // arbitrary
.format(&PROXMOX_UPID_FORMAT)
.schema();
/// Create a new UPID
pub fn new(
@ -92,17 +112,7 @@ impl std::str::FromStr for UPID {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
lazy_static! {
static ref REGEX: Regex = Regex::new(concat!(
r"^UPID:(?P<node>[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?):(?P<pid>[0-9A-Fa-f]{8}):",
r"(?P<pstart>[0-9A-Fa-f]{8,9}):(?P<task_id>[0-9A-Fa-f]{8,16}):(?P<starttime>[0-9A-Fa-f]{8}):",
r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<userid>[^:\s]+):$"
)).unwrap();
}
if let Some(cap) = REGEX.captures(s) {
if let Some(cap) = PROXMOX_UPID_REGEX.captures(s) {
Ok(UPID {
pid: i32::from_str_radix(&cap["pid"], 16).unwrap(),
pstart: u64::from_str_radix(&cap["pstart"], 16).unwrap(),

View File

@ -1,6 +1,6 @@
use std::collections::HashMap;
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::io::{Read, BufRead, BufReader};
use std::panic::UnwindSafe;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
@ -11,6 +11,7 @@ use futures::*;
use lazy_static::lazy_static;
use nix::unistd::Pid;
use serde_json::{json, Value};
use serde::{Serialize, Deserialize};
use tokio::sync::oneshot;
use proxmox::sys::linux::procfs;
@ -155,7 +156,7 @@ pub async fn abort_worker(upid: UPID) -> Result<(), Error> {
super::send_command(socketname, cmd).map_ok(|_| ()).await
}
fn parse_worker_status_line(line: &str) -> Result<(String, UPID, Option<(i64, String)>), Error> {
fn parse_worker_status_line(line: &str) -> Result<(String, UPID, Option<TaskState>), Error> {
let data = line.splitn(3, ' ').collect::<Vec<&str>>();
@ -165,7 +166,8 @@ fn parse_worker_status_line(line: &str) -> Result<(String, UPID, Option<(i64, St
1 => Ok((data[0].to_owned(), data[0].parse::<UPID>()?, None)),
3 => {
let endtime = i64::from_str_radix(data[1], 16)?;
Ok((data[0].to_owned(), data[0].parse::<UPID>()?, Some((endtime, data[2].to_owned()))))
let state = TaskState::from_endtime_and_message(endtime, data[2])?;
Ok((data[0].to_owned(), data[0].parse::<UPID>()?, Some(state)))
}
_ => bail!("wrong number of components"),
}
@ -189,9 +191,12 @@ pub fn create_task_log_dirs() -> Result<(), Error> {
Ok(())
}
/// Read exits status from task log file
pub fn upid_read_status(upid: &UPID) -> Result<String, Error> {
let mut status = String::from("unknown");
/// Read endtime (time of last log line) and exitstatus from task log file
/// If there is not a single line with a valid datetime, we assume the
/// starttime to be the endtime
pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> {
let mut status = TaskState::Unknown { endtime: upid.starttime };
let path = upid.log_path();
@ -202,22 +207,36 @@ pub fn upid_read_status(upid: &UPID) -> Result<String, Error> {
use std::io::SeekFrom;
let _ = file.seek(SeekFrom::End(-8192)); // ignore errors
let reader = BufReader::new(file);
let mut data = Vec::with_capacity(8192);
file.read_to_end(&mut data)?;
for line in reader.lines() {
let line = line?;
// task logs should end with newline, we do not want it here
if data[data.len()-1] == b'\n' {
data.pop();
}
let mut iter = line.splitn(2, ": TASK ");
if iter.next() == None { continue; }
match iter.next() {
None => continue,
Some(rest) => {
if rest == "OK" {
status = String::from(rest);
} else if rest.starts_with("WARNINGS: ") {
status = String::from(rest);
} else if rest.starts_with("ERROR: ") {
status = String::from(&rest[7..]);
let last_line = {
let mut start = 0;
for pos in (0..data.len()).rev() {
if data[pos] == b'\n' {
start = pos + 1;
break;
}
}
&data[start..]
};
let last_line = std::str::from_utf8(last_line)
.map_err(|err| format_err!("upid_read_status: utf8 parse failed: {}", err))?;
let mut iter = last_line.splitn(2, ": ");
if let Some(time_str) = iter.next() {
if let Ok(endtime) = chrono::DateTime::parse_from_rfc3339(time_str) {
let endtime = endtime.timestamp();
if let Some(rest) = iter.next().and_then(|rest| rest.strip_prefix("TASK ")) {
if let Ok(state) = TaskState::from_endtime_and_message(endtime, rest) {
status = state;
}
}
}
@ -226,6 +245,76 @@ pub fn upid_read_status(upid: &UPID) -> Result<String, Error> {
Ok(status)
}
/// Task State
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
pub enum TaskState {
/// The Task ended with an undefined state
Unknown { endtime: i64 },
/// The Task ended and there were no errors or warnings
OK { endtime: i64 },
/// The Task had 'count' amount of warnings and no errors
Warning { count: u64, endtime: i64 },
/// The Task ended with the error described in 'message'
Error { message: String, endtime: i64 },
}
impl TaskState {
pub fn endtime(&self) -> i64 {
match *self {
TaskState::Unknown { endtime } => endtime,
TaskState::OK { endtime } => endtime,
TaskState::Warning { endtime, .. } => endtime,
TaskState::Error { endtime, .. } => endtime,
}
}
fn result_text(&self) -> String {
match self {
TaskState::Error { message, .. } => format!("TASK ERROR: {}", message),
other => format!("TASK {}", other),
}
}
fn from_endtime_and_message(endtime: i64, s: &str) -> Result<Self, Error> {
if s == "unknown" {
Ok(TaskState::Unknown { endtime })
} else if s == "OK" {
Ok(TaskState::OK { endtime })
} else if s.starts_with("WARNINGS: ") {
let count: u64 = s[10..].parse()?;
Ok(TaskState::Warning{ count, endtime })
} else if s.len() > 0 {
let message = if s.starts_with("ERROR: ") { &s[7..] } else { s }.to_string();
Ok(TaskState::Error{ message, endtime })
} else {
bail!("unable to parse Task Status '{}'", s);
}
}
}
impl std::cmp::PartialOrd for TaskState {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.endtime().cmp(&other.endtime()))
}
}
impl std::cmp::Ord for TaskState {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
self.endtime().cmp(&other.endtime())
}
}
impl std::fmt::Display for TaskState {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
TaskState::Unknown { .. } => write!(f, "unknown"),
TaskState::OK { .. }=> write!(f, "OK"),
TaskState::Warning { count, .. } => write!(f, "WARNINGS: {}", count),
TaskState::Error { message, .. } => write!(f, "{}", message),
}
}
}
/// Task details including parsed UPID
///
/// If there is no `state`, the task is still running.
@ -236,9 +325,7 @@ pub struct TaskListInfo {
/// UPID string representation
pub upid_str: String,
/// Task `(endtime, status)` if already finished
///
/// The `status` is either `unknown`, `OK`, `WARN`, or `ERROR: ...`
pub state: Option<(i64, String)>, // endtime, status
pub state: Option<TaskState>, // endtime, status
}
// atomically read/update the task list, update status of finished tasks
@ -278,14 +365,14 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
None => {
println!("Detected stopped UPID {}", upid_str);
let status = upid_read_status(&upid)
.unwrap_or_else(|_| String::from("unknown"));
.unwrap_or_else(|_| TaskState::Unknown { endtime: Local::now().timestamp() });
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((Local::now().timestamp(), status))
upid, upid_str, state: Some(status)
});
},
Some((endtime, status)) => {
Some(status) => {
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((endtime, status))
upid, upid_str, state: Some(status)
})
}
}
@ -321,7 +408,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
task_list.sort_unstable_by(|b, a| { // latest on top
match (&a.state, &b.state) {
(Some(s1), Some(s2)) => s1.0.cmp(&s2.0),
(Some(s1), Some(s2)) => s1.cmp(&s2),
(Some(_), None) => std::cmp::Ordering::Less,
(None, Some(_)) => std::cmp::Ordering::Greater,
_ => a.upid.starttime.cmp(&b.upid.starttime),
@ -330,8 +417,8 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
let mut raw = String::new();
for info in &task_list {
if let Some((endtime, status)) = &info.state {
raw.push_str(&format!("{} {:08X} {}\n", info.upid_str, endtime, status));
if let Some(status) = &info.state {
raw.push_str(&format!("{} {:08X} {}\n", info.upid_str, status.endtime(), status));
} else {
raw.push_str(&info.upid_str);
raw.push('\n');
@ -473,8 +560,6 @@ impl WorkerTask {
{
println!("register worker thread");
let (p, c) = oneshot::channel::<()>();
let worker = WorkerTask::new(worker_type, worker_id, userid, to_stdout)?;
let upid_str = worker.upid.to_string();
@ -495,31 +580,30 @@ impl WorkerTask {
};
worker.log_result(&result);
p.send(()).unwrap();
});
tokio::spawn(c.map(|_| ()));
Ok(upid_str)
}
/// get the Text of the result
pub fn get_log_text(&self, result: &Result<(), Error>) -> String {
/// create state from self and a result
pub fn create_state(&self, result: &Result<(), Error>) -> TaskState {
let warn_count = self.data.lock().unwrap().warn_count;
let endtime = Local::now().timestamp();
if let Err(err) = result {
format!("ERROR: {}", err)
TaskState::Error { message: err.to_string(), endtime }
} else if warn_count > 0 {
format!("WARNINGS: {}", warn_count)
TaskState::Warning { count: warn_count, endtime }
} else {
"OK".to_string()
TaskState::OK { endtime }
}
}
/// Log task result, remove task from running list
pub fn log_result(&self, result: &Result<(), Error>) {
self.log(format!("TASK {}", self.get_log_text(result)));
let state = self.create_state(result);
self.log(state.result_text());
WORKER_TASK_LIST.lock().unwrap().remove(&self.upid.task_id);
let _ = update_active_workers(None);

View File

@ -62,32 +62,6 @@ pub trait BufferedRead {
fn buffered_read(&mut self, offset: u64) -> Result<&[u8], Error>;
}
/// Directly map a type into a binary buffer. This is mostly useful
/// for reading structured data from a byte stream (file). You need to
/// make sure that the buffer location does not change, so please
/// avoid vec resize while you use such map.
///
/// This function panics if the buffer is not large enough.
pub fn map_struct<T>(buffer: &[u8]) -> Result<&T, Error> {
if buffer.len() < ::std::mem::size_of::<T>() {
bail!("unable to map struct - buffer too small");
}
Ok(unsafe { &*(buffer.as_ptr() as *const T) })
}
/// Directly map a type into a mutable binary buffer. This is mostly
/// useful for writing structured data into a byte stream (file). You
/// need to make sure that the buffer location does not change, so
/// please avoid vec resize while you use such map.
///
/// This function panics if the buffer is not large enough.
pub fn map_struct_mut<T>(buffer: &mut [u8]) -> Result<&mut T, Error> {
if buffer.len() < ::std::mem::size_of::<T>() {
bail!("unable to map struct - buffer too small");
}
Ok(unsafe { &mut *(buffer.as_ptr() as *mut T) })
}
/// Split a file into equal sized chunks. The last chunk may be
/// smaller. Note: We cannot implement an `Iterator`, because iterators
/// cannot return a borrowed buffer ref (we want zero-copy)
@ -352,9 +326,9 @@ pub fn assert_if_modified(digest1: &str, digest2: &str) -> Result<(), Error> {
Ok(())
}
/// Extract authentication cookie from cookie header.
/// Extract a specific cookie from cookie header.
/// We assume cookie_name is already url encoded.
pub fn extract_auth_cookie(cookie: &str, cookie_name: &str) -> Option<String> {
pub fn extract_cookie(cookie: &str, cookie_name: &str) -> Option<String> {
for pair in cookie.split(';') {
let (name, value) = match pair.find('=') {
Some(i) => (pair[..i].trim(), pair[(i + 1)..].trim()),
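With the rename to extract_cookie, both the auth ticket and the new language cookie are read through the same helper. A small usage sketch (cookie values invented):

let header = "PBSAuthCookie=someticket; PBSLangCookie=de";
assert_eq!(extract_cookie(header, "PBSLangCookie"), Some("de".to_string()));
assert_eq!(extract_cookie(header, "PBSAuthCookie"), Some("someticket".to_string()));
assert_eq!(extract_cookie(header, "PBSCSRFPreventionToken"), None);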

View File

@ -133,7 +133,7 @@ impl DiskManage {
})
}
/// Information about file system type and unsed device for a path
/// Information about file system type and used device for a path
///
/// Returns tuple (fs_type, device, mount_source)
pub fn find_mounted_device(

View File

@ -111,7 +111,7 @@ fn parse_zpool_list_item(i: &str) -> IResult<&str, ZFSPoolInfo> {
Ok((i, stat))
}
/// Parse zpool list outout
/// Parse zpool list output
///
/// Note: This does not reveal any details on how the pool uses the devices, because
/// the zpool list output format is not really defined...

View File

@ -53,7 +53,7 @@ fn parse_zpool_status_vdev(i: &str) -> IResult<&str, ZFSPoolVDevState> {
let (i, vdev_name) = notspace1(i)?;
if let Ok((n, _)) = preceded(multispace0, line_ending)(i) { // sepecial device
if let Ok((n, _)) = preceded(multispace0, line_ending)(i) { // special device
let vdev = ZFSPoolVDevState {
name: vdev_name.to_string(),
lvl: indent_level,
@ -67,6 +67,19 @@ fn parse_zpool_status_vdev(i: &str) -> IResult<&str, ZFSPoolVDevState> {
}
let (i, state) = preceded(multispace1, notspace1)(i)?;
if let Ok((n, _)) = preceded(multispace0, line_ending)(i) { // spares
let vdev = ZFSPoolVDevState {
name: vdev_name.to_string(),
lvl: indent_level,
state: Some(state.to_string()),
read: None,
write: None,
cksum: None,
msg: None,
};
return Ok((n, vdev));
}
let (i, read) = preceded(multispace1, parse_u64)(i)?;
let (i, write) = preceded(multispace1, parse_u64)(i)?;
let (i, cksum) = preceded(multispace1, parse_u64)(i)?;
@ -465,3 +478,40 @@ errors: No known data errors
Ok(())
}
#[test]
fn test_zpool_status_parser_spares() -> Result<(), Error> {
let output = r###" pool: tank
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/dev/sda1 ONLINE 0 0 0
/dev/sda2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
/dev/sda3 ONLINE 0 0 0
/dev/sda4 ONLINE 0 0 0
logs
/dev/sda5 ONLINE 0 0 0
spares
/dev/sdb AVAIL
/dev/sdc AVAIL
errors: No known data errors
"###;
let key_value_list = parse_zpool_status(&output)?;
for (k, v) in key_value_list {
println!("{} => {}", k,v);
if k == "config" {
let vdev_list = parse_zpool_status_config_tree(&v)?;
let _tree = vdev_list_to_tree(&vdev_list);
}
}
Ok(())
}

View File

@ -80,6 +80,11 @@ impl From<usize> for HumanByte {
HumanByte { b: v }
}
}
impl From<u64> for HumanByte {
fn from(v: u64) -> Self {
HumanByte { b: v as usize }
}
}
#[test]
fn correct_byte_convert() {

View File

@ -83,6 +83,17 @@ pub fn reload_daemon() -> Result<(), Error> {
Ok(())
}
pub fn disable_unit(unit: &str) -> Result<(), Error> {
let mut command = std::process::Command::new("systemctl");
command.arg("disable");
command.arg(unit);
crate::tools::run_command(command, None)?;
Ok(())
}
pub fn enable_unit(unit: &str) -> Result<(), Error> {
let mut command = std::process::Command::new("systemctl");

View File

@ -145,6 +145,9 @@ fn parse_date_time_comp(max: usize) -> impl Fn(&str) -> IResult<&str, DateTimeVa
let (i, value) = parse_time_comp(max)(i)?;
if let (i, Some(end)) = opt(preceded(tag(".."), parse_time_comp(max)))(i)? {
if value > end {
return Err(parse_error(i, "range start is bigger than end"));
}
return Ok((i, DateTimeValue::Range(value, end)))
}
@ -183,6 +186,25 @@ fn parse_time_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeVa
}
}
fn parse_date_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeValue>, Vec<DateTimeValue>)> {
// TODO: implement ~ for days (man systemd.time)
if let Ok((i, (year, month, day))) = tuple((
parse_date_time_comp_list(2200), // the upper limit for systemd, stay compatible
preceded(tag("-"), parse_date_time_comp_list(13)),
preceded(tag("-"), parse_date_time_comp_list(32)),
))(i) {
Ok((i, (year, month, day)))
} else if let Ok((i, (month, day))) = tuple((
parse_date_time_comp_list(13),
preceded(tag("-"), parse_date_time_comp_list(32)),
))(i) {
Ok((i, (Vec::new(), month, day)))
} else {
Err(parse_error(i, "invalid date spec"))
}
}
pub fn parse_calendar_event(i: &str) -> Result<CalendarEvent, Error> {
parse_complete_line("calendar event", i, parse_calendar_event_incomplete)
}
@ -191,7 +213,7 @@ fn parse_calendar_event_incomplete(mut i: &str) -> IResult<&str, CalendarEvent>
let mut has_dayspec = false;
let mut has_timespec = false;
let has_datespec = false;
let mut has_datespec = false;
let mut event = CalendarEvent::default();
@ -228,8 +250,52 @@ fn parse_calendar_event_incomplete(mut i: &str) -> IResult<&str, CalendarEvent>
..Default::default()
}));
}
"monthly" | "yearly" | "quarterly" | "semiannually" => {
return Err(parse_error(i, "unimplemented date or time specification"));
"monthly" => {
return Ok(("", CalendarEvent {
hour: vec![DateTimeValue::Single(0)],
minute: vec![DateTimeValue::Single(0)],
second: vec![DateTimeValue::Single(0)],
day: vec![DateTimeValue::Single(1)],
..Default::default()
}));
}
"yearly" | "annually" => {
return Ok(("", CalendarEvent {
hour: vec![DateTimeValue::Single(0)],
minute: vec![DateTimeValue::Single(0)],
second: vec![DateTimeValue::Single(0)],
day: vec![DateTimeValue::Single(1)],
month: vec![DateTimeValue::Single(1)],
..Default::default()
}));
}
"quarterly" => {
return Ok(("", CalendarEvent {
hour: vec![DateTimeValue::Single(0)],
minute: vec![DateTimeValue::Single(0)],
second: vec![DateTimeValue::Single(0)],
day: vec![DateTimeValue::Single(1)],
month: vec![
DateTimeValue::Single(1),
DateTimeValue::Single(4),
DateTimeValue::Single(7),
DateTimeValue::Single(10),
],
..Default::default()
}));
}
"semiannually" | "semi-annually" => {
return Ok(("", CalendarEvent {
hour: vec![DateTimeValue::Single(0)],
minute: vec![DateTimeValue::Single(0)],
second: vec![DateTimeValue::Single(0)],
day: vec![DateTimeValue::Single(1)],
month: vec![
DateTimeValue::Single(1),
DateTimeValue::Single(7),
],
..Default::default()
}));
}
_ => { /* continue */ }
}
@ -246,7 +312,13 @@ fn parse_calendar_event_incomplete(mut i: &str) -> IResult<&str, CalendarEvent>
for range in range_list { event.days.insert(range); }
}
// todo: support date specs
if let (n, Some((year, month, day))) = opt(parse_date_spec)(i)? {
event.year = year;
event.month = month;
event.day = day;
has_datespec = true;
i = space0(n)?.0;
}
if let (n, Some((hour, minute, second))) = opt(parse_time_spec)(i)? {
event.hour = hour;

View File

@ -1,4 +1,6 @@
use anyhow::{bail, Error};
use std::convert::TryInto;
use anyhow::Error;
use bitflags::bitflags;
pub use super::parse_time::*;
@ -17,7 +19,7 @@ bitflags!{
}
}
#[derive(Debug)]
#[derive(Debug, Clone)]
pub enum DateTimeValue {
Single(u32),
Range(u32, u32),
@ -54,7 +56,7 @@ impl DateTimeValue {
let mut next: Option<u32> = None;
let mut set_next = |v: u32| {
if let Some(n) = next {
if v > n { next = Some(v); }
if v < n { next = Some(v); }
} else {
next = Some(v);
}
@ -91,7 +93,7 @@ impl DateTimeValue {
/// Calendar events may be used to refer to one or more points in time in a
/// single expression. They are designed after the systemd.time Calendar Events
/// specification, but are not guaranteed to be 100% compatible.
#[derive(Default, Debug)]
#[derive(Default, Clone, Debug)]
pub struct CalendarEvent {
/// the days in a week this event should trigger
pub days: WeekDays,
@ -101,17 +103,15 @@ pub struct CalendarEvent {
pub minute: Vec<DateTimeValue>,
/// the hour(s) this event should trigger
pub hour: Vec<DateTimeValue>,
/* FIXME: TODO
/// the day(s) in a month this event should trigger
pub day: Vec<DateTimeValue>,
/// the month(s) in a year this event should trigger
pub month: Vec<DateTimeValue>,
/// the years(s) this event should trigger
pub year: Vec<DateTimeValue>,
*/
}
#[derive(Default)]
#[derive(Default, Clone, Debug)]
pub struct TimeSpan {
pub nsec: u64,
pub usec: u64,
@ -155,7 +155,7 @@ pub fn compute_next_event(
event: &CalendarEvent,
last: i64,
utc: bool,
) -> Result<i64, Error> {
) -> Result<Option<i64>, Error> {
let last = last + 1; // at least one second later
@ -166,94 +166,124 @@ pub fn compute_next_event(
let mut count = 0;
loop {
if count > 1000 { // should not happen
bail!("unable to compute next calendar event");
// cancel after 1000 loops
if count > 1000 {
return Ok(None);
} else {
count += 1;
}
if !all_days && t.changes.contains(TMChanges::WDAY) { // match day first
let day_num = t.day_num();
if !event.year.is_empty() {
let year: u32 = t.year().try_into()?;
if !DateTimeValue::list_contains(&event.year, year) {
if let Some(n) = DateTimeValue::find_next(&event.year, year) {
t.add_years((n - year).try_into()?)?;
continue;
} else {
// if we have no valid year, we cannot find a correct timestamp
return Ok(None);
}
}
}
if !event.month.is_empty() {
let month: u32 = t.month().try_into()?;
if !DateTimeValue::list_contains(&event.month, month) {
if let Some(n) = DateTimeValue::find_next(&event.month, month) {
t.add_months((n - month).try_into()?)?;
} else {
// if we could not find valid month, retry next year
t.add_years(1)?;
}
continue;
}
}
if !event.day.is_empty() {
let day: u32 = t.day().try_into()?;
if !DateTimeValue::list_contains(&event.day, day) {
if let Some(n) = DateTimeValue::find_next(&event.day, day) {
t.add_days((n - day).try_into()?)?;
} else {
// if we could not find valid mday, retry next month
t.add_months(1)?;
}
continue;
}
}
if !all_days { // match day first
let day_num: u32 = t.day_num().try_into()?;
let day = WeekDays::from_bits(1<<day_num).unwrap();
if event.days.contains(day) {
t.changes.remove(TMChanges::WDAY);
} else {
if !event.days.contains(day) {
if let Some(n) = ((day_num+1)..7)
.find(|d| event.days.contains(WeekDays::from_bits(1<<d).unwrap()))
{
// try next day
t.add_days(n - day_num, true);
continue;
t.add_days((n - day_num).try_into()?)?;
} else {
// try next week
t.add_days(7 - day_num, true);
continue;
t.add_days((7 - day_num).try_into()?)?;
}
continue;
}
}
// this day
if !event.hour.is_empty() && t.changes.contains(TMChanges::HOUR) {
let hour = t.hour() as u32;
if DateTimeValue::list_contains(&event.hour, hour) {
t.changes.remove(TMChanges::HOUR);
} else {
if !event.hour.is_empty() {
let hour = t.hour().try_into()?;
if !DateTimeValue::list_contains(&event.hour, hour) {
if let Some(n) = DateTimeValue::find_next(&event.hour, hour) {
// test next hour
t.set_time(n as libc::c_int, 0, 0);
continue;
t.set_time(n.try_into()?, 0, 0)?;
} else {
// test next day
t.add_days(1, true);
continue;
t.add_days(1)?;
}
continue;
}
}
// this hour
if !event.minute.is_empty() && t.changes.contains(TMChanges::MIN) {
let minute = t.min() as u32;
if DateTimeValue::list_contains(&event.minute, minute) {
t.changes.remove(TMChanges::MIN);
} else {
if !event.minute.is_empty() {
let minute = t.min().try_into()?;
if !DateTimeValue::list_contains(&event.minute, minute) {
if let Some(n) = DateTimeValue::find_next(&event.minute, minute) {
// test next minute
t.set_min_sec(n as libc::c_int, 0);
continue;
t.set_min_sec(n.try_into()?, 0)?;
} else {
// test next hour
t.set_time(t.hour() + 1, 0, 0);
continue;
t.set_time(t.hour() + 1, 0, 0)?;
}
continue;
}
}
// this minute
if !event.second.is_empty() && t.changes.contains(TMChanges::SEC) {
let second = t.sec() as u32;
if DateTimeValue::list_contains(&event.second, second) {
t.changes.remove(TMChanges::SEC);
} else {
if !event.second.is_empty() {
let second = t.sec().try_into()?;
if !DateTimeValue::list_contains(&event.second, second) {
if let Some(n) = DateTimeValue::find_next(&event.second, second) {
// test next second
t.set_sec(n as libc::c_int);
continue;
t.set_sec(n.try_into()?)?;
} else {
// test next min
t.set_min_sec(t.min() + 1, 0);
continue;
t.set_min_sec(t.min() + 1, 0)?;
}
continue;
}
}
let next = t.into_epoch()?;
return Ok(next)
return Ok(Some(next))
}
}
#[cfg(test)]
mod test {
use anyhow::bail;
use super::*;
use proxmox::tools::time::*;
@ -280,7 +310,7 @@ mod test {
};
match compute_next_event(&event, last, true) {
Ok(next) => {
Ok(Some(next)) => {
if next == expect {
println!("next {:?} => {}", event, next);
} else {
@ -288,12 +318,25 @@ mod test {
event, gmtime(next), gmtime(expect));
}
}
Ok(None) => bail!("next {:?} failed to find a timestamp", event),
Err(err) => bail!("compute next for '{}' failed - {}", v, err),
}
Ok(expect)
};
let test_never = |v: &'static str, last: i64| -> Result<(), Error> {
let event = match parse_calendar_event(v) {
Ok(event) => event,
Err(err) => bail!("parsing '{}' failed - {}", v, err),
};
match compute_next_event(&event, last, true)? {
None => Ok(()),
Some(next) => bail!("compute next for '{}' succeeded, but expected fail - result {}", v, next),
}
};
const MIN: i64 = 60;
const HOUR: i64 = 3600;
const DAY: i64 = 3600*24;
@ -320,6 +363,13 @@ mod test {
test_value("sat", THURSDAY_00_00, THURSDAY_00_00 + 2*DAY)?;
test_value("sun", THURSDAY_00_00, THURSDAY_00_00 + 3*DAY)?;
// test multiple values for a single field
// and test that the order does not matter
test_value("5,10:4,8", THURSDAY_00_00, THURSDAY_00_00 + 5*HOUR + 4*MIN)?;
test_value("10,5:8,4", THURSDAY_00_00, THURSDAY_00_00 + 5*HOUR + 4*MIN)?;
test_value("6,4..10:23,5/5", THURSDAY_00_00, THURSDAY_00_00 + 4*HOUR + 5*MIN)?;
test_value("4..10,6:5/5,23", THURSDAY_00_00, THURSDAY_00_00 + 4*HOUR + 5*MIN)?;
// test month wrapping
test_value("sat", JUL_31_2020, JUL_31_2020 + 1*DAY)?;
test_value("sun", JUL_31_2020, JUL_31_2020 + 2*DAY)?;
@ -361,6 +411,23 @@ mod test {
n = test_value("1:0", n, THURSDAY_00_00 + i*DAY + HOUR)?;
}
// test date functionality
test_value("2020-07-31", 0, JUL_31_2020)?;
test_value("02-28", 0, (31+27)*DAY)?;
test_value("02-29", 0, 2*365*DAY + (31+28)*DAY)?; // 1972-02-29
test_value("1965/5-01-01", -1, THURSDAY_00_00)?;
test_value("2020-7..9-2/2", JUL_31_2020, JUL_31_2020 + 2*DAY)?;
test_value("2020,2021-12-31", JUL_31_2020, DEC_31_2020)?;
test_value("monthly", 0, 31*DAY)?;
test_value("quarterly", 0, (31+28+31)*DAY)?;
test_value("semiannually", 0, (31+28+31+30+31+30)*DAY)?;
test_value("yearly", 0, (365)*DAY)?;
test_never("2021-02-29", 0)?;
test_never("02-30", 0)?;
Ok(())
}
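With the new `Option` return value, a caller has to handle calendar events that can never produce a timestamp (such as "2021-02-29" above) instead of treating them as hard errors. A minimal caller sketch, assuming the `parse_calendar_event` and `compute_next_event` signatures shown in this diff:

use anyhow::Error;

fn print_next_run(spec: &str, last: i64) -> Result<(), Error> {
    let event = parse_calendar_event(spec)?;
    // Ok(None) now means "no matching timestamp exists"; it is no longer an error
    match compute_next_event(&event, last, true)? {
        Some(next) => println!("'{}' next triggers at epoch {}", spec, next),
        None => println!("'{}' never triggers again", spec),
    }
    Ok(())
}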


@ -1,73 +1,60 @@
use anyhow::Error;
use bitflags::bitflags;
use proxmox::tools::time::*;
bitflags!{
#[derive(Default)]
pub struct TMChanges: u8 {
const SEC = 1;
const MIN = 2;
const HOUR = 4;
const MDAY = 8;
const MON = 16;
const YEAR = 32;
const WDAY = 64;
}
}
pub struct TmEditor {
utc: bool,
t: libc::tm,
pub changes: TMChanges,
}
fn is_leap_year(year: libc::c_int) -> bool {
if year % 4 != 0 { return false; }
if year % 100 != 0 { return true; }
if year % 400 != 0 { return false; }
return true;
}
fn days_in_month(mon: libc::c_int, year: libc::c_int) -> libc::c_int {
let mon = mon % 12;
static MAP: &[libc::c_int] = &[31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
if mon == 1 && is_leap_year(year) { return 29; }
MAP[mon as usize]
}
impl TmEditor {
pub fn new(epoch: i64, utc: bool) -> Result<Self, Error> {
let mut t = if utc { gmtime(epoch)? } else { localtime(epoch)? };
t.tm_year += 1900; // real years for clarity
Ok(Self { utc, t, changes: TMChanges::all() })
let t = if utc { gmtime(epoch)? } else { localtime(epoch)? };
Ok(Self { utc, t })
}
pub fn into_epoch(mut self) -> Result<i64, Error> {
self.t.tm_year -= 1900;
let epoch = if self.utc { timegm(self.t)? } else { timelocal(self.t)? };
let epoch = if self.utc { timegm(&mut self.t)? } else { timelocal(&mut self.t)? };
Ok(epoch)
}
pub fn add_days(&mut self, days: libc::c_int, reset_time: bool) {
if days == 0 { return; }
if reset_time {
self.t.tm_hour = 0;
self.t.tm_min = 0;
self.t.tm_sec = 0;
self.changes.insert(TMChanges::HOUR|TMChanges::MIN|TMChanges::SEC);
}
self.t.tm_mday += days;
self.t.tm_wday += days;
self.changes.insert(TMChanges::MDAY|TMChanges::WDAY);
self.wrap_time();
/// increases the year by 'years' and resets all smaller fields to their minimum
pub fn add_years(&mut self, years: libc::c_int) -> Result<(), Error> {
if years == 0 { return Ok(()); }
self.t.tm_mon = 0;
self.t.tm_mday = 1;
self.t.tm_hour = 0;
self.t.tm_min = 0;
self.t.tm_sec = 0;
self.t.tm_year += years;
self.normalize_time()
}
/// increases the month by 'months' and resets all smaller fields to their minimum
pub fn add_months(&mut self, months: libc::c_int) -> Result<(), Error> {
if months == 0 { return Ok(()); }
self.t.tm_mday = 1;
self.t.tm_hour = 0;
self.t.tm_min = 0;
self.t.tm_sec = 0;
self.t.tm_mon += months;
self.normalize_time()
}
/// increases the day by 'days' and resets all smaller fields to their minimum
pub fn add_days(&mut self, days: libc::c_int) -> Result<(), Error> {
if days == 0 { return Ok(()); }
self.t.tm_hour = 0;
self.t.tm_min = 0;
self.t.tm_sec = 0;
self.t.tm_mday += days;
self.normalize_time()
}
pub fn year(&self) -> libc::c_int { self.t.tm_year + 1900 } // see man mktime
pub fn month(&self) -> libc::c_int { self.t.tm_mon + 1 }
pub fn day(&self) -> libc::c_int { self.t.tm_mday }
pub fn hour(&self) -> libc::c_int { self.t.tm_hour }
pub fn min(&self) -> libc::c_int { self.t.tm_min }
pub fn sec(&self) -> libc::c_int { self.t.tm_sec }
@ -77,109 +64,56 @@ impl TmEditor {
(self.t.tm_wday + 6) % 7
}
pub fn set_time(&mut self, hour: libc::c_int, min: libc::c_int, sec: libc::c_int) {
pub fn set_time(&mut self, hour: libc::c_int, min: libc::c_int, sec: libc::c_int) -> Result<(), Error> {
self.t.tm_hour = hour;
self.t.tm_min = min;
self.t.tm_sec = sec;
self.changes.insert(TMChanges::HOUR|TMChanges::MIN|TMChanges::SEC);
self.wrap_time();
self.normalize_time()
}
pub fn set_min_sec(&mut self, min: libc::c_int, sec: libc::c_int) {
pub fn set_min_sec(&mut self, min: libc::c_int, sec: libc::c_int) -> Result<(), Error> {
self.t.tm_min = min;
self.t.tm_sec = sec;
self.changes.insert(TMChanges::MIN|TMChanges::SEC);
self.wrap_time();
self.normalize_time()
}
fn wrap_time(&mut self) {
// sec: 0..59
if self.t.tm_sec >= 60 {
self.t.tm_min += self.t.tm_sec / 60;
self.t.tm_sec %= 60;
self.changes.insert(TMChanges::SEC|TMChanges::MIN);
fn normalize_time(&mut self) -> Result<(), Error> {
// libc normalizes it for us
if self.utc {
timegm(&mut self.t)?;
} else {
timelocal(&mut self.t)?;
}
// min: 0..59
if self.t.tm_min >= 60 {
self.t.tm_hour += self.t.tm_min / 60;
self.t.tm_min %= 60;
self.changes.insert(TMChanges::MIN|TMChanges::HOUR);
}
// hour: 0..23
if self.t.tm_hour >= 24 {
self.t.tm_mday += self.t.tm_hour / 24;
self.t.tm_wday += self.t.tm_hour / 24;
self.t.tm_hour %= 24;
self.changes.insert(TMChanges::HOUR|TMChanges::MDAY|TMChanges::WDAY);
}
// Translate to 0..($days_in_mon-1)
self.t.tm_mday -= 1;
loop {
let days_in_mon = days_in_month(self.t.tm_mon, self.t.tm_year);
if self.t.tm_mday < days_in_mon { break; }
// Wrap one month
self.t.tm_mday -= days_in_mon;
self.t.tm_mon += 1;
self.changes.insert(TMChanges::MDAY|TMChanges::WDAY|TMChanges::MON);
}
// Translate back to 1..$days_in_mon
self.t.tm_mday += 1;
// mon: 0..11
if self.t.tm_mon >= 12 {
self.t.tm_year += self.t.tm_mon / 12;
self.t.tm_mon %= 12;
self.changes.insert(TMChanges::MON|TMChanges::YEAR);
}
self.t.tm_wday %= 7;
Ok(())
}
pub fn set_sec(&mut self, v: libc::c_int) {
pub fn set_sec(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_sec = v;
self.changes.insert(TMChanges::SEC);
self.wrap_time();
self.normalize_time()
}
pub fn set_min(&mut self, v: libc::c_int) {
pub fn set_min(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_min = v;
self.changes.insert(TMChanges::MIN);
self.wrap_time();
self.normalize_time()
}
pub fn set_hour(&mut self, v: libc::c_int) {
pub fn set_hour(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_hour = v;
self.changes.insert(TMChanges::HOUR);
self.wrap_time();
self.normalize_time()
}
pub fn set_mday(&mut self, v: libc::c_int) {
pub fn set_mday(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_mday = v;
self.changes.insert(TMChanges::MDAY);
self.wrap_time();
self.normalize_time()
}
pub fn set_mon(&mut self, v: libc::c_int) {
self.t.tm_mon = v;
self.changes.insert(TMChanges::MON);
self.wrap_time();
pub fn set_mon(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_mon = v - 1;
self.normalize_time()
}
pub fn set_year(&mut self, v: libc::c_int) {
self.t.tm_year = v;
self.changes.insert(TMChanges::YEAR);
self.wrap_time();
pub fn set_year(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_year = v - 1900;
self.normalize_time()
}
pub fn set_wday(&mut self, v: libc::c_int) {
self.t.tm_wday = v;
self.changes.insert(TMChanges::WDAY);
self.wrap_time();
}
}
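Because `normalize_time()` now simply round-trips through `timegm()`/`timelocal()`, any field pushed out of range wraps into the next larger unit without the old hand-written carry logic. A small sketch of the intended behaviour, assuming the `TmEditor` API above:

use anyhow::Error;

fn wrap_example() -> Result<(), Error> {
    // 1596153600 is 2020-07-31 00:00:00 UTC
    let mut t = TmEditor::new(1596153600, true)?;
    t.add_days(1)?;              // tm_mday becomes 32 ...
    let epoch = t.into_epoch()?; // ... and libc normalizes that to 2020-08-01
    assert_eq!(epoch, 1596153600 + 24 * 3600);
    Ok(())
}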


@ -252,6 +252,6 @@ pub const SYSTEMD_TIMESPAN_SCHEMA: Schema = StringSchema::new(
.schema();
pub const SYSTEMD_CALENDAR_EVENT_SCHEMA: Schema = StringSchema::new(
"systemd time span")
"systemd calendar event")
.format(&ApiStringFormat::VerifyFn(super::time::verify_calendar_event))
.schema();


@ -1,151 +1,321 @@
//! Generate and verify Authentication tickets
use anyhow::{bail, Error};
use base64;
use std::borrow::Cow;
use std::io;
use std::marker::PhantomData;
use openssl::pkey::{PKey, Public, Private};
use openssl::sign::{Signer, Verifier};
use anyhow::{bail, format_err, Error};
use openssl::hash::MessageDigest;
use openssl::pkey::{HasPublic, PKey, Private};
use openssl::sign::{Signer, Verifier};
use percent_encoding::{percent_decode_str, percent_encode, AsciiSet};
use crate::api2::types::Userid;
use crate::tools::epoch_now_u64;
pub const TICKET_LIFETIME: i64 = 3600*2; // 2 hours
pub const TICKET_LIFETIME: i64 = 3600 * 2; // 2 hours
const TERM_PREFIX: &str = "PBSTERM";
pub const TERM_PREFIX: &str = "PBSTERM";
pub fn assemble_term_ticket(
keypair: &PKey<Private>,
userid: &Userid,
path: &str,
port: u16,
) -> Result<String, Error> {
assemble_rsa_ticket(
keypair,
TERM_PREFIX,
None,
Some(&format!("{}{}{}", userid, path, port)),
)
/// Stringified ticket data must not contain colons...
const TICKET_ASCIISET: &AsciiSet = &percent_encoding::CONTROLS.add(b':');
/// An empty type implementing [`ToString`] and [`FromStr`](std::str::FromStr), used for tickets
/// with no data.
pub struct Empty;
impl ToString for Empty {
fn to_string(&self) -> String {
String::new()
}
}
pub fn verify_term_ticket(
keypair: &PKey<Public>,
userid: &Userid,
path: &str,
port: u16,
ticket: &str,
) -> Result<(i64, Option<Userid>), Error> {
verify_rsa_ticket(
keypair,
TERM_PREFIX,
ticket,
Some(&format!("{}{}{}", userid, path, port)),
-300,
TICKET_LIFETIME,
)
}
impl std::str::FromStr for Empty {
type Err = Error;
pub fn assemble_rsa_ticket(
keypair: &PKey<Private>,
prefix: &str,
data: Option<&Userid>,
secret_data: Option<&str>,
) -> Result<String, Error> {
let epoch = epoch_now_u64()?;
let timestamp = format!("{:08X}", epoch);
let mut plain = prefix.to_owned();
plain.push(':');
if let Some(data) = data {
use std::fmt::Write;
write!(plain, "{}", data)?;
plain.push(':');
}
plain.push_str(&timestamp);
let mut full = plain.clone();
if let Some(secret) = secret_data {
full.push(':');
full.push_str(secret);
}
let mut signer = Signer::new(MessageDigest::sha256(), &keypair)?;
signer.update(full.as_bytes())?;
let sign = signer.sign_to_vec()?;
let sign_b64 = base64::encode_config(&sign, base64::STANDARD_NO_PAD);
Ok(format!("{}::{}", plain, sign_b64))
}
pub fn verify_rsa_ticket(
keypair: &PKey<Public>,
prefix: &str,
ticket: &str,
secret_data: Option<&str>,
min_age: i64,
max_age: i64,
) -> Result<(i64, Option<Userid>), Error> {
use std::collections::VecDeque;
let mut parts: VecDeque<&str> = ticket.split(':').collect();
match parts.pop_front() {
Some(text) => if text != prefix { bail!("ticket with invalid prefix"); }
None => bail!("ticket without prefix"),
}
let sign_b64 = match parts.pop_back() {
Some(v) => v,
None => bail!("ticket without signature"),
};
match parts.pop_back() {
Some(text) => if text != "" { bail!("ticket with invalid signature separator"); }
None => bail!("ticket without signature separator"),
}
let mut data = None;
let mut full = match parts.len() {
2 => {
data = Some(parts[0].to_owned());
format!("{}:{}:{}", prefix, parts[0], parts[1])
fn from_str(s: &str) -> Result<Self, Error> {
if !s.is_empty() {
bail!("unexpected ticket data, should be empty");
}
1 => format!("{}:{}", prefix, parts[0]),
_ => bail!("ticket with invalid number of components"),
};
if let Some(secret) = secret_data {
full.push(':');
full.push_str(secret);
Ok(Empty)
}
}
/// An API ticket consists of a ticket type (prefix), type-dependent data, optional additional
/// authenticaztion data, a timestamp and a signature. We store these values in the form
/// `<prefix>:<stringified data>:<timestamp>::<signature>`.
///
/// The signature is made over the string consisting of prefix, data, timestamp and aad joined
/// together by colons. If there is no additional authentication data it will be skipped together
/// with the colon separating it from the timestamp.
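// Illustrative layout only (all values made up): a signed ticket carrying a
// user id as data would be stored as
//     PBS:root@pam:5F59A0AB::<base64 signature>
// while a ticket whose payload lives entirely in the signed aad (see `Empty`)
// would look like
//     PBSTERM::5F59A0AB::<base64 signature>
// Colons inside prefix or data are percent-encoded via `TICKET_ASCIISET`.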
pub struct Ticket<T>
where
T: ToString + std::str::FromStr,
{
prefix: Cow<'static, str>,
data: String,
time: i64,
signature: Option<Vec<u8>>,
_type_marker: PhantomData<T>,
}
impl<T> Ticket<T>
where
T: ToString + std::str::FromStr,
<T as std::str::FromStr>::Err: std::fmt::Debug,
{
/// Prepare a new ticket for signing.
pub fn new(prefix: &'static str, data: &T) -> Result<Self, Error> {
Ok(Self {
prefix: Cow::Borrowed(prefix),
data: data.to_string(),
time: epoch_now_u64()? as i64,
signature: None,
_type_marker: PhantomData,
})
}
/// Get the ticket prefix.
pub fn prefix(&self) -> &str {
&self.prefix
}
/// Get the ticket's time stamp in seconds since the unix epoch.
pub fn time(&self) -> i64 {
self.time
}
/// Get the raw string data contained in the ticket. The `verify` method will call `parse()`
/// this in the end, so using this method directly is discouraged as it does not verify the
/// signature.
pub fn raw_data(&self) -> &str {
&self.data
}
/// Serialize the ticket into a writer.
///
/// This only writes a string. We use `io::Write` instead of `fmt::Write` so we can reuse the
/// same function for openssl's `Verify`, which only implements `io::Write`.
fn write_data(&self, f: &mut dyn io::Write) -> Result<(), Error> {
write!(
f,
"{}:{}:{:08X}",
percent_encode(self.prefix.as_bytes(), &TICKET_ASCIISET),
percent_encode(self.data.as_bytes(), &TICKET_ASCIISET),
self.time,
)
.map_err(Error::from)
}
/// Write additional authentication data to the verifier.
fn write_aad(f: &mut dyn io::Write, aad: Option<&str>) -> Result<(), Error> {
if let Some(aad) = aad {
write!(f, ":{}", percent_encode(aad.as_bytes(), &TICKET_ASCIISET))?;
}
Ok(())
}
/// Change the ticket's time, used mostly for testing.
#[cfg(test)]
fn change_time(&mut self, time: i64) -> &mut Self {
self.time = time;
self
}
/// Sign the ticket.
pub fn sign(&mut self, keypair: &PKey<Private>, aad: Option<&str>) -> Result<String, Error> {
let mut output = Vec::<u8>::new();
let mut signer = Signer::new(MessageDigest::sha256(), &keypair)
.map_err(|err| format_err!("openssl error creating signer for ticket: {}", err))?;
self.write_data(&mut output)
.map_err(|err| format_err!("error creating ticket: {}", err))?;
signer
.update(&output)
.map_err(Error::from)
.and_then(|()| Self::write_aad(&mut signer, aad))
.map_err(|err| format_err!("error signing ticket: {}", err))?;
// See `Self::write_data` for why this is safe
let mut output = unsafe { String::from_utf8_unchecked(output) };
let signature = signer
.sign_to_vec()
.map_err(|err| format_err!("error finishing ticket signature: {}", err))?;
use std::fmt::Write;
write!(
&mut output,
"::{}",
base64::encode_config(&signature, base64::STANDARD_NO_PAD),
)?;
self.signature = Some(signature);
Ok(output)
}
/// `verify` with an additional time frame parameter, not usually required since we always use
/// the same time frame.
pub fn verify_with_time_frame<P: HasPublic>(
&self,
keypair: &PKey<P>,
prefix: &str,
aad: Option<&str>,
time_frame: std::ops::Range<i64>,
) -> Result<T, Error> {
if self.prefix != prefix {
bail!("ticket with invalid prefix");
}
let signature = match self.signature.as_ref() {
Some(sig) => sig,
None => bail!("invalid ticket without signature"),
};
let age = epoch_now_u64()? as i64 - self.time;
if age < time_frame.start {
bail!("invalid ticket - timestamp newer than expected");
}
if age > time_frame.end {
bail!("invalid ticket - expired");
}
let mut verifier = Verifier::new(MessageDigest::sha256(), &keypair)?;
self.write_data(&mut verifier)
.and_then(|()| Self::write_aad(&mut verifier, aad))
.map_err(|err| format_err!("error verifying ticket: {}", err))?;
let is_valid: bool = verifier
.verify(&signature)
.map_err(|err| format_err!("openssl error verifying ticket: {}", err))?;
if !is_valid {
bail!("ticket with invalid signature");
}
self.data
.parse()
.map_err(|err| format_err!("failed to parse contained ticket data: {:?}", err))
}
/// Verify the ticket with the provided key pair. The additional authentication data needs to
/// match the one used when generating the ticket, and the ticket's age must fall into the time
/// frame.
pub fn verify<P: HasPublic>(
&self,
keypair: &PKey<P>,
prefix: &str,
aad: Option<&str>,
) -> Result<T, Error> {
self.verify_with_time_frame(keypair, prefix, aad, -300..TICKET_LIFETIME)
}
/// Parse a ticket string.
pub fn parse(ticket: &str) -> Result<Self, Error> {
let mut parts = ticket.splitn(4, ':');
let prefix = percent_decode_str(
parts
.next()
.ok_or_else(|| format_err!("ticket without prefix"))?,
)
.decode_utf8()
.map_err(|err| format_err!("invalid ticket, error decoding prefix: {}", err))?;
let data = percent_decode_str(
parts
.next()
.ok_or_else(|| format_err!("ticket without data"))?,
)
.decode_utf8()
.map_err(|err| format_err!("invalid ticket, error decoding data: {}", err))?;
let time = i64::from_str_radix(
parts
.next()
.ok_or_else(|| format_err!("ticket without timestamp"))?,
16,
)
.map_err(|err| format_err!("ticket with bad timestamp: {}", err))?;
let remainder = parts
.next()
.ok_or_else(|| format_err!("ticket without signature"))?;
// <prefix>:<data>:<time>::signature - the 4th `.next()` swallows the first colon in the
// double-colon!
if !remainder.starts_with(':') {
bail!("ticket without signature separator");
}
let signature = base64::decode_config(&remainder[1..], base64::STANDARD_NO_PAD)
.map_err(|err| format_err!("ticket with bad signature: {}", err))?;
Ok(Self {
prefix: Cow::Owned(prefix.into_owned()),
data: data.into_owned(),
time,
signature: Some(signature),
_type_marker: PhantomData,
})
}
}
pub fn term_aad(userid: &Userid, path: &str, port: u16) -> String {
format!("{}{}{}", userid, path, port)
}
#[cfg(test)]
mod test {
use openssl::pkey::{PKey, Private};
use super::Ticket;
use crate::api2::types::Userid;
use crate::tools::epoch_now_u64;
fn simple_test<F>(key: &PKey<Private>, aad: Option<&str>, modify: F)
where
F: FnOnce(&mut Ticket<Userid>) -> bool,
{
let userid = Userid::root_userid();
let mut ticket = Ticket::new("PREFIX", userid).expect("failed to create Ticket struct");
let should_work = modify(&mut ticket);
let ticket = ticket.sign(key, aad).expect("failed to sign test ticket");
let parsed =
Ticket::<Userid>::parse(&ticket).expect("failed to parse generated test ticket");
if should_work {
let check: Userid = parsed
.verify(key, "PREFIX", aad)
.expect("failed to verify test ticket");
assert_eq!(*userid, check);
} else {
parsed
.verify(key, "PREFIX", aad)
.expect_err("failed to verify test ticket");
}
}
#[test]
fn test_tickets() {
// first we need keys, for testing we use small keys for speed...
let rsa =
openssl::rsa::Rsa::generate(1024).expect("failed to generate RSA key for testing");
let key = openssl::pkey::PKey::<openssl::pkey::Private>::from_rsa(rsa)
.expect("failed to create PKey for RSA key");
simple_test(&key, Some("secret aad data"), |_| true);
simple_test(&key, None, |_| true);
simple_test(&key, None, |t| {
t.change_time(0);
false
});
simple_test(&key, None, |t| {
t.change_time(epoch_now_u64().unwrap() as i64 + 0x1000_0000);
false
});
}
let sign = base64::decode_config(sign_b64, base64::STANDARD_NO_PAD)?;
let mut verifier = Verifier::new(MessageDigest::sha256(), &keypair)?;
verifier.update(full.as_bytes())?;
if !verifier.verify(&sign)? {
bail!("ticket with invalid signature");
}
let timestamp = i64::from_str_radix(parts.pop_back().unwrap(), 16)?;
let now = epoch_now_u64()? as i64;
let age = now - timestamp;
if age < min_age {
bail!("invalid ticket - timestamp newer than expected.");
}
if age > max_age {
bail!("invalid ticket - timestamp too old.");
}
Ok((age, data.map(|s| s.parse()).transpose()?))
}
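Putting the new API together: signing produces the string layout described above, `parse` splits it again, and `verify` checks prefix, age and signature before handing back the typed data. A round-trip sketch, assuming it lives next to the test module above (same imports; the helper name is made up):

use anyhow::Error;
use openssl::pkey::PKey;
use crate::api2::types::Userid;
use super::Ticket;

fn ticket_roundtrip() -> Result<(), Error> {
    // small key only to keep the example fast, as in the tests
    let key = PKey::from_rsa(openssl::rsa::Rsa::generate(1024)?)?;
    let userid = Userid::root_userid();

    // sign: yields "<prefix>:<data>:<timestamp>::<signature>"
    let ticket = Ticket::new("PREFIX", userid)?.sign(&key, None)?;

    // parse + verify: returns the typed data again
    let checked: Userid = Ticket::<Userid>::parse(&ticket)?.verify(&key, "PREFIX", None)?;
    assert_eq!(*userid, checked);
    Ok(())
}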


@ -6,16 +6,16 @@ Ext.define('pbs-data-store-snapshots', {
{
name: 'backup-time',
type: 'date',
dateFormat: 'timestamp'
dateFormat: 'timestamp',
},
'files',
'owner',
{ name: 'size', type: 'int', allowNull: true, },
'verification',
{ name: 'size', type: 'int', allowNull: true },
{
name: 'crypt-mode',
type: 'boolean',
calculate: function(data) {
let encrypted = 0;
let crypt = {
none: 0,
mixed: 0,
@ -23,25 +23,24 @@ Ext.define('pbs-data-store-snapshots', {
encrypt: 0,
count: 0,
};
let signed = 0;
data.files.forEach(file => {
if (file.filename === 'index.json.blob') return; // is never encrypted
let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
if (mode !== -1) {
crypt[file['crypt-mode']]++;
crypt.count++;
}
crypt.count++;
});
return PBS.Utils.calculateCryptMode(crypt);
}
},
},
{
name: 'matchesFilter',
type: 'boolean',
defaultValue: true,
},
]
],
});
Ext.define('PBS.DataStoreContent', {
@ -69,7 +68,7 @@ Ext.define('PBS.DataStoreContent', {
view.getStore().setSorters([
'backup-group',
'text',
'backup-time'
'backup-time',
]);
Proxmox.Utils.monStoreErrors(view, this.store);
this.reload(); // initial load
@ -87,7 +86,7 @@ Ext.define('PBS.DataStoreContent', {
this.store.setProxy({
type: 'proxmox',
timeout: 300*1000, // 5 minutes, we should make that api call faster
url: url
url: url,
});
this.store.load();
@ -123,7 +122,7 @@ Ext.define('PBS.DataStoreContent', {
expanded: false,
backup_type: item.data["backup-type"],
backup_id: item.data["backup-id"],
children: []
children: [],
};
}
@ -162,7 +161,7 @@ Ext.define('PBS.DataStoreContent', {
}
return false;
},
after: () => {},
after: Ext.emptyFn,
});
for (const item of records) {
@ -180,7 +179,7 @@ Ext.define('PBS.DataStoreContent', {
data.children = [];
for (const file of data.files) {
file.text = file.filename,
file.text = file.filename;
file['crypt-mode'] = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
file.leaf = true;
file.matchesFilter = true;
@ -191,6 +190,7 @@ Ext.define('PBS.DataStoreContent', {
children.push(data);
}
let nowSeconds = Date.now() / 1000;
let children = [];
for (const [name, group] of Object.entries(groups)) {
let last_backup = 0;
@ -200,7 +200,13 @@ Ext.define('PBS.DataStoreContent', {
'sign-only': 0,
encrypt: 0,
};
for (const item of group.children) {
let verify = {
outdated: 0,
none: 0,
failed: 0,
ok: 0,
};
for (let item of group.children) {
crypt[PBS.Utils.cryptmap[item['crypt-mode']]]++;
if (item["backup-time"] > last_backup && item.size !== null) {
last_backup = item["backup-time"];
@ -208,9 +214,24 @@ Ext.define('PBS.DataStoreContent', {
group.files = item.files;
group.size = item.size;
group.owner = item.owner;
verify.lastFailed = item.verification && item.verification.state !== 'ok';
}
if (!item.verification) {
verify.none++;
} else {
if (item.verification.state === 'ok') {
verify.ok++;
} else {
verify.failed++;
}
let task = Proxmox.Utils.parse_task_upid(item.verification.upid);
item.verification.lastTime = task.starttime;
if (nowSeconds - task.starttime > 30 * 24 * 60 * 60) {
verify.outdated++;
}
}
}
group.verification = verify;
group.count = group.children.length;
group.matchesFilter = true;
crypt.count = group.count;
@ -221,7 +242,7 @@ Ext.define('PBS.DataStoreContent', {
view.setRootNode({
expanded: true,
children: children
children: children,
});
if (selected !== undefined) {
@ -241,13 +262,13 @@ Ext.define('PBS.DataStoreContent', {
Proxmox.Utils.setErrorMask(view, false);
if (view.getStore().getFilters().length > 0) {
let searchBox = me.lookup("searchbox");
let searchvalue = searchBox.getValue();;
let searchvalue = searchBox.getValue();
me.search(searchBox, searchvalue);
}
},
onPrune: function(view, rI, cI, item, e, rec) {
var view = this.getView();
view = this.getView();
if (!(rec && rec.data)) return;
let data = rec.data;
@ -265,7 +286,8 @@ Ext.define('PBS.DataStoreContent', {
},
onVerify: function(view, rI, cI, item, e, rec) {
var view = this.getView();
let me = this;
view = me.getView();
if (!view.datastore) return;
@ -297,6 +319,7 @@ Ext.define('PBS.DataStoreContent', {
success: function(response, options) {
Ext.create('Proxmox.window.TaskViewer', {
upid: response.result.data,
taskDone: () => me.reload(),
}).show();
},
});
@ -304,7 +327,7 @@ Ext.define('PBS.DataStoreContent', {
onForget: function(view, rI, cI, item, e, rec) {
let me = this;
var view = this.getView();
view = this.getView();
if (!(rec && rec.data)) return;
let data = rec.data;
@ -359,7 +382,8 @@ Ext.define('PBS.DataStoreContent', {
let atag = document.createElement('a');
params['file-name'] = file;
atag.download = filename;
let url = new URL(`/api2/json/admin/datastore/${view.datastore}/download-decoded`, window.location.origin);
let url = new URL(`/api2/json/admin/datastore/${view.datastore}/download-decoded`,
window.location.origin);
for (const [key, value] of Object.entries(params)) {
url.searchParams.append(key, value);
}
@ -422,7 +446,7 @@ Ext.define('PBS.DataStoreContent', {
store.beginUpdate();
store.getRoot().cascadeBy({
before: function(item) {
if(me.filter(item, value)) {
if (me.filter(item, value)) {
item.set('matchesFilter', true);
if (item.parentNode && item.parentNode.id !== 'root') {
item.parentNode.childmatches = true;
@ -454,12 +478,22 @@ Ext.define('PBS.DataStoreContent', {
},
},
viewConfig: {
getRowClass: function(record, index) {
let verify = record.get('verification');
if (verify && verify.lastFailed) {
return 'proxmox-invalid-row';
}
return null;
},
},
columns: [
{
xtype: 'treecolumn',
header: gettext("Backup Group"),
dataIndex: 'text',
flex: 1
flex: 1,
},
{
header: gettext('Actions'),
@ -506,9 +540,9 @@ Ext.define('PBS.DataStoreContent', {
data.filename &&
data.filename.endsWith('pxar.didx') &&
data['crypt-mode'] < 3);
}
},
},
]
],
},
{
xtype: 'datecolumn',
@ -516,7 +550,7 @@ Ext.define('PBS.DataStoreContent', {
sortable: true,
dataIndex: 'backup-time',
format: 'Y-m-d H:i:s',
width: 150
width: 150,
},
{
header: gettext("Size"),
@ -538,6 +572,8 @@ Ext.define('PBS.DataStoreContent', {
format: '0',
header: gettext("Count"),
sortable: true,
width: 75,
align: 'right',
dataIndex: 'count',
},
{
@ -560,8 +596,80 @@ Ext.define('PBS.DataStoreContent', {
if (iconCls) {
iconTxt = `<i class="fa fa-fw fa-${iconCls}"></i> `;
}
return (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText
}
return (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText;
},
},
{
header: gettext('Verify State'),
sortable: true,
dataIndex: 'verification',
width: 120,
renderer: (v, meta, record) => {
let i = (cls, txt) => `<i class="fa fa-fw fa-${cls}"></i> ${txt}`;
if (v === undefined || v === null) {
return record.data.leaf ? '' : i('question-circle-o warning', gettext('None'));
}
let tip, iconCls, txt;
if (record.parentNode.id === 'root') {
if (v.failed === 0) {
if (v.none === 0) {
if (v.outdated > 0) {
tip = 'All OK, but some snapshots were not verified in last 30 days';
iconCls = 'check warning';
txt = gettext('All OK (old)');
} else {
tip = 'All snapshots verified at least once in last 30 days';
iconCls = 'check good';
txt = gettext('All OK');
}
} else if (v.ok === 0) {
tip = `${v.none} not verified yet`;
iconCls = 'question-circle-o warning';
txt = gettext('None');
} else {
tip = `${v.ok} OK, ${v.none} not verified yet`;
iconCls = 'check faded';
txt = `${v.ok} OK`;
}
} else {
tip = `${v.ok} OK, ${v.failed} failed, ${v.none} not verified yet`;
iconCls = 'times critical';
txt = v.ok === 0 && v.none === 0
? gettext('All failed')
: `${v.failed} failed`;
}
} else if (!v.state) {
return record.data.leaf ? '' : gettext('None');
} else {
let verify_time = Proxmox.Utils.render_timestamp(v.lastTime);
tip = `Last verify task started on ${verify_time}`;
txt = v.state;
iconCls = 'times critical';
if (v.state === 'ok') {
iconCls = 'check good';
let now = Date.now() / 1000;
if (now - v.lastTime > 30 * 24 * 60 * 60) {
tip = `Last verify task over 30 days ago: ${verify_time}`;
iconCls = 'check warning';
}
}
}
return `<span data-qtip="${tip}">
<i class="fa fa-fw fa-${iconCls}"></i> ${txt}
</span>`;
},
listeners: {
dblclick: function(view, el, row, col, ev, rec) {
let data = rec.data || {};
let verify = data.verification;
if (verify && verify.upid && rec.parentNode.id !== 'root') {
let win = Ext.create('Proxmox.window.TaskViewer', {
upid: verify.upid,
});
win.show();
}
},
},
},
],
@ -579,6 +687,7 @@ Ext.define('PBS.DataStoreContent', {
{
xtype: 'textfield',
reference: 'searchbox',
emptyText: gettext('group, date or owner'),
triggers: {
clear: {
cls: 'pmx-clear-trigger',
@ -588,7 +697,7 @@ Ext.define('PBS.DataStoreContent', {
this.triggers.clear.setVisible(false);
this.setValue('');
},
}
},
},
listeners: {
change: {
@ -596,6 +705,6 @@ Ext.define('PBS.DataStoreContent', {
buffer: 500,
},
},
}
},
],
});


@ -54,6 +54,11 @@ all: js/proxmox-backup-gui.js css/ext6-pbs.css
js:
mkdir js
.PHONY: OnlineHelpInfo.js
OnlineHelpInfo.js:
$(MAKE) -C ../docs onlinehelpinfo
mv ../docs/output/scanrefs/OnlineHelpInfo.js .
js/proxmox-backup-gui.js: js OnlineHelpInfo.js ${JSSRC}
cat OnlineHelpInfo.js ${JSSRC} >$@.tmp
mv $@.tmp $@


@ -1,6 +1,10 @@
var proxmoxOnlineHelpInfo = {
"pbs_documentation_index" : {
"link" : "/pbs-docs/index.html",
"title" : "Proxmox Backup Server Documentation Index"
}
const proxmoxOnlineHelpInfo = {
"pbs_documentation_index": {
"link": "/docs/index.html",
"title": "Proxmox Backup Server Documentation Index"
},
"chapter-zfs": {
"link": "/docs/sysadmin.html#chapter-zfs",
"title": "ZFS on Linux"
}
};


@ -41,9 +41,9 @@ Ext.define('PBS.Utils', {
let files = data.count;
if (mixed > 0) {
return PBS.Utils.cryptmap.indexOf('mixed');
} else if (files === encrypted) {
} else if (files === encrypted && encrypted > 0) {
return PBS.Utils.cryptmap.indexOf('encrypt');
} else if (files === signed) {
} else if (files === signed && signed > 0) {
return PBS.Utils.cryptmap.indexOf('sign-only');
} else if ((signed+encrypted) === 0) {
return PBS.Utils.cryptmap.indexOf('none');


@ -107,11 +107,27 @@ Ext.define('PBS.config.SyncJobView', {
return '';
}
if (value === 'OK') {
return `<i class="fa fa-check good"></i> ${gettext("OK")}`;
let parsed = Proxmox.Utils.parse_task_status(value);
let text = value;
let icon = '';
switch (parsed) {
case 'unknown':
icon = 'question faded';
text = Proxmox.Utils.unknownText;
break;
case 'error':
icon = 'times critical';
text = Proxmox.Utils.errorText + ': ' + value;
break;
case 'warning':
icon = 'exclamation warning';
break;
case 'ok':
icon = 'check good';
text = gettext("OK");
}
return `<i class="fa fa-times critical"></i> ${gettext("Error")}:${value}`;
return `<i class="fa fa-${icon}"></i> ${text}`;
},
render_next_run: function(value, metadat, record) {
@ -198,26 +214,26 @@ Ext.define('PBS.config.SyncJobView', {
columns: [
{
header: gettext('Sync Job'),
width: 200,
width: 100,
sortable: true,
renderer: Ext.String.htmlEncode,
dataIndex: 'id',
},
{
header: gettext('Remote'),
width: 200,
width: 100,
sortable: true,
dataIndex: 'remote',
},
{
header: gettext('Remote Store'),
width: 200,
width: 100,
sortable: true,
dataIndex: 'remote-store',
},
{
header: gettext('Local Store'),
width: 200,
width: 100,
sortable: true,
dataIndex: 'store',
},


@ -4,15 +4,17 @@ Ext.define('PBS.data.CalendarEventExamples', {
field: ['value', 'text'],
data: [
//FIXME { value: '*/30', text: Ext.String.format(gettext("Every {0} minutes"), 30) },
{ value: '*/30', text: Ext.String.format(gettext("Every {0} minutes"), 30) },
{ value: 'hourly', text: gettext("Every hour") },
//FIXME { value: '*/2:00', text: gettext("Every two hours") },
{ value: '*/2:00', text: gettext("Every two hours") },
{ value: '2,22:30', text: gettext("Every day") + " 02:30, 22:30" },
{ value: 'daily', text: gettext("Every day") + " 00:00" },
{ value: 'mon..fri', text: gettext("Monday to Friday") + " 00:00" },
//FIXME{ value: 'mon..fri */1:00', text: gettext("Monday to Friday") + ': ' + gettext("hourly") },
{ value: 'mon..fri *:00', text: gettext("Monday to Friday") + ', ' + gettext("hourly") },
{ value: 'sat 18:15', text: gettext("Every Saturday") + " 18:15" },
//FIXME{ value: 'monthly', text: gettext("Every 1st of Month") + " 00:00" }, // not yet possible..
{ value: 'monthly', text: gettext("Every first day of the Month") + " 00:00" },
{ value: 'sat *-1..7 02:00', text: gettext("Every first Saturday of the month") + " 02:00" },
{ value: 'yearly', text: gettext("First day of the year") + " 00:00" },
],
});
@ -26,6 +28,8 @@ Ext.define('PBS.form.CalendarEvent', {
displayField: 'text',
queryMode: 'local',
matchFieldWidth: false,
config: {
deleteEmpty: true,
},


@ -12,7 +12,11 @@
<link rel="stylesheet" type="text/css" href="/fontawesome/css/font-awesome.css" />
<link rel="stylesheet" type="text/css" href="/widgettoolkit/css/ext6-pmx.css" />
<link rel="stylesheet" type="text/css" href="/css/ext6-pbs.css" />
{{#if language}}
<script type='text/javascript' src='/locale/pbs-lang-{{ language }}.js'></script>
{{else}}
<script type='text/javascript'> function gettext(buf) { return buf; } </script>
{{/if}}
{{#if debug}}
<script type="text/javascript" src="/extjs/ext-all-debug.js"></script>
<script type="text/javascript" src="/extjs/charts-debug.js"></script>