Compare commits

...

89 Commits

Author SHA1 Message Date
3e4a67f350 bump version to 0.8.16-1 2020-09-11 15:55:37 +02:00
e0e5b4426a BackupDir: make constructor fallible
since converting from an i64 epoch timestamp to a DateTime is not always
possible. Previously, passing an invalid backup-time from client to server
(or vice versa) panicked the corresponding tokio task. Now we get proper
error messages including the invalid timestamp.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:49:35 +02:00
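A minimal sketch of the idea behind the commit above (illustrative only, not the actual BackupDir code; the error type is an assumption): a constructor that returns a Result instead of panicking, using chrono's non-panicking timestamp_opt:

    use chrono::{DateTime, TimeZone, Utc};

    struct BackupDir {
        backup_time: DateTime<Utc>,
    }

    impl BackupDir {
        // fallible constructor: report the bad timestamp instead of panicking
        fn new(backup_time: i64) -> Result<Self, String> {
            let backup_time = Utc
                .timestamp_opt(backup_time, 0)
                .single()
                .ok_or_else(|| format!("invalid backup-time {}", backup_time))?;
            Ok(BackupDir { backup_time })
        }
    }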
7158b304f5 handle invalid mtime when formatting entries
otherwise operations like the catalog shell panic when viewing pxar archives
containing such entries, e.g. with an mtime very far in the future.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:48:43 +02:00
833eca6d2f use non-panicky timestamp_opt where appropriate
by either printing the original, out-of-range timestamp as-is, or
bailing with a proper error message instead of panicking.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:48:24 +02:00
151acf5d96 don't truncate DateTime nanoseconds
where we don't care about them anyway..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:48:10 +02:00
4a363fb4a7 catalog dump: preserve original mtime
even if it can't be handled by chrono. silently replacing it with epoch
0 is confusing..

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-11 15:43:54 +02:00
229adeb746 ui/docs: add onlineHelp button for syncjobs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-11 12:17:54 +02:00
1eff9a1e89 docs: add section for calendar events
and move the info defined in 'Schedules' there.
The explanation of calendar events is inspired by the systemd.time
manpage and the PVE docs (the examples especially are mostly
copied/adapted from there).

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-11 12:17:42 +02:00
ed4f0a0edc ui: fix calendarevent examples
*/x is valid syntax for us, but not for systemd, so to avoid confusing users,
write it in a form that systemd would also accept

also, a timespec must have at least hours and minutes

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-11 12:17:32 +02:00
13bed6226e tools/systemd/parse_time: enable */x syntax for calendar events
we support this in pve, so also support it here to have a more
consistent syntax

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-11 12:17:22 +02:00
d937daedb3 docs: set html img width limitation through css
avoid hardcoding the width in the docs themselves, so that other render
outputs can choose another size.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-11 11:10:08 +02:00
8cce51135c docs: do not render TODOs in release builds
they are not useful for end users...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-11 11:09:00 +02:00
0cfe1b3f13 docs: set GmbH as copyright holder
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-11 11:08:36 +02:00
05c16a6e59 docs: use alabaster theme
It's not all perfect (yet), but way cleaner and simpler to use than
the default sphinx one.

Do the scrolling for the fixed sidebar with custom code and make some
other slight adjustments.

Main issue for now is that the "Developer Appendix" is always shown
in the navigation tree, but we only include that toctree for
devbuilds...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-11 11:08:13 +02:00
3294b516d3 faq: fix typo
In note block:
    Proxmox Packup Server -> Proxmox Backup Server

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-10 15:18:13 +02:00
139bcedc53 benchmark: update TLS reference speed
We are now faster with recent patches.
2020-09-10 12:55:43 +02:00
cf9ea3c4c7 server: set http2 max frame size
else we get the default of 16k, which is quite low for our use case.
this improves the TLS upload benchmark speed by about 30-40% for me.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-10 12:43:51 +02:00
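As a hedged sketch of the setting mentioned above (the builder setup and the 4 MiB value are illustrative assumptions, not the server code), the frame size can be raised via the h2 crate's server builder:

    // Illustrative only: raise the HTTP/2 max frame size above the 16k default.
    fn make_h2_builder() -> h2::server::Builder {
        let mut builder = h2::server::Builder::new();
        builder.max_frame_size(4 * 1024 * 1024); // assumed example value
        builder
    }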
e84fde3e14 docs: faq: spell out PBS
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-10 12:35:04 +02:00
1de47507ff Add section "FAQ"
Adds an FAQ to the docs, based on questions that have
been appearing on the forum.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-10 11:33:00 +02:00
1a9948a488 examples/upload-speed.rs: pass new benchmark parameter 2020-09-10 09:34:51 +02:00
04c2731349 bump version to 0.8.15-1 2020-09-10 09:26:16 +02:00
5656888cc9 verify: fix done count
We need to filter out the benchmark group earlier
2020-09-10 09:06:33 +02:00
5fdc5a6f3d verify: skip benchmark directory 2020-09-10 08:44:18 +02:00
61d7b5013c add benchmark flag to backup creation for proper cleanup when running a benchmark
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
2020-09-10 08:25:24 +02:00
871181d984 mount: fix mount subcommand
fixes the error "manifest does not contain file 'X.pxar'" that occurs
when trying to mount a pxar archive with 'proxmox-backup-client mount'

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-10 07:21:16 +02:00
02939e178d ui: only mark backup encrypted if there are any files
if we have a stale backup without a manifest, we do not count
the remaining files in the backup dir anymore, but this means
we now have to check here whether there really are any encrypted files

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-10 07:18:51 +02:00
3be308b949 improve server->client tcp performance for high latency links
similar to the other fix: if we do not set the buffer size manually,
we get better performance for high-latency connections

restore benchmark from f.gruenbichler:

no delay, without patch: ~50MB/s
no delay, with patch: ~50MB/s
25ms delay, without patch: ~11MB/s
25ms delay, with patch: ~50MB/s

my own restore benchmark:

no delay, without patch: ~1.5GiB/s
no delay, with patch: ~1.5GiB/s
25ms delay, without patch: 30MiB/s
25ms delay, with patch: ~950MiB/s

for some more details about those benchmarks see
https://lists.proxmox.com/pipermail/pbs-devel/2020-September/000600.html

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-10 07:15:25 +02:00
83088644da fix #2983: improve tcp performance
by leaving the buffer sizes at their defaults, we get much better tcp
performance for high-latency links

throughput is still impacted by latency, but much less so when
leaving the sizes at default.
the disadvantage is slightly higher memory usage of the server
(details below)

my local benchmarks (proxmox-backup-client benchmark):

pbs client:
PVE Host
Epyc 7351P (16core/32thread)
64GB Memory

pbs server:
VM on Host
1 Socket, 4 Cores (Host CPU type)
4GB Memory

average of 3 runs, rounded to MB/s
                    | no delay |     1ms |     5ms |     10ms |    25ms |
without this patch  |  230MB/s |  55MB/s |  13MB/s |    7MB/s |   3MB/s |
with this patch     |  293MB/s | 293MB/s | 249MB/s |  241MB/s | 104MB/s |

memory usage (resident memory) of proxmox-backup-proxy:

                    | peak during benchmarks | after benchmarks |
without this patch  |                  144MB |            100MB |
with this patch     |                  145MB |            130MB |

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-10 07:15:12 +02:00
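A rough sketch of the idea behind the fix above (the commented-out calls and sizes are illustrative assumptions, not the removed code): stop pinning the socket buffers to a fixed size so the kernel can auto-tune the TCP window on high-latency links:

    use tokio::net::TcpStream;

    fn tune_socket(socket: &TcpStream) -> std::io::Result<()> {
        // Previously, something like this capped the window on 25ms+ links:
        // socket.set_recv_buffer_size(64 * 1024)?;
        // socket.set_send_buffer_size(64 * 1024)?;
        // Leaving the buffer sizes at their defaults lets the kernel grow them.
        socket.set_nodelay(true)?;
        Ok(())
    }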
14db8b52dc src/backup/chunk_store.rs: use ? instead of unwrap 2020-09-10 06:37:37 +02:00
597427afaf clean up .bad file handling in sweep_unused_chunks
Code cleanup, no functional change intended.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-10 06:31:22 +02:00
3cddfb29be backup: ensure no fixed index writers are left over either
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-10 06:29:38 +02:00
e15b76369a buildsys: upload client packages also to PMG repo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 16:48:31 +02:00
d7c1251435 ui: calendar event: disable matchFieldWidth for picker
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 16:48:31 +02:00
ea3ce82a74 ui: calendar event: enable more complex examples again
now that they (should) work.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 16:48:31 +02:00
092378ba92 Change "data store" to "datastore" throughout docs
Before, there were mixed usages of "data store" and
"datastore" throughout the docs.
This improves consistency in the docs by using only
"datastore" throughout.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 13:12:01 +02:00
068e526862 backup: touch all chunks, even if they exist
We need to update the atime of chunk files if they already exist,
otherwise a concurrently running GC could sweep them away.

This is protected with ChunkStore.mutex, so the fstat/unlink does not
race with touching.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:51:03 +02:00
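A minimal sketch of the touch step described above, assuming the filetime crate for brevity (the real code works on the chunk store's directory handles and holds the mutex mentioned in the commit):

    use filetime::FileTime;
    use std::path::Path;

    // bump atime/mtime so a concurrent GC does not treat the chunk as unused
    fn touch_chunk(chunk_path: &Path) -> std::io::Result<()> {
        let now = FileTime::now();
        filetime::set_file_times(chunk_path, now, now)
    }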
a9767cf7de gc: remove .bad files on garbage collect
The iterator of get_chunk_iterator is extended with a third parameter
indicating whether the current file is a chunk (false) or a .bad file
(true).

Count their sizes towards the total of removed bytes, since removing them
also frees disk space.

.bad files are only deleted if the corresponding chunk exists, i.e. has
been rewritten. Otherwise we might delete data only marked bad because
of transient errors.

While at it, also clean up and use nix::unistd::unlinkat instead of
unsafe libc calls.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:43:13 +02:00
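For the unlink call mentioned above, a hedged sketch using nix's safe wrapper (the function name and surrounding logic are illustrative, not the GC code itself):

    use nix::unistd::{unlinkat, UnlinkatFlags};
    use std::os::unix::io::RawFd;

    // remove a .bad file relative to an already-open datastore directory fd
    fn remove_bad_file(dir_fd: RawFd, file_name: &str) -> nix::Result<()> {
        unlinkat(Some(dir_fd), file_name, UnlinkatFlags::NoRemoveDir)
    }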
aadcc2815c cleanup rename_corrupted_chunk: avoid duplicate format macro 2020-09-08 12:29:53 +02:00
0f3b7efa84 verify: rename corrupted chunks with .bad extension
This ensures that following backups will always upload the chunk,
thereby replacing it with a correct version again.

The format for renaming is <digest>.<counter>.bad, where <counter> is used if
a chunk is found to be bad again before a GC cleans it up.

Care has been taken to deliberately only rename a chunk in conditions
where it is guaranteed to be an error in the chunk itself. Otherwise a
broken index file could lead to an unwanted mass-rename of chunks.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:20:57 +02:00
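A small sketch of the naming scheme described above (a hypothetical helper, not the verify code itself): pick the first free <digest>.<counter>.bad name, so a repeated failure before GC gets a new counter:

    use std::path::{Path, PathBuf};

    fn bad_chunk_name(chunk_path: &Path) -> Option<PathBuf> {
        // try <digest>.0.bad, <digest>.1.bad, ... and take the first free name
        (0..9)
            .map(|counter| PathBuf::from(format!("{}.{}.bad", chunk_path.display(), counter)))
            .find(|candidate| !candidate.exists())
    }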
7c77e2f94a verify: fix log units
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-08 12:10:19 +02:00
abd4c4cb8c ui: add translation support
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 08:54:33 +02:00
09f12d1cf3 tools: rename extract_auth_cookie to extract_cookie
It does nothing specific to authentication..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-08 08:54:33 +02:00
1db4cfb308 tools/systemd/time: add tests for multivalue fields
we did this wrong earlier, so it makes sense to add regression tests

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-08 07:09:43 +02:00
a4c1143664 server/worker_task: fix upid_read_status
a range from high to low in Rust results in an empty range
(see the std::ops::Range documentation),
so we need to generate the range as 0..data.len() and then reverse it

also, the task log contains a newline at the end, so we have to remove
that (should it exist)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-08 07:06:22 +02:00
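A minimal illustration of the two points above (a standalone sketch, not the actual upid_read_status code): build the range ascending and reverse it, and drop a trailing newline first:

    fn last_line(data: &[u8]) -> &[u8] {
        // the task log may end with a newline; ignore it before searching
        let data = if data.ends_with(b"\n") { &data[..data.len() - 1] } else { data };
        // note: data.len()..0 would be an empty range, so go 0..len and reverse
        for i in (0..data.len()).rev() {
            if data[i] == b'\n' {
                return &data[i + 1..];
            }
        }
        data
    }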
0623674f44 Edit section "Network Management"
Following changes made:
    * Remove empty column "method6" from network list output,
      so the table fits in the console code-block
    * Walk through a bond, rather than a bridge, as it may be a more
      common setup case

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:30:42 +02:00
2dd58db792 PVE integration: Add note about hiding password
Add a note to section "Proxmox VE integration" explaining
how to avoid passing the password as plain text when using the
pvesm command.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:19:20 +02:00
e11cfb93c0 change order of "Image Archives" and "File Archives"
Change the order of the "Image Archives" and "File
Archives" subsections, so that they match the order in
which they are introduced in the section "Backup
Content" (minor readability improvement).

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:19:09 +02:00
bc0608955e Sync Jobs: add screenshots and explanation
Add screenshots of sync jobs panel in web interface
and explain how to carry out related tasks from it.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:18:50 +02:00
36be19218e Network Config: Add screenshots and explanation
Add screenshots for network configuration and explain
how to carry out related tasks using the web interface.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:18:15 +02:00
9fa39a46ba User Management: Add screenshots and explanation
Add screenshots for user management section in web
interface and explain how to carry out relevant tasks
using it.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:18:01 +02:00
ff30b912a0 Datastore Config: add screenshots and explanation
Add screenshots from the datastore section of the
web interface and explain how to carry out tasks using
the web interface.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:17:50 +02:00
b0c10a88a3 Disk Management: Add screenshots and explanation
This adds screenshots from the web interface for the
sections related to disk management and adds explanation
of how to carry out tasks using the web interface.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:12:54 +02:00
ccbe6547a7 Add screenshots of web interface
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-09-08 06:12:17 +02:00
32afd60336 src/tools/systemd/time.rs: derive Clone 2020-09-07 12:37:08 +02:00
02e47b8d6e SYSTEMD_CALENDAR_EVENT_SCHEMA: fix wrong schema description 2020-09-07 09:07:55 +02:00
44055cac4d tools/systemd/time: enable dates for calendarevents
this implements parsing and calculating calendarevents that have a
basic date component (year-mon-day) with the usual syntax options
(*, ranges, lists)

and some special events:
monthly
yearly/annually (like systemd)
quarterly
semiannually,semi-annually (like systemd)

includes some regression tests

the ~ syntax for days (the last x days of the month) is not yet
implemented

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:36:29 +02:00
1dfc09cb6b tools/systemd/time: fix signed conversion
instead of using 'as' and silently converting incorrectly,
use the TryInto trait and raise an error if we cannot convert

this should only happen if we have a negative year,
but this is expected (we do not want schedules from before the year 0)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:35:38 +02:00
48c56024aa tools/systemd/tm_editor: add setter/getter for months/years/days
add_* are modeled after add_days

subtract one for set_mon to have a consistent interface for all fields
(i.e. getter/setter return/expect the 'real' number, not the ones
in the tm struct)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:34:27 +02:00
cf103266b3 tools/systemd/tm_editor: move conversion of the year into getter and setter
the tm struct contains the year - 1900 but we added that

if we want to use the libc normalization correctly, the tm struct
must have the correct year in it, else the computations for timezones,
etc. fail

instead add a getter that adds the years and a setter that subtracts it again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:34:04 +02:00
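A hedged sketch of the offset handling described above (field and method names are illustrative): the libc tm struct stores the year as year - 1900, so the getter adds the offset back and the setter subtracts it:

    struct TmEditor {
        t: libc::tm,
    }

    impl TmEditor {
        fn year(&self) -> libc::c_int {
            self.t.tm_year + 1900 // tm_year counts years since 1900
        }
        fn set_year(&mut self, year: libc::c_int) {
            self.t.tm_year = year - 1900;
        }
    }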
d5cf8f606c tools/systemd/time: fix selection for multiple options
if we give multiple options/ranges for a value, e.g.
2,4,8
we always choose the biggest, instead of the smallest that is next

this happens because in DateTimeValue::find_next(value)
'next' can be set multiple times and we set it when the new
value was *bigger* than the last found 'next' value, when in reality
we have to choose the *smallest* next we can find

reverse the comparison operator to fix this

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:33:42 +02:00
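A minimal sketch of the selection logic after the fix (standalone and simplified, not the actual DateTimeValue code): among all candidates greater than the current value, keep the smallest one:

    fn find_next(options: &[u32], value: u32) -> Option<u32> {
        let mut next: Option<u32> = None;
        for &candidate in options {
            if candidate > value {
                match next {
                    // the comparison must keep the *smallest* matching candidate
                    Some(n) if candidate >= n => {}
                    _ => next = Some(candidate),
                }
            }
        }
        next
    }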
ce7ab28cfa tools/systemd/parse_time: error out on invalid ranges
if the range is reversed (bigger..smaller) we will never find a value,
so error out during parsing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:28:48 +02:00
07ca6f6e66 tools/systemd/tm_editor: remove reset_time from add_days and document it
we never passed 'false' to it anyway so remove it
(we can add it again if we should ever need it)

also remove the adding of wday (gets normalized anyway)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:28:24 +02:00
15ec790a40 tools/systemd/time: convert the resulting timestamp into an option
we want to use dates for the calendarspec, and with that there are some
impossible combinations that cannot be detected during parsing
(e.g. some datetimes do not exist in some timezones, and the timezone
can change after setting the schedule)

so finding no timestamp is not an error anymore but a valid result

we omit logging in that case (since it is not an error anymore)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:28:05 +02:00
cb73b2d69c tools/systemd/time: move continue out of the if/else
will be called anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:27:20 +02:00
c931c87173 tools/systemd/time: let libc normalize time for us
mktime/gmtime can normalize time and can even handle special timezone
cases, like the fact that the time 2:30 on specific day/timezone combos
does not exist

we have to convert the signature of all functions that use
normalize_time since mktime/gmtime can return an EOVERFLOW
but if this happens there is no way we can find a good time anyway

since normalize_time will always set wday according to the rest of the
time, remove set_wday

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:26:40 +02:00
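A rough sketch of what "let libc normalize" means here, under the assumption that timegm/mktime are used for UTC/local time respectively (the signature and error handling are illustrative, not the actual normalize_time code):

    // Normalize an over- or underflowed struct tm (e.g. "day 32") in place and
    // return the corresponding epoch; both calls also fix up tm_wday for us.
    fn normalize_time(t: &mut libc::tm, utc: bool) -> Result<i64, String> {
        let epoch = unsafe {
            if utc { libc::timegm(t) } else { libc::mktime(t) }
        };
        if epoch == -1 {
            return Err("mktime/timegm failed (e.g. EOVERFLOW)".to_string());
        }
        Ok(epoch as i64)
    }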
28a0a9343c tools/systemd/tm_editor: remove TMChanges optimization
while it was correct, there was no measurable speed gain
(a benchmark yielded 2.8 ms for a spec that did not find a timestamp either way)
so remove it for simpler code

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-04 15:26:04 +02:00
56b666458c server/worker_task: fix 'unknown' status for some big task logs
when trying to parse the task status, we seek 8k from the end,
which may land in the middle of a line, so the datetime parsing
can fail (when the log message contains ': ')

This patch does a fast search for the last line, and avoids the
'lines' iterator.
2020-09-04 10:41:13 +02:00
cd6ddb5a69 depend on proxmox 0.3.5 2020-09-04 08:11:53 +02:00
ecd55041a2 fix #2978: allow non-root to view datastore usage
for datastores where the requesting user has read or write permissions,
since the API method itself filters by that already. this is the same
permission setting and filtering that the datastore list API endpoint
does.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-09-04 06:18:20 +02:00
e7e8e6d5f7 online help: use a phony target and regenerate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 14:41:03 +02:00
49df8ac115 docs: add prototype sphinx extension for online help
goes through the sections in the documents and creates the
OnlineHelpInfo.js file from the explicitly defined section labels which
are used in the js files via the 'onlineHelp' variable.
2020-09-02 14:38:27 +02:00
7397f4a390 bump version to 0.8.14-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 10:41:42 +02:00
8317873c06 gc: improve percentage done logs 2020-09-02 10:04:18 +02:00
deef63699e verify: also fail on server shutdown 2020-09-02 09:50:17 +02:00
c6e07769e9 ui: datastore content: eslint fixes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 09:30:57 +02:00
423df9b1f4 ui: datastore: show more granular verify state
Allows differentiating the following situations:
* some snapshots in a group were not verified
* how many snapshots failed to verify in a group
* all snapshots verified, but the last verification task was over 30 days
  ago

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 09:30:57 +02:00
c879e5af11 ui: datastore: mark row invalid if last snapshot verification failed
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-02 09:12:05 +02:00
63d9aca96f verify: log progress 2020-09-02 07:43:28 +02:00
c3b1da9e41 datastore content: search: set emptytext to searched columns
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-01 18:30:54 +02:00
46388e6aef datastore content: reduce count column width
Using 75 as the width, we can display up to 9999999, which allows
displaying over 19 years of snapshots done each minute, so quite
enough for the common cases.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-01 18:28:14 +02:00
484d439a7c datastore content: reload after verify
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-09-01 18:27:30 +02:00
ab6615134c d/postinst: always fixup termproxy user id and for all users
Anyone with a PAM account and Sys.Console access could have started a
termproxy session; adapt the regex accordingly.

Always test for broken entries and run the sed expression to make sure
eventually all occurrences of the broken syntax are fixed.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-09-01 18:02:11 +02:00
b1149ebb36 ui: DataStoreContent.js: fix wrong comma
should be semicolon

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-01 15:33:55 +02:00
1bfdae7933 ui: DataStoreContent: improve encrypted column
do not count files for which we do not have any information

such files exist in the backup dir, but are not in the manifest,
so we cannot use those files to determine whether the backups are
encrypted or not

this marks encrypted/signed backups with unencrypted client.log.blob files as
encrypted/signed (respectively) instead of 'Mixed'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-09-01 15:33:55 +02:00
4f09d31085 src/backup/verify.rs: use global hashes (instead of per group)
This makes verify more predictable.
2020-09-01 13:33:04 +02:00
58d73ddb1d src/backup/data_blob.rs: avoid useless &, data is already a reference 2020-09-01 12:56:25 +02:00
6b809ff59b src/backup/verify.rs: use separate thread to load data 2020-09-01 12:56:25 +02:00
afe08d2755 debian/control: fix versions 2020-09-01 10:19:40 +02:00
a7bc5d4eaf depend on proxmox 0.3.4 2020-08-28 06:32:33 +02:00
71 changed files with 1502 additions and 560 deletions

Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "proxmox-backup"
-version = "0.8.13"
+version = "0.8.16"
 authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
 edition = "2018"
 license = "AGPL-3"
@@ -26,7 +26,7 @@ futures = "0.3"
 h2 = { version = "0.2", features = ["stream"] }
 handlebars = "3.0"
 http = "0.2"
-hyper = "0.13"
+hyper = "0.13.6"
 lazy_static = "1.4"
 libc = "0.2"
 log = "0.4"
@@ -39,7 +39,7 @@ pam-sys = "0.5"
 percent-encoding = "2.1"
 pin-utils = "0.1.0"
 pathpatterns = "0.1.2"
-proxmox = { version = "0.3.3", features = [ "sortable-macro", "api-macro", "websocket" ] }
+proxmox = { version = "0.3.5", features = [ "sortable-macro", "api-macro", "websocket" ] }
 #proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
 #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
 proxmox-fuse = "0.1.0"

Makefile

@@ -150,4 +150,4 @@ upload: ${SERVER_DEB} ${CLIENT_DEB} ${DOC_DEB}
 # check if working directory is clean
 git diff --exit-code --stat && git diff --exit-code --stat --staged
 tar cf - ${SERVER_DEB} ${SERVER_DBG_DEB} ${DOC_DEB} | ssh -X repoman@repo.proxmox.com upload --product pbs --dist buster
-tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve" --dist buster
+tar cf - ${CLIENT_DEB} ${CLIENT_DBG_DEB} | ssh -X repoman@repo.proxmox.com upload --product "pbs,pve,pmg" --dist buster

debian/changelog

@@ -1,3 +1,68 @@
rust-proxmox-backup (0.8.16-1) unstable; urgency=medium

  * BackupDir: make constructor fallible
  * handle invalid mtime when formatting entries
  * ui/docs: add onlineHelp button for syncjobs
  * docs: add section for calendar events
  * tools/systemd/parse_time: enable */x syntax for calendar events
  * docs: set html img width limitation through css
  * docs: use alabaster theme
  * server: set http2 max frame size
  * doc: Add section "FAQ"

 -- Proxmox Support Team <support@proxmox.com>  Fri, 11 Sep 2020 15:54:57 +0200

rust-proxmox-backup (0.8.15-1) unstable; urgency=medium

  * verify: skip benchmark directory
  * add benchmark flag to backup creation for proper cleanup when running
    a benchmark
  * mount: fix mount subcommand
  * ui: only mark backup encrypted if there are any files
  * fix #2983: improve tcp performance
  * improve ui and docs
  * verify: rename corrupted chunks with .bad extension
  * gc: remove .bad files on garbage collect
  * ui: add translation support
  * server/worker_task: fix upid_read_status
  * tools/systemd/time: enable dates for calendarevents
  * server/worker_task: fix 'unknown' status for some big task logs

 -- Proxmox Support Team <support@proxmox.com>  Thu, 10 Sep 2020 09:25:59 +0200

rust-proxmox-backup (0.8.14-1) unstable; urgency=medium

  * verify speed up: use separate IO thread, use datastore-wide cache (instead
    of per group)
  * ui: datastore content: improve encrypted column
  * ui: datastore content: show more granular verify state, especially for
    backup group rows
  * verify: log progress in percent

 -- Proxmox Support Team <support@proxmox.com>  Wed, 02 Sep 2020 09:36:47 +0200

rust-proxmox-backup (0.8.13-1) unstable; urgency=medium

  * improve and add to documentation

@@ -429,4 +494,3 @@ proxmox-backup (0.1-1) unstable; urgency=medium

  * first try

 -- Proxmox Support Team <support@proxmox.com>  Fri, 30 Nov 2018 13:03:28 +0100

debian/control

@@ -34,10 +34,10 @@ Build-Depends: debhelper (>= 11),
 librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
 librust-percent-encoding-2+default-dev (>= 2.1-~~),
 librust-pin-utils-0.1+default-dev,
-librust-proxmox-0.3+api-macro-dev (>= 0.3.3-~~),
-librust-proxmox-0.3+default-dev (>= 0.3.3-~~),
-librust-proxmox-0.3+sortable-macro-dev (>= 0.3.3-~~),
-librust-proxmox-0.3+websocket-dev (>= 0.3.3-~~),
+librust-proxmox-0.3+api-macro-dev (>= 0.3.5-~~),
+librust-proxmox-0.3+default-dev (>= 0.3.5-~~),
+librust-proxmox-0.3+sortable-macro-dev (>= 0.3.5-~~),
+librust-proxmox-0.3+websocket-dev (>= 0.3.5-~~),
 librust-proxmox-fuse-0.1+default-dev,
 librust-pxar-0.6+default-dev,
 librust-pxar-0.6+futures-io-dev,
@@ -103,6 +103,7 @@ Depends: fonts-font-awesome,
 libjs-extjs (>= 6.0.1),
 libzstd1 (>= 1.3.8),
 lvm2,
+pbs-i18n,
 proxmox-backup-docs,
 proxmox-mini-journalreader,
 proxmox-widget-toolkit (>= 2.2-4),

debian/control.in

@@ -4,6 +4,7 @@ Depends: fonts-font-awesome,
 libjs-extjs (>= 6.0.1),
 libzstd1 (>= 1.3.8),
 lvm2,
+pbs-i18n,
 proxmox-backup-docs,
 proxmox-mini-journalreader,
 proxmox-widget-toolkit (>= 2.2-4),

debian/postinst

@@ -15,11 +15,10 @@ case "$1" in
         fi
         deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true

-        if test -n "$2"; then
-            if dpkg --compare-versions "$2" 'le' '0.8.10-1'; then
+        # FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
+        if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
             echo "Fixing up termproxy user id in task log..."
-            flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::root: /:termproxy::root@pam: /' /var/log/proxmox-backup/tasks/active
-            fi
+            flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active
         fi
         ;;

docs/Makefile

@@ -28,7 +28,6 @@ COMPILEDIR := ../target/debug
 SPHINXOPTS += -t devbuild
 endif
-
 # Sphinx internal variables.
 ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) .
@@ -68,9 +67,17 @@ proxmox-backup-manager.1: proxmox-backup-manager/man1.rst proxmox-backup-manage
 proxmox-backup-proxy.1: proxmox-backup-proxy/man1.rst proxmox-backup-proxy/description.rst
 	rst2man $< >$@

+.PHONY: onlinehelpinfo
+onlinehelpinfo:
+	@echo "Generating OnlineHelpInfo.js..."
+	$(SPHINXBUILD) -b proxmox-scanrefs $(ALLSPHINXOPTS) $(BUILDDIR)/scanrefs
+	@echo "Build finished. OnlineHelpInfo.js is in $(BUILDDIR)/scanrefs."
+
 .PHONY: html
 html: ${GENERATED_SYNOPSIS}
 	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
+	cp images/proxmox-logo.svg $(BUILDDIR)/html/_static/
+	cp custom.css $(BUILDDIR)/html/_static/
 	@echo
 	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

docs/_ext/proxmox-scanrefs.py (new file)

@@ -0,0 +1,133 @@
#!/usr/bin/env python3

# debugging stuff
from pprint import pprint

from typing import cast

import json
import re
import os
import io
from docutils import nodes

from sphinx.builders import Builder
from sphinx.util import logging

logger = logging.getLogger(__name__)

# refs are added in the following manner before the title of a section (note underscore and newline before title):
# .. _my-label:
#
# Section to ref
# --------------
#
#
# then referred to like (note missing underscore):
# "see :ref:`my-label`"
#
# the benefit of using this is if a label is explicitly set for a section,
# we can refer to it with this anchor #my-label in the html,
# even if the section name changes.
#
# see https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-ref

def scan_extjs_files(wwwdir="../www"): # a bit rough i know, but we can optimize later
    js_files = []
    used_anchors = []
    logger.info("scanning extjs files for onlineHelp definitions")
    for root, dirs, files in os.walk("{}".format(wwwdir)):
        #print(root, dirs, files)
        for filename in files:
            if filename.endswith('.js'):
                js_files.append(os.path.join(root, filename))
    for js_file in js_files:
        fd = open(js_file).read()
        match = re.search("onlineHelp:\s*[\'\"](.*?)[\'\"]", fd) # match object is tuple
        if match:
            anchor = match.groups()[0]
            anchor = re.sub('_', '-', anchor) # normalize labels
            logger.info("found onlineHelp: {} in {}".format(anchor, js_file))
            used_anchors.append(anchor)
    return used_anchors


def setup(app):
    logger.info('Mapping reference labels...')
    app.add_builder(ReflabelMapper)
    return {
        'version': '0.1',
        'parallel_read_safe': True,
        'parallel_write_safe': True,
    }


class ReflabelMapper(Builder):
    name = 'proxmox-scanrefs'

    def init(self):
        self.docnames = []
        self.env.online_help = {}
        self.env.online_help['pbs_documentation_index'] = {
            'link': '/docs/index.html',
            'title': 'Proxmox Backup Server Documentation Index',
        }
        self.env.used_anchors = scan_extjs_files()

        if not os.path.isdir(self.outdir):
            os.mkdir(self.outdir)

        self.output_filename = os.path.join(self.outdir, 'OnlineHelpInfo.js')
        self.output = io.open(self.output_filename, 'w', encoding='UTF-8')

    def write_doc(self, docname, doctree):
        for node in doctree.traverse(nodes.section):
            #pprint(vars(node))
            if hasattr(node, 'expect_referenced_by_id') and len(node['ids']) > 1: # explicit labels
                filename = self.env.doc2path(docname)
                filename_html = re.sub('.rst', '.html', filename)
                labelid = node['ids'][1] # [0] is predefined by sphinx, we need [1] for explicit ones
                title = cast(nodes.title, node[0])
                logger.info('traversing section {}'.format(title.astext()))
                ref_name = getattr(title, 'rawsource', title.astext())
                self.env.online_help[labelid] = {'link': '', 'title': ''}
                self.env.online_help[labelid]['link'] = "/docs/" + os.path.basename(filename_html) + "#{}".format(labelid)
                self.env.online_help[labelid]['title'] = ref_name
        return

    def get_outdated_docs(self):
        return 'all documents'

    def prepare_writing(self, docnames):
        return

    def get_target_uri(self, docname, typ=None):
        return ''

    def validate_anchors(self):
        #pprint(self.env.online_help)
        to_remove = []
        for anchor in self.env.used_anchors:
            if anchor not in self.env.online_help:
                logger.info("[-] anchor {} is missing from onlinehelp!".format(anchor))
        for anchor in self.env.online_help:
            if anchor not in self.env.used_anchors and anchor != 'pbs_documentation_index':
                logger.info("[*] anchor {} not used! deleting...".format(anchor))
                to_remove.append(anchor)
        for anchor in to_remove:
            self.env.online_help.pop(anchor, None)
        return

    def finish(self):
        # generate OnlineHelpInfo.js output
        self.validate_anchors()
        self.output.write("const proxmoxOnlineHelpInfo = ")
        self.output.write(json.dumps(self.env.online_help, indent=2))
        self.output.write(";\n")
        self.output.close()
        return

docs/administration-guide.rst

@ -24,6 +24,13 @@ good deduplication rates for file archives.
The Proxmox Backup Server supports both strategies. The Proxmox Backup Server supports both strategies.
Image Archives: ``<name>.img``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is used for virtual machine images and other large binary
data. Content is split into fixed-sized chunks.
File Archives: ``<name>.pxar`` File Archives: ``<name>.pxar``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -34,13 +41,6 @@ the :ref:`pxar-format`, split into variable-sized chunks. The format
is optimized to achieve good deduplication rates. is optimized to achieve good deduplication rates.
Image Archives: ``<name>.img``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is used for virtual machine images and other large binary
data. Content is split into fixed-sized chunks.
Binary Data (BLOBs) Binary Data (BLOBs)
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
@ -148,11 +148,17 @@ when setting up the backup server.
Disk Management Disk Management
~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-disks.png
:align: right
:alt: List of disks
Proxmox Backup Server comes with a set of disk utilities, which are Proxmox Backup Server comes with a set of disk utilities, which are
accessed using the ``disk`` subcommand. This subcommand allows you to initialize accessed using the ``disk`` subcommand. This subcommand allows you to initialize
disks, create various filesystems, and get information about the disks. disks, create various filesystems, and get information about the disks.
To view the disks connected to the system, use the ``list`` subcommand of To view the disks connected to the system, navigate to **Administration ->
Disks** in the web interface or use the ``list`` subcommand of
``disk``: ``disk``:
.. code-block:: console .. code-block:: console
@ -174,32 +180,33 @@ To initialize a disk with a new GPT, use the ``initialize`` subcommand:
# proxmox-backup-manager disk initialize sdX # proxmox-backup-manager disk initialize sdX
You can create an ``ext4`` or ``xfs`` filesystem on a disk, using ``fs .. image:: images/screenshots/pbs-gui-disks-dir-create.png
create``. The following command creates an ``ext4`` filesystem and passes the :align: right
``--add-datastore`` parameter, in order to automatically create a datastore on :alt: Create a directory
the disk (in this case ``sdd``). This will create a datastore at the location
``/mnt/datastore/store1``: You can create an ``ext4`` or ``xfs`` filesystem on a disk using ``fs
create``, or by navigating to **Administration -> Disks -> Directory** in the
web interface and creating one from there. The following command creates an
``ext4`` filesystem and passes the ``--add-datastore`` parameter, in order to
automatically create a datastore on the disk (in this case ``sdd``). This will
create a datastore at the location ``/mnt/datastore/store1``:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager disk fs create store1 --disk sdd --filesystem ext4 --add-datastore true # proxmox-backup-manager disk fs create store1 --disk sdd --filesystem ext4 --add-datastore true
create datastore 'store1' on disk sdd
Percentage done: 1
...
Percentage done: 99
TASK OK
You can also create a ``zpool`` with various raid levels. The command below .. image:: images/screenshots/pbs-gui-disks-zfs-create.png
creates a mirrored ``zpool`` using two disks (``sdb`` & ``sdc``) and mounts it :align: right
on the root directory (default): :alt: Create a directory
You can also create a ``zpool`` with various raid levels from **Administration
-> Disks -> Zpool** in the web interface, or by using ``zpool create``. The command
below creates a mirrored ``zpool`` using two disks (``sdb`` & ``sdc``) and
mounts it on the root directory (default):
.. code-block:: console .. code-block:: console
# proxmox-backup-manager disk zpool create zpool1 --devices sdb,sdc --raidlevel mirror # proxmox-backup-manager disk zpool create zpool1 --devices sdb,sdc --raidlevel mirror
create Mirror zpool 'zpool1' on devices 'sdb,sdc'
# "zpool" "create" "-o" "ashift=12" "zpool1" "mirror" "sdb" "sdc"
TASK OK
.. note:: .. note::
You can also pass the ``--add-datastore`` parameter here, to automatically You can also pass the ``--add-datastore`` parameter here, to automatically
@ -209,31 +216,57 @@ You can use ``disk fs list`` and ``disk zpool list`` to keep track of your
filesystems and zpools respectively. filesystems and zpools respectively.
If a disk supports S.M.A.R.T. capability, and you have this enabled, you can If a disk supports S.M.A.R.T. capability, and you have this enabled, you can
display S.M.A.R.T. attributes using the command: display S.M.A.R.T. attributes from the web interface or by using the command:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager disk smart-attributes sdX # proxmox-backup-manager disk smart-attributes sdX
Datastore Configuration Datastore Configuration
~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-datastore.png
:align: right
:alt: Datastore Overview
You can configure multiple datastores. Minimum one datastore needs to be You can configure multiple datastores. Minimum one datastore needs to be
configured. The datastore is identified by a simple `name` and points to a configured. The datastore is identified by a simple *name* and points to a
directory on the filesystem. Each datastore also has associated retention directory on the filesystem. Each datastore also has associated retention
settings of how many backup snapshots for each interval of ``hourly``, settings of how many backup snapshots for each interval of ``hourly``,
``daily``, ``weekly``, ``monthly``, ``yearly`` as well as a time-independent ``daily``, ``weekly``, ``monthly``, ``yearly`` as well as a time-independent
number of backups to keep in that store. :ref:`Pruning <pruning>` and number of backups to keep in that store. :ref:`Pruning <pruning>` and
:ref:`garbage collection <garbage-collection>` can also be configured to run :ref:`garbage collection <garbage-collection>` can also be configured to run
periodically based on a configured :term:`schedule` per datastore. periodically based on a configured schedule (see :ref:`calendar-events`) per datastore.
The following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1` Creating a Datastore
^^^^^^^^^^^^^^^^^^^^
.. image:: images/screenshots/pbs-gui-datastore-create-general.png
:align: right
:alt: Create a datastore
You can create a new datastore from the web GUI, by navigating to **Datastore** in
the menu tree and clicking **Create**. Here:
* *Name* refers to the name of the datastore
* *Backing Path* is the path to the directory upon which you want to create the
datastore
* *GC Schedule* refers to the time and intervals at which garbage collection
runs
* *Prune Schedule* refers to the frequency at which pruning takes place
* *Prune Options* set the amount of backups which you would like to keep (see :ref:`Pruning <pruning>`).
Alternatively you can create a new datastore from the command line. The
following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1`
.. code-block:: console .. code-block:: console
# proxmox-backup-manager datastore create store1 /backup/disk1/store1 # proxmox-backup-manager datastore create store1 /backup/disk1/store1
To list existing datastores run: Managing Datastores
^^^^^^^^^^^^^^^^^^^
To list existing datastores from the command line run:
.. code-block:: console .. code-block:: console
@ -244,13 +277,15 @@ To list existing datastores run:
│ store1 │ /backup/disk1/store1 │ This is my default storage. │ │ store1 │ /backup/disk1/store1 │ This is my default storage. │
└────────┴──────────────────────┴─────────────────────────────┘ └────────┴──────────────────────┴─────────────────────────────┘
You can change settings of a datastore, for example to set a prune and garbage You can change the garbage collection and prune settings of a datastore, by
collection schedule or retention settings using ``update`` subcommand and view editing the datastore from the GUI or by using the ``update`` subcommand. For
a datastore with the ``show`` subcommand: example, the below command changes the garbage collection schedule using the
``update`` subcommand and prints the properties of the datastore with the
``show`` subcommand:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager datastore update store1 --keep-last 7 --prune-schedule daily --gc-schedule 'Tue 04:27' # proxmox-backup-manager datastore update store1 --gc-schedule 'Tue 04:27'
# proxmox-backup-manager datastore show store1 # proxmox-backup-manager datastore show store1
┌────────────────┬─────────────────────────────┐ ┌────────────────┬─────────────────────────────┐
│ Name │ Value │ │ Name │ Value │
@ -328,6 +363,10 @@ directories will store the chunked data after a backup operation has been execut
User Management User Management
~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~
.. image:: images/screenshots/pbs-gui-user-management.png
:align: right
:alt: User management
Proxmox Backup Server supports several authentication realms, and you need to Proxmox Backup Server supports several authentication realms, and you need to
choose the realm when you add a new user. Possible realms are: choose the realm when you add a new user. Possible realms are:
@ -352,19 +391,21 @@ users:
│ root@pam │ 1 │ │ │ │ │ Superuser │ │ root@pam │ 1 │ │ │ │ │ Superuser │
└─────────────┴────────┴────────┴───────────┴──────────┴────────────────┴────────────────────┘ └─────────────┴────────┴────────┴───────────┴──────────┴────────────────┴────────────────────┘
.. image:: images/screenshots/pbs-gui-user-management-add-user.png
:align: right
:alt: Add a new user
The superuser has full administration rights on everything, so you The superuser has full administration rights on everything, so you
normally want to add other users with less privileges: normally want to add other users with less privileges. You can create a new
user with the ``user create`` subcommand or through the web interface, under
**Configuration -> User Management**. The ``create`` subcommand lets you specify
many options like ``--email`` or ``--password``. You can update or change any
user properties using the ``update`` subcommand later (**Edit** in the GUI):
.. code-block:: console .. code-block:: console
# proxmox-backup-manager user create john@pbs --email john@example.com # proxmox-backup-manager user create john@pbs --email john@example.com
The create command lets you specify many options like ``--email`` or
``--password``. You can update or change any of them using the
update command later:
.. code-block:: console
# proxmox-backup-manager user update john@pbs --firstname John --lastname Smith # proxmox-backup-manager user update john@pbs --firstname John --lastname Smith
# proxmox-backup-manager user update john@pbs --comment "An example user." # proxmox-backup-manager user update john@pbs --comment "An example user."
@ -442,9 +483,14 @@ following roles exist:
**RemoteSyncOperator** **RemoteSyncOperator**
Is allowed to read data from a remote. Is allowed to read data from a remote.
You can use the ``acl`` subcommand to manage and monitor user permissions. For :align: right
example, the command below will add the user ``john@pbs`` as a :alt: Add permissions for user
**DatastoreAdmin** for the data store ``store1``, located at ``/backup/disk1/store1``:
You can manage datastore permissions from **Configuration -> Permissions** in
the web interface. Likewise, you can use the ``acl`` subcommand to manage and
monitor user permissions from the command line. For example, the command below
will add the user ``john@pbs`` as a **DatastoreAdmin** for the datastore
``store1``, located at ``/backup/disk1/store1``:
.. code-block:: console .. code-block:: console
@ -461,81 +507,97 @@ You can monitor the roles of each user using the following command:
│ john@pbs │ /datastore/disk1 │ 1 │ DatastoreAdmin │ │ john@pbs │ /datastore/disk1 │ 1 │ DatastoreAdmin │
└──────────┴──────────────────┴───────────┴────────────────┘ └──────────┴──────────────────┴───────────┴────────────────┘
A single user can be assigned multiple permission sets for different data stores. A single user can be assigned multiple permission sets for different datastores.
.. Note:: .. Note::
Naming convention is important here. For data stores on the host, Naming convention is important here. For datastores on the host,
you must use the convention ``/datastore/{storename}``. For example, to set you must use the convention ``/datastore/{storename}``. For example, to set
permissions for a data store mounted at ``/mnt/backup/disk4/store2``, you would use permissions for a datastore mounted at ``/mnt/backup/disk4/store2``, you would use
``/datastore/store2`` for the path. For remote stores, use the convention ``/datastore/store2`` for the path. For remote stores, use the convention
``/remote/{remote}/{storename}``, where ``{remote}`` signifies the name of the ``/remote/{remote}/{storename}``, where ``{remote}`` signifies the name of the
remote (see `Remote` below) and ``{storename}`` is the name of the data store on remote (see `Remote` below) and ``{storename}`` is the name of the datastore on
the remote. the remote.
Network Management Network Management
~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~
Proxmox Backup Server provides an interface for network configuration, through the
``network`` subcommand. This allows you to carry out some basic network Proxmox Backup Server provides both a web interface and a command line tool for
management tasks such as adding, configuring and removing network interfaces. network configuration. You can find the configuration options in the web
interface under the **Network Interfaces** section of the **Configuration** menu
tree item. The command line tool is accessed via the ``network`` subcommand.
These interfaces allow you to carry out some basic network management tasks,
such as adding, configuring, and removing network interfaces.
.. note:: Any changes made to the network configuration are not
applied, until you click on **Apply Configuration** or enter the ``network
reload`` command. This allows you to make many changes at once. It also allows
you to ensure that your changes are correct before applying them, as making a
mistake here can render the server inaccessible over the network.
To get a list of available interfaces, use the following command: To get a list of available interfaces, use the following command:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager network list # proxmox-backup-manager network list
┌───────┬────────┬───────────┬────────┬─────────┬───────────────────┬──────────────┬──────────────┐ ┌───────┬────────┬───────────┬────────┬─────────────┬──────────────┬──────────────┐
│ name │ type │ autostart │ method │ method6 │ address │ gateway │ ports/slaves │ │ name │ type │ autostart │ method │ address │ gateway │ ports/slaves │
╞═══════╪════════╪═══════════╪════════╪═════════╪═══════════════════╪══════════════╪══════════════╡ ╞═══════╪════════╪═══════════╪════════╪═════════════╪══════════════╪══════════════╡
│ bond0 │ bond │ 1 │ manual │ │ │ │ ens18 ens19 │ │ bond0 │ bond │ 1 │ static │ x.x.x.x/x │ x.x.x.x │ ens18 ens19 │
├───────┼────────┼───────────┼────────┼─────────┼───────────────────┼──────────────┼──────────────┤ ├───────┼────────┼───────────┼────────┼─────────────┼──────────────┼──────────────┤
│ ens18 │ eth │ 1 │ manual │ │ │ │ │ ens18 │ eth │ 1 │ manual │ │ │ │
├───────┼────────┼───────────┼────────┼─────────┼───────────────────┼──────────────┼──────────────┤ ├───────┼────────┼───────────┼────────┼─────────────┼──────────────┼──────────────┤
│ ens19 │ eth │ 1 │ manual │ │ │ │ │ ens19 │ eth │ 1 │ manual │ │ │ │
├───────┼────────┼───────────────────┼────────────────────────────┼──────────────────────────── ───────┴───────────────────┴─────────────────────────────────────────────────
│ vmbr0 │ bridge │ 1 │ static │ │ x.x.x.x/x │ x.x.x.x │ bond0 │
└───────┴────────┴───────────┴────────┴─────────┴───────────────────┴──────────────┴──────────────┘ .. image:: images/screenshots/pbs-gui-network-create-bond.png
:align: right
:alt: Add a network interface
To add a new network interface, use the ``create`` subcommand with the relevant To add a new network interface, use the ``create`` subcommand with the relevant
parameters. The following command shows a template for creating a new bridge: parameters. The following command shows a template for creating the bond shown
in the list above:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager network create vmbr1 --autostart true --cidr x.x.x.x/x --gateway x.x.x.x --bridge_ports iface_name --type bridge # proxmox-backup-manager network create bond0 --type bond --bond_mode active-backup --slaves ens18,ens19 --autostart true --cidr x.x.x.x/x --gateway x.x.x.x
You can make changes to the configuration of a network interface with the You can make changes to the configuration of a network interface with the
``update`` subcommand: ``update`` subcommand:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager network update vmbr1 --cidr y.y.y.y/y # proxmox-backup-manager network update bond0 --cidr y.y.y.y/y
You can also remove a network interface: You can also remove a network interface:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager network remove vmbr1 # proxmox-backup-manager network remove bond0
To view the changes made to the network configuration file, before committing The pending changes for the network configuration file will appear at the bottom of the
them, use the command: web interface. You can also view these changes, by using the command:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager network changes # proxmox-backup-manager network changes
If you would like to cancel all changes at this point, you can do this using: If you would like to cancel all changes at this point, you can either click on
the **Revert** button or use the following command:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager network revert # proxmox-backup-manager network revert
If you are happy with the changes and would like to write them into the If you are happy with the changes and would like to write them into the
configuration file, the command is: configuration file, select **Apply Configuration**. The corresponding command
is:
.. code-block:: console .. code-block:: console
# proxmox-backup-manager network reload # proxmox-backup-manager network reload
You can also configure DNS settings using the ``dns`` subcommand of You can also configure DNS settings, from the **DNS** section
of **Configuration** or by using the ``dns`` subcommand of
``proxmox-backup-manager``. ``proxmox-backup-manager``.
:term:`Remote` :term:`Remote`
@ -543,18 +605,26 @@ You can also configure DNS settings using the ``dns`` subcommand of
A remote refers to a separate Proxmox Backup Server installation and a user on that A remote refers to a separate Proxmox Backup Server installation and a user on that
installation, from which you can `sync` datastores to a local datastore with a installation, from which you can `sync` datastores to a local datastore with a
`Sync Job`. `Sync Job`. You can configure remotes in the web interface, under **Configuration
-> Remotes**. Alternatively, you can use the ``remote`` subcommand.
.. image:: images/screenshots/pbs-gui-remote-add.png
.. image:: images/screenshots/pbs-gui-permissions-add.png
:align: right
:alt: Add a remote
To add a remote, you need its hostname or ip, a userid and password on the To add a remote, you need its hostname or ip, a userid and password on the
remote, and its certificate fingerprint. To get the fingerprint, use the remote, and its certificate fingerprint. To get the fingerprint, use the
``proxmox-backup-manager cert info`` command on the remote. ``proxmox-backup-manager cert info`` command on the remote, or navigate to
**Dashboard** in the remote's web interface and select **Show Fingerprint**.
.. code-block:: console .. code-block:: console
# proxmox-backup-manager cert info |grep Fingerprint # proxmox-backup-manager cert info |grep Fingerprint
Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
Using the information specified above, add the remote with: Using the information specified above, you can add a remote from the **Remotes**
configuration panel, or by using the command:
.. code-block:: console .. code-block:: console
@ -574,14 +644,20 @@ Use the ``list``, ``show``, ``update``, ``remove`` subcommands of
└──────┴──────────────┴──────────┴───────────────────────────────────────────┴─────────┘ └──────┴──────────────┴──────────┴───────────────────────────────────────────┴─────────┘
# proxmox-backup-manager remote remove pbs2 # proxmox-backup-manager remote remove pbs2
.. _syncjobs:
Sync Jobs Sync Jobs
~~~~~~~~~ ~~~~~~~~~
Sync jobs are configured to pull the contents of a datastore on a `Remote` to a .. image:: images/screenshots/pbs-gui-syncjob-add.png
local datastore. You can either start the sync job manually on the GUI or :align: right
provide it with a :term:`schedule` to run regularly. The :alt: Add a remote
``proxmox-backup-manager sync-job`` command is used to manage sync jobs:
Sync jobs are configured to pull the contents of a datastore on a **Remote** to a
local datastore. You can either start a sync job manually on the GUI or
provide it with a schedule (see :ref:`calendar-events`) to run regularly. You can manage sync jobs
under **Configuration -> Sync Jobs** in the web interface, or using the
``proxmox-backup-manager sync-job`` command:
.. code-block:: console .. code-block:: console
@ -595,12 +671,13 @@ provide it with a :term:`schedule` to run regularly. The
└────────────┴───────┴────────┴──────────────┴───────────┴─────────┘ └────────────┴───────┴────────┴──────────────┴───────────┴─────────┘
# proxmox-backup-manager sync-job remove pbs2-local # proxmox-backup-manager sync-job remove pbs2-local
Garbage Collection
~~~~~~~~~~~~~~~~~~

You can monitor and run :ref:`garbage collection <garbage-collection>` on the
Proxmox Backup Server using the ``garbage-collection`` subcommand of
``proxmox-backup-manager``. You can use the ``start`` subcommand to manually start garbage
collection on an entire datastore and the ``status`` subcommand to see
attributes relating to the :ref:`garbage collection <garbage-collection>`.
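A minimal sketch, assuming a datastore named ``store2``:

.. code-block:: console

   # proxmox-backup-manager garbage-collection start store2
   # proxmox-backup-manager garbage-collection status store2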
@ -1210,7 +1287,7 @@ Garbage Collection
~~~~~~~~~~~~~~~~~~

The ``prune`` command removes only the backup index files, not the data
from the datastore. This task is left to the garbage collection
command. It is recommended to carry out garbage collection on a regular basis.

The garbage collection works in two phases. In the first phase, all
@ -1307,6 +1384,10 @@ as ``user1@pbs``.
   # pvesm add pbs store2 --server localhost --datastore store2
   # pvesm set store2 --username user1@pbs --password <secret>

.. note:: If you would rather not pass your password as plain text, you can pass
   the ``--password`` parameter without any arguments. This will cause the
   program to prompt you for a password when you enter the command.
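In that case, the invocation might look like the following sketch (the exact prompt
text is an assumption, not taken from ``pvesm`` output):

.. code-block:: console

   # pvesm set store2 --password
   Enter new password: ********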
If your backup server uses a self-signed certificate, you need to add
the certificate fingerprint to the configuration. You can get the
fingerprint by running the following command on the backup server:

docs/calendarevents.rst Normal file

@ -0,0 +1,100 @@
.. _calendar-events:
Calendar Events
===============
Introduction and Format
-----------------------
Certain tasks, for example pruning and garbage collection, need to be
performed on a regular basis. Proxmox Backup Server uses a format inspired
by the systemd Time and Date Specification (see `systemd.time manpage`_)
called `calendar events` for its schedules.
`Calendar events` are expressions to specify one or more points in time.
They are mostly compatible with systemd's calendar events.
The general format is as follows:
.. code-block:: console
:caption: Calendar event
[WEEKDAY] [[YEARS-]MONTHS-DAYS] [HOURS:MINUTES[:SECONDS]]
Note that there has to be at least a weekday, date, or time part.
If the weekday or date part is omitted, all (week)days are included.
If the time part is omitted, the time 00:00:00 is implied.
(e.g. '2020-01-01' refers to '2020-01-01 00:00:00')
Weekdays are specified with their abbreviated English names:
`mon, tue, wed, thu, fri, sat, sun`.
Each field can contain multiple values in the following formats:
* comma-separated: e.g., 01,02,03
* as a range: e.g., 01..10
* as a repetition: e.g., 05/10 (meaning every 10th value, starting at 5)
* and a combination of the above: e.g., 01,05..10,12/02
* or a `*` for every possible value: e.g., \*:00
There are some special values that have specific meaning:
================================= ==============================
Value Syntax
================================= ==============================
`minutely` `*-*-* *:*:00`
`hourly` `*-*-* *:00:00`
`daily` `*-*-* 00:00:00`
`weekly` `mon *-*-* 00:00:00`
`monthly` `*-*-01 00:00:00`
`yearly` or `annually`             `*-01-01 00:00:00`
`quarterly` `*-01,04,07,10-01 00:00:00`
`semiannually` or `semi-annually` `*-01,07-01 00:00:00`
================================= ==============================
Here is a table with some useful examples:
======================== ============================= ===================================
Example Alternative Explanation
======================== ============================= ===================================
`mon,tue,wed,thu,fri` `mon..fri` Every working day at 00:00
`sat,sun` `sat..sun` Only on weekends at 00:00
`mon,wed,fri` -- Monday, Wednesday, Friday at 00:00
`12:05` -- Every day at 12:05 PM
`*:00/5` `0/1:0/5` Every five minutes
`mon..wed *:30/10` `mon,tue,wed *:30/10` Monday, Tuesday, Wednesday 30, 40 and 50 minutes after every full hour
`mon..fri 8..17,22:0/15` -- Every working day every 15 minutes between 8 AM and 6 PM and between 10 PM and 11 PM
`fri 12..13:5/20` `fri 12,13:5/20` Friday at 12:05, 12:25, 12:45, 13:05, 13:25 and 13:45
`12,14,16,18,20,22:5` `12/2:5` Every day starting at 12:05 until 22:05, every 2 hours
`*:*` `0/1:0/1` Every minute (minimum interval)
`*-05`                   --                            On the 5th day of every month
`Sat *-1..7 15:00`       --                            First Saturday each month at 15:00
`2015-10-21` -- 21st October 2015 at 00:00
======================== ============================= ===================================
Differences to systemd
----------------------
Not all features of systemd calendar events are implemented:
* no unix timestamps (e.g. `@12345`): instead use date and time to specify
a specific point in time
* no timezone: all schedules use the timezone set on the server
* no sub-second resolution
* no reverse day syntax (e.g. 2020-03~01)
* no repetition of ranges (e.g. 1..10/2)
Notes on scheduling
-------------------
In `Proxmox Backup`_, scheduling for most tasks is done by the
`proxmox-backup-proxy` daemon. It checks all job schedules once
every minute to determine whether any are due. This means that even though
`calendar events` can contain seconds, schedules are effectively
evaluated only once per minute.

Also, all schedules are evaluated against the timezone set
on the `Proxmox Backup`_ server.
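To illustrate how a schedule string is consumed internally, here is a minimal Rust sketch.
It assumes the crate-internal helpers ``parse_calendar_event`` and ``compute_next_event``
(as used by the sync job listing code above) and their signatures; it is not part of the
documented API:

.. code-block:: rust

   use anyhow::Error;
   // assumed module path for the calendar event helpers inside proxmox-backup
   use proxmox_backup::tools::systemd::time::{parse_calendar_event, compute_next_event};

   /// Given a schedule string and the epoch of the last run, return the epoch
   /// of the next due run, or None if the event can never trigger again.
   fn next_run(schedule: &str, last_run: i64) -> Result<Option<i64>, Error> {
       let event = parse_calendar_event(schedule)?;  // e.g. "mon..fri 02:30"
       compute_next_event(&event, last_run, false)   // third flag assumed to select UTC handling
   }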


@ -18,9 +18,12 @@
# documentation root, use os.path.abspath to make it absolute, like shown here. # documentation root, use os.path.abspath to make it absolute, like shown here.
# #
import os import os
# import sys import sys
# sys.path.insert(0, os.path.abspath('.')) # sys.path.insert(0, os.path.abspath('.'))
# custom extensions
sys.path.append(os.path.abspath("./_ext"))
# -- Implement custom formatter for code-blocks --------------------------- # -- Implement custom formatter for code-blocks ---------------------------
# #
# * use smaller font # * use smaller font
@ -46,7 +49,7 @@ PygmentsBridge.latex_formatter = CustomLatexFormatter
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones. # ones.
extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo"] extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo", "proxmox-scanrefs"]
todo_link_only = True todo_link_only = True
@ -71,7 +74,7 @@ rst_epilog = epilog_file.read()
# General information about the project. # General information about the project.
project = 'Proxmox Backup' project = 'Proxmox Backup'
copyright = '2019-2020, Proxmox Support Team' copyright = '2019-2020, Proxmox Server Solutions GmbH'
author = 'Proxmox Support Team' author = 'Proxmox Support Team'
# The version info for the project you're documenting, acts as replacement for # The version info for the project you're documenting, acts as replacement for
@ -145,7 +148,7 @@ pygments_style = 'sphinx'
# keep_warnings = False # keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing. # If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True todo_include_todos = not tags.has('release')
# -- Options for HTML output ---------------------------------------------- # -- Options for HTML output ----------------------------------------------
@ -153,13 +156,32 @@ todo_include_todos = True
# The theme to use for HTML and HTML Help pages. See the documentation for # The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes. # a list of builtin themes.
# #
html_theme = 'sphinxdoc' html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme # Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the # further. For a list of options available for each theme, see the
# documentation. # documentation.
# #
# html_theme_options = {} html_theme_options = {
'fixed_sidebar': True,
#'sidebar_includehidden': False,
'sidebar_collapse': False, # FIXME: documented, but does not works?!
'show_relbar_bottom': True, # FIXME: documented, but does not works?!
'show_powered_by': False,
'logo': 'proxmox-logo.svg',
'logo_name': True, # show project name below logo
#'logo_text_align': 'center',
#'description': 'Fast, Secure & Efficient.',
'sidebar_width': '300px',
'page_width': '1280px',
# font styles
'head_font_family': 'Lato, sans-serif',
'caption_font_family': 'Lato, sans-serif',
'caption_font_size': '20px',
'font_family': 'Open Sans, sans-serif',
}
# Add any paths that contain custom themes here, relative to this directory. # Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = [] # html_theme_path = []
@ -176,7 +198,7 @@ html_theme = 'sphinxdoc'
# The name of an image file (relative to this directory) to place at the top # The name of an image file (relative to this directory) to place at the top
# of the sidebar. # of the sidebar.
# #
html_logo = 'images/proxmox-logo.svg' #html_logo = 'images/proxmox-logo.svg' # replaced by html_theme_options.logo
# The name of an image file (relative to this directory) to use as a favicon of # The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
@ -229,7 +251,7 @@ html_static_path = ['_static']
# If true, links to the reST sources are added to the pages. # If true, links to the reST sources are added to the pages.
# #
# html_show_sourcelink = True html_show_sourcelink = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# #

docs/custom.css Normal file

@ -0,0 +1,15 @@
div.sphinxsidebar {
height: calc(100% - 20px);
overflow: auto;
}
h1.logo-name {
font-size: 24px;
}
div.body img {
width: 250px;
}
pre {
padding: 5px 10px;
}


@ -38,3 +38,6 @@
.. _RFC3399: https://tools.ietf.org/html/rfc3339
.. _UTC: https://en.wikipedia.org/wiki/Coordinated_Universal_Time
.. _ISO Week date: https://en.wikipedia.org/wiki/ISO_week_date
.. _systemd.time manpage: https://manpages.debian.org/buster/systemd/systemd.time.7.en.html

docs/faq.rst Normal file

@ -0,0 +1,71 @@
FAQ
===
What distribution is Proxmox Backup Server (PBS) based on?
----------------------------------------------------------
Proxmox Backup Server is based on `Debian GNU/Linux <https://www.debian.org/>`_.
Which platforms are supported as a backup source (client)?
----------------------------------------------------------
The client tool works on most modern Linux systems, meaning you are not limited
to backing up Debian-based systems.
Will Proxmox Backup Server run on a 32-bit processor?
-----------------------------------------------------
Proxmox Backup Server only supports 64-bit CPUs (AMD or Intel). There are no
future plans to support 32-bit processors.
How long will my Proxmox Backup Server version be supported?
------------------------------------------------------------
+-----------------------+--------------------+---------------+------------+--------------------+
|Proxmox Backup Version | Debian Version | First Release | Debian EOL | Proxmox Backup EOL |
+=======================+====================+===============+============+====================+
|Proxmox Backup 1.x | Debian 10 (Buster) | tba | tba | tba |
+-----------------------+--------------------+---------------+------------+--------------------+
Can I copy or synchronize my datastore to another location?
-----------------------------------------------------------
Proxmox Backup Server allows you to copy or synchronize datastores to other
locations, through the use of *Remotes* and *Sync Jobs*. *Remote* is the term
given to a separate server, which has a datastore that can be synced to a local store.
A *Sync Job* is the process which is used to pull the contents of a datastore from
a *Remote* to a local datastore.
Can Proxmox Backup Server verify data integrity of a backup archive?
--------------------------------------------------------------------
Proxmox Backup Server uses a built-in SHA-256 checksum algorithm to ensure
data integrity. Within each backup, a manifest file (index.json) is created,
which contains a list of all the backup files, along with their sizes and
checksums. This manifest file is used to verify the integrity of each backup.
When backing up to remote servers, do I have to trust the remote server?
------------------------------------------------------------------------
Proxmox Backup Server supports client-side encryption, meaning your data is
encrypted before it reaches the server. Thus, in the event that an attacker
gains access to the server, they will not be able to read the data.
.. note:: Encryption is not enabled by default. To set up encryption, see the
`Encryption
<https://pbs.proxmox.com/docs/administration-guide.html#encryption>`_ section
of the Proxmox Backup Server Administration Guide.
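As a brief, hedged sketch of what client-side encryption looks like in practice (the key
file name and backed-up path are placeholders; refer to the linked Encryption section for
the authoritative steps):

.. code-block:: console

   # proxmox-backup-client key create ./my-backup.key
   # proxmox-backup-client backup etc.pxar:/etc --keyfile ./my-backup.key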
Is the backup incremental/deduplicated?
---------------------------------------
With Proxmox Backup Server, backups are sent incrementally and data is
deduplicated on the server.
This minimizes both the storage consumed and the network impact.


@ -51,14 +51,3 @@ Glossary
A remote Proxmox Backup Server installation and credentials for a user on it.
You can pull datastores from a remote to a local datastore in order to
have redundant backups.
Schedule
Certain tasks, for example pruning and garbage collection, need to be
performed on a regular basis. Proxmox Backup Server uses a subset of the
`systemd Time and Date Specification
<https://www.freedesktop.org/software/systemd/man/systemd.time.html#>`_.
The subset currently supports time of day specifications and weekdays, in
addition to the shorthand expressions 'minutely', 'hourly', 'daily'.
There is no support for specifying timezones, the tasks are run in the
timezone configured on the server.

(11 binary image files added; content not shown in this view.)

@ -24,6 +24,7 @@ in the section entitled "GNU Free Documentation License".
   installation.rst
   administration-guide.rst
   sysadmin.rst
   faq.rst

.. raw:: latex

@ -36,6 +37,7 @@ in the section entitled "GNU Free Documentation License".

   command-syntax.rst
   file-formats.rst
   backup-protocol.rst
   calendarevents.rst
   glossary.rst
   GFDL.rst

@ -49,4 +51,3 @@ in the section entitled "GNU Free Documentation License".

* :ref:`genindex`


@ -22,7 +22,7 @@ Architecture
------------

Proxmox Backup Server uses a `client-server model`_. The server stores the
backup data and provides an API to create and manage datastores. With the
API, it's also possible to manage disks and other server-side resources.

The backup client uses this API to access the backed up data. With the command


@ -1,3 +1,6 @@
.. _chapter-zfs:
ZFS on Linux
------------


@ -18,7 +18,7 @@ async fn upload_speed() -> Result<f64, Error> {
let backup_time = chrono::Utc::now(); let backup_time = chrono::Utc::now();
let client = BackupWriter::start(client, None, datastore, "host", "speedtest", backup_time, false).await?; let client = BackupWriter::start(client, None, datastore, "host", "speedtest", backup_time, false, true).await?;
println!("start upload speed test"); println!("start upload speed test");
let res = client.upload_speedtest(true).await?; let res = client.upload_speedtest(true).await?;


@ -1,6 +1,7 @@
use std::collections::{HashSet, HashMap}; use std::collections::{HashSet, HashMap};
use std::ffi::OsStr; use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt; use std::os::unix::ffi::OsStrExt;
use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use futures::*; use futures::*;
@ -229,7 +230,7 @@ pub fn list_snapshot_files(
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let snapshot = BackupDir::new(backup_type, backup_id, backup_time); let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)) != 0; let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)) != 0;
if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
@ -279,7 +280,7 @@ fn delete_snapshot(
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let snapshot = BackupDir::new(backup_type, backup_id, backup_time); let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
@ -489,7 +490,7 @@ pub fn verify(
match (backup_type, backup_id, backup_time) { match (backup_type, backup_id, backup_time) {
(Some(backup_type), Some(backup_id), Some(backup_time)) => { (Some(backup_type), Some(backup_id), Some(backup_time)) => {
worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_time); worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_time);
let dir = BackupDir::new(backup_type, backup_id, backup_time); let dir = BackupDir::new(backup_type, backup_id, backup_time)?;
backup_dir = Some(dir); backup_dir = Some(dir);
} }
(Some(backup_type), Some(backup_id), None) => { (Some(backup_type), Some(backup_id), None) => {
@ -512,18 +513,27 @@ pub fn verify(
userid, userid,
to_stdout, to_stdout,
move |worker| { move |worker| {
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
let failed_dirs = if let Some(backup_dir) = backup_dir { let failed_dirs = if let Some(backup_dir) = backup_dir {
let mut verified_chunks = HashSet::with_capacity(1024*16);
let mut corrupt_chunks = HashSet::with_capacity(64);
let mut res = Vec::new(); let mut res = Vec::new();
if !verify_backup_dir(&datastore, &backup_dir, &mut verified_chunks, &mut corrupt_chunks, &worker)? { if !verify_backup_dir(datastore, &backup_dir, verified_chunks, corrupt_chunks, worker.clone())? {
res.push(backup_dir.to_string()); res.push(backup_dir.to_string());
} }
res res
} else if let Some(backup_group) = backup_group { } else if let Some(backup_group) = backup_group {
verify_backup_group(&datastore, &backup_group, &worker)? let (_count, failed_dirs) = verify_backup_group(
datastore,
&backup_group,
verified_chunks,
corrupt_chunks,
None,
worker.clone(),
)?;
failed_dirs
} else { } else {
verify_all_backups(&datastore, &worker)? verify_all_backups(datastore, worker.clone())?
}; };
if failed_dirs.len() > 0 { if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:"); worker.log("Failed to verify following snapshots:");
@ -887,7 +897,7 @@ fn download_file(
let backup_id = tools::required_string_param(&param, "backup-id")?; let backup_id = tools::required_string_param(&param, "backup-id")?;
let backup_time = tools::required_integer_param(&param, "backup-time")?; let backup_time = tools::required_integer_param(&param, "backup-time")?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -960,7 +970,7 @@ fn download_file_decoded(
let backup_id = tools::required_string_param(&param, "backup-id")?; let backup_id = tools::required_string_param(&param, "backup-id")?;
let backup_time = tools::required_integer_param(&param, "backup-time")?; let backup_time = tools::required_integer_param(&param, "backup-time")?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -1073,7 +1083,7 @@ fn upload_backup_log(
let backup_id = tools::required_string_param(&param, "backup-id")?; let backup_id = tools::required_string_param(&param, "backup-id")?;
let backup_time = tools::required_integer_param(&param, "backup-time")?; let backup_time = tools::required_integer_param(&param, "backup-time")?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let userid: Userid = rpcenv.get_user().unwrap().parse()?;
check_backup_owner(&datastore, backup_dir.group(), &userid)?; check_backup_owner(&datastore, backup_dir.group(), &userid)?;
@ -1149,7 +1159,7 @@ fn catalog(
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -1266,7 +1276,7 @@ fn pxar_file_download(
let backup_id = tools::required_string_param(&param, "backup-id")?; let backup_id = tools::required_string_param(&param, "backup-id")?;
let backup_time = tools::required_integer_param(&param, "backup-time")?; let backup_time = tools::required_integer_param(&param, "backup-time")?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -1407,7 +1417,7 @@ fn get_notes(
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
@ -1460,7 +1470,7 @@ fn set_notes(
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]); let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; } if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }


@ -57,7 +57,8 @@ pub fn list_sync_jobs(
job.next_run = (|| -> Option<i64> { job.next_run = (|| -> Option<i64> {
let schedule = job.schedule.as_ref()?; let schedule = job.schedule.as_ref()?;
let event = parse_calendar_event(&schedule).ok()?; let event = parse_calendar_event(&schedule).ok()?;
compute_next_event(&event, last, false).ok() // ignore errors
compute_next_event(&event, last, false).unwrap_or_else(|_| None)
})(); })();
} }


@ -38,6 +38,7 @@ pub const API_METHOD_UPGRADE_BACKUP: ApiMethod = ApiMethod::new(
("backup-id", false, &BACKUP_ID_SCHEMA), ("backup-id", false, &BACKUP_ID_SCHEMA),
("backup-time", false, &BACKUP_TIME_SCHEMA), ("backup-time", false, &BACKUP_TIME_SCHEMA),
("debug", true, &BooleanSchema::new("Enable verbose debug logging.").schema()), ("debug", true, &BooleanSchema::new("Enable verbose debug logging.").schema()),
("benchmark", true, &BooleanSchema::new("Job is a benchmark (do not keep data).").schema()),
]), ]),
) )
).access( ).access(
@ -56,6 +57,7 @@ fn upgrade_to_backup_protocol(
async move { async move {
let debug = param["debug"].as_bool().unwrap_or(false); let debug = param["debug"].as_bool().unwrap_or(false);
let benchmark = param["benchmark"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let userid: Userid = rpcenv.get_user().unwrap().parse()?;
@ -90,16 +92,29 @@ async move {
let backup_group = BackupGroup::new(backup_type, backup_id); let backup_group = BackupGroup::new(backup_type, backup_id);
let worker_type = if backup_type == "host" && backup_id == "benchmark" {
if !benchmark {
bail!("unable to run benchmark without --benchmark flags");
}
"benchmark"
} else {
if benchmark {
bail!("benchmark flags is only allowed on 'host/benchmark'");
}
"backup"
};
// lock backup group to only allow one backup per group at a time // lock backup group to only allow one backup per group at a time
let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?; let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?;
// permission check // permission check
if owner != userid { // only the owner is allowed to create additional snapshots if owner != userid && worker_type != "benchmark" {
// only the owner is allowed to create additional snapshots
bail!("backup owner check failed ({} != {})", userid, owner); bail!("backup owner check failed ({} != {})", userid, owner);
} }
let last_backup = BackupInfo::last_backup(&datastore.base_path(), &backup_group, true).unwrap_or(None); let last_backup = BackupInfo::last_backup(&datastore.base_path(), &backup_group, true).unwrap_or(None);
let backup_dir = BackupDir::new_with_group(backup_group.clone(), backup_time); let backup_dir = BackupDir::new_with_group(backup_group.clone(), backup_time)?;
let _last_guard = if let Some(last) = &last_backup { let _last_guard = if let Some(last) = &last_backup {
if backup_dir.backup_time() <= last.backup_dir.backup_time() { if backup_dir.backup_time() <= last.backup_dir.backup_time() {
@ -116,14 +131,15 @@ async move {
let (path, is_new, _snap_guard) = datastore.create_locked_backup_dir(&backup_dir)?; let (path, is_new, _snap_guard) = datastore.create_locked_backup_dir(&backup_dir)?;
if !is_new { bail!("backup directory already exists."); } if !is_new { bail!("backup directory already exists."); }
WorkerTask::spawn("backup", Some(worker_id), userid.clone(), true, move |worker| {
WorkerTask::spawn(worker_type, Some(worker_id), userid.clone(), true, move |worker| {
let mut env = BackupEnvironment::new( let mut env = BackupEnvironment::new(
env_type, userid, worker.clone(), datastore, backup_dir); env_type, userid, worker.clone(), datastore, backup_dir);
env.debug = debug; env.debug = debug;
env.last_backup = last_backup; env.last_backup = last_backup;
env.log(format!("starting new backup on datastore '{}': {:?}", store, path)); env.log(format!("starting new {} on datastore '{}': {:?}", worker_type, store, path));
let service = H2Service::new(env.clone(), worker.clone(), &BACKUP_API_ROUTER, debug); let service = H2Service::new(env.clone(), worker.clone(), &BACKUP_API_ROUTER, debug);
@ -143,6 +159,7 @@ async move {
let window_size = 32*1024*1024; // max = (1 << 31) - 2 let window_size = 32*1024*1024; // max = (1 << 31) - 2
http.http2_initial_stream_window_size(window_size); http.http2_initial_stream_window_size(window_size);
http.http2_initial_connection_window_size(window_size); http.http2_initial_connection_window_size(window_size);
http.http2_max_frame_size(4*1024*1024);
http.serve_connection(conn, service) http.serve_connection(conn, service)
.map_err(Error::from) .map_err(Error::from)
@ -160,7 +177,11 @@ async move {
req = req_fut => req, req = req_fut => req,
abrt = abort_future => abrt, abrt = abort_future => abrt,
}; };
if benchmark {
env.log("benchmark finished successfully");
env.remove_backup()?;
return Ok(());
}
match (res, env.ensure_finished()) { match (res, env.ensure_finished()) {
(Ok(_), Ok(())) => { (Ok(_), Ok(())) => {
env.log("backup finished successfully"); env.log("backup finished successfully");


@ -457,11 +457,11 @@ impl BackupEnvironment {
/// Mark backup as finished /// Mark backup as finished
pub fn finish_backup(&self) -> Result<(), Error> { pub fn finish_backup(&self) -> Result<(), Error> {
let mut state = self.state.lock().unwrap(); let mut state = self.state.lock().unwrap();
// test if all writer are correctly closed
state.ensure_unfinished()?; state.ensure_unfinished()?;
if state.dynamic_writers.len() != 0 { // test if all writer are correctly closed
if state.dynamic_writers.len() != 0 || state.fixed_writers.len() != 0 {
bail!("found open index writer - unable to finish backup"); bail!("found open index writer - unable to finish backup");
} }


@ -83,7 +83,7 @@ fn upgrade_to_backup_reader_protocol(
let env_type = rpcenv.env_type(); let env_type = rpcenv.env_type();
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time); let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let path = datastore.base_path(); let path = datastore.base_path();
//let files = BackupInfo::list_files(&path, &backup_dir)?; //let files = BackupInfo::list_files(&path, &backup_dir)?;
@ -121,6 +121,7 @@ fn upgrade_to_backup_reader_protocol(
let window_size = 32*1024*1024; // max = (1 << 31) - 2 let window_size = 32*1024*1024; // max = (1 << 31) - 2
http.http2_initial_stream_window_size(window_size); http.http2_initial_stream_window_size(window_size);
http.http2_initial_connection_window_size(window_size); http.http2_initial_connection_window_size(window_size);
http.http2_max_frame_size(4*1024*1024);
http.serve_connection(conn, service) http.serve_connection(conn, service)
.map_err(Error::from) .map_err(Error::from)


@ -74,6 +74,9 @@ use crate::config::acl::{
}, },
}, },
}, },
access: {
permission: &Permission::Anybody,
},
)] )]
/// List Datastore usages and estimates /// List Datastore usages and estimates
fn datastore_status( fn datastore_status(


@ -559,6 +559,8 @@ pub struct GarbageCollectionStatus {
pub pending_bytes: u64, pub pending_bytes: u64,
/// Number of pending chunks (pending removal - kept for safety). /// Number of pending chunks (pending removal - kept for safety).
pub pending_chunks: usize, pub pending_chunks: usize,
/// Number of chunks marked as .bad by verify that have been removed by GC.
pub removed_bad: usize,
} }
impl Default for GarbageCollectionStatus { impl Default for GarbageCollectionStatus {
@ -573,6 +575,7 @@ impl Default for GarbageCollectionStatus {
removed_chunks: 0, removed_chunks: 0,
pending_bytes: 0, pending_bytes: 0,
pending_chunks: 0, pending_chunks: 0,
removed_bad: 0,
} }
} }
} }


@ -2,9 +2,10 @@ use crate::tools;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use regex::Regex; use regex::Regex;
use std::convert::TryFrom;
use std::os::unix::io::RawFd; use std::os::unix::io::RawFd;
use chrono::{DateTime, TimeZone, SecondsFormat, Utc}; use chrono::{DateTime, LocalResult, TimeZone, SecondsFormat, Utc};
use std::path::{PathBuf, Path}; use std::path::{PathBuf, Path};
use lazy_static::lazy_static; use lazy_static::lazy_static;
@ -106,7 +107,7 @@ impl BackupGroup {
if file_type != nix::dir::Type::Directory { return Ok(()); } if file_type != nix::dir::Type::Directory { return Ok(()); }
let dt = backup_time.parse::<DateTime<Utc>>()?; let dt = backup_time.parse::<DateTime<Utc>>()?;
let backup_dir = BackupDir::new(self.backup_type.clone(), self.backup_id.clone(), dt.timestamp()); let backup_dir = BackupDir::new(self.backup_type.clone(), self.backup_id.clone(), dt.timestamp())?;
let files = list_backup_files(l2_fd, backup_time)?; let files = list_backup_files(l2_fd, backup_time)?;
list.push(BackupInfo { backup_dir, files }); list.push(BackupInfo { backup_dir, files });
@ -208,19 +209,22 @@ pub struct BackupDir {
impl BackupDir { impl BackupDir {
pub fn new<T, U>(backup_type: T, backup_id: U, timestamp: i64) -> Self pub fn new<T, U>(backup_type: T, backup_id: U, timestamp: i64) -> Result<Self, Error>
where where
T: Into<String>, T: Into<String>,
U: Into<String>, U: Into<String>,
{ {
// Note: makes sure that nanoseconds is 0 let group = BackupGroup::new(backup_type.into(), backup_id.into());
Self { BackupDir::new_with_group(group, timestamp)
group: BackupGroup::new(backup_type.into(), backup_id.into()),
backup_time: Utc.timestamp(timestamp, 0),
}
} }
pub fn new_with_group(group: BackupGroup, timestamp: i64) -> Self {
Self { group, backup_time: Utc.timestamp(timestamp, 0) } pub fn new_with_group(group: BackupGroup, timestamp: i64) -> Result<Self, Error> {
let backup_time = match Utc.timestamp_opt(timestamp, 0) {
LocalResult::Single(time) => time,
_ => bail!("can't create BackupDir with invalid backup time {}", timestamp),
};
Ok(Self { group, backup_time })
} }
pub fn group(&self) -> &BackupGroup { pub fn group(&self) -> &BackupGroup {
@ -257,7 +261,7 @@ impl std::str::FromStr for BackupDir {
let group = BackupGroup::new(cap.get(1).unwrap().as_str(), cap.get(2).unwrap().as_str()); let group = BackupGroup::new(cap.get(1).unwrap().as_str(), cap.get(2).unwrap().as_str());
let backup_time = cap.get(3).unwrap().as_str().parse::<DateTime<Utc>>()?; let backup_time = cap.get(3).unwrap().as_str().parse::<DateTime<Utc>>()?;
Ok(BackupDir::from((group, backup_time.timestamp()))) BackupDir::try_from((group, backup_time.timestamp()))
} }
} }
@ -270,9 +274,11 @@ impl std::fmt::Display for BackupDir {
} }
} }
impl From<(BackupGroup, i64)> for BackupDir { impl TryFrom<(BackupGroup, i64)> for BackupDir {
fn from((group, timestamp): (BackupGroup, i64)) -> Self { type Error = Error;
Self { group, backup_time: Utc.timestamp(timestamp, 0) }
fn try_from((group, timestamp): (BackupGroup, i64)) -> Result<Self, Error> {
BackupDir::new_with_group(group, timestamp)
} }
} }
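To make the effect of the now-fallible constructor concrete, here is a minimal sketch
(error handling via ``anyhow``; the import path is an assumption) of how a call site
rejects an out-of-range timestamp instead of panicking:

```rust
use anyhow::Error;
use proxmox_backup::backup::BackupDir; // assumed import path within this crate

// Resolve a client-supplied snapshot; an invalid epoch now becomes a proper error.
fn resolve_snapshot(ty: &str, id: &str, epoch: i64) -> Result<BackupDir, Error> {
    let dir = BackupDir::new(ty, id, epoch)?; // Err for timestamps chrono cannot represent
    Ok(dir)
}
```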
@ -334,7 +340,7 @@ impl BackupInfo {
if file_type != nix::dir::Type::Directory { return Ok(()); } if file_type != nix::dir::Type::Directory { return Ok(()); }
let dt = backup_time.parse::<DateTime<Utc>>()?; let dt = backup_time.parse::<DateTime<Utc>>()?;
let backup_dir = BackupDir::new(backup_type, backup_id, dt.timestamp()); let backup_dir = BackupDir::new(backup_type, backup_id, dt.timestamp())?;
let files = list_backup_files(l2_fd, backup_time)?; let files = list_backup_files(l2_fd, backup_time)?;


@ -5,7 +5,7 @@ use std::io::{Read, Write, Seek, SeekFrom};
use std::os::unix::ffi::OsStrExt; use std::os::unix::ffi::OsStrExt;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use chrono::offset::{TimeZone, Local}; use chrono::offset::{TimeZone, Local, LocalResult};
use pathpatterns::{MatchList, MatchType}; use pathpatterns::{MatchList, MatchType};
use proxmox::tools::io::ReadExt; use proxmox::tools::io::ReadExt;
@ -533,17 +533,17 @@ impl <R: Read + Seek> CatalogReader<R> {
self.dump_dir(&path, pos)?; self.dump_dir(&path, pos)?;
} }
CatalogEntryType::File => { CatalogEntryType::File => {
let dt = Local let mtime_string = match Local.timestamp_opt(mtime as i64, 0) {
.timestamp_opt(mtime as i64, 0) LocalResult::Single(time) => time.to_rfc3339_opts(chrono::SecondsFormat::Secs, false),
.single() // chrono docs say timestamp_opt can only be None or Single! _ => (mtime as i64).to_string(),
.unwrap_or_else(|| Local.timestamp(0, 0)); };
println!( println!(
"{} {:?} {} {}", "{} {:?} {} {}",
etype, etype,
path, path,
size, size,
dt.to_rfc3339_opts(chrono::SecondsFormat::Secs, false), mtime_string,
); );
} }
_ => { _ => {


@ -187,7 +187,7 @@ impl ChunkStore {
pub fn get_chunk_iterator( pub fn get_chunk_iterator(
&self, &self,
) -> Result< ) -> Result<
impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize)> + std::iter::FusedIterator, impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize, bool)> + std::iter::FusedIterator,
Error Error
> { > {
use nix::dir::Dir; use nix::dir::Dir;
@ -219,19 +219,21 @@ impl ChunkStore {
Some(Ok(entry)) => { Some(Ok(entry)) => {
// skip files if they're not a hash // skip files if they're not a hash
let bytes = entry.file_name().to_bytes(); let bytes = entry.file_name().to_bytes();
if bytes.len() != 64 { if bytes.len() != 64 && bytes.len() != 64 + ".0.bad".len() {
continue; continue;
} }
if !bytes.iter().all(u8::is_ascii_hexdigit) { if !bytes.iter().take(64).all(u8::is_ascii_hexdigit) {
continue; continue;
} }
return Some((Ok(entry), percentage));
let bad = bytes.ends_with(".bad".as_bytes());
return Some((Ok(entry), percentage, bad));
} }
Some(Err(err)) => { Some(Err(err)) => {
// stop after first error // stop after first error
done = true; done = true;
// and pass the error through: // and pass the error through:
return Some((Err(err), percentage)); return Some((Err(err), percentage, false));
} }
None => (), // open next directory None => (), // open next directory
} }
@ -261,7 +263,7 @@ impl ChunkStore {
// other errors are fatal, so end our iteration // other errors are fatal, so end our iteration
done = true; done = true;
// and pass the error through: // and pass the error through:
return Some((Err(format_err!("unable to read subdir '{}' - {}", subdir, err)), percentage)); return Some((Err(format_err!("unable to read subdir '{}' - {}", subdir, err)), percentage, false));
} }
} }
} }
@ -280,6 +282,7 @@ impl ChunkStore {
worker: &WorkerTask, worker: &WorkerTask,
) -> Result<(), Error> { ) -> Result<(), Error> {
use nix::sys::stat::fstatat; use nix::sys::stat::fstatat;
use nix::unistd::{unlinkat, UnlinkatFlags};
let mut min_atime = phase1_start_time - 3600*24; // at least 24h (see mount option relatime) let mut min_atime = phase1_start_time - 3600*24; // at least 24h (see mount option relatime)
@ -292,10 +295,10 @@ impl ChunkStore {
let mut last_percentage = 0; let mut last_percentage = 0;
let mut chunk_count = 0; let mut chunk_count = 0;
for (entry, percentage) in self.get_chunk_iterator()? { for (entry, percentage, bad) in self.get_chunk_iterator()? {
if last_percentage != percentage { if last_percentage != percentage {
last_percentage = percentage; last_percentage = percentage;
worker.log(format!("{}%, processed {} chunks", percentage, chunk_count)); worker.log(format!("percentage done: phase2 {}% (processed {} chunks)", percentage, chunk_count));
} }
worker.fail_on_abort()?; worker.fail_on_abort()?;
@ -321,14 +324,47 @@ impl ChunkStore {
let lock = self.mutex.lock(); let lock = self.mutex.lock();
if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) { if let Ok(stat) = fstatat(dirfd, filename, nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW) {
if stat.st_atime < min_atime { if bad {
// filename validity checked in iterator
let orig_filename = std::ffi::CString::new(&filename.to_bytes()[..64])?;
match fstatat(
dirfd,
orig_filename.as_c_str(),
nix::fcntl::AtFlags::AT_SYMLINK_NOFOLLOW)
{
Ok(_) => {
match unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
Err(err) =>
worker.warn(format!(
"unlinking corrupt chunk {:?} failed on store '{}' - {}",
filename,
self.name,
err,
)),
Ok(_) => {
status.removed_bad += 1;
status.removed_bytes += stat.st_size as u64;
}
}
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
// chunk hasn't been rewritten yet, keep .bad file
},
Err(err) => {
// some other error, warn user and keep .bad file around too
worker.warn(format!(
"error during stat on '{:?}' - {}",
orig_filename,
err,
));
}
}
} else if stat.st_atime < min_atime {
//let age = now - stat.st_atime; //let age = now - stat.st_atime;
//println!("UNLINK {} {:?}", age/(3600*24), filename); //println!("UNLINK {} {:?}", age/(3600*24), filename);
let res = unsafe { libc::unlinkat(dirfd, filename.as_ptr(), 0) }; if let Err(err) = unlinkat(Some(dirfd), filename, UnlinkatFlags::NoRemoveDir) {
if res != 0 {
let err = nix::Error::last();
bail!( bail!(
"unlink chunk {:?} failed on store '{}' - {}", "unlinking chunk {:?} failed on store '{}' - {}",
filename, filename,
self.name, self.name,
err, err,
@ -366,6 +402,7 @@ impl ChunkStore {
if let Ok(metadata) = std::fs::metadata(&chunk_path) { if let Ok(metadata) = std::fs::metadata(&chunk_path) {
if metadata.is_file() { if metadata.is_file() {
self.touch_chunk(digest)?;
return Ok((true, metadata.len())); return Ok((true, metadata.len()));
} else { } else {
bail!("Got unexpected file type on store '{}' for chunk {}", self.name, digest_str); bail!("Got unexpected file type on store '{}' for chunk {}", self.name, digest_str);


@ -10,7 +10,7 @@
use std::io::Write; use std::io::Write;
use anyhow::{bail, Error}; use anyhow::{bail, Error};
use chrono::{Local, TimeZone, DateTime}; use chrono::{Local, DateTime};
use openssl::hash::MessageDigest; use openssl::hash::MessageDigest;
use openssl::pkcs5::pbkdf2_hmac; use openssl::pkcs5::pbkdf2_hmac;
use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode}; use openssl::symm::{decrypt_aead, Cipher, Crypter, Mode};
@ -219,7 +219,7 @@ impl CryptConfig {
created: DateTime<Local>, created: DateTime<Local>,
) -> Result<Vec<u8>, Error> { ) -> Result<Vec<u8>, Error> {
let modified = Local.timestamp(Local::now().timestamp(), 0); let modified = Local::now();
let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() }; let key_config = super::KeyConfig { kdf: None, created, modified, data: self.enc_key.to_vec() };
let data = serde_json::to_string(&key_config)?.as_bytes().to_vec(); let data = serde_json::to_string(&key_config)?.as_bytes().to_vec();


@ -304,7 +304,7 @@ impl DataBlob {
let digest = match config { let digest = match config {
Some(config) => config.compute_digest(data), Some(config) => config.compute_digest(data),
None => openssl::sha::sha256(&data), None => openssl::sha::sha256(data),
}; };
if &digest != expected_digest { if &digest != expected_digest {
bail!("detected chunk with wrong digest."); bail!("detected chunk with wrong digest.");


@ -85,7 +85,7 @@ impl DataStore {
pub fn get_chunk_iterator( pub fn get_chunk_iterator(
&self, &self,
) -> Result< ) -> Result<
impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize)>, impl Iterator<Item = (Result<tools::fs::ReadDirEntry, Error>, usize, bool)>,
Error Error
> { > {
self.chunk_store.get_chunk_iterator() self.chunk_store.get_chunk_iterator()
@ -430,6 +430,12 @@ impl DataStore {
let image_list = self.list_images()?; let image_list = self.list_images()?;
let image_count = image_list.len();
let mut done = 0;
let mut last_percentage: usize = 0;
for path in image_list { for path in image_list {
worker.fail_on_abort()?; worker.fail_on_abort()?;
@ -444,6 +450,14 @@ impl DataStore {
self.index_mark_used_chunks(index, &path, status, worker)?; self.index_mark_used_chunks(index, &path, status, worker)?;
} }
} }
done += 1;
let percentage = done*100/image_count;
if percentage > last_percentage {
worker.log(format!("percentage done: phase1 {}% ({} of {} index files)",
percentage, done, image_count));
last_percentage = percentage;
}
} }
Ok(()) Ok(())
@ -481,6 +495,9 @@ impl DataStore {
if gc_status.pending_bytes > 0 { if gc_status.pending_bytes > 0 {
worker.log(&format!("Pending removals: {} (in {} chunks)", HumanByte::from(gc_status.pending_bytes), gc_status.pending_chunks)); worker.log(&format!("Pending removals: {} (in {} chunks)", HumanByte::from(gc_status.pending_bytes), gc_status.pending_chunks));
} }
if gc_status.removed_bad > 0 {
worker.log(&format!("Removed bad files: {}", gc_status.removed_bad));
}
worker.log(&format!("Original data usage: {}", HumanByte::from(gc_status.index_data_bytes))); worker.log(&format!("Original data usage: {}", HumanByte::from(gc_status.index_data_bytes)));


@ -6,7 +6,7 @@ use super::chunk_store::*;
use super::{IndexFile, ChunkReadInfo}; use super::{IndexFile, ChunkReadInfo};
use crate::tools::{self, epoch_now_u64}; use crate::tools::{self, epoch_now_u64};
use chrono::{Local, TimeZone}; use chrono::{Local, LocalResult, TimeZone};
use std::fs::File; use std::fs::File;
use std::io::Write; use std::io::Write;
use std::os::unix::io::AsRawFd; use std::os::unix::io::AsRawFd;
@ -150,7 +150,10 @@ impl FixedIndexReader {
println!("ChunkSize: {}", self.chunk_size); println!("ChunkSize: {}", self.chunk_size);
println!( println!(
"CTime: {}", "CTime: {}",
Local.timestamp(self.ctime as i64, 0).format("%c") match Local.timestamp_opt(self.ctime as i64, 0) {
LocalResult::Single(ctime) => ctime.format("%c").to_string(),
_ => (self.ctime as i64).to_string(),
}
); );
println!("UUID: {:?}", self.uuid); println!("UUID: {:?}", self.uuid);
} }
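The same non-panicking pattern, extracted into a standalone helper for clarity (a sketch
using only ``chrono``; not code from this repository):

```rust
use chrono::{Local, LocalResult, TimeZone};

// Format an epoch as a local time string, falling back to the raw integer
// when the value is outside the range chrono can represent.
fn format_epoch(epoch: i64) -> String {
    match Local.timestamp_opt(epoch, 0) {
        LocalResult::Single(time) => time.format("%c").to_string(),
        _ => epoch.to_string(),
    }
}
```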


@ -1,7 +1,7 @@
use anyhow::{bail, format_err, Context, Error}; use anyhow::{bail, format_err, Context, Error};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use chrono::{Local, TimeZone, DateTime}; use chrono::{Local, DateTime};
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions}; use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox::try_block; use proxmox::try_block;
@ -136,7 +136,7 @@ pub fn encrypt_key_with_passphrase(
enc_data.extend_from_slice(&tag); enc_data.extend_from_slice(&tag);
enc_data.extend_from_slice(&encrypted_key); enc_data.extend_from_slice(&encrypted_key);
let created = Local.timestamp(Local::now().timestamp(), 0); let created = Local::now();
Ok(KeyConfig { Ok(KeyConfig {
kdf: Some(kdf), kdf: Some(kdf),


@ -1,4 +1,7 @@
use std::collections::HashSet; use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use std::sync::atomic::{Ordering, AtomicUsize};
use std::time::Instant;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
@ -6,12 +9,12 @@ use crate::server::WorkerTask;
use crate::api2::types::*; use crate::api2::types::*;
use super::{ use super::{
DataStore, BackupGroup, BackupDir, BackupInfo, IndexFile, DataStore, DataBlob, BackupGroup, BackupDir, BackupInfo, IndexFile,
CryptMode, CryptMode,
FileInfo, ArchiveType, archive_type, FileInfo, ArchiveType, archive_type,
}; };
fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> { fn verify_blob(datastore: Arc<DataStore>, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
let blob = datastore.load_blob(backup_dir, &info.filename)?; let blob = datastore.load_blob(backup_dir, &info.filename)?;
@ -36,48 +39,125 @@ fn verify_blob(datastore: &DataStore, backup_dir: &BackupDir, info: &FileInfo) -
} }
} }
fn rename_corrupted_chunk(
datastore: Arc<DataStore>,
digest: &[u8;32],
worker: Arc<WorkerTask>,
) {
let (path, digest_str) = datastore.chunk_path(digest);
let mut counter = 0;
let mut new_path = path.clone();
loop {
new_path.set_file_name(format!("{}.{}.bad", digest_str, counter));
if new_path.exists() && counter < 9 { counter += 1; } else { break; }
}
match std::fs::rename(&path, &new_path) {
Ok(_) => {
worker.log(format!("corrupted chunk renamed to {:?}", &new_path));
},
Err(err) => {
match err.kind() {
std::io::ErrorKind::NotFound => { /* ignored */ },
_ => worker.log(format!("could not rename corrupted chunk {:?} - {}", &path, err))
}
}
};
}
// We use a separate thread to read/load chunks, so that we can do
// load and verify in parallel to increase performance.
fn chunk_reader_thread(
datastore: Arc<DataStore>,
index: Box<dyn IndexFile + Send>,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
errors: Arc<AtomicUsize>,
worker: Arc<WorkerTask>,
) -> std::sync::mpsc::Receiver<(DataBlob, [u8;32], u64)> {
let (sender, receiver) = std::sync::mpsc::sync_channel(3); // buffer up to 3 chunks
std::thread::spawn(move|| {
for pos in 0..index.index_count() {
let info = index.chunk_info(pos).unwrap();
let size = info.range.end - info.range.start;
if verified_chunks.lock().unwrap().contains(&info.digest) {
continue; // already verified
}
if corrupt_chunks.lock().unwrap().contains(&info.digest) {
let digest_str = proxmox::tools::digest_to_hex(&info.digest);
worker.log(format!("chunk {} was marked as corrupt", digest_str));
errors.fetch_add(1, Ordering::SeqCst);
continue;
}
match datastore.load_chunk(&info.digest) {
Err(err) => {
corrupt_chunks.lock().unwrap().insert(info.digest);
worker.log(format!("can't verify chunk, load failed - {}", err));
errors.fetch_add(1, Ordering::SeqCst);
rename_corrupted_chunk(datastore.clone(), &info.digest, worker.clone());
continue;
}
Ok(chunk) => {
if sender.send((chunk, info.digest, size)).is_err() {
break; // receiver gone - simply stop
}
}
}
}
});
receiver
}
fn verify_index_chunks( fn verify_index_chunks(
datastore: &DataStore, datastore: Arc<DataStore>,
index: Box<dyn IndexFile>, index: Box<dyn IndexFile + Send>,
verified_chunks: &mut HashSet<[u8;32]>, verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: &mut HashSet<[u8; 32]>, corrupt_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
crypt_mode: CryptMode, crypt_mode: CryptMode,
worker: &WorkerTask, worker: Arc<WorkerTask>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut errors = 0; let errors = Arc::new(AtomicUsize::new(0));
for pos in 0..index.index_count() {
let start_time = Instant::now();
let chunk_channel = chunk_reader_thread(
datastore.clone(),
index,
verified_chunks.clone(),
corrupt_chunks.clone(),
errors.clone(),
worker.clone(),
);
let mut read_bytes = 0;
let mut decoded_bytes = 0;
loop {
worker.fail_on_abort()?; worker.fail_on_abort()?;
crate::tools::fail_on_shutdown()?;
let info = index.chunk_info(pos).unwrap(); let (chunk, digest, size) = match chunk_channel.recv() {
Ok(tuple) => tuple,
if verified_chunks.contains(&info.digest) { Err(std::sync::mpsc::RecvError) => break,
continue; // already verified
}
if corrupt_chunks.contains(&info.digest) {
let digest_str = proxmox::tools::digest_to_hex(&info.digest);
worker.log(format!("chunk {} was marked as corrupt", digest_str));
errors += 1;
continue;
}
let chunk = match datastore.load_chunk(&info.digest) {
Err(err) => {
corrupt_chunks.insert(info.digest);
worker.log(format!("can't verify chunk, load failed - {}", err));
errors += 1;
continue;
},
Ok(chunk) => chunk,
}; };
read_bytes += chunk.raw_size();
decoded_bytes += size;
let chunk_crypt_mode = match chunk.crypt_mode() { let chunk_crypt_mode = match chunk.crypt_mode() {
Err(err) => { Err(err) => {
corrupt_chunks.insert(info.digest); corrupt_chunks.lock().unwrap().insert(digest);
worker.log(format!("can't verify chunk, unknown CryptMode - {}", err)); worker.log(format!("can't verify chunk, unknown CryptMode - {}", err));
errors += 1; errors.fetch_add(1, Ordering::SeqCst);
continue; continue;
}, },
Ok(mode) => mode, Ok(mode) => mode,
@ -89,21 +169,33 @@ fn verify_index_chunks(
chunk_crypt_mode, chunk_crypt_mode,
crypt_mode crypt_mode
)); ));
errors += 1; errors.fetch_add(1, Ordering::SeqCst);
} }
let size = info.range.end - info.range.start; if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
corrupt_chunks.lock().unwrap().insert(digest);
if let Err(err) = chunk.verify_unencrypted(size as usize, &info.digest) {
corrupt_chunks.insert(info.digest);
worker.log(format!("{}", err)); worker.log(format!("{}", err));
errors += 1; errors.fetch_add(1, Ordering::SeqCst);
rename_corrupted_chunk(datastore.clone(), &digest, worker.clone());
} else { } else {
verified_chunks.insert(info.digest); verified_chunks.lock().unwrap().insert(digest);
} }
} }
if errors > 0 { let elapsed = start_time.elapsed().as_secs_f64();
let read_bytes_mib = (read_bytes as f64)/(1024.0*1024.0);
let decoded_bytes_mib = (decoded_bytes as f64)/(1024.0*1024.0);
let read_speed = read_bytes_mib/elapsed;
let decode_speed = decoded_bytes_mib/elapsed;
let error_count = errors.load(Ordering::SeqCst);
worker.log(format!(" verified {:.2}/{:.2} MiB in {:.2} seconds, speed {:.2}/{:.2} MiB/s ({} errors)",
read_bytes_mib, decoded_bytes_mib, elapsed, read_speed, decode_speed, error_count));
if errors.load(Ordering::SeqCst) > 0 {
bail!("chunks could not be verified"); bail!("chunks could not be verified");
} }
@ -111,12 +203,12 @@ fn verify_index_chunks(
} }
fn verify_fixed_index( fn verify_fixed_index(
datastore: &DataStore, datastore: Arc<DataStore>,
backup_dir: &BackupDir, backup_dir: &BackupDir,
info: &FileInfo, info: &FileInfo,
verified_chunks: &mut HashSet<[u8;32]>, verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: &mut HashSet<[u8;32]>, corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: &WorkerTask, worker: Arc<WorkerTask>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut path = backup_dir.relative_path(); let mut path = backup_dir.relative_path();
@ -137,12 +229,12 @@ fn verify_fixed_index(
} }
fn verify_dynamic_index( fn verify_dynamic_index(
datastore: &DataStore, datastore: Arc<DataStore>,
backup_dir: &BackupDir, backup_dir: &BackupDir,
info: &FileInfo, info: &FileInfo,
verified_chunks: &mut HashSet<[u8;32]>, verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: &mut HashSet<[u8;32]>, corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: &WorkerTask, worker: Arc<WorkerTask>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut path = backup_dir.relative_path(); let mut path = backup_dir.relative_path();
@ -172,11 +264,11 @@ fn verify_dynamic_index(
/// - Ok(false) if there were verification errors /// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted /// - Err(_) if task was aborted
pub fn verify_backup_dir( pub fn verify_backup_dir(
datastore: &DataStore, datastore: Arc<DataStore>,
backup_dir: &BackupDir, backup_dir: &BackupDir,
verified_chunks: &mut HashSet<[u8;32]>, verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: &mut HashSet<[u8;32]>, corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: &WorkerTask worker: Arc<WorkerTask>
) -> Result<bool, Error> { ) -> Result<bool, Error> {
let mut manifest = match datastore.load_manifest(&backup_dir) { let mut manifest = match datastore.load_manifest(&backup_dir) {
@ -198,27 +290,28 @@ pub fn verify_backup_dir(
match archive_type(&info.filename)? { match archive_type(&info.filename)? {
ArchiveType::FixedIndex => ArchiveType::FixedIndex =>
verify_fixed_index( verify_fixed_index(
&datastore, datastore.clone(),
&backup_dir, &backup_dir,
info, info,
verified_chunks, verified_chunks.clone(),
corrupt_chunks, corrupt_chunks.clone(),
worker worker.clone(),
), ),
ArchiveType::DynamicIndex => ArchiveType::DynamicIndex =>
verify_dynamic_index( verify_dynamic_index(
&datastore, datastore.clone(),
&backup_dir, &backup_dir,
info, info,
verified_chunks, verified_chunks.clone(),
corrupt_chunks, corrupt_chunks.clone(),
worker worker.clone(),
), ),
ArchiveType::Blob => verify_blob(&datastore, &backup_dir, info), ArchiveType::Blob => verify_blob(datastore.clone(), &backup_dir, info),
} }
}); });
worker.fail_on_abort()?; worker.fail_on_abort()?;
crate::tools::fail_on_shutdown()?;
if let Err(err) = result { if let Err(err) = result {
worker.log(format!("verify {}:{}/{} failed: {}", datastore.name(), backup_dir, info.filename, err)); worker.log(format!("verify {}:{}/{} failed: {}", datastore.name(), backup_dir, info.filename, err));
@ -245,32 +338,45 @@ pub fn verify_backup_dir(
/// Errors are logged to the worker log. /// Errors are logged to the worker log.
/// ///
/// Returns /// Returns
- /// - Ok(failed_dirs) where failed_dirs had verification errors
+ /// - Ok((count, failed_dirs)) where failed_dirs had verification errors
/// - Err(_) if task was aborted
- pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &WorkerTask) -> Result<Vec<String>, Error> {
+ pub fn verify_backup_group(
datastore: Arc<DataStore>,
group: &BackupGroup,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
progress: Option<(usize, usize)>, // (done, snapshot_count)
worker: Arc<WorkerTask>,
) -> Result<(usize, Vec<String>), Error> {
let mut errors = Vec::new(); let mut errors = Vec::new();
let mut list = match group.list_backups(&datastore.base_path()) { let mut list = match group.list_backups(&datastore.base_path()) {
Ok(list) => list, Ok(list) => list,
Err(err) => { Err(err) => {
worker.log(format!("verify group {}:{} - unable to list backups: {}", datastore.name(), group, err)); worker.log(format!("verify group {}:{} - unable to list backups: {}", datastore.name(), group, err));
return Ok(errors); return Ok((0, errors));
} }
}; };
worker.log(format!("verify group {}:{}", datastore.name(), group)); worker.log(format!("verify group {}:{}", datastore.name(), group));
- let mut verified_chunks = HashSet::with_capacity(1024*16); // start with 16384 chunks (up to 65GB)
- let mut corrupt_chunks = HashSet::with_capacity(64); // start with 64 chunks since we assume there are few corrupt ones
+ let (done, snapshot_count) = progress.unwrap_or((0, list.len()));
+ let mut count = 0;
BackupInfo::sort_list(&mut list, false); // newest first BackupInfo::sort_list(&mut list, false); // newest first
for info in list { for info in list {
- if !verify_backup_dir(datastore, &info.backup_dir, &mut verified_chunks, &mut corrupt_chunks, worker)?{
+ count += 1;
+ if !verify_backup_dir(datastore.clone(), &info.backup_dir, verified_chunks.clone(), corrupt_chunks.clone(), worker.clone())?{
errors.push(info.backup_dir.to_string()); errors.push(info.backup_dir.to_string());
} }
if snapshot_count != 0 {
let pos = done + count;
let percentage = ((pos as f64) * 100.0)/(snapshot_count as f64);
worker.log(format!("percentage done: {:.2}% ({} of {} snapshots)", percentage, pos, snapshot_count));
}
} }
- Ok(errors)
+ Ok((count, errors))
} }
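The loop above now reports progress per snapshot. A minimal worked sketch of that percentage line (not part of the commit; the function and its arguments are illustrative, std only):

    // `done` counts snapshots finished in previously verified groups,
    // `count` those finished in the current group, `snapshot_count` the total
    fn progress_line(done: usize, count: usize, snapshot_count: usize) -> Option<String> {
        if snapshot_count == 0 {
            return None; // total unknown, avoid dividing by zero
        }
        let pos = done + count;
        let percentage = (pos as f64) * 100.0 / (snapshot_count as f64);
        Some(format!("percentage done: {:.2}% ({} of {} snapshots)", percentage, pos, snapshot_count))
    }

    fn main() {
        // 3 snapshots done in earlier groups plus 2 in this one, out of 10 total -> 50.00%
        assert_eq!(
            progress_line(3, 2, 10).unwrap(),
            "percentage done: 50.00% (5 of 10 snapshots)"
        );
    }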
/// Verify all backups inside a datastore /// Verify all backups inside a datastore
@ -280,12 +386,15 @@ pub fn verify_backup_group(datastore: &DataStore, group: &BackupGroup, worker: &
/// Returns /// Returns
/// - Ok(failed_dirs) where failed_dirs had verification errors /// - Ok(failed_dirs) where failed_dirs had verification errors
/// - Err(_) if task was aborted /// - Err(_) if task was aborted
- pub fn verify_all_backups(datastore: &DataStore, worker: &WorkerTask) -> Result<Vec<String>, Error> {
+ pub fn verify_all_backups(datastore: Arc<DataStore>, worker: Arc<WorkerTask>) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
let mut list = match BackupGroup::list_groups(&datastore.base_path()) {
- Ok(list) => list,
+ Ok(list) => list
+ .into_iter()
+ .filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
+ .collect::<Vec<BackupGroup>>(),
Err(err) => { Err(err) => {
worker.log(format!("verify datastore {} - unable to list backups: {}", datastore.name(), err)); worker.log(format!("verify datastore {} - unable to list backups: {}", datastore.name(), err));
return Ok(errors); return Ok(errors);
@ -294,11 +403,32 @@ pub fn verify_all_backups(datastore: &DataStore, worker: &WorkerTask) -> Result<
list.sort_unstable(); list.sort_unstable();
worker.log(format!("verify datastore {}", datastore.name())); let mut snapshot_count = 0;
for group in list.iter() {
snapshot_count += group.list_backups(&datastore.base_path())?.len();
}
// start with 16384 chunks (up to 65GB)
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
// start with 64 chunks since we assume there are few corrupt ones
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
worker.log(format!("verify datastore {} ({} snapshots)", datastore.name(), snapshot_count));
let mut done = 0;
for group in list { for group in list {
- let mut group_errors = verify_backup_group(datastore, &group, worker)?;
+ let (count, mut group_errors) = verify_backup_group(
datastore.clone(),
&group,
verified_chunks.clone(),
corrupt_chunks.clone(),
Some((done, snapshot_count)),
worker.clone(),
)?;
errors.append(&mut group_errors); errors.append(&mut group_errors);
done += count;
} }
Ok(errors) Ok(errors)
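The reworked signatures thread the verified/corrupt chunk sets as Arc<Mutex<HashSet<[u8;32]>>> through all groups, so a chunk already verified for one snapshot is not re-read for the next. A minimal standalone sketch of that sharing pattern (not part of the commit; helper names are illustrative, std only):

    use std::collections::HashSet;
    use std::sync::{Arc, Mutex};

    // a digest already in the shared set can be skipped instead of re-verified
    fn needs_verification(verified: &Arc<Mutex<HashSet<[u8; 32]>>>, digest: &[u8; 32]) -> bool {
        !verified.lock().unwrap().contains(digest)
    }

    fn mark_verified(verified: &Arc<Mutex<HashSet<[u8; 32]>>>, digest: [u8; 32]) {
        verified.lock().unwrap().insert(digest);
    }

    fn main() {
        let verified = Arc::new(Mutex::new(HashSet::with_capacity(1024 * 16)));
        let digest = [0u8; 32];
        assert!(needs_verification(&verified, &digest));
        mark_verified(&verified, digest);
        assert!(!needs_verification(&verified, &digest)); // a later snapshot skips this chunk
    }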


@ -8,7 +8,7 @@ use std::sync::{Arc, Mutex};
use std::task::Context; use std::task::Context;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use chrono::{Local, DateTime, Utc, TimeZone}; use chrono::{Local, LocalResult, DateTime, Utc, TimeZone};
use futures::future::FutureExt; use futures::future::FutureExt;
use futures::stream::{StreamExt, TryStreamExt}; use futures::stream::{StreamExt, TryStreamExt};
use serde_json::{json, Value}; use serde_json::{json, Value};
@ -257,7 +257,11 @@ pub async fn api_datastore_latest_snapshot(
list.sort_unstable_by(|a, b| b.backup_time.cmp(&a.backup_time)); list.sort_unstable_by(|a, b| b.backup_time.cmp(&a.backup_time));
- let backup_time = Utc.timestamp(list[0].backup_time, 0);
+ let backup_time = match Utc.timestamp_opt(list[0].backup_time, 0) {
LocalResult::Single(time) => time,
_ => bail!("last snapshot of backup group {:?} has invalid timestmap {}.",
group.group_path(), list[0].backup_time),
};
Ok((group.backup_type().to_owned(), group.backup_id().to_owned(), backup_time)) Ok((group.backup_type().to_owned(), group.backup_id().to_owned(), backup_time))
} }
@ -373,7 +377,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
let render_last_backup = |_v: &Value, record: &Value| -> Result<String, Error> { let render_last_backup = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: GroupListItem = serde_json::from_value(record.to_owned())?; let item: GroupListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.last_backup); let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.last_backup)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned()) Ok(snapshot.relative_path().to_str().unwrap().to_owned())
}; };
@ -444,7 +448,7 @@ async fn list_snapshots(param: Value) -> Result<Value, Error> {
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> { let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: SnapshotListItem = serde_json::from_value(record.to_owned())?; let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time); let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned()) Ok(snapshot.relative_path().to_str().unwrap().to_owned())
}; };
@ -986,7 +990,15 @@ async fn create_backup(
} }
} }
- let backup_time = Utc.timestamp(backup_time_opt.unwrap_or_else(|| Utc::now().timestamp()), 0);
+ let backup_time = match backup_time_opt {
Some(timestamp) => {
match Utc.timestamp_opt(timestamp, 0) {
LocalResult::Single(time) => time,
_ => bail!("Invalid backup-time parameter: {}", timestamp),
}
},
_ => Utc::now(),
};
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.user())?;
record_repository(&repo); record_repository(&repo);
@ -1026,6 +1038,7 @@ async fn create_backup(
&backup_id, &backup_id,
backup_time, backup_time,
verbose, verbose,
false
).await?; ).await?;
let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await { let previous_manifest = if let Ok(previous_manifest) = client.download_previous_manifest().await {
@ -1034,7 +1047,7 @@ async fn create_backup(
None None
}; };
let snapshot = BackupDir::new(backup_type, backup_id, backup_time.timestamp()); let snapshot = BackupDir::new(backup_type, backup_id, backup_time.timestamp())?;
let mut manifest = BackupManifest::new(snapshot); let mut manifest = BackupManifest::new(snapshot);
let mut catalog = None; let mut catalog = None;
@ -1559,7 +1572,7 @@ async fn prune_async(mut param: Value) -> Result<Value, Error> {
let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> { let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
let item: PruneListItem = serde_json::from_value(record.to_owned())?; let item: PruneListItem = serde_json::from_value(record.to_owned())?;
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time); let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
Ok(snapshot.relative_path().to_str().unwrap().to_owned()) Ok(snapshot.relative_path().to_str().unwrap().to_owned())
}; };
@ -1751,8 +1764,9 @@ async fn complete_backup_snapshot_do(param: &HashMap<String, String>) -> Vec<Str
if let (Some(backup_id), Some(backup_type), Some(backup_time)) = if let (Some(backup_id), Some(backup_type), Some(backup_time)) =
(item["backup-id"].as_str(), item["backup-type"].as_str(), item["backup-time"].as_i64()) (item["backup-id"].as_str(), item["backup-type"].as_str(), item["backup-time"].as_i64())
{ {
let snapshot = BackupDir::new(backup_type, backup_id, backup_time); if let Ok(snapshot) = BackupDir::new(backup_type, backup_id, backup_time) {
result.push(snapshot.relative_path().to_str().unwrap().to_owned()); result.push(snapshot.relative_path().to_str().unwrap().to_owned());
}
} }
} }
} }
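The client changes above replace the panicking Utc.timestamp() calls with timestamp_opt(). A minimal sketch of that fallible conversion (not part of the commit; assumes chrono 0.4 and anyhow, as used elsewhere in this diff):

    use anyhow::{bail, Error};
    use chrono::{DateTime, LocalResult, TimeZone, Utc};

    // timestamp_opt() returns a LocalResult instead of panicking on out-of-range input
    fn epoch_to_datetime(epoch: i64) -> Result<DateTime<Utc>, Error> {
        match Utc.timestamp_opt(epoch, 0) {
            LocalResult::Single(time) => Ok(time),
            _ => bail!("invalid timestamp {}", epoch),
        }
    }

    fn main() -> Result<(), Error> {
        println!("{}", epoch_to_datetime(1_599_800_000)?); // a valid epoch
        assert!(epoch_to_datetime(i64::MAX).is_err());     // out of range: error, no panic
        Ok(())
    }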


@ -53,6 +53,7 @@ async fn run() -> Result<(), Error> {
config.add_alias("extjs", "/usr/share/javascript/extjs"); config.add_alias("extjs", "/usr/share/javascript/extjs");
config.add_alias("fontawesome", "/usr/share/fonts-font-awesome"); config.add_alias("fontawesome", "/usr/share/fonts-font-awesome");
config.add_alias("xtermjs", "/usr/share/pve-xtermjs"); config.add_alias("xtermjs", "/usr/share/pve-xtermjs");
config.add_alias("locale", "/usr/share/pbs-i18n");
config.add_alias("widgettoolkit", "/usr/share/javascript/proxmox-widget-toolkit"); config.add_alias("widgettoolkit", "/usr/share/javascript/proxmox-widget-toolkit");
config.add_alias("css", "/usr/share/javascript/proxmox-backup/css"); config.add_alias("css", "/usr/share/javascript/proxmox-backup/css");
config.add_alias("docs", "/usr/share/doc/proxmox-backup/html"); config.add_alias("docs", "/usr/share/doc/proxmox-backup/html");
@ -86,8 +87,6 @@ async fn run() -> Result<(), Error> {
let acceptor = Arc::clone(&acceptor); let acceptor = Arc::clone(&acceptor);
async move { async move {
sock.set_nodelay(true).unwrap(); sock.set_nodelay(true).unwrap();
sock.set_send_buffer_size(1024*1024).unwrap();
sock.set_recv_buffer_size(1024*1024).unwrap();
Ok(tokio_openssl::accept(&acceptor, sock) Ok(tokio_openssl::accept(&acceptor, sock)
.await .await
.ok() // handshake errors aren't be fatal, so return None to filter .ok() // handshake errors aren't be fatal, so return None to filter
@ -301,7 +300,8 @@ async fn schedule_datastore_garbage_collection() {
}; };
let next = match compute_next_event(&event, last, false) { let next = match compute_next_event(&event, last, false) {
Ok(next) => next, Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => { Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err); eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue; continue;
@ -412,7 +412,8 @@ async fn schedule_datastore_prune() {
}; };
let next = match compute_next_event(&event, last, false) { let next = match compute_next_event(&event, last, false) {
Ok(next) => next, Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => { Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err); eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue; continue;
@ -520,7 +521,8 @@ async fn schedule_datastore_sync_jobs() {
}; };
let next = match compute_next_event(&event, last, false) { let next = match compute_next_event(&event, last, false) {
Ok(next) => next, Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => { Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err); eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue; continue;


@ -3,7 +3,7 @@ use std::sync::Arc;
use anyhow::{Error}; use anyhow::{Error};
use serde_json::Value; use serde_json::Value;
use chrono::{TimeZone, Utc}; use chrono::Utc;
use serde::Serialize; use serde::Serialize;
use proxmox::api::{ApiMethod, RpcEnvironment}; use proxmox::api::{ApiMethod, RpcEnvironment};
@ -82,7 +82,7 @@ struct BenchmarkResult {
static BENCHMARK_RESULT_2020_TOP: BenchmarkResult = BenchmarkResult { static BENCHMARK_RESULT_2020_TOP: BenchmarkResult = BenchmarkResult {
tls: Speed { tls: Speed {
speed: None, speed: None,
- top: 1_000_000.0 * 590.0, // TLS to localhost, AMD Ryzen 7 2700X
+ top: 1_000_000.0 * 690.0, // TLS to localhost, AMD Ryzen 7 2700X
}, },
sha256: Speed { sha256: Speed {
speed: None, speed: None,
@ -212,7 +212,7 @@ async fn test_upload_speed(
verbose: bool, verbose: bool,
) -> Result<(), Error> { ) -> Result<(), Error> {
- let backup_time = Utc.timestamp(Utc::now().timestamp(), 0);
+ let backup_time = Utc::now();
let client = connect(repo.host(), repo.user())?; let client = connect(repo.host(), repo.user())?;
record_repository(&repo); record_repository(&repo);
@ -226,6 +226,7 @@ async fn test_upload_speed(
"benchmark", "benchmark",
backup_time, backup_time,
false, false,
true
).await?; ).await?;
if verbose { eprintln!("Start TLS speed test"); } if verbose { eprintln!("Start TLS speed test"); }


@ -1,7 +1,7 @@
use std::path::PathBuf; use std::path::PathBuf;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use chrono::{Local, TimeZone}; use chrono::Local;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use proxmox::api::api; use proxmox::api::api;
@ -112,7 +112,7 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
match kdf { match kdf {
Kdf::None => { Kdf::None => {
let created = Local.timestamp(Local::now().timestamp(), 0); let created = Local::now();
store_key_config( store_key_config(
&path, &path,
@ -180,7 +180,7 @@ fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error
match kdf { match kdf {
Kdf::None => { Kdf::None => {
let modified = Local.timestamp(Local::now().timestamp(), 0); let modified = Local::now();
store_key_config( store_key_config(
&path, &path,


@ -141,7 +141,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let (manifest, _) = client.download_manifest().await?; let (manifest, _) = client.download_manifest().await?;
let file_info = manifest.lookup_file_info(&archive_name)?; let file_info = manifest.lookup_file_info(&server_archive_name)?;
if server_archive_name.ends_with(".didx") { if server_archive_name.ends_with(".didx") {
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?; let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;


@ -53,6 +53,7 @@ impl BackupWriter {
backup_id: &str, backup_id: &str,
backup_time: DateTime<Utc>, backup_time: DateTime<Utc>,
debug: bool, debug: bool,
benchmark: bool
) -> Result<Arc<BackupWriter>, Error> { ) -> Result<Arc<BackupWriter>, Error> {
let param = json!({ let param = json!({
@ -60,7 +61,8 @@ impl BackupWriter {
"backup-id": backup_id, "backup-id": backup_id,
"backup-time": backup_time.timestamp(), "backup-time": backup_time.timestamp(),
"store": datastore, "store": datastore,
"debug": debug "debug": debug,
"benchmark": benchmark
}); });
let req = HttpClient::request_builder( let req = HttpClient::request_builder(


@ -292,7 +292,6 @@ impl HttpClient {
let mut httpc = hyper::client::HttpConnector::new(); let mut httpc = hyper::client::HttpConnector::new();
httpc.set_nodelay(true); // important for h2 download performance! httpc.set_nodelay(true); // important for h2 download performance!
httpc.set_recv_buffer_size(Some(1024*1024)); //important for h2 download performance!
httpc.enforce_http(false); // we want https... httpc.enforce_http(false); // we want https...
let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build()); let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());


@ -347,7 +347,7 @@ pub async fn pull_group(
let mut remote_snapshots = std::collections::HashSet::new(); let mut remote_snapshots = std::collections::HashSet::new();
for item in list { for item in list {
let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time); let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
// in-progress backups can't be synced // in-progress backups can't be synced
if let None = item.size { if let None = item.size {


@ -8,7 +8,7 @@ use std::path::Path;
use anyhow::{bail, format_err, Error}; use anyhow::{bail, format_err, Error};
use nix::sys::stat::Mode; use nix::sys::stat::Mode;
use pxar::{mode, Entry, EntryKind, Metadata}; use pxar::{mode, Entry, EntryKind, Metadata, format::StatxTimestamp};
/// Get the file permissions as `nix::Mode` /// Get the file permissions as `nix::Mode`
pub fn perms_from_metadata(meta: &Metadata) -> Result<Mode, Error> { pub fn perms_from_metadata(meta: &Metadata) -> Result<Mode, Error> {
@ -114,13 +114,19 @@ fn mode_string(entry: &Entry) -> String {
) )
} }
pub fn format_single_line_entry(entry: &Entry) -> String { fn format_mtime(mtime: &StatxTimestamp) -> String {
use chrono::offset::TimeZone; use chrono::offset::TimeZone;
match chrono::Local.timestamp_opt(mtime.secs, mtime.nanos) {
chrono::LocalResult::Single(mtime) => mtime.format("%Y-%m-%d %H:%M:%S").to_string(),
_ => format!("{}.{}", mtime.secs, mtime.nanos),
}
}
pub fn format_single_line_entry(entry: &Entry) -> String {
let mode_string = mode_string(entry); let mode_string = mode_string(entry);
let meta = entry.metadata(); let meta = entry.metadata();
let mtime = chrono::Local.timestamp(meta.stat.mtime.secs, meta.stat.mtime.nanos);
let (size, link) = match entry.kind() { let (size, link) = match entry.kind() {
EntryKind::File { size, .. } => (format!("{}", *size), String::new()), EntryKind::File { size, .. } => (format!("{}", *size), String::new()),
@ -134,7 +140,7 @@ pub fn format_single_line_entry(entry: &Entry) -> String {
"{} {:<13} {} {:>8} {:?}{}", "{} {:<13} {} {:>8} {:?}{}",
mode_string, mode_string,
format!("{}/{}", meta.stat.uid, meta.stat.gid), format!("{}/{}", meta.stat.uid, meta.stat.gid),
mtime.format("%Y-%m-%d %H:%M:%S"), format_mtime(&meta.stat.mtime),
size, size,
entry.path(), entry.path(),
link, link,
@ -142,12 +148,9 @@ pub fn format_single_line_entry(entry: &Entry) -> String {
} }
pub fn format_multi_line_entry(entry: &Entry) -> String { pub fn format_multi_line_entry(entry: &Entry) -> String {
use chrono::offset::TimeZone;
let mode_string = mode_string(entry); let mode_string = mode_string(entry);
let meta = entry.metadata(); let meta = entry.metadata();
let mtime = chrono::Local.timestamp(meta.stat.mtime.secs, meta.stat.mtime.nanos);
let (size, link, type_name) = match entry.kind() { let (size, link, type_name) = match entry.kind() {
EntryKind::File { size, .. } => (format!("{}", *size), String::new(), "file"), EntryKind::File { size, .. } => (format!("{}", *size), String::new(), "file"),
@ -196,6 +199,6 @@ pub fn format_multi_line_entry(entry: &Entry) -> String {
mode_string, mode_string,
meta.stat.uid, meta.stat.uid,
meta.stat.gid, meta.stat.gid,
mtime.format("%Y-%m-%d %H:%M:%S"), format_mtime(&meta.stat.mtime),
) )
} }


@ -313,7 +313,13 @@ pub async fn handle_api_request<Env: RpcEnvironment, S: 'static + BuildHasher +
Ok(resp) Ok(resp)
} }
fn get_index(userid: Option<Userid>, token: Option<String>, api: &Arc<ApiConfig>, parts: Parts) -> Response<Body> { fn get_index(
userid: Option<Userid>,
token: Option<String>,
language: Option<String>,
api: &Arc<ApiConfig>,
parts: Parts,
) -> Response<Body> {
let nodename = proxmox::tools::nodename(); let nodename = proxmox::tools::nodename();
let userid = userid.as_ref().map(|u| u.as_str()).unwrap_or(""); let userid = userid.as_ref().map(|u| u.as_str()).unwrap_or("");
@ -333,10 +339,18 @@ fn get_index(userid: Option<Userid>, token: Option<String>, api: &Arc<ApiConfig>
} }
} }
let mut lang = String::from("");
if let Some(language) = language {
if Path::new(&format!("/usr/share/pbs-i18n/pbs-lang-{}.js", language)).exists() {
lang = language;
}
}
let data = json!({ let data = json!({
"NodeName": nodename, "NodeName": nodename,
"UserName": userid, "UserName": userid,
"CSRFPreventionToken": token, "CSRFPreventionToken": token,
"language": lang,
"debug": debug, "debug": debug,
}); });
@ -441,12 +455,14 @@ async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body
} }
} }
fn extract_auth_data(headers: &http::HeaderMap) -> (Option<String>, Option<String>) { fn extract_auth_data(headers: &http::HeaderMap) -> (Option<String>, Option<String>, Option<String>) {
let mut ticket = None; let mut ticket = None;
let mut language = None;
if let Some(raw_cookie) = headers.get("COOKIE") { if let Some(raw_cookie) = headers.get("COOKIE") {
if let Ok(cookie) = raw_cookie.to_str() { if let Ok(cookie) = raw_cookie.to_str() {
- ticket = tools::extract_auth_cookie(cookie, "PBSAuthCookie");
+ ticket = tools::extract_cookie(cookie, "PBSAuthCookie");
+ language = tools::extract_cookie(cookie, "PBSLangCookie");
} }
} }
@ -455,7 +471,7 @@ fn extract_auth_data(headers: &http::HeaderMap) -> (Option<String>, Option<Strin
_ => None, _ => None,
}; };
(ticket, token) (ticket, token, language)
} }
fn check_auth( fn check_auth(
@ -526,7 +542,7 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
) { ) {
// explicitly allow those calls without auth // explicitly allow those calls without auth
} else { } else {
let (ticket, token) = extract_auth_data(&parts.headers); let (ticket, token, _) = extract_auth_data(&parts.headers);
match check_auth(&method, &ticket, &token, &user_info) { match check_auth(&method, &ticket, &token, &user_info) {
Ok(userid) => rpcenv.set_user(Some(userid.to_string())), Ok(userid) => rpcenv.set_user(Some(userid.to_string())),
Err(err) => { Err(err) => {
@ -573,20 +589,20 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
} }
if comp_len == 0 { if comp_len == 0 {
let (ticket, token) = extract_auth_data(&parts.headers); let (ticket, token, language) = extract_auth_data(&parts.headers);
if ticket != None { if ticket != None {
match check_auth(&method, &ticket, &token, &user_info) { match check_auth(&method, &ticket, &token, &user_info) {
Ok(userid) => { Ok(userid) => {
let new_token = assemble_csrf_prevention_token(csrf_secret(), &userid); let new_token = assemble_csrf_prevention_token(csrf_secret(), &userid);
return Ok(get_index(Some(userid), Some(new_token), &api, parts)); return Ok(get_index(Some(userid), Some(new_token), language, &api, parts));
} }
_ => { _ => {
tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await; tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await;
return Ok(get_index(None, None, &api, parts)); return Ok(get_index(None, None, language, &api, parts));
} }
} }
} else { } else {
return Ok(get_index(None, None, &api, parts)); return Ok(get_index(None, None, language, &api, parts));
} }
} else { } else {
let filename = api.find_alias(&components); let filename = api.find_alias(&components);


@ -1,6 +1,6 @@
use std::collections::HashMap; use std::collections::HashMap;
use std::fs::File; use std::fs::File;
use std::io::{BufRead, BufReader}; use std::io::{Read, BufRead, BufReader};
use std::panic::UnwindSafe; use std::panic::UnwindSafe;
use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex}; use std::sync::{Arc, Mutex};
@ -195,8 +195,8 @@ pub fn create_task_log_dirs() -> Result<(), Error> {
/// If there is not a single line with at valid datetime, we assume the /// If there is not a single line with at valid datetime, we assume the
/// starttime to be the endtime /// starttime to be the endtime
pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> { pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> {
- let mut endtime = upid.starttime;
- let mut status = TaskState::Unknown { endtime };
+ let mut status = TaskState::Unknown { endtime: upid.starttime };
let path = upid.log_path(); let path = upid.log_path();
@ -207,22 +207,34 @@ pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> {
use std::io::SeekFrom; use std::io::SeekFrom;
let _ = file.seek(SeekFrom::End(-8192)); // ignore errors let _ = file.seek(SeekFrom::End(-8192)); // ignore errors
- let reader = BufReader::new(file);
- for line in reader.lines() {
- let line = line?;
- let mut iter = line.splitn(2, ": ");
- if let Some(time_str) = iter.next() {
- endtime = chrono::DateTime::parse_from_rfc3339(time_str)
- .map_err(|err| format_err!("cannot parse '{}': {}", time_str, err))?
- .timestamp();
- } else {
- continue;
- }
- match iter.next().and_then(|rest| rest.strip_prefix("TASK ")) {
- None => continue,
- Some(rest) => {
+ let mut data = Vec::with_capacity(8192);
+ file.read_to_end(&mut data)?;
+ // task logs should end with newline, we do not want it here
+ if data[data.len()-1] == b'\n' {
+ data.pop();
+ }
+ let last_line = {
+ let mut start = 0;
+ for pos in (0..data.len()).rev() {
+ if data[pos] == b'\n' {
+ start = pos + 1;
+ break;
+ }
+ }
+ &data[start..]
+ };
+ let last_line = std::str::from_utf8(last_line)
+ .map_err(|err| format_err!("upid_read_status: utf8 parse failed: {}", err))?;
+ let mut iter = last_line.splitn(2, ": ");
+ if let Some(time_str) = iter.next() {
+ if let Ok(endtime) = chrono::DateTime::parse_from_rfc3339(time_str) {
+ let endtime = endtime.timestamp();
+ if let Some(rest) = iter.next().and_then(|rest| rest.strip_prefix("TASK ")) {
if let Ok(state) = TaskState::from_endtime_and_message(endtime, rest) {
status = state;
}
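upid_read_status now only parses the final line of the task log tail. A minimal standalone sketch of that last-line extraction (not part of the commit; std only):

    // strip one trailing newline, then keep everything after the last remaining '\n'
    fn last_line(mut data: Vec<u8>) -> Vec<u8> {
        if data.last() == Some(&b'\n') {
            data.pop();
        }
        let start = data
            .iter()
            .rposition(|&b| b == b'\n')
            .map(|pos| pos + 1)
            .unwrap_or(0);
        data[start..].to_vec()
    }

    fn main() {
        let log = b"2020-09-11T10:00:00+02:00: starting\n2020-09-11T10:01:00+02:00: TASK OK\n".to_vec();
        assert_eq!(last_line(log), b"2020-09-11T10:01:00+02:00: TASK OK".to_vec());
    }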


@ -326,9 +326,9 @@ pub fn assert_if_modified(digest1: &str, digest2: &str) -> Result<(), Error> {
Ok(()) Ok(())
} }
/// Extract authentication cookie from cookie header. /// Extract a specific cookie from cookie header.
/// We assume cookie_name is already url encoded. /// We assume cookie_name is already url encoded.
pub fn extract_auth_cookie(cookie: &str, cookie_name: &str) -> Option<String> { pub fn extract_cookie(cookie: &str, cookie_name: &str) -> Option<String> {
for pair in cookie.split(';') { for pair in cookie.split(';') {
let (name, value) = match pair.find('=') { let (name, value) = match pair.find('=') {
Some(i) => (pair[..i].trim(), pair[(i + 1)..].trim()), Some(i) => (pair[..i].trim(), pair[(i + 1)..].trim()),
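extract_auth_cookie was generalized into extract_cookie, which takes the cookie name as a parameter. A minimal standalone sketch of that lookup (not part of the commit; std only, cookie_name assumed URL-encoded):

    fn extract_cookie(cookie: &str, cookie_name: &str) -> Option<String> {
        for pair in cookie.split(';') {
            // each pair looks like "name=value"; entries without '=' are skipped
            let (name, value) = match pair.find('=') {
                Some(i) => (pair[..i].trim(), pair[(i + 1)..].trim()),
                None => continue,
            };
            if name == cookie_name {
                return Some(value.to_string());
            }
        }
        None
    }

    fn main() {
        let header = "PBSAuthCookie=abc123; PBSLangCookie=de";
        assert_eq!(extract_cookie(header, "PBSLangCookie").as_deref(), Some("de"));
        assert_eq!(extract_cookie(header, "Missing"), None);
    }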


@ -1,5 +1,5 @@
use anyhow::{Error}; use anyhow::{Error};
use chrono::{TimeZone, Local}; use chrono::Local;
use std::io::Write; use std::io::Write;
/// Log messages with timestamps into files /// Log messages with timestamps into files
@ -56,7 +56,7 @@ impl FileLogger {
stdout.write_all(b"\n").unwrap(); stdout.write_all(b"\n").unwrap();
} }
- let line = format!("{}: {}\n", Local.timestamp(Local::now().timestamp(), 0).to_rfc3339(), msg);
+ let line = format!("{}: {}\n", Local::now().to_rfc3339(), msg);
self.file.write_all(line.as_bytes()).unwrap(); self.file.write_all(line.as_bytes()).unwrap();
} }
} }


@ -1,6 +1,6 @@
use anyhow::{Error}; use anyhow::{Error};
use serde_json::Value; use serde_json::Value;
use chrono::{Local, TimeZone}; use chrono::{Local, TimeZone, LocalResult};
pub fn strip_server_file_expenstion(name: &str) -> String { pub fn strip_server_file_expenstion(name: &str) -> String {
@ -25,8 +25,11 @@ pub fn render_epoch(value: &Value, _record: &Value) -> Result<String, Error> {
if value.is_null() { return Ok(String::new()); } if value.is_null() { return Ok(String::new()); }
let text = match value.as_i64() { let text = match value.as_i64() {
Some(epoch) => { Some(epoch) => {
- Local.timestamp(epoch, 0).format("%c").to_string()
- }
+ match Local.timestamp_opt(epoch, 0) {
+ LocalResult::Single(epoch) => epoch.format("%c").to_string(),
+ _ => epoch.to_string(),
+ }
+ },
None => { None => {
value.to_string() value.to_string()
} }


@ -145,6 +145,9 @@ fn parse_date_time_comp(max: usize) -> impl Fn(&str) -> IResult<&str, DateTimeVa
let (i, value) = parse_time_comp(max)(i)?; let (i, value) = parse_time_comp(max)(i)?;
if let (i, Some(end)) = opt(preceded(tag(".."), parse_time_comp(max)))(i)? { if let (i, Some(end)) = opt(preceded(tag(".."), parse_time_comp(max)))(i)? {
if value > end {
return Err(parse_error(i, "range start is bigger than end"));
}
return Ok((i, DateTimeValue::Range(value, end))) return Ok((i, DateTimeValue::Range(value, end)))
} }
@ -158,10 +161,17 @@ fn parse_date_time_comp(max: usize) -> impl Fn(&str) -> IResult<&str, DateTimeVa
} }
} }
fn parse_date_time_comp_list(max: usize) -> impl Fn(&str) -> IResult<&str, Vec<DateTimeValue>> { fn parse_date_time_comp_list(start: u32, max: usize) -> impl Fn(&str) -> IResult<&str, Vec<DateTimeValue>> {
move |i: &str| { move |i: &str| {
if i.starts_with("*") { if i.starts_with("*") {
return Ok((&i[1..], Vec::new())); let i = &i[1..];
if i.starts_with("/") {
let (n, repeat) = parse_time_comp(max)(&i[1..])?;
if repeat > 0 {
return Ok((n, vec![DateTimeValue::Repeated(start, repeat)]));
}
}
return Ok((i, Vec::new()));
} }
separated_nonempty_list(tag(","), parse_date_time_comp(max))(i) separated_nonempty_list(tag(","), parse_date_time_comp(max))(i)
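The parser above now maps `*/x` to DateTimeValue::Repeated(start, repeat). A minimal sketch of the matching rule this implies (an assumption about the semantics, not code from the commit):

    // assumed rule: a component value matches Repeated(start, repeat) when it is
    // at least `start` and (value - start) is a multiple of `repeat`
    fn repeated_contains(start: u32, repeat: u32, value: u32) -> bool {
        repeat > 0 && value >= start && (value - start) % repeat == 0
    }

    fn main() {
        // "*/5" in the minute field -> Repeated(0, 5): matches 0, 5, 10, ...
        assert!(repeated_contains(0, 5, 10));
        assert!(!repeated_contains(0, 5, 7));
        // "5/15" (start 5, repeat 15) would match 5, 20, 35, 50
        assert!(repeated_contains(5, 15, 35));
    }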
@ -171,9 +181,9 @@ fn parse_date_time_comp_list(max: usize) -> impl Fn(&str) -> IResult<&str, Vec<D
fn parse_time_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeValue>, Vec<DateTimeValue>)> { fn parse_time_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeValue>, Vec<DateTimeValue>)> {
let (i, (hour, minute, opt_second)) = tuple(( let (i, (hour, minute, opt_second)) = tuple((
parse_date_time_comp_list(24), parse_date_time_comp_list(0, 24),
preceded(tag(":"), parse_date_time_comp_list(60)), preceded(tag(":"), parse_date_time_comp_list(0, 60)),
opt(preceded(tag(":"), parse_date_time_comp_list(60))), opt(preceded(tag(":"), parse_date_time_comp_list(0, 60))),
))(i)?; ))(i)?;
if let Some(second) = opt_second { if let Some(second) = opt_second {
@ -183,6 +193,25 @@ fn parse_time_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeVa
} }
} }
fn parse_date_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeValue>, Vec<DateTimeValue>)> {
// TODO: implement ~ for days (man systemd.time)
if let Ok((i, (year, month, day))) = tuple((
parse_date_time_comp_list(0, 2200), // the upper limit for systemd, stay compatible
preceded(tag("-"), parse_date_time_comp_list(1, 13)),
preceded(tag("-"), parse_date_time_comp_list(1, 32)),
))(i) {
Ok((i, (year, month, day)))
} else if let Ok((i, (month, day))) = tuple((
parse_date_time_comp_list(1, 13),
preceded(tag("-"), parse_date_time_comp_list(1, 32)),
))(i) {
Ok((i, (Vec::new(), month, day)))
} else {
Err(parse_error(i, "invalid date spec"))
}
}
pub fn parse_calendar_event(i: &str) -> Result<CalendarEvent, Error> { pub fn parse_calendar_event(i: &str) -> Result<CalendarEvent, Error> {
parse_complete_line("calendar event", i, parse_calendar_event_incomplete) parse_complete_line("calendar event", i, parse_calendar_event_incomplete)
} }
@ -191,7 +220,7 @@ fn parse_calendar_event_incomplete(mut i: &str) -> IResult<&str, CalendarEvent>
let mut has_dayspec = false; let mut has_dayspec = false;
let mut has_timespec = false; let mut has_timespec = false;
- let has_datespec = false;
+ let mut has_datespec = false;
let mut event = CalendarEvent::default(); let mut event = CalendarEvent::default();
@ -228,8 +257,52 @@ fn parse_calendar_event_incomplete(mut i: &str) -> IResult<&str, CalendarEvent>
..Default::default() ..Default::default()
})); }));
} }
"monthly" | "yearly" | "quarterly" | "semiannually" => { "monthly" => {
return Err(parse_error(i, "unimplemented date or time specification")); return Ok(("", CalendarEvent {
hour: vec![DateTimeValue::Single(0)],
minute: vec![DateTimeValue::Single(0)],
second: vec![DateTimeValue::Single(0)],
day: vec![DateTimeValue::Single(1)],
..Default::default()
}));
}
"yearly" | "annually" => {
return Ok(("", CalendarEvent {
hour: vec![DateTimeValue::Single(0)],
minute: vec![DateTimeValue::Single(0)],
second: vec![DateTimeValue::Single(0)],
day: vec![DateTimeValue::Single(1)],
month: vec![DateTimeValue::Single(1)],
..Default::default()
}));
}
"quarterly" => {
return Ok(("", CalendarEvent {
hour: vec![DateTimeValue::Single(0)],
minute: vec![DateTimeValue::Single(0)],
second: vec![DateTimeValue::Single(0)],
day: vec![DateTimeValue::Single(1)],
month: vec![
DateTimeValue::Single(1),
DateTimeValue::Single(4),
DateTimeValue::Single(7),
DateTimeValue::Single(10),
],
..Default::default()
}));
}
"semiannually" | "semi-annually" => {
return Ok(("", CalendarEvent {
hour: vec![DateTimeValue::Single(0)],
minute: vec![DateTimeValue::Single(0)],
second: vec![DateTimeValue::Single(0)],
day: vec![DateTimeValue::Single(1)],
month: vec![
DateTimeValue::Single(1),
DateTimeValue::Single(7),
],
..Default::default()
}));
} }
_ => { /* continue */ } _ => { /* continue */ }
} }
@ -246,7 +319,13 @@ fn parse_calendar_event_incomplete(mut i: &str) -> IResult<&str, CalendarEvent>
for range in range_list { event.days.insert(range); } for range in range_list { event.days.insert(range); }
} }
- // todo: support date specs
+ if let (n, Some((year, month, day))) = opt(parse_date_spec)(i)? {
event.year = year;
event.month = month;
event.day = day;
has_datespec = true;
i = space0(n)?.0;
}
if let (n, Some((hour, minute, second))) = opt(parse_time_spec)(i)? { if let (n, Some((hour, minute, second))) = opt(parse_time_spec)(i)? {
event.hour = hour; event.hour = hour;


@ -1,4 +1,6 @@
use anyhow::{bail, Error}; use std::convert::TryInto;
use anyhow::Error;
use bitflags::bitflags; use bitflags::bitflags;
pub use super::parse_time::*; pub use super::parse_time::*;
@ -17,7 +19,7 @@ bitflags!{
} }
} }
#[derive(Debug)] #[derive(Debug, Clone)]
pub enum DateTimeValue { pub enum DateTimeValue {
Single(u32), Single(u32),
Range(u32, u32), Range(u32, u32),
@ -54,7 +56,7 @@ impl DateTimeValue {
let mut next: Option<u32> = None; let mut next: Option<u32> = None;
let mut set_next = |v: u32| { let mut set_next = |v: u32| {
if let Some(n) = next { if let Some(n) = next {
- if v > n { next = Some(v); }
+ if v < n { next = Some(v); }
} else { } else {
next = Some(v); next = Some(v);
} }
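The one-character fix above makes find_next keep the smallest candidate after the current value instead of the largest. A minimal standalone sketch of that selection over a plain candidate list (not part of the commit; the real code walks DateTimeValue variants):

    fn find_next(candidates: &[u32], current: u32) -> Option<u32> {
        let mut next: Option<u32> = None;
        for &v in candidates {
            if v <= current {
                continue; // only values after the current one are candidates
            }
            match next {
                Some(n) if v < n => next = Some(v), // keep the smallest, not the largest
                None => next = Some(v),
                _ => {}
            }
        }
        next
    }

    fn main() {
        // 5 and 10 both lie after 4; the next trigger must be 5, not 10
        assert_eq!(find_next(&[10, 5], 4), Some(5));
        assert_eq!(find_next(&[10, 5], 10), None);
    }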
@ -91,7 +93,7 @@ impl DateTimeValue {
/// Calendar events may be used to refer to one or more points in time in a /// Calendar events may be used to refer to one or more points in time in a
/// single expression. They are designed after the systemd.time Calendar Events /// single expression. They are designed after the systemd.time Calendar Events
/// specification, but are not guaranteed to be 100% compatible. /// specification, but are not guaranteed to be 100% compatible.
#[derive(Default, Debug)] #[derive(Default, Clone, Debug)]
pub struct CalendarEvent { pub struct CalendarEvent {
/// the days in a week this event should trigger /// the days in a week this event should trigger
pub days: WeekDays, pub days: WeekDays,
@ -101,17 +103,15 @@ pub struct CalendarEvent {
pub minute: Vec<DateTimeValue>, pub minute: Vec<DateTimeValue>,
/// the hour(s) this event should trigger /// the hour(s) this event should trigger
pub hour: Vec<DateTimeValue>, pub hour: Vec<DateTimeValue>,
/* FIXME: TODO
/// the day(s) in a month this event should trigger /// the day(s) in a month this event should trigger
pub day: Vec<DateTimeValue>, pub day: Vec<DateTimeValue>,
/// the month(s) in a year this event should trigger /// the month(s) in a year this event should trigger
pub month: Vec<DateTimeValue>, pub month: Vec<DateTimeValue>,
/// the years(s) this event should trigger /// the years(s) this event should trigger
pub year: Vec<DateTimeValue>, pub year: Vec<DateTimeValue>,
*/
} }
#[derive(Default)] #[derive(Default, Clone, Debug)]
pub struct TimeSpan { pub struct TimeSpan {
pub nsec: u64, pub nsec: u64,
pub usec: u64, pub usec: u64,
@ -155,7 +155,7 @@ pub fn compute_next_event(
event: &CalendarEvent, event: &CalendarEvent,
last: i64, last: i64,
utc: bool, utc: bool,
) -> Result<i64, Error> { ) -> Result<Option<i64>, Error> {
let last = last + 1; // at least one second later let last = last + 1; // at least one second later
@ -166,94 +166,124 @@ pub fn compute_next_event(
let mut count = 0; let mut count = 0;
loop { loop {
- if count > 1000 { // should not happen
- bail!("unable to compute next calendar event");
+ // cancel after 1000 loops
+ if count > 1000 {
+ return Ok(None);
} else { } else {
count += 1; count += 1;
} }
if !all_days && t.changes.contains(TMChanges::WDAY) { // match day first if !event.year.is_empty() {
let day_num = t.day_num(); let year: u32 = t.year().try_into()?;
if !DateTimeValue::list_contains(&event.year, year) {
if let Some(n) = DateTimeValue::find_next(&event.year, year) {
t.add_years((n - year).try_into()?)?;
continue;
} else {
// if we have no valid year, we cannot find a correct timestamp
return Ok(None);
}
}
}
if !event.month.is_empty() {
let month: u32 = t.month().try_into()?;
if !DateTimeValue::list_contains(&event.month, month) {
if let Some(n) = DateTimeValue::find_next(&event.month, month) {
t.add_months((n - month).try_into()?)?;
} else {
// if we could not find valid month, retry next year
t.add_years(1)?;
}
continue;
}
}
if !event.day.is_empty() {
let day: u32 = t.day().try_into()?;
if !DateTimeValue::list_contains(&event.day, day) {
if let Some(n) = DateTimeValue::find_next(&event.day, day) {
t.add_days((n - day).try_into()?)?;
} else {
// if we could not find valid mday, retry next month
t.add_months(1)?;
}
continue;
}
}
if !all_days { // match day first
let day_num: u32 = t.day_num().try_into()?;
let day = WeekDays::from_bits(1<<day_num).unwrap(); let day = WeekDays::from_bits(1<<day_num).unwrap();
if event.days.contains(day) { if !event.days.contains(day) {
t.changes.remove(TMChanges::WDAY);
} else {
if let Some(n) = ((day_num+1)..7) if let Some(n) = ((day_num+1)..7)
.find(|d| event.days.contains(WeekDays::from_bits(1<<d).unwrap())) .find(|d| event.days.contains(WeekDays::from_bits(1<<d).unwrap()))
{ {
// try next day // try next day
t.add_days(n - day_num, true); t.add_days((n - day_num).try_into()?)?;
continue;
} else { } else {
// try next week // try next week
t.add_days(7 - day_num, true); t.add_days((7 - day_num).try_into()?)?;
continue;
} }
continue;
} }
} }
// this day // this day
if !event.hour.is_empty() && t.changes.contains(TMChanges::HOUR) { if !event.hour.is_empty() {
let hour = t.hour() as u32; let hour = t.hour().try_into()?;
if DateTimeValue::list_contains(&event.hour, hour) { if !DateTimeValue::list_contains(&event.hour, hour) {
t.changes.remove(TMChanges::HOUR);
} else {
if let Some(n) = DateTimeValue::find_next(&event.hour, hour) { if let Some(n) = DateTimeValue::find_next(&event.hour, hour) {
// test next hour // test next hour
t.set_time(n as libc::c_int, 0, 0); t.set_time(n.try_into()?, 0, 0)?;
continue;
} else { } else {
// test next day // test next day
t.add_days(1, true); t.add_days(1)?;
continue;
} }
continue;
} }
} }
// this hour // this hour
if !event.minute.is_empty() && t.changes.contains(TMChanges::MIN) { if !event.minute.is_empty() {
let minute = t.min() as u32; let minute = t.min().try_into()?;
if DateTimeValue::list_contains(&event.minute, minute) { if !DateTimeValue::list_contains(&event.minute, minute) {
t.changes.remove(TMChanges::MIN);
} else {
if let Some(n) = DateTimeValue::find_next(&event.minute, minute) { if let Some(n) = DateTimeValue::find_next(&event.minute, minute) {
// test next minute // test next minute
t.set_min_sec(n as libc::c_int, 0); t.set_min_sec(n.try_into()?, 0)?;
continue;
} else { } else {
// test next hour // test next hour
t.set_time(t.hour() + 1, 0, 0); t.set_time(t.hour() + 1, 0, 0)?;
continue;
} }
continue;
} }
} }
// this minute // this minute
if !event.second.is_empty() && t.changes.contains(TMChanges::SEC) { if !event.second.is_empty() {
let second = t.sec() as u32; let second = t.sec().try_into()?;
if DateTimeValue::list_contains(&event.second, second) { if !DateTimeValue::list_contains(&event.second, second) {
t.changes.remove(TMChanges::SEC);
} else {
if let Some(n) = DateTimeValue::find_next(&event.second, second) { if let Some(n) = DateTimeValue::find_next(&event.second, second) {
// test next second // test next second
t.set_sec(n as libc::c_int); t.set_sec(n.try_into()?)?;
continue;
} else { } else {
// test next min // test next min
t.set_min_sec(t.min() + 1, 0); t.set_min_sec(t.min() + 1, 0)?;
continue;
} }
continue;
} }
} }
let next = t.into_epoch()?; let next = t.into_epoch()?;
return Ok(next) return Ok(Some(next))
} }
} }
#[cfg(test)] #[cfg(test)]
mod test { mod test {
use anyhow::bail;
use super::*; use super::*;
use proxmox::tools::time::*; use proxmox::tools::time::*;
@ -280,7 +310,7 @@ mod test {
}; };
match compute_next_event(&event, last, true) { match compute_next_event(&event, last, true) {
Ok(next) => { Ok(Some(next)) => {
if next == expect { if next == expect {
println!("next {:?} => {}", event, next); println!("next {:?} => {}", event, next);
} else { } else {
@ -288,12 +318,25 @@ mod test {
event, gmtime(next), gmtime(expect)); event, gmtime(next), gmtime(expect));
} }
} }
Ok(None) => bail!("next {:?} failed to find a timestamp", event),
Err(err) => bail!("compute next for '{}' failed - {}", v, err), Err(err) => bail!("compute next for '{}' failed - {}", v, err),
} }
Ok(expect) Ok(expect)
}; };
let test_never = |v: &'static str, last: i64| -> Result<(), Error> {
let event = match parse_calendar_event(v) {
Ok(event) => event,
Err(err) => bail!("parsing '{}' failed - {}", v, err),
};
match compute_next_event(&event, last, true)? {
None => Ok(()),
Some(next) => bail!("compute next for '{}' succeeded, but expected fail - result {}", v, next),
}
};
const MIN: i64 = 60; const MIN: i64 = 60;
const HOUR: i64 = 3600; const HOUR: i64 = 3600;
const DAY: i64 = 3600*24; const DAY: i64 = 3600*24;
@ -320,6 +363,13 @@ mod test {
test_value("sat", THURSDAY_00_00, THURSDAY_00_00 + 2*DAY)?; test_value("sat", THURSDAY_00_00, THURSDAY_00_00 + 2*DAY)?;
test_value("sun", THURSDAY_00_00, THURSDAY_00_00 + 3*DAY)?; test_value("sun", THURSDAY_00_00, THURSDAY_00_00 + 3*DAY)?;
// test multiple values for a single field
// and test that the order does not matter
test_value("5,10:4,8", THURSDAY_00_00, THURSDAY_00_00 + 5*HOUR + 4*MIN)?;
test_value("10,5:8,4", THURSDAY_00_00, THURSDAY_00_00 + 5*HOUR + 4*MIN)?;
test_value("6,4..10:23,5/5", THURSDAY_00_00, THURSDAY_00_00 + 4*HOUR + 5*MIN)?;
test_value("4..10,6:5/5,23", THURSDAY_00_00, THURSDAY_00_00 + 4*HOUR + 5*MIN)?;
// test month wrapping // test month wrapping
test_value("sat", JUL_31_2020, JUL_31_2020 + 1*DAY)?; test_value("sat", JUL_31_2020, JUL_31_2020 + 1*DAY)?;
test_value("sun", JUL_31_2020, JUL_31_2020 + 2*DAY)?; test_value("sun", JUL_31_2020, JUL_31_2020 + 2*DAY)?;
@ -361,6 +411,23 @@ mod test {
n = test_value("1:0", n, THURSDAY_00_00 + i*DAY + HOUR)?; n = test_value("1:0", n, THURSDAY_00_00 + i*DAY + HOUR)?;
} }
// test date functionality
test_value("2020-07-31", 0, JUL_31_2020)?;
test_value("02-28", 0, (31+27)*DAY)?;
test_value("02-29", 0, 2*365*DAY + (31+28)*DAY)?; // 1972-02-29
test_value("1965/5-01-01", -1, THURSDAY_00_00)?;
test_value("2020-7..9-2/2", JUL_31_2020, JUL_31_2020 + 2*DAY)?;
test_value("2020,2021-12-31", JUL_31_2020, DEC_31_2020)?;
test_value("monthly", 0, 31*DAY)?;
test_value("quarterly", 0, (31+28+31)*DAY)?;
test_value("semiannually", 0, (31+28+31+30+31+30)*DAY)?;
test_value("yearly", 0, (365)*DAY)?;
test_never("2021-02-29", 0)?;
test_never("02-30", 0)?;
Ok(()) Ok(())
} }


@ -1,73 +1,60 @@
use anyhow::Error; use anyhow::Error;
use bitflags::bitflags;
use proxmox::tools::time::*; use proxmox::tools::time::*;
bitflags!{
#[derive(Default)]
pub struct TMChanges: u8 {
const SEC = 1;
const MIN = 2;
const HOUR = 4;
const MDAY = 8;
const MON = 16;
const YEAR = 32;
const WDAY = 64;
}
}
pub struct TmEditor { pub struct TmEditor {
utc: bool, utc: bool,
t: libc::tm, t: libc::tm,
pub changes: TMChanges,
}
fn is_leap_year(year: libc::c_int) -> bool {
if year % 4 != 0 { return false; }
if year % 100 != 0 { return true; }
if year % 400 != 0 { return false; }
return true;
}
fn days_in_month(mon: libc::c_int, year: libc::c_int) -> libc::c_int {
let mon = mon % 12;
static MAP: &[libc::c_int] = &[31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
if mon == 1 && is_leap_year(year) { return 29; }
MAP[mon as usize]
} }
impl TmEditor { impl TmEditor {
pub fn new(epoch: i64, utc: bool) -> Result<Self, Error> { pub fn new(epoch: i64, utc: bool) -> Result<Self, Error> {
let mut t = if utc { gmtime(epoch)? } else { localtime(epoch)? }; let t = if utc { gmtime(epoch)? } else { localtime(epoch)? };
t.tm_year += 1900; // real years for clarity Ok(Self { utc, t })
Ok(Self { utc, t, changes: TMChanges::all() })
} }
pub fn into_epoch(mut self) -> Result<i64, Error> { pub fn into_epoch(mut self) -> Result<i64, Error> {
self.t.tm_year -= 1900; let epoch = if self.utc { timegm(&mut self.t)? } else { timelocal(&mut self.t)? };
let epoch = if self.utc { timegm(self.t)? } else { timelocal(self.t)? };
Ok(epoch) Ok(epoch)
} }
pub fn add_days(&mut self, days: libc::c_int, reset_time: bool) { /// increases the year by 'years' and resets all smaller fields to their minimum
if days == 0 { return; } pub fn add_years(&mut self, years: libc::c_int) -> Result<(), Error> {
if reset_time { if years == 0 { return Ok(()); }
self.t.tm_hour = 0; self.t.tm_mon = 0;
self.t.tm_min = 0; self.t.tm_mday = 1;
self.t.tm_sec = 0; self.t.tm_hour = 0;
self.changes.insert(TMChanges::HOUR|TMChanges::MIN|TMChanges::SEC); self.t.tm_min = 0;
} self.t.tm_sec = 0;
self.t.tm_mday += days; self.t.tm_year += years;
self.t.tm_wday += days; self.normalize_time()
self.changes.insert(TMChanges::MDAY|TMChanges::WDAY);
self.wrap_time();
} }
/// increases the month by 'months' and resets all smaller fields to their minimum
pub fn add_months(&mut self, months: libc::c_int) -> Result<(), Error> {
if months == 0 { return Ok(()); }
self.t.tm_mday = 1;
self.t.tm_hour = 0;
self.t.tm_min = 0;
self.t.tm_sec = 0;
self.t.tm_mon += months;
self.normalize_time()
}
/// increases the day by 'days' and resets all smaller fields to their minimum
pub fn add_days(&mut self, days: libc::c_int) -> Result<(), Error> {
if days == 0 { return Ok(()); }
self.t.tm_hour = 0;
self.t.tm_min = 0;
self.t.tm_sec = 0;
self.t.tm_mday += days;
self.normalize_time()
}
pub fn year(&self) -> libc::c_int { self.t.tm_year + 1900 } // see man mktime
pub fn month(&self) -> libc::c_int { self.t.tm_mon + 1 }
pub fn day(&self) -> libc::c_int { self.t.tm_mday }
pub fn hour(&self) -> libc::c_int { self.t.tm_hour } pub fn hour(&self) -> libc::c_int { self.t.tm_hour }
pub fn min(&self) -> libc::c_int { self.t.tm_min } pub fn min(&self) -> libc::c_int { self.t.tm_min }
pub fn sec(&self) -> libc::c_int { self.t.tm_sec } pub fn sec(&self) -> libc::c_int { self.t.tm_sec }
@ -77,109 +64,56 @@ impl TmEditor {
(self.t.tm_wday + 6) % 7 (self.t.tm_wday + 6) % 7
} }
pub fn set_time(&mut self, hour: libc::c_int, min: libc::c_int, sec: libc::c_int) { pub fn set_time(&mut self, hour: libc::c_int, min: libc::c_int, sec: libc::c_int) -> Result<(), Error> {
self.t.tm_hour = hour; self.t.tm_hour = hour;
self.t.tm_min = min; self.t.tm_min = min;
self.t.tm_sec = sec; self.t.tm_sec = sec;
self.changes.insert(TMChanges::HOUR|TMChanges::MIN|TMChanges::SEC); self.normalize_time()
self.wrap_time();
} }
pub fn set_min_sec(&mut self, min: libc::c_int, sec: libc::c_int) { pub fn set_min_sec(&mut self, min: libc::c_int, sec: libc::c_int) -> Result<(), Error> {
self.t.tm_min = min; self.t.tm_min = min;
self.t.tm_sec = sec; self.t.tm_sec = sec;
self.changes.insert(TMChanges::MIN|TMChanges::SEC); self.normalize_time()
self.wrap_time();
} }
fn wrap_time(&mut self) { fn normalize_time(&mut self) -> Result<(), Error> {
// libc normalizes it for us
// sec: 0..59 if self.utc {
if self.t.tm_sec >= 60 { timegm(&mut self.t)?;
self.t.tm_min += self.t.tm_sec / 60; } else {
self.t.tm_sec %= 60; timelocal(&mut self.t)?;
self.changes.insert(TMChanges::SEC|TMChanges::MIN);
} }
Ok(())
// min: 0..59
if self.t.tm_min >= 60 {
self.t.tm_hour += self.t.tm_min / 60;
self.t.tm_min %= 60;
self.changes.insert(TMChanges::MIN|TMChanges::HOUR);
}
// hour: 0..23
if self.t.tm_hour >= 24 {
self.t.tm_mday += self.t.tm_hour / 24;
self.t.tm_wday += self.t.tm_hour / 24;
self.t.tm_hour %= 24;
self.changes.insert(TMChanges::HOUR|TMChanges::MDAY|TMChanges::WDAY);
}
// Translate to 0..($days_in_mon-1)
self.t.tm_mday -= 1;
loop {
let days_in_mon = days_in_month(self.t.tm_mon, self.t.tm_year);
if self.t.tm_mday < days_in_mon { break; }
// Wrap one month
self.t.tm_mday -= days_in_mon;
self.t.tm_mon += 1;
self.changes.insert(TMChanges::MDAY|TMChanges::WDAY|TMChanges::MON);
}
// Translate back to 1..$days_in_mon
self.t.tm_mday += 1;
// mon: 0..11
if self.t.tm_mon >= 12 {
self.t.tm_year += self.t.tm_mon / 12;
self.t.tm_mon %= 12;
self.changes.insert(TMChanges::MON|TMChanges::YEAR);
}
self.t.tm_wday %= 7;
} }
pub fn set_sec(&mut self, v: libc::c_int) { pub fn set_sec(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_sec = v; self.t.tm_sec = v;
self.changes.insert(TMChanges::SEC); self.normalize_time()
self.wrap_time();
} }
pub fn set_min(&mut self, v: libc::c_int) { pub fn set_min(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_min = v; self.t.tm_min = v;
self.changes.insert(TMChanges::MIN); self.normalize_time()
self.wrap_time();
} }
pub fn set_hour(&mut self, v: libc::c_int) { pub fn set_hour(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_hour = v; self.t.tm_hour = v;
self.changes.insert(TMChanges::HOUR); self.normalize_time()
self.wrap_time();
} }
pub fn set_mday(&mut self, v: libc::c_int) { pub fn set_mday(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_mday = v; self.t.tm_mday = v;
self.changes.insert(TMChanges::MDAY); self.normalize_time()
self.wrap_time();
} }
pub fn set_mon(&mut self, v: libc::c_int) { pub fn set_mon(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_mon = v; self.t.tm_mon = v - 1;
self.changes.insert(TMChanges::MON); self.normalize_time()
self.wrap_time();
} }
pub fn set_year(&mut self, v: libc::c_int) { pub fn set_year(&mut self, v: libc::c_int) -> Result<(), Error> {
self.t.tm_year = v; self.t.tm_year = v - 1900;
self.changes.insert(TMChanges::YEAR); self.normalize_time()
self.wrap_time();
} }
pub fn set_wday(&mut self, v: libc::c_int) {
self.t.tm_wday = v;
self.changes.insert(TMChanges::WDAY);
self.wrap_time();
}
} }
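The rewritten TmEditor drops the hand-rolled wrap_time() and lets timegm()/timelocal() normalize out-of-range struct tm fields. A minimal sketch of that normalization (not part of the commit; assumes the libc crate on Linux/glibc):

    // let the C library carry an out-of-range day over into the next month
    fn normalize_utc(year: i32, mon0: i32, mday: i32) -> i64 {
        let mut t: libc::tm = unsafe { std::mem::zeroed() };
        t.tm_year = year - 1900; // struct tm counts years since 1900, see mktime(3)
        t.tm_mon = mon0;         // 0-based month
        t.tm_mday = mday;        // may be out of range; timegm() normalizes it
        unsafe { libc::timegm(&mut t) as i64 }
    }

    fn main() {
        // "2020-01-40" normalizes to 2020-02-09: same epoch as asking for Feb 9 directly
        assert_eq!(normalize_utc(2020, 0, 40), normalize_utc(2020, 1, 9));
    }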


@ -252,6 +252,6 @@ pub const SYSTEMD_TIMESPAN_SCHEMA: Schema = StringSchema::new(
.schema(); .schema();
pub const SYSTEMD_CALENDAR_EVENT_SCHEMA: Schema = StringSchema::new( pub const SYSTEMD_CALENDAR_EVENT_SCHEMA: Schema = StringSchema::new(
"systemd time span") "systemd calendar event")
.format(&ApiStringFormat::VerifyFn(super::time::verify_calendar_event)) .format(&ApiStringFormat::VerifyFn(super::time::verify_calendar_event))
.schema(); .schema();


@ -6,17 +6,16 @@ Ext.define('pbs-data-store-snapshots', {
{ {
name: 'backup-time', name: 'backup-time',
type: 'date', type: 'date',
dateFormat: 'timestamp' dateFormat: 'timestamp',
}, },
'files', 'files',
'owner', 'owner',
'verification', 'verification',
{ name: 'size', type: 'int', allowNull: true, }, { name: 'size', type: 'int', allowNull: true },
{ {
name: 'crypt-mode', name: 'crypt-mode',
type: 'boolean', type: 'boolean',
calculate: function(data) { calculate: function(data) {
let encrypted = 0;
let crypt = { let crypt = {
none: 0, none: 0,
mixed: 0, mixed: 0,
@ -24,25 +23,24 @@ Ext.define('pbs-data-store-snapshots', {
encrypt: 0, encrypt: 0,
count: 0, count: 0,
}; };
let signed = 0;
data.files.forEach(file => { data.files.forEach(file => {
if (file.filename === 'index.json.blob') return; // is never encrypted if (file.filename === 'index.json.blob') return; // is never encrypted
let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']); let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
if (mode !== -1) { if (mode !== -1) {
crypt[file['crypt-mode']]++; crypt[file['crypt-mode']]++;
crypt.count++;
} }
crypt.count++;
}); });
return PBS.Utils.calculateCryptMode(crypt); return PBS.Utils.calculateCryptMode(crypt);
} },
}, },
{ {
name: 'matchesFilter', name: 'matchesFilter',
type: 'boolean', type: 'boolean',
defaultValue: true, defaultValue: true,
}, },
] ],
}); });
Ext.define('PBS.DataStoreContent', { Ext.define('PBS.DataStoreContent', {
@ -70,7 +68,7 @@ Ext.define('PBS.DataStoreContent', {
view.getStore().setSorters([ view.getStore().setSorters([
'backup-group', 'backup-group',
'text', 'text',
'backup-time' 'backup-time',
]); ]);
Proxmox.Utils.monStoreErrors(view, this.store); Proxmox.Utils.monStoreErrors(view, this.store);
this.reload(); // initial load this.reload(); // initial load
@ -88,7 +86,7 @@ Ext.define('PBS.DataStoreContent', {
this.store.setProxy({ this.store.setProxy({
type: 'proxmox', type: 'proxmox',
timeout: 300*1000, // 5 minutes, we should make that api call faster timeout: 300*1000, // 5 minutes, we should make that api call faster
url: url url: url,
}); });
this.store.load(); this.store.load();
@ -124,7 +122,7 @@ Ext.define('PBS.DataStoreContent', {
expanded: false, expanded: false,
backup_type: item.data["backup-type"], backup_type: item.data["backup-type"],
backup_id: item.data["backup-id"], backup_id: item.data["backup-id"],
children: [] children: [],
}; };
} }
@ -163,7 +161,7 @@ Ext.define('PBS.DataStoreContent', {
} }
return false; return false;
}, },
after: () => {}, after: Ext.emptyFn,
}); });
for (const item of records) { for (const item of records) {
@ -181,7 +179,7 @@ Ext.define('PBS.DataStoreContent', {
data.children = []; data.children = [];
for (const file of data.files) { for (const file of data.files) {
file.text = file.filename, file.text = file.filename;
file['crypt-mode'] = PBS.Utils.cryptmap.indexOf(file['crypt-mode']); file['crypt-mode'] = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
file.leaf = true; file.leaf = true;
file.matchesFilter = true; file.matchesFilter = true;
@ -192,6 +190,7 @@ Ext.define('PBS.DataStoreContent', {
children.push(data); children.push(data);
} }
let nowSeconds = Date.now() / 1000;
let children = []; let children = [];
for (const [name, group] of Object.entries(groups)) { for (const [name, group] of Object.entries(groups)) {
let last_backup = 0; let last_backup = 0;
@ -201,7 +200,13 @@ Ext.define('PBS.DataStoreContent', {
'sign-only': 0, 'sign-only': 0,
encrypt: 0, encrypt: 0,
}; };
for (const item of group.children) { let verify = {
outdated: 0,
none: 0,
failed: 0,
ok: 0,
};
for (let item of group.children) {
crypt[PBS.Utils.cryptmap[item['crypt-mode']]]++; crypt[PBS.Utils.cryptmap[item['crypt-mode']]]++;
if (item["backup-time"] > last_backup && item.size !== null) { if (item["backup-time"] > last_backup && item.size !== null) {
last_backup = item["backup-time"]; last_backup = item["backup-time"];
@ -209,13 +214,24 @@ Ext.define('PBS.DataStoreContent', {
group.files = item.files; group.files = item.files;
group.size = item.size; group.size = item.size;
group.owner = item.owner; group.owner = item.owner;
verify.lastFailed = item.verification && item.verification.state !== 'ok';
} }
if (item.verification && if (!item.verification) {
(!group.verification || group.verification.state !== 'failed')) { verify.none++;
group.verification = item.verification; } else {
if (item.verification.state === 'ok') {
verify.ok++;
} else {
verify.failed++;
}
let task = Proxmox.Utils.parse_task_upid(item.verification.upid);
item.verification.lastTime = task.starttime;
if (nowSeconds - task.starttime > 30 * 24 * 60 * 60) {
verify.outdated++;
}
} }
} }
group.verification = verify;
group.count = group.children.length; group.count = group.children.length;
group.matchesFilter = true; group.matchesFilter = true;
crypt.count = group.count; crypt.count = group.count;
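In the group summary loop above, a snapshot's verify result counts as `outdated` when its verify task started more than 30 days before the current reload. A reduced, standalone sketch of that aggregation, with the task start time assumed to be available as a `starttime` field instead of being parsed out of the UPID:

// Sketch of the per-group verification summary from the loop above; the
// 'starttime' field stands in for the value parsed from the verify task UPID.
const THIRTY_DAYS = 30 * 24 * 60 * 60; // seconds

function summarizeVerification(snapshots, nowSeconds = Date.now() / 1000) {
    let verify = { outdated: 0, none: 0, failed: 0, ok: 0 };
    for (const item of snapshots) {
        if (!item.verification) {
            verify.none++;
            continue;
        }
        if (item.verification.state === 'ok') {
            verify.ok++;
        } else {
            verify.failed++;
        }
        if (nowSeconds - item.verification.starttime > THIRTY_DAYS) {
            verify.outdated++; // verified, but not within the last 30 days
        }
    }
    return verify;
}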
@ -226,7 +242,7 @@ Ext.define('PBS.DataStoreContent', {
view.setRootNode({ view.setRootNode({
expanded: true, expanded: true,
children: children children: children,
}); });
if (selected !== undefined) { if (selected !== undefined) {
@ -246,13 +262,13 @@ Ext.define('PBS.DataStoreContent', {
Proxmox.Utils.setErrorMask(view, false); Proxmox.Utils.setErrorMask(view, false);
if (view.getStore().getFilters().length > 0) { if (view.getStore().getFilters().length > 0) {
let searchBox = me.lookup("searchbox"); let searchBox = me.lookup("searchbox");
let searchvalue = searchBox.getValue();; let searchvalue = searchBox.getValue();
me.search(searchBox, searchvalue); me.search(searchBox, searchvalue);
} }
}, },
onPrune: function(view, rI, cI, item, e, rec) { onPrune: function(view, rI, cI, item, e, rec) {
var view = this.getView(); view = this.getView();
if (!(rec && rec.data)) return; if (!(rec && rec.data)) return;
let data = rec.data; let data = rec.data;
@ -270,7 +286,8 @@ Ext.define('PBS.DataStoreContent', {
}, },
onVerify: function(view, rI, cI, item, e, rec) { onVerify: function(view, rI, cI, item, e, rec) {
var view = this.getView(); let me = this;
view = me.getView();
if (!view.datastore) return; if (!view.datastore) return;
@ -302,6 +319,7 @@ Ext.define('PBS.DataStoreContent', {
success: function(response, options) { success: function(response, options) {
Ext.create('Proxmox.window.TaskViewer', { Ext.create('Proxmox.window.TaskViewer', {
upid: response.result.data, upid: response.result.data,
taskDone: () => me.reload(),
}).show(); }).show();
}, },
}); });
@ -309,7 +327,7 @@ Ext.define('PBS.DataStoreContent', {
onForget: function(view, rI, cI, item, e, rec) { onForget: function(view, rI, cI, item, e, rec) {
let me = this; let me = this;
var view = this.getView(); view = this.getView();
if (!(rec && rec.data)) return; if (!(rec && rec.data)) return;
let data = rec.data; let data = rec.data;
@ -364,7 +382,8 @@ Ext.define('PBS.DataStoreContent', {
let atag = document.createElement('a'); let atag = document.createElement('a');
params['file-name'] = file; params['file-name'] = file;
atag.download = filename; atag.download = filename;
let url = new URL(`/api2/json/admin/datastore/${view.datastore}/download-decoded`, window.location.origin); let url = new URL(`/api2/json/admin/datastore/${view.datastore}/download-decoded`,
window.location.origin);
for (const [key, value] of Object.entries(params)) { for (const [key, value] of Object.entries(params)) {
url.searchParams.append(key, value); url.searchParams.append(key, value);
} }
@ -427,7 +446,7 @@ Ext.define('PBS.DataStoreContent', {
store.beginUpdate(); store.beginUpdate();
store.getRoot().cascadeBy({ store.getRoot().cascadeBy({
before: function(item) { before: function(item) {
if(me.filter(item, value)) { if (me.filter(item, value)) {
item.set('matchesFilter', true); item.set('matchesFilter', true);
if (item.parentNode && item.parentNode.id !== 'root') { if (item.parentNode && item.parentNode.id !== 'root') {
item.parentNode.childmatches = true; item.parentNode.childmatches = true;
@ -459,12 +478,22 @@ Ext.define('PBS.DataStoreContent', {
}, },
}, },
viewConfig: {
getRowClass: function(record, index) {
let verify = record.get('verification');
if (verify && verify.lastFailed) {
return 'proxmox-invalid-row';
}
return null;
},
},
columns: [ columns: [
{ {
xtype: 'treecolumn', xtype: 'treecolumn',
header: gettext("Backup Group"), header: gettext("Backup Group"),
dataIndex: 'text', dataIndex: 'text',
flex: 1 flex: 1,
}, },
{ {
header: gettext('Actions'), header: gettext('Actions'),
@ -511,9 +540,9 @@ Ext.define('PBS.DataStoreContent', {
data.filename && data.filename &&
data.filename.endsWith('pxar.didx') && data.filename.endsWith('pxar.didx') &&
data['crypt-mode'] < 3); data['crypt-mode'] < 3);
} },
}, },
] ],
}, },
{ {
xtype: 'datecolumn', xtype: 'datecolumn',
@ -521,7 +550,7 @@ Ext.define('PBS.DataStoreContent', {
sortable: true, sortable: true,
dataIndex: 'backup-time', dataIndex: 'backup-time',
format: 'Y-m-d H:i:s', format: 'Y-m-d H:i:s',
width: 150 width: 150,
}, },
{ {
header: gettext("Size"), header: gettext("Size"),
@ -543,6 +572,8 @@ Ext.define('PBS.DataStoreContent', {
format: '0', format: '0',
header: gettext("Count"), header: gettext("Count"),
sortable: true, sortable: true,
width: 75,
align: 'right',
dataIndex: 'count', dataIndex: 'count',
}, },
{ {
@ -565,29 +596,66 @@ Ext.define('PBS.DataStoreContent', {
if (iconCls) { if (iconCls) {
iconTxt = `<i class="fa fa-fw fa-${iconCls}"></i> `; iconTxt = `<i class="fa fa-fw fa-${iconCls}"></i> `;
} }
return (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText return (iconTxt + PBS.Utils.cryptText[v]) || Proxmox.Utils.unknownText;
} },
}, },
{ {
header: gettext('Verify State'), header: gettext('Verify State'),
sortable: true, sortable: true,
dataIndex: 'verification', dataIndex: 'verification',
width: 120,
renderer: (v, meta, record) => { renderer: (v, meta, record) => {
if (v === undefined || v === null || !v.state) { let i = (cls, txt) => `<i class="fa fa-fw fa-${cls}"></i> ${txt}`;
//meta.tdCls = "x-grid-row-loading"; if (v === undefined || v === null) {
return record.data.leaf ? '' : gettext('None'); return record.data.leaf ? '' : i('question-circle-o warning', gettext('None'));
} }
let task = Proxmox.Utils.parse_task_upid(v.upid); let tip, iconCls, txt;
let verify_time = Proxmox.Utils.render_timestamp(task.starttime);
let iconCls = v.state === 'ok' ? 'check good' : 'times critical';
let tip = `Verify task started on ${verify_time}`;
if (record.parentNode.id === 'root') { if (record.parentNode.id === 'root') {
tip = v.state === 'ok' if (v.failed === 0) {
? 'All verification OK in backup group' if (v.none === 0) {
: 'At least one failed verification in backup group!'; if (v.outdated > 0) {
tip = 'All OK, but some snapshots were not verified in last 30 days';
iconCls = 'check warning';
txt = gettext('All OK (old)');
} else {
tip = 'All snapshots verified at least once in last 30 days';
iconCls = 'check good';
txt = gettext('All OK');
}
} else if (v.ok === 0) {
tip = `${v.none} not verified yet`;
iconCls = 'question-circle-o warning';
txt = gettext('None');
} else {
tip = `${v.ok} OK, ${v.none} not verified yet`;
iconCls = 'check faded';
txt = `${v.ok} OK`;
}
} else {
tip = `${v.ok} OK, ${v.failed} failed, ${v.none} not verified yet`;
iconCls = 'times critical';
txt = v.ok === 0 && v.none === 0
? gettext('All failed')
: `${v.failed} failed`;
}
} else if (!v.state) {
return record.data.leaf ? '' : gettext('None');
} else {
let verify_time = Proxmox.Utils.render_timestamp(v.lastTime);
tip = `Last verify task started on ${verify_time}`;
txt = v.state;
iconCls = 'times critical';
if (v.state === 'ok') {
iconCls = 'check good';
let now = Date.now() / 1000;
if (now - v.lastTime > 30 * 24 * 60 * 60) {
tip = `Last verify task over 30 days ago: ${verify_time}`;
iconCls = 'check warning';
}
}
} }
return `<span data-qtip="${tip}"> return `<span data-qtip="${tip}">
<i class="fa fa-fw fa-${iconCls}"></i> ${v.state} <i class="fa fa-fw fa-${iconCls}"></i> ${txt}
</span>`; </span>`;
}, },
listeners: { listeners: {
@ -619,6 +687,7 @@ Ext.define('PBS.DataStoreContent', {
{ {
xtype: 'textfield', xtype: 'textfield',
reference: 'searchbox', reference: 'searchbox',
emptyText: gettext('group, date or owner'),
triggers: { triggers: {
clear: { clear: {
cls: 'pmx-clear-trigger', cls: 'pmx-clear-trigger',
@ -628,7 +697,7 @@ Ext.define('PBS.DataStoreContent', {
this.triggers.clear.setVisible(false); this.triggers.clear.setVisible(false);
this.setValue(''); this.setValue('');
}, },
} },
}, },
listeners: { listeners: {
change: { change: {
@ -636,6 +705,6 @@ Ext.define('PBS.DataStoreContent', {
buffer: 500, buffer: 500,
}, },
}, },
} },
], ],
}); });
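Pulling the group branch of the new Verify State renderer out of the ExtJS column definition, the icon and text are a plain decision over the four counters built during reload. A condensed sketch (icon class strings as in the renderer; gettext() and the surrounding qtip span markup are omitted):

// Condensed sketch of the group-level branch of the Verify State renderer;
// input is the { outdated, none, failed, ok } summary built during reload.
function groupVerifyState(v) {
    if (v.failed > 0) {
        let txt = (v.ok === 0 && v.none === 0) ? 'All failed' : `${v.failed} failed`;
        return { iconCls: 'times critical', txt };
    }
    if (v.none === 0) {
        return v.outdated > 0
            ? { iconCls: 'check warning', txt: 'All OK (old)' } // some verifies older than 30 days
            : { iconCls: 'check good', txt: 'All OK' };
    }
    if (v.ok === 0) {
        return { iconCls: 'question-circle-o warning', txt: 'None' };
    }
    return { iconCls: 'check faded', txt: `${v.ok} OK` }; // partially verified
}

groupVerifyState({ outdated: 1, none: 0, failed: 0, ok: 5 });
// -> { iconCls: 'check warning', txt: 'All OK (old)' }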


@ -54,6 +54,11 @@ all: js/proxmox-backup-gui.js css/ext6-pbs.css
js: js:
mkdir js mkdir js
.PHONY: OnlineHelpInfo.js
OnlineHelpInfo.js:
$(MAKE) -C ../docs onlinehelpinfo
mv ../docs/output/scanrefs/OnlineHelpInfo.js .
js/proxmox-backup-gui.js: js OnlineHelpInfo.js ${JSSRC} js/proxmox-backup-gui.js: js OnlineHelpInfo.js ${JSSRC}
cat OnlineHelpInfo.js ${JSSRC} >$@.tmp cat OnlineHelpInfo.js ${JSSRC} >$@.tmp
mv $@.tmp $@ mv $@.tmp $@


@ -1,6 +1,10 @@
var proxmoxOnlineHelpInfo = { const proxmoxOnlineHelpInfo = {
"pbs_documentation_index" : { "pbs_documentation_index": {
"link" : "/pbs-docs/index.html", "link": "/docs/index.html",
"title" : "Proxmox Backup Server Documentation Index" "title": "Proxmox Backup Server Documentation Index"
} },
"chapter-zfs": {
"link": "/docs/sysadmin.html#chapter-zfs",
"title": "ZFS on Linux"
}
}; };
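The generated `proxmoxOnlineHelpInfo` object is a plain map from reference names to documentation links; UI components that set an `onlineHelp` key (such as the sync job edit window at the end of this diff) are resolved against it. A hypothetical lookup helper, purely to illustrate the contract — the actual resolution happens in the widget toolkit, not via a function of this name:

// Hypothetical helper showing how an 'onlineHelp' key resolves to a docs link;
// 'resolveOnlineHelp' is not part of the real widget toolkit API.
const helpInfo = {
    "chapter-zfs": { link: "/docs/sysadmin.html#chapter-zfs", title: "ZFS on Linux" },
};

function resolveOnlineHelp(key) {
    let info = helpInfo[key];
    return info ? { href: info.link, tooltip: info.title } : null;
}

resolveOnlineHelp('chapter-zfs');
// -> { href: '/docs/sysadmin.html#chapter-zfs', tooltip: 'ZFS on Linux' }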


@ -41,9 +41,9 @@ Ext.define('PBS.Utils', {
let files = data.count; let files = data.count;
if (mixed > 0) { if (mixed > 0) {
return PBS.Utils.cryptmap.indexOf('mixed'); return PBS.Utils.cryptmap.indexOf('mixed');
} else if (files === encrypted) { } else if (files === encrypted && encrypted > 0) {
return PBS.Utils.cryptmap.indexOf('encrypt'); return PBS.Utils.cryptmap.indexOf('encrypt');
} else if (files === signed) { } else if (files === signed && signed > 0) {
return PBS.Utils.cryptmap.indexOf('sign-only'); return PBS.Utils.cryptmap.indexOf('sign-only');
} else if ((signed+encrypted) === 0) { } else if ((signed+encrypted) === 0) {
return PBS.Utils.cryptmap.indexOf('none'); return PBS.Utils.cryptmap.indexOf('none');
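The hunk above only shows the top of `calculateCryptMode`; the point of the change is that a group with zero counted files no longer reports 'encrypt' or 'sign-only' merely because `0 === 0`. A standalone sketch of the decision order, reconstructed from the visible lines — the variable setup and the final 'mixed' fallback are assumptions:

// Sketch of the fixed aggregate crypt-mode decision; the counter setup above
// the shown hunk and the trailing 'mixed' fallback are assumptions.
const cryptmap = ['none', 'mixed', 'sign-only', 'encrypt'];

function calculateCryptMode(data) {
    let mixed = data.mixed;
    let encrypted = data.encrypt;
    let signed = data['sign-only'];
    let files = data.count;

    if (mixed > 0) {
        return cryptmap.indexOf('mixed');
    } else if (files === encrypted && encrypted > 0) {
        return cryptmap.indexOf('encrypt');   // all counted files encrypted
    } else if (files === signed && signed > 0) {
        return cryptmap.indexOf('sign-only'); // all counted files signed
    } else if ((signed + encrypted) === 0) {
        return cryptmap.indexOf('none');      // nothing signed or encrypted
    }
    return cryptmap.indexOf('mixed');         // partial encryption/signing
}

// With zero counted files the old check `files === encrypted` (0 === 0) held;
// such a group now falls through to 'none':
calculateCryptMode({ none: 0, mixed: 0, 'sign-only': 0, encrypt: 0, count: 0 });
// -> cryptmap.indexOf('none') === 0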


@ -4,15 +4,17 @@ Ext.define('PBS.data.CalendarEventExamples', {
field: ['value', 'text'], field: ['value', 'text'],
data: [ data: [
//FIXME { value: '*/30', text: Ext.String.format(gettext("Every {0} minutes"), 30) }, { value: '*:0/30', text: Ext.String.format(gettext("Every {0} minutes"), 30) },
{ value: 'hourly', text: gettext("Every hour") }, { value: 'hourly', text: gettext("Every hour") },
//FIXME { value: '*/2:00', text: gettext("Every two hours") }, { value: '0/2:00', text: gettext("Every two hours") },
{ value: '2,22:30', text: gettext("Every day") + " 02:30, 22:30" }, { value: '2,22:30', text: gettext("Every day") + " 02:30, 22:30" },
{ value: 'daily', text: gettext("Every day") + " 00:00" }, { value: 'daily', text: gettext("Every day") + " 00:00" },
{ value: 'mon..fri', text: gettext("Monday to Friday") + " 00:00" }, { value: 'mon..fri', text: gettext("Monday to Friday") + " 00:00" },
//FIXME{ value: 'mon..fri */1:00', text: gettext("Monday to Friday") + ': ' + gettext("hourly") }, { value: 'mon..fri *:00', text: gettext("Monday to Friday") + ', ' + gettext("hourly") },
{ value: 'sat 18:15', text: gettext("Every Saturday") + " 18:15" }, { value: 'sat 18:15', text: gettext("Every Saturday") + " 18:15" },
//FIXME{ value: 'monthly', text: gettext("Every 1st of Month") + " 00:00" }, // not yet possible.. { value: 'monthly', text: gettext("Every first day of the Month") + " 00:00" },
{ value: 'sat *-1..7 02:00', text: gettext("Every first Saturday of the month") + " 02:00" },
{ value: 'yearly', text: gettext("First day of the year") + " 00:00" },
], ],
}); });
@ -26,6 +28,8 @@ Ext.define('PBS.form.CalendarEvent', {
displayField: 'text', displayField: 'text',
queryMode: 'local', queryMode: 'local',
matchFieldWidth: false,
config: { config: {
deleteEmpty: true, deleteEmpty: true,
}, },
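The corrected examples above are written in the form systemd also accepts: a `start/step` value in a field expands to every step-th value from the start, so `*:0/30` reads as "minutes 0 and 30 of every hour". A small illustration of that expansion only — not the actual parser in tools/systemd/parse_time:

// Expand a 'start/step' or '*' field into its matching values, mirroring how
// '*:0/30' means minutes 0 and 30, and '0/2:00' means every second hour.
function expandField(spec, max) {
    if (spec === '*') return Array.from({ length: max }, (_, i) => i);
    let [start, step] = spec.split('/').map(Number);
    let values = [];
    for (let v = start; v < max; v += (step || max)) {
        values.push(v);
    }
    return values;
}

expandField('0/30', 60); // -> [0, 30]          (minute field of '*:0/30')
expandField('0/2', 24);  // -> [0, 2, 4, ..., 22] (hour field of '0/2:00')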


@ -12,7 +12,11 @@
<link rel="stylesheet" type="text/css" href="/fontawesome/css/font-awesome.css" /> <link rel="stylesheet" type="text/css" href="/fontawesome/css/font-awesome.css" />
<link rel="stylesheet" type="text/css" href="/widgettoolkit/css/ext6-pmx.css" /> <link rel="stylesheet" type="text/css" href="/widgettoolkit/css/ext6-pmx.css" />
<link rel="stylesheet" type="text/css" href="/css/ext6-pbs.css" /> <link rel="stylesheet" type="text/css" href="/css/ext6-pbs.css" />
{{#if language}}
<script type='text/javascript' src='/locale/pbs-lang-{{ language }}.js'></script>
{{else}}
<script type='text/javascript'> function gettext(buf) { return buf; } </script> <script type='text/javascript'> function gettext(buf) { return buf; } </script>
{{/if}}
{{#if debug}} {{#if debug}}
<script type="text/javascript" src="/extjs/ext-all-debug.js"></script> <script type="text/javascript" src="/extjs/ext-all-debug.js"></script>
<script type="text/javascript" src="/extjs/charts-debug.js"></script> <script type="text/javascript" src="/extjs/charts-debug.js"></script>
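The template either loads a locale script for the configured language or falls back to an identity `gettext`. A hypothetical minimal shape of such a locale script — the real `pbs-lang-*.js` files are generated, this only illustrates the contract that the else-branch satisfies trivially:

// Hypothetical pbs-lang-de.js sketch: gettext looks up a catalog and falls
// back to the untranslated string, matching the template's identity fallback.
const catalog = {
    "Every hour": "Jede Stunde",
    "Every day": "Jeden Tag",
};

function gettext(buf) {
    return catalog[buf] !== undefined ? catalog[buf] : buf;
}

gettext("Every hour");   // -> "Jede Stunde"
gettext("Verify State"); // not in catalog -> returned unchanged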


@ -5,6 +5,8 @@ Ext.define('PBS.window.SyncJobEdit', {
userid: undefined, userid: undefined,
onlineHelp: 'syncjobs',
isAdd: true, isAdd: true,
subject: gettext('SyncJob'), subject: gettext('SyncJob'),