Compare commits


290 Commits

9d79cec4d5 bump version to 0.9.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:13:04 +01:00
4935681cf4 ui: sync jobs: add tooltip for remove vanished
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:07:07 +01:00
669fa672d9 ui: sync jobs: reorder fields
group local ones together on the left side, and source + schedule
on the right side.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:05:48 +01:00
a797583535 ui: sync jobs: fix originalValue of owner and improve label
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:04:42 +01:00
54ed1b2a71 ui: sync jobs: only set default schedule when creating new jobs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:04:06 +01:00
8e12e86f0b ui: add shell panel under administration
some users prefer an inline console; we still have the pop-out
console in 'Administration'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-04 18:16:49 +01:00
fe7bdc9d29 proxy: also rotate auth.log file
no need to trigger a re-open here; we always re-open that file.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
546b6a23df proxy: logrotate: do not serialize sending async log-reopen commands
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
4fdf13f95f api: factor out auth logger and use for all API authentication failures
we have information here not available in the access log, especially
if the /api2/extjs formatter is used, which encapsulates errors in a
200 response.

So keep the auth log for now, but extend its use from create-ticket
calls to all authentication failures for API calls; this ensures one
can also fail2ban tokens.

Do that logging in a central place, which makes it simple but means
that we do not have the user ID information available to include in
the log.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
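A minimal sketch of such centralized auth-failure logging; every name and the log format below are illustrative assumptions, not the actual patch:

// Hypothetical sketch: record every failed API authentication in one
// central place. As noted above, the user ID is not available here,
// so only connection-level data gets logged.
use std::sync::Mutex;

static AUTH_LOG: Mutex<Vec<String>> = Mutex::new(Vec::new());

fn log_auth_failure(peer: &str, path: &str, err: &str) {
    let line = format!("authentication failure; rhost={} path={} msg={}", peer, path, err);
    AUTH_LOG.lock().unwrap().push(line);
}

fn main() {
    log_auth_failure("192.0.2.1", "/api2/json/access/ticket", "invalid credentials");
    println!("{}", AUTH_LOG.lock().unwrap()[0]);
}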
385681c9ab worker task: fix passing upid to send command
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:16:55 +01:00
be99df2767 log rotate: only add .zst to new file after second rotation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:16:55 +01:00
30200b5c4a ui: fix task description for log rotate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 14:20:44 +01:00
f47c1d3a2f proxy: use new datastore notify settings 2020-11-04 11:54:29 +01:00
6e545d0058 config: allow to configure who receives job notify emails 2020-11-04 11:54:29 +01:00
84006f98b2 ui: SyncJobEdit: fix sending 'delete' values on SyncJob creation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-04 11:39:52 +01:00
42ca9e918a sync: improve log format 2020-11-04 09:10:56 +01:00
ea93bea7bf proxy: log if there are too many open connections 2020-11-04 08:49:35 +01:00
0081903f7c fix bug #2870: use updated tickets 2020-11-04 08:20:36 +01:00
c53797f627 ui: set default deduplication factor to 1.0 2020-11-04 07:12:55 +01:00
e1d367df47 proxy: use env PROXMOX_DEBUG to enable/disable debug output
We only print early connection errors when this env var is set.
2020-11-04 06:55:57 +01:00
71f413cd27 cleanup: use Arc to count open connections 2020-11-04 06:35:44 +01:00
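The counting pattern named in the cleanup above, sketched (assumption: one Arc clone held per open connection, counted via Arc::strong_count):

use std::sync::Arc;

fn main() {
    // One clone of this Arc is held per open connection; the strong
    // count minus the reference kept here is the number of connections.
    let connection_token = Arc::new(());

    let conn_a = Arc::clone(&connection_token); // connection accepted
    let conn_b = Arc::clone(&connection_token); // another one
    println!("open connections: {}", Arc::strong_count(&connection_token) - 1); // 2

    drop(conn_a); // connection closed
    println!("open connections: {}", Arc::strong_count(&connection_token) - 1); // 1
    drop(conn_b);
}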
48aa2b93b7 fix #3106: correctly queue incoming connections 2020-11-04 06:24:42 +01:00
641862ddad bump version to 0.9.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:41:26 +01:00
2f08ee1fe3 report: add more commands/files to check
add all of our configuration files in /etc/proxmox-backup/; further,
call some ZFS tools to get their status.

Also, use the subscription command from manager, as we often require
more info than the status. Also, adapt formatting a bit.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:33:16 +01:00
93f077c5cf report: avoid lazy_static for command/files/.. definitions
those are not in a hot code path, and it is not really much work to
build them on the fly..

It may not matter much, but it is unnecessary. Rust will probably
inline most of it anyway..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:27:16 +01:00
941342f70e manager: report: call method directly, avoid HTTPS request
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:23:43 +01:00
9a556c8a30 manager: add report cli command
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-11-03 15:16:42 +01:00
46dce62be6 report: add webui button for system report
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-11-03 15:16:42 +01:00
b0ef9631e6 report: add api endpoint and function to generate report
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-11-03 15:16:42 +01:00
fb0d9833af ui: task filter: add button icons
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 14:49:04 +01:00
bfe4b7d782 ui: task filter: reorder to avoid wasting vertical space
Includes some eslint fixes and label changes as well; it was too much
work to split those out into their own commit.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 14:48:20 +01:00
185dab7678 ui: add panel/Tasks and use it for the node tasks
this is a panel that is heavily inspired by the widget-toolkit's
node/Tasks panel, but is adapted to use the extended api calls of
pbs (e.g. since/until filter)

has a 'filter' panel (like PMG's log tracker gui), but it is collapsible

if we extend the api calls of the other projects, we can merge this
again into the widget-toolkit one and use that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
c1fa057cce api2/node/tasks: add optional until filter
so that users can select specific time ranges with 'since' and 'until'
(e.g. a single day)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
f66565203a api2/status: remove list_task api call
we do not need it anymore, we can do everything with nodes/NODE/tasks
instead

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
a2a7dd1535 api2/node/tasks: add optional since/typefilter/statusfilter
and change all users of the /status/tasks api call to this

with this change we can now delete the /status/tasks api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
e7dd169fdf api2/node/tasks: change limit behaviour when it is 0
instead of returning 0 elements (which does not really make sense anyway),
change it so that there is no limit anymore (besides usize::MAX)

this is technically a breaking change for the api, but I guess
no one is using limit=0 for anything sensible anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
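The new limit semantics as a minimal sketch (hypothetical helper, not the actual patch):

fn effective_limit(limit: u64) -> usize {
    // limit=0 used to yield zero elements; now it means "no limit"
    // (in practice capped at usize::MAX).
    if limit == 0 { usize::MAX } else { limit as usize }
}

fn main() {
    let tasks: Vec<u32> = (0..5).collect();
    let shown: Vec<_> = tasks.iter().take(effective_limit(0)).collect();
    assert_eq!(shown.len(), 5); // limit=0 now returns everything
}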
fa31f4c54c server/worker_task: add tasktype to return the api type of a taskstate
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
038ee59960 cleanup: use const_regex, use BACKUP_ID_REGEX for api too 2020-11-03 06:36:50 +01:00
e1c1533790 fix #3039: use the same ID regex for info and api
in the api we use PROXMOX_SAFE_ID_REGEX for backup ids, but here
(where we use it to list them) we use a local regex

since the first is a superset of the one used here, simply extend
the local one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 06:25:06 +01:00
9de7c71a81 docs: extend managing remotes
with information about required privileges and limitations

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 21:13:24 +01:00
aa64e06540 sync: add access check tests
should cover all the current scenarios. remote server-side checks can't
be meaningfully unit-tested, but they are simple enough that they
should hopefully never break.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 21:13:24 +01:00
18077ac633 user.cfg/user info: add test constructors
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 21:13:24 +01:00
a71a009313 proxy: drop now unused UPID import
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 21:08:38 +01:00
b6ba5acd29 proxmox-backup-proxy: use only jobstate for garbage_collection schedule
in case the garbage_collection errors out, we never set the in-memory
state, so if it failed, the last 'good' starttime was considered
for the schedule

this could lead to the job running every minute instead of the
correct schedule

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
4fdf5ddf5b api2/admin/datastore: start the garbage_collection task with our helper
instead of manually, this has the advantage that we now set
the jobstate correctly and can return with an error if it is
currently running (instead of failing in the task)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
c724f65805 server/gc_job: add 'to_stdout'
we will use this for the manual api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
79c9bf55b9 backup/{dynamic, fixed}_index: improve error message for small index files
index files that were smaller than their respective header size
would fail with

"failed to fill whole buffer"

instead, now check explicitly for the size and fail with
"index too small (size)"

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
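A sketch of such an explicit size check (header size and message wording are illustrative; assumes the anyhow error crate used throughout the project):

use anyhow::{bail, Error};

const HEADER_SIZE: u64 = 4096; // illustrative; real headers differ per index type

fn check_index_size(size: u64) -> Result<(), Error> {
    // Fail early with a clear message instead of the generic
    // "failed to fill whole buffer" read error.
    if size < HEADER_SIZE {
        bail!("index too small ({})", size);
    }
    Ok(())
}

fn main() {
    assert!(check_index_size(100).is_err());
    assert!(check_index_size(8192).is_ok());
}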
788d82d9b7 gc: mark_used_chunks: reduce implementation noise
try to reduce some unnecessary lines, make match arms more precise so
one can see faster what's actually happening.

Also, avoid
> return Err(format_err!(...))
stuff, just use bail!()

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 21:08:38 +01:00
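The style change in miniature; anyhow's bail! is shorthand for exactly the longer return:

use anyhow::{bail, format_err, Error};

fn verbose(chunk_ok: bool) -> Result<(), Error> {
    if !chunk_ok {
        return Err(format_err!("chunk is corrupt")); // the noisier form
    }
    Ok(())
}

fn concise(chunk_ok: bool) -> Result<(), Error> {
    if !chunk_ok {
        bail!("chunk is corrupt"); // same effect, less noise
    }
    Ok(())
}

fn main() {
    assert_eq!(verbose(false).is_err(), concise(false).is_err());
}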
2f0b92352d garbage collect: improve index error messages
so that in case of a broken index file, the user knows which it is

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 20:08:50 +01:00
b7f2be5137 log rotate task: make task archive limits be binary based
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
72aa1834dc log rotate task: adapt internal jobstate ID, set worker one to None for now
as we have only one logrotate task currently..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
fe4cc5b1a1 server: implement access log rotation with re-open via command socket
re-use the future we already have for task log rotation to trigger
it.

Move the FileLogger in ApiConfig into an Arc, so that we can actually
update it and have REST use the new one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
04b053d87e server: write main daemon PID to run directory
so that we can easily get the main PID of the most recently launched
daemon. Will be used to get the control socket of that one for access
log rotation in a future patch

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:50:24 +01:00
b469011fd1 command socket: make create_control_socket private
this is internal for now; use the CommandoSocket struct
implementation, and ideally not a new one but the existing ones
created in the proxy and api daemons.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:50:24 +01:00
a68768cf31 server: use generalized commando socket for worker tasks commands
Allows to extend the use of that socket in the future, e.g., for log
rotate re-open signaling.

To reflect this we use a more general name, and change the commands
to a clearer namespace.

Both are actually somewhat of a breaking change, but the single
real-world issue it should be able to cause is that one won't be able
to stop tasks from older daemons, which still use the older abstract
socket name format.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:48:04 +01:00
f3df613cb7 server: add CommandoSocket where multiple users can register commands
This is a preparatory step to replace the task control socket with it
and provide a "reopen log file" command for the rest server.

Kept it simple by disallowing registration of new commands after the
socket gets spawned; this avoids the need for locking.

If we really need that we can always wrap it in an Arc<RwLock<..>> or
something like that, or, even nicer, register at compile time.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
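How register-before-spawn might look; this mock only mirrors the described behavior and is not the real CommandoSocket API:

use std::collections::HashMap;

struct CommandoSocketMock {
    commands: HashMap<String, Box<dyn Fn(&str) -> String>>,
    spawned: bool,
}

impl CommandoSocketMock {
    fn new() -> Self {
        Self { commands: HashMap::new(), spawned: false }
    }

    fn register_command(&mut self, name: &str, handler: Box<dyn Fn(&str) -> String>) -> Result<(), String> {
        // Registration is only allowed before spawning, so the command
        // table never changes while the socket is served (no locking).
        if self.spawned {
            return Err("cannot register commands after spawn".into());
        }
        self.commands.insert(name.to_string(), handler);
        Ok(())
    }

    fn spawn(&mut self) {
        self.spawned = true; // from here on the command table is read-only
    }
}

fn main() {
    let mut sock = CommandoSocketMock::new();
    sock.register_command("log-rotate", Box::new(|_args| "OK".into())).unwrap();
    sock.spawn();
    assert!(sock.register_command("too-late", Box::new(|_| "".into())).is_err());
}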
056ee78567 config: network: use error message when parsing netmask failed
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
3cd529ea51 tools: file logger: avoid some possible unwraps in log method
writing to a file can explode quite easily.
time formatting to rfc3339 should be more robust, but it has a few
conditions where it could fail, so catch that too (and only really
do it if required).

The writes to stdout are left as-is; it is normally redirected to the
journal, which is in memory and thus breaks later than most stuff,
and at that point we probably do not care anymore anyway.

It could make sense to actually return a result here..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
3aade17125 tools: log rotate: compressing rotated files
We always renamed the last one to a file without a compression
extension, even if it was .zst previously. So always add the correct
ending to the new last one if compress was true.

Further, we cannot detect whether compression would be required if we
already rotated (renamed) it to the file with .zst included.

So check on rotation itself if it would be a "no .zst" -> ".zst"
transition, and call compress there.

it really should be OK now *knocking wood*

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 18:35:13 +01:00
1dc2fe20dd tools: log rotate: fix file ending for compressed files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 18:35:13 +01:00
645a47ff6e config: support netmask when parsing interfaces file 2020-11-02 14:32:35 +01:00
b1456a8ea7 ui: fix verificationjob task description 2020-11-02 10:15:52 +01:00
a9fcbec9dc file logger: allow reopening file
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 10:03:10 +01:00
346a488e35 pull out /run and /var/log directory constants to buildcfg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 10:03:10 +01:00
3066f56481 notify: add link to server GUI 2020-11-02 09:12:14 +01:00
07ca4e3609 gc: remove extra empty lines in email notification template 2020-11-02 09:12:14 +01:00
dcd75edb72 ui: fix dashboard subscription
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 08:08:44 +01:00
59af9ca98e sync: allow sync for non-superusers
by requiring
- Datastore.Backup permission for target datastore
- Remote.Read permission for source remote/datastore
- Datastore.Prune if vanished snapshots should be removed
- Datastore.Modify if another user should own the freshly synced
snapshots

reading a sync job entry only requires knowing about both the source
remote and the target datastore.

note that this does not affect the Authid used to authenticate with the
remote, which of course also needs permissions to access the source
datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 07:10:12 +01:00
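Checks like the list above ultimately reduce to bit tests against the privilege set derived from the ACL tree; a minimal sketch with illustrative constants, not the real values or API:

const PRIV_DATASTORE_BACKUP: u64 = 1 << 0; // illustrative bit positions
const PRIV_DATASTORE_PRUNE: u64 = 1 << 1;
const PRIV_REMOTE_READ: u64 = 1 << 2;

fn check_privs(user_privs: u64, required: u64) -> Result<(), String> {
    // All required bits must be present in the user's effective set.
    if user_privs & required == required {
        Ok(())
    } else {
        Err("permission check failed".into())
    }
}

fn main() {
    let privs = PRIV_DATASTORE_BACKUP | PRIV_REMOTE_READ;
    // enough for a plain sync...
    assert!(check_privs(privs, PRIV_DATASTORE_BACKUP | PRIV_REMOTE_READ).is_ok());
    // ...but not for one that removes vanished snapshots
    assert!(check_privs(privs, PRIV_DATASTORE_PRUNE).is_err());
}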
f1694b062d fix #2864: add owner option to sync
instead of hard-coding 'backup@pam'. this allows a bit more flexibility
(e.g., syncing to a datastore that can directly be used as restore
source) without overly complicating things.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 07:08:05 +01:00
fa7aceeb15 manager: subscription commands s/delete/remove/
no idea why I added it as "delete", for all other such operations we
use the "remove" sub-command...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-01 13:19:30 +01:00
0e16f57e37 apt: sort packages for update notification mail
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 22:58:52 +01:00
bc00289bce add daily update and maintenance task
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 22:51:26 +01:00
86d602457a api: apt: implement support to send notification email on new updates
again, base idea copied off PVE, but we save the information about
which pending version we already sent a mail for in a separate
object, to keep the api return type APTUpdateInfo clean.

This also makes a few things a bit easier, as we can update the
package status without saving/restoring the notify information.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 22:51:26 +01:00
33508b1237 api: implement apt pkg cache
based on the idea of PVE

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:42:49 +01:00
b282557563 api: apt: factor out and improve calling apt update
apt changes some of its state/cache most of the time even if it
errors out, so we actually want to print both stderr and stdout.

Further, only warn if its exit code is non-zero; for the same
rationale, it may still report available updates even if it errors
(e.g., because a future pbs-enterprise repo is additionally
configured but not accessible).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:59 +01:00
e6513bd5de api/tools: split out apt helpers from api to own module
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
5911f74096 api types: derive Debug for APTUpdateInfo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
0bb74e54b1 worker task: drop debug prints
they are not useful anymore, rather noisy

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
f254a27071 tools: do not unnecessarily prefix module path
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
d0abba3397 trivial: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
54adea366c ui: ACL view: do not save grid state
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 11:36:48 +01:00
ba2e4b15da ui: improve ACL view layout
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 11:33:31 +01:00
0ccdd1b6a4 ui: bump sync/verify grid stateid
so that people get the improved view by default

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 10:58:57 +01:00
fb66c85363 ui: improve sync job view layout
Avoid overuse of flex; that is as bad as making everything a fixed width.

In spirit similar to the previous commit for the verify panel, see
that for some rationale.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 10:56:51 +01:00
aae4c30ceb ui: improve verify job view layout, show job-id
Avoid overuse of flex; that is as bad as making everything a fixed width.

* Set date-time fields to 150 px as they are fixed width text.
* Duration is maximal 3 units, so it can be made fixed too.
* Schedule is flex with lower and upper limits; this is useful as
  it's a field which can be both quite short (daily) or long
  (mon..fri *-10..12-1..7 02:00/30:30)
* Status and comment are flex; this way we always get a filled grid

Move status after the last-verify date and duration fields; this
increases information density at the left of the grid, reducing the
need for eye movement, and also groups the "information about the
last job" together more nicely.

Show the job-id by default, even if it is auto-generated when adding
via the gui, as it can help find the respective job faster when
getting a mail with an error.

Reported-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 10:56:51 +01:00
0656344ae4 ui: administration: set icons for tabs
orient on PVE; the ones for Updates and ServerStatus should be
self-explanatory.

Services is named "System" in PVE, but reusing that cogs icon makes
similar sense here too, and seems in line with the search results of
a "service icons" query.

Syslog is the same as our general log icon, but as we normally also
use this for worker task logs, and that is present here too, I
changed the worker task log icon to the alternative list, which
resembles a task view window - so IMO even better than before.

Sync that change also into the always present tasks button at the top
right.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 09:11:11 +01:00
1143f6ca93 cleanup: fix wording in GC status emails 2020-10-31 07:56:42 +01:00
90e94aa280 docs: client: avoid that repo gets detected as email address
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 17:08:08 +01:00
c0af05e143 docs: fixup bad RST table format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 17:05:49 +01:00
4aef06f1b6 docs: add token example to client, and reformat a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 17:01:22 +01:00
034cf70b72 docs: add API tokens to documentation
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:46:19 +01:00
8b600f9965 api: replace auth_id with auth-id
in parameters, and fix up the completion for the ACL update parameter.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:46:19 +01:00
e4e280183e privs: add some more comments explaining privileges
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:42:30 +01:00
2fc45a97a9 privs: remove PRIV_REMOVE_PRUNE
it's not used anywhere, and not needed either until the day we might
implement push syncs.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:42:26 +01:00
b7ce2e575f verify jobs: add permissions
equivalent to verifying a whole datastore, except for reading job
(entries), which is accessible to regular Datastore.Audit/Backup users
as well.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
09f6a24078 verify: introduce & use new Datastore.Verify privilege
for verifying a whole datastore. Datastore.Backup now allows verifying
only backups owned by the triggering user.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
b728a69e7d privs: use Datastore.Modify|Backup to set backup notes
Datastore.Backup is limited to owned groups, as usual.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
1401f4be5f privs: allow reading notes with Datastore.Audit
they are returned when reading the manifest, which just requires
Datastore.Audit as well. Datastore.Read is for reading backup contents,
not metadata.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
fdb4416bae ui: permission path selector: cbind typeAhead to editable
ExtJS throws an exception if 'typeAhead' is true but 'editable' is
false.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 16:31:53 +01:00
abe1edfc95 update d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-30 16:11:50 +01:00
e4a864bd21 impl From<Authid> for Userid
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-30 15:19:07 +01:00
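Such a conversion in outline, with simplified stand-in types (the real Authid also carries validation beyond the optional token name dropped here):

#[derive(Clone, Debug, PartialEq)]
struct Userid(String);

struct Authid {
    user: Userid,
    tokenname: Option<String>,
}

impl From<Authid> for Userid {
    fn from(authid: Authid) -> Self {
        // the owning user, discarding the token part
        authid.user
    }
}

fn main() {
    let auth = Authid { user: Userid("john@pbs".into()), tokenname: Some("automation".into()) };
    let user: Userid = auth.into();
    assert_eq!(user, Userid("john@pbs".into()));
}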
7a7368ee08 bump proxmox dependency to 0.7.0 for totp updates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-30 15:19:07 +01:00
e707fd2b3b ui: Utils: add product specific task descriptions
and sort them alphabetically

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-30 14:05:17 +01:00
625a56b75e server/rest: accept also = as token separator
Like we do in Proxmox VE

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:34:26 +01:00
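A sketch of accepting either separator between token ID and secret; the credential format is simplified here, the real parsing lives in the REST server:

// Split "<token-id><sep><secret>" where <sep> may be ':' or,
// as in Proxmox VE, '='. Token IDs contain neither character.
fn split_token(cred: &str) -> Option<(&str, &str)> {
    let pos = cred.find(|c| c == ':' || c == '=')?;
    Some((&cred[..pos], &cred[pos + 1..]))
}

fn main() {
    assert_eq!(split_token("john@pbs!backup:SECRET"), Some(("john@pbs!backup", "SECRET")));
    assert_eq!(split_token("john@pbs!backup=SECRET"), Some(("john@pbs!backup", "SECRET")));
}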
6d8a1ac9e4 server/rest: use constants for HTTP headers
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:33:36 +01:00
362739054e api tokens: add authorization method
and properly decode secret (which is a no-op with the current scheme).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 13:15:14 +01:00
2762481cc8 proxmox-backup-manager: add subscription commands
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:03:58 +01:00
652506e6b8 api: define subscription module and methods as public
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:03:58 +01:00
926d253126 api: define subscription key schema and use it
nicer to have the correct regex checked in parameter verification
already

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 12:57:14 +01:00
1cd951c93e proxy: fix warnings
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 12:49:43 +01:00
3b707fbb8f proxy: split out code to run garbage collection job 2020-10-30 11:01:45 +01:00
b15751bf55 check_schedule cleanup: use &str instead of String
This way we can avoid many clone() calls.
2020-10-30 09:49:50 +01:00
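The signature change in miniature (hypothetical function; it shows why borrowing removes the clone() calls):

// Before: taking String forces callers to hand over or clone an allocation.
fn check_schedule_owned(event: String) -> bool {
    !event.is_empty()
}

// After: borrowing suffices for a read-only check.
fn check_schedule(event: &str) -> bool {
    !event.is_empty()
}

fn main() {
    let schedule = String::from("daily");
    assert!(check_schedule_owned(schedule.clone())); // extra allocation
    assert!(check_schedule(&schedule));              // no clone needed
}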
82c05b41fa proxy: extract commonly used logic for scheduling into new function
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-30 09:49:50 +01:00
b8d9079835 proxy: move prune logic into new file
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-30 09:49:50 +01:00
f8a682a873 ui: user menu: allow changing language while logged in
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 09:46:04 +01:00
b03a19b6e8 bump version to 0.9.4-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 20:25:37 +01:00
603a6bd183 d/postinst: followup: grep and sed use different regex escaping ..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 20:25:37 +01:00
83b039af35 d/postinst: make more resilient
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 19:58:41 +01:00
c9299e76fc bump version to 0.9.3-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 17:20:04 +01:00
2f1a46f748 ui: move user, token and permissions into an access control tab panel
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 16:47:18 +01:00
2b38dfb456 d/control: update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 16:18:40 +01:00
f487a622ce ui: datastore summary: handle missing snapshots of a type
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 15:52:53 +01:00
906ef6c5bd api2/access/user: fix return type schema
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:20:10 +01:00
ea1853a17b api2/access/user: drop Option, treat empty Vec as None
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:17:54 +01:00
221177ba41 fixup hardcoded paths
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:15:17 +01:00
184a37635b gui: add API token ACLs
and the needed API token selector.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
b2da7fbd1c acls: allow viewing/editing user's token ACLs
even for otherwise unprivileged users.

since effective privileges of an API token are always intersected with
those of their owning user, this does not allow an unprivileged user to
elevate their privileges in practice, but avoids the need to involve a
privileged user to deploy API tokens.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
7fe76d3491 gui: add API token UI
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
e6b5bf69a3 gui: add permissions button to user view
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
4615325f9e manager: add user permissions command
useful for debugging complex ACL setups.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
2156dec5a9 manager: add token commands
to generate, list and delete tokens. adding them to ACLs already works
out of the box.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
16245d540c tasks: allow unpriv users to read their tokens' tasks
and tighten down the return schema while we're at it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
bff8557298 owner checks: handle backups owned by API tokens
a user should be allowed to read/list/overwrite backups owned by their
own tokens, but a token should not be able to read/list/overwrite
backups owned by their owning user.

when changing ownership of a backup group, a user should be able to
transfer ownership to/from their own tokens if the backup is owned by
them (or one of their tokens).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
34aa8e13b6 client/remote: allow using ApiToken + secret
in place of user + password.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
babab85b56 api: add permissions endpoint
and adapt privilege calculation to return propagate flag

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
6746bbb1a2 api: allow listing users + tokens
since it's not possible to extend existing structs, UserWithTokens
duplicates most of user::User.. To avoid duplicating user::ApiToken as
well, this returns full API token IDs, not just the token name part.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
942078c40b api: add API token endpoints
beneath the user endpoint.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
c30816c1f8 REST: extract and handle API tokens
and refactor handling of headers in the REST server while we're at it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
e6dc35acb8 replace Userid with Authid
in most generic places. this is accompanied by a change in
RpcEnvironment to purposefully break existing call sites.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
e10c5c74f6 bump proxmox dependency to 0.6.0 for api tokens and tfa
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:11:39 +01:00
f8adf8f83f config: add token.shadow file
containing pairs of token ids and hashed secret values.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
e0538349e2 api: add Authid as wrapper around Userid
with an optional Tokenname, appended with '!' as delimiter in the string
representation like for PVE.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
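Parsing that string form, sketched with plain tuples instead of the real types:

// "user@realm" or "user@realm!tokenname"; '!' delimits the token part.
fn parse_authid(s: &str) -> (&str, Option<&str>) {
    match s.split_once('!') {
        Some((user, token)) => (user, Some(token)),
        None => (s, None),
    }
}

fn main() {
    assert_eq!(parse_authid("john@pbs"), ("john@pbs", None));
    assert_eq!(parse_authid("john@pbs!automation"), ("john@pbs", Some("automation")));
}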
0903403ce7 bump version to 0.9.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:58:21 +01:00
b6563f48ad GC: improve task logs
Make it clearer that removed files are chunks (not indexes or
something like that; the user cannot know that we do not touch them here)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:47:39 +01:00
932390bd46 GC: fix logging leftover bad chunks
fixes commit b4fb262335, which copied
over the "Removed bad files:" block, but only adapted the log text,
not the actual variable.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:40:29 +01:00
6b7688aa98 ui: datastore: fix sync/verify job removal prompt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:34:31 +01:00
ab0cf7e6a1 ui: drop id field from verify/sync add window
the config is shared between multiple datastores with the ID as, well,
the unique ID, but we only show those of a single datastore.

So if a user adds a new one with a fixed ID "12345" but a job with
that ID already exists on another store, they get an error about
duplicate IDs, but cannot relate to it, as that duplicate job is not
visible (filtered away)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:22:43 +01:00
264779e704 server/worker_task: simplify task log writing
instead of prerotating 1000 tasks
(which resulted in 2 writes each time an active worker was finished)
simply append finished tasks to the archive (which will be rotated)

page cache should be good enough so that we can get the task logs fast

since existing installations might have an 'index' file, we
still have to read tasks from there, but only if it exists

this simplifies the TaskListInfoIterator a good amount

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-29 12:41:20 +01:00
7f3d91003c worker task: remove debug print, faster modulo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 12:35:33 +01:00
14e0862509 api: datastore status: introduce proper structs and restore compatibility
by moving the properties of the storage status out again to the top
level object

also introduce proper structs for the types used, to get type-safety
and better documentation for the api calls

this changes the backup counts from an array of [groups,snapshots] to
an object/struct with { groups, snapshots } and includes 'other' types
(though we do not have any at this moment)

this way it is better documented

this also adapts the ui code to cope with the api changes

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-29 12:31:27 +01:00
9e733dae48 send sync job status emails 2020-10-29 12:22:50 +01:00
bfea476be2 schedule_datastore_sync_jobs: remove unneccessary clone() 2020-10-29 12:22:41 +01:00
385cf2bd9d send_job_status_mail: correctly escape html characters 2020-10-29 11:22:08 +01:00
d6373f3525 garbage_collection: log deduplication factor 2020-10-29 11:13:01 +01:00
01f37e01c3 ui: datastore: use pointer cursor for edit notes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 10:45:37 +01:00
b4fb262335 garbage_collection: log bad chunks (still_bad value) 2020-10-29 10:24:31 +01:00
5499bd3dee fix #2998: encode mtime as i64 instead of u64
saves a file's mtime as i64 instead of u64, which enables backup of
files with negative mtime

catalog_decode_i64 is compatible with encoded u64 values (if < 2^63)
but not the reverse, so all "old" catalogs can be read with the new
decoder, but catalogs that contain negative mtimes will decode wrongly
on older clients

also remove the arbitrary maximum value of 2^63 - 1 for
encode_u64 (we just use up to 10 bytes now), correctly
decode them, and update the comments accordingly

also adds tests for i64 encode/decode and for compatibility between
u64 encode and i64 decode

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-29 08:51:10 +01:00
d771a608f5 verify: directly pass manifest to filter function
In order to avoid loading the manifest twice during verify.
2020-10-29 07:59:19 +01:00
227a39b34b bump version to 0.9.2-2
re-use the changelog as this was not released publicly and it's just
a small fix

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 23:05:58 +01:00
f9beae9cc9 client: adapt to changed datastore status return schema
fixes commit 16f9f244cf, which extended
the return schema of the status API but did not adapt the client
status command to it.

Simply define our own tiny return schema and use that.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 22:59:40 +01:00
4430f199c4 bump version to 0.9.2-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 21:27:15 +01:00
eef18365e8 tools: socket: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 21:26:11 +01:00
319fe45261 ui: datastore: rework sync layout, make job ID optional
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 21:25:30 +01:00
f26080fab1 ui: datastore: rework verify layout, make job ID optional
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 21:25:07 +01:00
0cbdeed96b ui: datastore summary: indentation/whitespace error fix
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 21:24:25 +01:00
8b4f4d9ee4 tools/logrotate: fix compression logic
we never actually compressed any files, since we only looked at
the extension:
* if it was 'zst' (which was always true for newly rotated files), we
  would not compress it
* even if it was not 'zst', we compressed it in place, never adding '.zst'
  (possibly compressing them multiple times as zstd)

now we simply add new rotated files as '.X' and add a 'target' to the
compress fn, to which we rename it (but now we have to unlink the source
path)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-28 18:50:16 +01:00
b9cc905761 d/control.in: bump versioned dependency for proxmox-widget-toolkit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 18:49:35 +01:00
c9725bb829 ui: datastore: show comment, allow to edit notes
the "comment" is the first line of the "notes" field from a manifest,
show it in the grid and allow editing the full notes.

Hack the click event listener a bit together for the right aligned
edit action button, but it works out well and is efficient (only one
event listener is much cheaper than per-buttons ones).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 18:36:12 +01:00
40492a562f ui: datastore: extend action tooltips with IDs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 18:24:29 +01:00
db67e4fe06 ui: datastore: use simple V. for verify action button
Choosing a good icon is hard here; while the magnifying glass is
somewhat relatable, it reminds one too much of a "Search" function,
which can be quite confusing here.

So use a simple "V.", even if it's probably also not too ideal..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 18:22:22 +01:00
b4b14dc16e do_verification_job: fix "never-reverify" and refactor/comment
commit a4915dfc2b made a wrong fix, as
it did not observe that the last expression was done under the
invariant that we had a last verification result, because if none
could be loaded we already returned true (include).

It thus broke the case of "never re-verify", which is important when
using multiple schedules: a higher-frequency one for new,
unverified snapshots, and a low-frequency one to re-verify older
snapshots, e.g., monthly.

Fix this case again and rework the code to avoid this easy-to-overlook
invariant. Use a nested match to better express the implication of
each setting, and add some comments.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 16:12:09 +01:00
c4a45ec744 document verify job structs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 15:32:28 +01:00
5428f5ca29 do verification: always verify if manifest load fails
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 14:11:44 +01:00
328df3b507 verify: avoid generics and use &dyn Fn() for filter 2020-10-28 13:19:21 +01:00
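The two signatures side by side, simplified; the generic version gets monomorphized per closure type, the &dyn Fn version stays a single function:

struct BackupInfo {
    verified: bool,
}

// Generic: one compiled copy per distinct filter closure.
fn verify_generic<F: Fn(&BackupInfo) -> bool>(info: &BackupInfo, filter: F) -> bool {
    filter(info)
}

// Trait object: one compiled copy, dynamic dispatch.
fn verify_dyn(info: &BackupInfo, filter: &dyn Fn(&BackupInfo) -> bool) -> bool {
    filter(info)
}

fn main() {
    let info = BackupInfo { verified: false };
    let skip_verified = |i: &BackupInfo| !i.verified;
    assert!(verify_generic(&info, skip_verified));
    assert!(verify_dyn(&info, &skip_verified));
}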
a4915dfc2b verify: improve code reuse, fix filter function
Try to reuse verify_all_backups(), because this function has better
logging and a well-defined snapshot order.
2020-10-28 12:58:15 +01:00
d642802d8c jobstate: fix doctest 2020-10-28 10:52:16 +01:00
a20fcab060 fix compile warning 2020-10-28 10:47:30 +01:00
b9e7bcc272 send notification mails for GC and verify jobs 2020-10-28 10:44:23 +01:00
acc3d9df5a src/server/verify_job.rs: add missing file 2020-10-28 07:58:07 +01:00
1298618a83 move jobstate to server 2020-10-28 07:37:01 +01:00
a12388d177 ui: datastore summary: clarify that it's a deduplication factor
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 17:43:45 +01:00
1f092c7802 ui: datastore: used fixed-width icons for summary
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 17:43:10 +01:00
cd82870015 ui: datastore: change GC/Prune title and buttons a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 17:42:29 +01:00
8d6b6a045f ui: datastore: add confirmation message to verify all
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 17:41:55 +01:00
1dceaed1e9 ui: DataStorePanel: save active tab statefully
so that the last selected tab for datastores will get selected
the next time any datastore is selected, even across browser
reloads

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
2565fdd075 ui: MainView/NavigationTree: improve tree selection handling
this fixes some bugs related to selection handling in the treelist:
* datastores were not selected after a reload
* reloading when in a tabpanel on any tab but the first would
  not select a treenode
* changing between datastores on any tab but the first would
  not select the same tab on the new datastore

fixed those by mostly rewriting the changePath handling for
datastores and tabpanels in general

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
7ece65a01e ui: NavigationTree: add 'Add Datastore' button below datastore list
and make 'Datastore' unclickable

since we have all options and information on the relevant datastore panels,
we do not need a datastore config anymore (besides the creation,
which we add here)

this also fixes the sorted insertion and removal of new/old datastores

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
028d0a1352 ui: move sync/verify jobs to the datastores
add the datastore as a parameter for the store, remove
the datastore selector from the edit windows, and give them the
datastore instead

also remove the autostart from the rstore, since we only want to start
it when we change to the relevant tab

and add icons for all other datastore tabs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
68931742cb ui: add DataStoreSummary and move Statistics into it
this adds a 'Summary' panel to the datastores, similar to what we have
for PVE's nodes/guests/storages

contains an info panel with useful information, a comment field, and
the charts from the statistics panel (which can be deleted since it is
not necessary any more)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
3ea148598a ui: add DataStorePruneAndGC panel and add it to datastore panel
a simple objectgrid to display datastore gc/prune options;
needs the prune inputpanel to be refactored into its own class

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
cd92fd7336 ui: DataStoreContent: add 'Verify All' button
to verify the complete datastore

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
d58e6313e1 api/{verify, syncjobs}: add optional datastore parameter
to limit the lists to the given datastores

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
16f9f244cf admin/datastore: add more info to status call
also add the snapshot counts as well as the status of the last garbage
collection

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
b683fd589c backup/datastore: save garbage collection status to disk
and load it again when opening it

this way we can persist the status of the last garbage collect across
daemon reloads and reboots

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
a2285525be backup/datastore: count still bad chunks for the status
we want to show the user that there are still bad chunks after a garbage
collection

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-27 17:41:30 +01:00
f23497b088 apt auth: add newline to the end
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 17:41:30 +01:00
b57b3c9bfc hack: workaround unused code warning until proxmox-api-macro bump
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 17:41:30 +01:00
d3444c0891 ui: allow one to delete the description
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 13:13:00 +01:00
d28e688666 ui: code cleanup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 13:13:00 +01:00
72c0e102ff tools: get_hardware_address: better error handling
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 13:13:00 +01:00
7b22fb257f implement subscription handling and api
mostly modelled after PVE

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 13:13:00 +01:00
2e201e7da6 tools: http: add simple general post method
This is intended for when the server needs to do requests on
arbitrary, non PBS, external HTTP resources.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 13:13:00 +01:00
ee89416319 api: disks: cleanup use statement
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-27 13:13:00 +01:00
2357744bd0 introduction: fix title formatting
fix title formatting to remove warning from build

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-10-27 12:33:17 +01:00
52fe9e8ece get_hardware_address: must be uppercased
we're a bit strict here about what we accept; rather than changing
that, let's do it like PVE/PMG and uppercase.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-26 20:18:28 +01:00
eed1bae554 api: add world accessible ping dummy endpoint
This is intended to be used by the PVE storage library, replacing
the misuse of the much more expensive status API call.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-24 19:12:14 +02:00
6eb41487ce apt: improve error messages
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-22 17:13:26 +02:00
9e61c01ce4 apt: add /changelog API call similar to PVE
For proxmox packages it works the same way as PVE, by retrieving the
changelog URL and issuing an HTTP GET to it, forwarding the output to
the client. As this is only supposed to be a workaround to be removed
in the future, a simple block_on is used to avoid async.

For debian packages we can simply call 'apt-get changelog' and forward
its output.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-22 16:25:08 +02:00
91c9b42da3 fix #2934: list to-be-installed packages in updates
As always, libapt is mocking us with complexity, but we can get the
approximate result we want by retrieving dependencies of all
to-be-updated packages and then seeing if they are missing.

If they are, we assume they will be installed.

For this, query_detailed_info is extended to allow reading details for
non-installed packages, and this is also exposed in
list_installed_apt_packages via 'all_versions_for'. This is necessary so
we can retrieve changelogs for such packages.

Note that we cannot retrieve all that information all the time, as
querying details for packages that aren't installed takes a rather long
time.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-22 16:25:08 +02:00
52d2ae48f0 apt: refactor package detail reading into function
No functional change intended.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-22 16:25:08 +02:00
1872050564 apt: use 'apt-get changelog --print-uris' in get_changelog_url
Avoids custom hardcoded logic, but can only be used for debian packages
as of now. Adds a FIXME to switch over to use --print-uris only once our
package repos support that changelog format.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-22 16:24:26 +02:00
efeb92efee apt: allow filter to select different package version
To get package details for a specific version instead of only the
candidate.

Also cleanup filter function with extra struct instead of unnamed &str
parameters.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-22 16:24:15 +02:00
4ebda996e5 upid: use systemd escape to decode/encode the worker_id
This way we can store values containing "/" and ":".
2020-10-22 12:24:58 +02:00
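The flavor of escaping involved, in a simplified sketch; the real code uses the proxmox crate's systemd escape helpers, and the exact rules here are an approximation:

// Simplified systemd-style escaping: '/' becomes '-', and bytes outside
// the allowed set become "\xHH", so '/' and ':' survive a round trip.
fn escape_unit_sketch(id: &str) -> String {
    let mut out = String::new();
    for b in id.bytes() {
        match b {
            b'/' => out.push('-'),
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'.' | b'_' => out.push(b as char),
            _ => out.push_str(&format!("\\x{:02x}", b)),
        }
    }
    out
}

fn main() {
    // A worker_id like "store:group/snap" becomes a safe UPID component.
    assert_eq!(escape_unit_sketch("store:group/snap"), "store\\x3agroup-snap");
}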
5eb9dd0c8a add tools::http for generic HTTP GET and move HttpsConnector there
...to avoid having the tools:: module depend on api2.

The get_string function is based directly on hyper and thus relatively
simple, not supporting redirects for example.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-21 16:22:08 +02:00
12bcbf0734 ui: verify config: eslint fix
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-21 15:53:54 +02:00
dc2876f6bb tools/zip: fix doc tests
the doc code was not compiling and blocking cargo test

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-21 14:20:16 +02:00
bdc208af48 postinst: correct invalid old datastore configs
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
2ef1b6290f api proxy: remove old verification scheduling
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
df0bdf6be7 ui: add task descriptions for the different types of verification(job, snapshot, group, ds)
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
8b47a23002 ui: add verification job edit window
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
29615fe838 ui: add verification job view
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
133042b5d8 set a different worker_type based on what is going to be verified(snapshot, group, ds)
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
73df9c515b proxy: add scheduling for verification jobs
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
8d1beca7e8 api2: add verification admin endpoint and do_verification_job function
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
9b2bad7af0 api2: add verification job config endpoint
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
78efafc2d0 rename VERIFY_SCHEDULE_SCHEMA to VERIFICATION_SCHEDULE_SCHEMA
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-21 12:51:35 +02:00
2d3d91b1db add test for escape_unit 2020-10-21 11:31:24 +02:00
030c5c6d8a systemd::escape_unit - allow '.' and '_' 2020-10-21 11:31:24 +02:00
53a561a222 pass params by ref to recurse_files
gets rid of the return value and of moving the zip and decoder
data around;
avoids cloning the path prefix on every recursion

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-21 10:50:25 +02:00
e832860a3c whitespace fixup
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-21 10:45:44 +02:00
804f61432d api2/admin/datastore/pxar_file_download: download directory as zip
by using the new ZipEncoder and recursively adding files to it.
the zip only contains directories, normal files and hardlinks (by simply
copying the content); no symlinks, etc.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-21 10:04:24 +02:00
943479f5f6 tools: add AsyncChannelWriter
similar to StdChannelWriter, but implements AsyncWrite and sends
to a tokio::sync::mpsc::Sender

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-21 10:04:22 +02:00
fdce52aa99 tools: add zip module
This module contains the 'ZipEncoder' struct, which wraps an async
writer to create a ZIP archive on the fly.

To create a ZIP file, have a target that implements AsyncWrite,
give it to ZipEncoder::new, add entries via 'add_entry', and
at the end call 'finish'.

for now, this does not implement compression (it uses ZIP's STORE mode),
and does not support empty directories or hardlinks (or any other
special files)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-21 10:04:18 +02:00
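A toy mock of the call sequence described above (new, add_entry, finish); it only shows the flow, not the real async tools::zip API:

use std::io::Write;

struct ZipEncoderMock<W: Write> {
    target: W,
    entries: usize,
}

impl<W: Write> ZipEncoderMock<W> {
    fn new(target: W) -> Self {
        Self { target, entries: 0 }
    }

    fn add_entry(&mut self, name: &str, content: &[u8]) -> std::io::Result<()> {
        // the real encoder writes a local file header plus the STOREd
        // (uncompressed) data here
        writeln!(self.target, "entry {}: {} bytes", name, content.len())?;
        self.entries += 1;
        Ok(())
    }

    fn finish(mut self) -> std::io::Result<()> {
        // the real encoder writes the ZIP central directory here
        writeln!(self.target, "central directory: {} entries", self.entries)
    }
}

fn main() -> std::io::Result<()> {
    let mut zip = ZipEncoderMock::new(Vec::new());
    zip.add_entry("etc/hosts", b"127.0.0.1 localhost")?;
    zip.finish()
}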
4e32d1c590 fix for previous patch: we want to copy all valid tickets 2020-10-21 08:40:04 +02:00
afef7f3bba fix #3038: check user before renewing ticket
Fixes a bug in which the userid of the ticket cache is updated when
a user connects, but the ticket itself is not.
This means a newly connected user has a previously connected
user's ticket and thus cannot do anything, as the client will
attempt to use the invalid ticket.

e.g. if john@pbs connected to the server first, followed by
mike@pbs, the following would be stored in the ticket cache.

{
  "localhost": {
    "mike@pbs": {
      "ticket": "PBS:john@pbs:AAAA",
      "timestamp": 1601039326,
      "token": "BBBB"
    }
  }
}

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-10-21 08:34:30 +02:00
b428af9781 backup: avoid Transport endpoint is not connected error
We simply suppress the error message if the finish flag is set.
2020-10-20 14:20:04 +02:00
c8774067ee paperkey: use svg as image format to provide better scalability 2020-10-20 12:04:51 +02:00
23440482d4 proxmox-backup-client: use HumanByte to render snapshot size 2020-10-20 11:43:48 +02:00
6f757b8458 logrotate: drop useless comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-20 11:11:36 +02:00
95ade8fdb5 log rotate: move basic rotation logic into module for reuse
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-20 11:09:17 +02:00
9e870b5f39 log rotate: do NOT compress first rotation
The first rotation is normally the one still opened by one or more
processes for writing, so it must NOT be replaced, removed, ..., as
this then makes the remaining logging, until those processes notice
that they should reopen the logfile due to rotation, go into nirvana,
which is far from ideal for a log.

Only rotating (renaming) is OK for this active file, as this does not
invalidate the file and keeps open FDs intact.

So start compressing with the second rotation, which should be safe,
as all writers must have been told to reopen the log during the last
rotation; reopening is a fast operation and normally triggered at
least a day ago (at least if one did not drop the state file
manually), so we are fine to archive that one for real.
If we plan to allow faster rotation, the whole rotation+reopen should
be locked, so that we can guarantee that all writers switched over,
but this is unlikely to be needed.

Again, this is what logrotate has sanely done by default since forever.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-20 11:09:17 +02:00
7827e3b93e log rotate: factor out compression in private function
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-20 11:09:17 +02:00
e6ca9c3235 log rotate: do NOT overwrite file with possible writers
this is not the job of logrotate, and the real, 20+ years
battle-tested logrotate binary does not do so either, as it's
actually pretty dangerous.

If we "replace" the file we break any logger which already opened a
new one here, e.g., a daemon starting up, and thus that writer would
log to nirvana.

It's the job of a logger to create a file if it does not exist; it
makes no sense to do it here.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-20 11:09:17 +02:00
0698f78df5 fix #2988: allow verification after finishing a snapshot
To cater to the paranoid, a new datastore-wide setting "verify-new" is
introduced. When set, a verify job will be spawned right after a new
backup is added to the store (only verifying the added snapshot).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-20 10:51:13 +02:00
bcc2880461 add verify_backup_dir_with_lock for callers already holding locks
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-20 10:49:19 +02:00
115d927c15 unbreak build
and silence warning.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-20 09:07:32 +02:00
df729017b4 datastore: cleanup open and load config only once
Force consumers to use the lookup_datastore method instead of
potentially opening a datastore twice, and pass the config we have
already loaded into open_with_path, removing the need for open(1).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-20 07:51:05 +02:00
455f2ad228 fix missing block_in_place for remove_backup
Commit 9070d11f4c introduced this change for other call sites;
assuming it is correct, this one was missed.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-20 07:48:51 +02:00
e4f5f59eea code/fmt cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-19 15:11:51 +02:00
16cdb9563b completion: fix ACL path completion
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-19 15:06:13 +02:00
02479720c0 REST: rename token to csrf_token
for easier differentiation with (future) api_token

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-19 14:02:19 +02:00
97168f920e set reasonable TCP keepalive timeout 2020-10-19 14:01:17 +02:00
9809772b23 fix typos
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-19 14:00:38 +02:00
4940012d0d fix indentation
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-19 14:00:26 +02:00
0c2f9621d5 d/changelog: fix typos
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-19 13:39:08 +02:00
e7372972b5 update d/control
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-19 13:39:08 +02:00
e5adbc3419 fixup worker task: add time prefix again
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-19 13:22:37 +02:00
41255b4d95 bump proxmox dependency to 0.5.0 for nix 0.19
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-19 12:35:03 +02:00
0c4c6a7b1c build: bump nix dependency
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-19 12:12:33 +02:00
c7e18ba08a file logger: add option to make the backup user the log file owner
and use that in ApiConfig, to avoid the file being owned by root if
the proxmox-backup-api process creates it first.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-19 10:37:26 +02:00
bb14d46796 http_client: set connect timeout to 10 seconds 2020-10-19 09:36:01 +02:00
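
Conceptually (illustrative sketch with std only; the real client is
async on top of hyper/tokio):

    use std::io;
    use std::net::{TcpStream, ToSocketAddrs};
    use std::time::Duration;

    /// Connect with an explicit 10 second timeout instead of waiting
    /// for the OS default, which can take minutes.
    fn connect_with_timeout(host: &str, port: u16) -> io::Result<TcpStream> {
        let addr = (host, port)
            .to_socket_addrs()?
            .next()
            .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, "no address"))?;
        TcpStream::connect_timeout(&addr, Duration::from_secs(10))
    }
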
e6475b09e0 cargo: bump dependency of proxmox crate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 12:19:43 +02:00
d39d095fa4 api: access: log to separate file, reduce syslog to errors
For now, also log auth errors to the syslog; on a protected (LAN
and/or firewalled) setup these should normally stem from
misconfiguration, not from break-in attempts.

This reduces syslog noise *a lot*. A full journal output from the
current boot here has 72066 lines, of which 71444 (>99%!) are
"successful auth for user ..." messages

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 11:23:49 +02:00
86f3c2363c server/rest: also log user agent
allows one to easily see whether a request came from a browser or from
the proxmox-backup-client CLI

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 11:23:49 +02:00
8e7e2223d8 server/rest: implement request access log
Reuse the FileLogger module in append mode. As it implements Write,
which is not thread safe (mutable self), and we use it in an async
context, we need to serialize access using a mutex.

Try to use the same format we do in pveproxy, namely the one which is
also used in apache or nginx by default.

Use the response extensions to pass up the userid, if we extract it
from a ticket.

Both the privileged and the unprivileged daemon log to the same file,
to have a unified view and to avoid the need to handle more log files.
We avoid extra inter-process locking by reusing the fact that a write
smaller than PIPE_BUF (4 KiB on Linux) is atomic for files opened with
the 'O_APPEND' flag. For now the logged request path is not yet
guaranteed to be smaller than that; this will be improved in a future
patch.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 11:23:49 +02:00
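
In isolation, the atomic-append idea can be sketched like this
(illustrative; the real code goes through the FileLogger behind a
mutex):

    use std::fs::OpenOptions;
    use std::io::Write;

    /// Per the commit message: appends of up to PIPE_BUF (4 KiB on
    /// Linux) bytes to a file opened with O_APPEND do not interleave,
    /// so both daemons can share one log file without extra locking.
    const PIPE_BUF: usize = 4096;

    fn log_line(path: &str, line: &str) -> std::io::Result<()> {
        let mut file = OpenOptions::new().create(true).append(true).open(path)?;
        let buf = format!("{}\n", line);
        debug_assert!(buf.len() <= PIPE_BUF, "longer writes may interleave");
        file.write_all(buf.as_bytes()) // single write call on purpose
    }
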
081c37cccf tools file logger: fix example and comments
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 11:16:29 +02:00
c0df91f8bd tools: file logger: use option struct to control behavior
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 10:48:36 +02:00
400c568f8e server: rest: also log the query part of URL
It is part of the request, and we do the same in our other products.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 10:41:05 +02:00
4703ba81ce server: rest: implement max URI path and query length request limits
Add a generous limit now and return the correct error (414 URI Too
Long). Otherwise we would accept pretty large GET requests, 64 KiB
and possibly bigger (at 64 KiB my simple curl test failed due to
shell/curl limitations).

For now, allow 3072 characters as the combined length of URI path and
query.

This conforms with the HTTP/1.1 RFCs (e.g., RFC 7231, 6.5.12 and
RFC 2616, 3.2.1), which do not specify any limits, upper or lower, but
require that all server-accessible resources be reachable without
getting 414. This is normally fulfilled, as we have various length
limits in place for things which could end up in a URI, e.g.:
 * user id: max. 64 chars
 * datastore: max. 32 chars

The only known problematic API endpoint is the catalog one, used in
the GUI's pxar file browser:
GET /api2/json/admin/datastore/<id>/catalog?..&filepath=<path>

The <path> is the encoded archive path, and can be arbitrarily long.

But this is a flawed design, as even without this new limit one can
easily generate archives which cannot be browsed anymore, since hyper
only accepts requests with at most 64 KiB in the URI.
Rather, we should move that to a GET-as-POST call, which has no such
limitations (and would not need to base32 encode the path).

Note: This change was inspired by adding a request access log, which
profits from such limits as we can then rely on certain atomicity
guarantees when writing requests to the log.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 10:40:39 +02:00
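
A minimal sketch of such a check (illustrative, assuming a hyper 0.13
style request as used in this code base; the helper name is made up):

    use hyper::{Body, Request, Response, StatusCode};

    const MAX_URI_PATH_AND_QUERY: usize = 3072;

    /// Returns an early 414 response if path + query exceed the limit.
    fn check_uri_length(req: &Request<Body>) -> Option<Response<Body>> {
        let len = req
            .uri()
            .path_and_query()
            .map(|pq| pq.as_str().len())
            .unwrap_or(0);
        if len > MAX_URI_PATH_AND_QUERY {
            return Response::builder()
                .status(StatusCode::URI_TOO_LONG) // 414 URI Too Long
                .body(Body::from("uri too long"))
                .ok();
        }
        None
    }
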
29633e2fe9 server/rest: forward real client IP on proxied request
needs new proxmox dependency to get the RpcEnvironment changes,
adding client_ip getter and setter.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-16 10:36:54 +02:00
b64e9a97f3 rustdoc: overhaul backup rustdoc and add locking table
Rewrite most of the documentation to be more readable and correct
(according to the current implementations).

Add a table visualizing all different locks used to synchronize
concurrent operations.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-16 09:38:01 +02:00
254b1f2213 rustdoc: add crate level doc
Contains a link to the 'backup' module's doc, as that explains a lot
about the inner workings of PBS and probably marks a good entry point
for new readers.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-16 09:37:50 +02:00
1a374fcfd6 datastore: add manifest locking
Avoid races when updating manifest data by flocking a lock file.
update_manifest is used to ensure updates always happen with the lock
held.

Snapshot deletion also acquires the lock, so it cannot interfere with an
outstanding manifest write.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-16 09:34:12 +02:00
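
Conceptually, the manifest lock is a blocking exclusive flock on a
per-snapshot lock file (illustrative sketch using the nix crate; the
lock-file name here is made up):

    use std::fs::{File, OpenOptions};
    use std::os::unix::io::AsRawFd;
    use std::path::Path;

    use anyhow::Error;
    use nix::fcntl::{flock, FlockArg};

    /// The returned File keeps the lock alive; it is released on drop.
    fn lock_manifest(snapshot_dir: &Path) -> Result<File, Error> {
        let path = snapshot_dir.join(".manifest.lck");
        let file = OpenOptions::new().create(true).write(true).open(&path)?;
        // blocks until concurrent manifest writers/deleters are done
        flock(file.as_raw_fd(), FlockArg::LockExclusive)?;
        Ok(file)
    }
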
e07620028d mark_used_chunks: simply ignore vanished files
In case a prune operation removed a file in the meantime.
2020-10-16 08:10:46 +02:00
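
Sketched (illustrative only, names made up):

    use std::fs::File;
    use std::io;
    use std::path::Path;

    /// An index file that a concurrent prune removed between listing
    /// and opening is simply skipped by the mark phase instead of
    /// failing the whole garbage collection.
    fn open_index_if_present(path: &Path) -> io::Result<Option<File>> {
        match File::open(path) {
            Ok(file) => Ok(Some(file)),
            Err(err) if err.kind() == io::ErrorKind::NotFound => Ok(None),
            Err(err) => Err(err),
        }
    }
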
b947b1e7ee server: rest: refactor code to avoid multiple log_response calls
The 'Ok::<_, Self::Error>(res)' type annotation was from a time when
we could not use async and had a combinator here which needed explicit
type information. We switched over to async in commit
91e4587343 and, as the type annotation
is already included in the Future type, we can safely drop it.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-15 13:58:47 +02:00
1e80fb8e92 code cleanups
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-15 13:58:47 +02:00
8d841f81ee pxar: anchor pxarexcludes starting with a slash
Given the .pxarexclude file

    foo
    /bar

The following happens:

    exclude: /foo
    exclude: /bar
    exclude: /subdir/foo
    include: /subdir/bar

since the `/bar` line is an absolute path

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-15 12:28:31 +02:00
d9f365d79f Introduction: reword & link to encryption section
Add link from encryption sentence in  "What is Proxmox
Backup Server?" to the Encryption section of the docs.
Also, reword the sentence.

V2:
Clarify that encryption takes place on the client side

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-10-15 12:20:33 +02:00
32a4695c46 pxar: fix relative '!' rules in .pxarexclude
and reduce indentation

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-15 12:18:34 +02:00
2081327428 more clippy lints
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-15 12:18:34 +02:00
4c0ae82e23 datastore: remove individual snapshots before group
Removing a snapshot has some more safety checks which we don't want to
ignore when removing an entire group (i.e. locking the manifest and
notifying GC).

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-15 07:51:09 +02:00
883aa6d5a4 datastore: remove load_manifest_json
There's no point in having that as a separate method; just parse the
thing into a struct and write it back out correctly.

Also makes further changes to the method simpler.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-15 07:19:32 +02:00
bfa54f2e85 verify: acquire shared snapshot flock and skip on error
If we can't acquire a lock (either because the snapshot disappeared, it
is about to be forgotten/pruned, or it is currently still running) skip
the snapshot. Hold the lock during verification, so that it cannot be
deleted while we are still verifying.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-15 07:09:34 +02:00
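
The skip-on-contention behavior boils down to a non-blocking shared
flock (illustrative sketch via the nix crate, not the actual function):

    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    use nix::fcntl::{flock, FlockArg};

    /// Shared locks may coexist (several readers/verifiers), but if an
    /// exclusive holder (backup, prune, forget) is active we skip the
    /// snapshot instead of blocking the whole verify task.
    fn try_shared_lock(dir_handle: &File) -> bool {
        flock(dir_handle.as_raw_fd(), FlockArg::LockSharedNonblock).is_ok()
    }
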
238a872d1f reader: acquire shared flock on open snapshot
...to avoid it being forgotten or pruned while in use.

Update lock error message for deletions to be consistent.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-15 07:09:34 +02:00
7d6c4c39e9 backup: use shared flock for base snapshot
To allow other reading operations on the base snapshot as well. No
semantic changes with this patch alone, as all other locks on snapshots
are exclusive.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-15 07:09:34 +02:00
f153930066 prune: never fail, just warn about failed removals
A removal can fail if the snapshot is already gone (this is fine, our
job is done either way) or if we couldn't get a lock (also fine, it
can't be removed then; just warn the user so they know what happened
and why it wasn't removed) - keep going either way.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-15 07:09:34 +02:00
836c4a278d prune: respect snapshot flock
A snapshot that's currently being read can still appear in the prune
list, but should not be removed.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2020-10-15 07:09:34 +02:00
6cd8496008 introduction: history: minor rewording and fixup
Some minor spelling and grammar fixes.
Rewording of some sentences.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2020-10-15 07:09:34 +02:00
61c6eafc08 AsyncIndexReader: avoid memcpy, add clippy lint fixup comment
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-14 14:10:28 +02:00
8db1468952 more clippy fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-14 13:58:35 +02:00
162 changed files with 11228 additions and 2758 deletions


@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "0.9.1"
version = "0.9.6"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"
@ -29,7 +29,7 @@ hyper = "0.13.6"
lazy_static = "1.4"
libc = "0.2"
log = "0.4"
nix = "0.16"
nix = "0.19"
num-traits = "0.2"
once_cell = "1.3.1"
openssl = "0.10"
@ -38,7 +38,7 @@ pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pathpatterns = "0.1.2"
proxmox = { version = "0.4.3", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox = { version = "0.7.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0"


@ -19,7 +19,8 @@ USR_SBIN := \
SERVICE_BIN := \
proxmox-backup-api \
proxmox-backup-banner \
proxmox-backup-proxy
proxmox-backup-proxy \
proxmox-daily-update \
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release

debian/changelog

@ -1,3 +1,186 @@
rust-proxmox-backup (0.9.6-1) unstable; urgency=medium
* fix #3106: improve queueing new incoming connections
* fix #2870: sync: ensure an updated ticket is used, if available
* proxy: log if there are too many open connections
* ui: SyncJobEdit: fix sending 'delete' values on SyncJob creation
* datastore config: allow to configure who receives job notify emails
* ui: fix task description for log rotate
* proxy: also rotate auth.log file
* ui: add shell panel under administration
* ui: sync jobs: only set default schedule when creating new jobs and some
other small fixes
-- Proxmox Support Team <support@proxmox.com> Wed, 04 Nov 2020 19:12:57 +0100
rust-proxmox-backup (0.9.5-1) unstable; urgency=medium
* ui: user menu: allow one to change the language while staying logged in
* proxmox-backup-manager: add subscription commands
* server/rest: also accept = as token separator
* privs: allow reading snapshot notes with Datastore.Audit
* privs: enforce Datastore.Modify|Backup to set backup notes
* verify: introduce and use new Datastore.Verify privilege
* docs: add API tokens to documentation
* ui: various smaller layout and icon improvements
* api: implement apt pkg cache for caching pending updates
* api: apt: implement support to send notification email on new updates
* add daily update and maintenance task
* fix #2864: add owner option to sync
* sync: allow sync for non-superusers under special conditions
* config: support deprecated netmask when parsing interfaces file
* server: implement access log rotation with re-open via command socket
* garbage collect: improve index error messages
* fix #3039: use the same ID regex for info and api
* ui: administration: allow extensive filtering of the worker tasks
* report: add api endpoint and function to generate report
-- Proxmox Support Team <support@proxmox.com> Tue, 03 Nov 2020 17:41:17 +0100
rust-proxmox-backup (0.9.4-2) unstable; urgency=medium
* make postinst (update) script more resilient
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Oct 2020 20:09:30 +0100
rust-proxmox-backup (0.9.4-1) unstable; urgency=medium
* implement API-token
* client/remote: allow using API-token + secret
* ui/cli: implement API-token management interface and commands
* ui: add widget to view the effective permissions of a user or token
* ui: datastore summary: handle error when having zero snapshots of any type
* ui: move user, token and permissions into an access control tab panel
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Oct 2020 17:19:13 +0100
rust-proxmox-backup (0.9.3-1) unstable; urgency=medium
* fix #2998: encode mtime as i64 instead of u64
* GC: log the number of leftover bad chunks we could not yet clean up, as no
valid one replaced them. Also log deduplication factor.
* send sync job status emails
* api: datastore status: introduce proper structs and restore compatibility
to 0.9.1
* ui: drop id field from verify/sync add window, they are now seen as internal
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Oct 2020 14:58:13 +0100
rust-proxmox-backup (0.9.2-2) unstable; urgency=medium
* rework server web-interface, move more datastore related panels as tabs
inside the datastore view
* prune: never fail, just warn about failed removals
* prune/forget: skip snapshots with open readers (restore, verification)
* datastore: always ensure to remove individual snapshots before their group
* pxar: fix relative '!' rules in .pxarexclude
* pxar: anchor pxarexcludes starting with a slash
* GC: mark phase: ignore vanished index files
* server/rest: forward real client IP on proxied request and log it in
failed authentication requests
* server: rest: implement max URI path and query length request limits
* server/rest: implement request access log and log the query part of
URL and the user agent
* api: access: log to separate file, use syslog for errors only to reduce
syslog spam
* client: set HTTP connect timeout to 10 seconds
* client: send TCP keep-alive after 2 minutes instead of the Linux default
of two hours.
* CLI completion: fix ACL path completion
* fix #2988: allow one to enable automatic verification after finishing a
snapshot, can be controlled as a per-datastore option
* various log-rotation improvements
* proxmox-backup-client: use HumanByte to render snapshot size
* paperkey: use svg as image format to provide better scalability
* backup: avoid Transport endpoint is not connected error
* fix #3038: check user before renewing ticket
* ui/tools: add zip module and allow to download an archive directory as a zip
* ui and api: add verification job config, allowing to schedule more
flexible jobs, filtering out already and/or recently verified snapshots
NOTE: the previous simple "verify all" schedule was dropped from the
datastore config, and does *not* get migrated to the new job config.
* tasks: use systemd escape to decode/encode the task worker ID, avoiding
some display problems with problematic characters
* fix #2934: list also new to-be-installed packages in updates
* apt: add /changelog API call similar to PVE
* api: add world accessible ping dummy endpoint, to cheaply check for a
running PBS instance.
* ui: add datastore summary panel and move Statistics into it
* ui: navigation: add 'Add Datastore' button below datastore list
* ui: datastore panel: save and restore selected tab statefully
* send notification mails to the email address of the root@pam account for
GC and verify jobs
* ui: datastore: use simple V. for verify action button
* ui: datastore: show snapshot manifest comment and allow to edit them
-- Proxmox Support Team <support@proxmox.com> Wed, 28 Oct 2020 23:05:41 +0100
rust-proxmox-backup (0.9.1-1) unstable; urgency=medium
* TLS speedups (use SslAcceptor::mozilla_intermediate_v5)
@ -16,7 +199,7 @@ rust-proxmox-backup (0.9.1-1) unstable; urgency=medium
* add "Build" section to README.rst
* reader: actually allow users to downlod their own backups
* reader: actually allow users to download their own backups
* reader: track index chunks and limit access
@ -38,7 +221,7 @@ rust-proxmox-backup (0.9.1-1) unstable; urgency=medium
* ui: Dashboard/TaskSummary: add Verifies to the Summary
* ui: implment task history limit and make it configurable
* ui: implement task history limit and make it configurable
* docs: installation: add system requirements section

debian/control

@ -24,7 +24,7 @@ Build-Depends: debhelper (>= 11),
librust-lazy-static-1+default-dev (>= 1.4-~~),
librust-libc-0.2+default-dev,
librust-log-0.4+default-dev,
librust-nix-0.16+default-dev,
librust-nix-0.19+default-dev,
librust-nom-5+default-dev (>= 5.1-~~),
librust-num-traits-0.2+default-dev,
librust-once-cell-1+default-dev (>= 1.3.1-~~),
@ -34,10 +34,10 @@ Build-Depends: debhelper (>= 11),
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.4+api-macro-dev (>= 0.4.3-~~),
librust-proxmox-0.4+default-dev (>= 0.4.3-~~),
librust-proxmox-0.4+sortable-macro-dev (>= 0.4.3-~~),
librust-proxmox-0.4+websocket-dev (>= 0.4.3-~~),
librust-proxmox-0.7+api-macro-dev,
librust-proxmox-0.7+default-dev,
librust-proxmox-0.7+sortable-macro-dev,
librust-proxmox-0.7+websocket-dev,
librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.6+default-dev (>= 0.6.1-~~),
librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
@ -107,7 +107,7 @@ Depends: fonts-font-awesome,
pbs-i18n,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.3-1),
proxmox-widget-toolkit (>= 2.3-6),
pve-xtermjs (>= 4.7.0-1),
smartmontools,
${misc:Depends},

debian/control.in

@ -7,7 +7,7 @@ Depends: fonts-font-awesome,
pbs-i18n,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.3-1),
proxmox-widget-toolkit (>= 2.3-6),
pve-xtermjs (>= 4.7.0-1),
smartmontools,
${misc:Depends},

debian/postinst

@ -15,10 +15,24 @@ case "$1" in
fi
deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true
# FIXME: Remove with 1.1
if test -n "$2"; then
if dpkg --compare-versions "$2" 'lt' '0.9.4-1'; then
if grep -s -q -P -e '^\s+verify-schedule ' /etc/proxmox-backup/datastore.cfg; then
echo "NOTE: drop all verify schedules from datastore config."
echo "You can now add more flexible verify jobs"
flock -w 30 /etc/proxmox-backup/.datastore.lck \
sed -i '/^\s\+verify-schedule /d' /etc/proxmox-backup/datastore.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '0.9.5-1'; then
chown --quiet backup:backup /var/log/proxmox-backup/api/auth.log || true
fi
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
fi
;;

debian/prerm

@ -6,5 +6,6 @@ set -e
# modeled after dh_systemd_start output
if [ -d /run/systemd/system ] && [ "$1" = remove ]; then
deb-systemd-invoke stop 'proxmox-backup-banner.service' 'proxmox-backup-proxy.service' 'proxmox-backup.service' >/dev/null || true
deb-systemd-invoke stop 'proxmox-backup-banner.service' 'proxmox-backup-proxy.service' \
'proxmox-backup.service' 'proxmox-backup-daily-update.timer' >/dev/null || true
fi


@ -1,10 +1,13 @@
etc/proxmox-backup-proxy.service /lib/systemd/system/
etc/proxmox-backup.service /lib/systemd/system/
etc/proxmox-backup-banner.service /lib/systemd/system/
etc/proxmox-backup-daily-update.service /lib/systemd/system/
etc/proxmox-backup-daily-update.timer /lib/systemd/system/
etc/pbstest-beta.list /etc/apt/sources.list.d/
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
usr/sbin/proxmox-backup-manager
usr/share/javascript/proxmox-backup/index.hbs
usr/share/javascript/proxmox-backup/css/ext6-pbs.css

debian/rules

@ -38,6 +38,7 @@ override_dh_auto_install:
LIBDIR=/usr/lib/$(DEB_HOST_MULTIARCH)
override_dh_installsystemd:
dh_installsystemd -pproxmox-backup-server proxmox-backup-daily-update.timer
# note: we start/try-reload-restart services manually in postinst
dh_installsystemd --no-start --no-restart-after-upgrade


@ -18,25 +18,25 @@ the default is the local host (``localhost``).
You can specify a port if your backup server is only reachable on a different
port (e.g. with NAT and port forwarding).
Note that if the server is an IPv6 address, you have to write it with
square brackets (e.g. [fe80::01]).
Note that if the server is an IPv6 address, you have to write it with square
brackets (for example, `[fe80::01]`).
You can pass the repository with the ``--repository`` command
line option, or by setting the ``PBS_REPOSITORY`` environment
variable.
You can pass the repository with the ``--repository`` command line option, or
by setting the ``PBS_REPOSITORY`` environment variable.
Here are some examples of valid repositories and their real values
================================ ============ ================== ===========
================================ ================== ================== ===========
Example User Host:Port Datastore
================================ ============ ================== ===========
================================ ================== ================== ===========
mydatastore ``root@pam`` localhost:8007 mydatastore
myhostname:mydatastore ``root@pam`` myhostname:8007 mydatastore
user@pbs@myhostname:mydatastore ``user@pbs`` myhostname:8007 mydatastore
user\@pbs!token@host:store ``user@pbs!token`` myhostname:8007 mydatastore
192.168.55.55:1234:mydatastore ``root@pam`` 192.168.55.55:1234 mydatastore
[ff80::51]:mydatastore ``root@pam`` [ff80::51]:8007 mydatastore
[ff80::51]:1234:mydatastore ``root@pam`` [ff80::51]:1234 mydatastore
================================ ============ ================== ===========
================================ ================== ================== ===========
Environment Variables
---------------------
@ -45,16 +45,16 @@ Environment Variables
The default backup repository.
``PBS_PASSWORD``
When set, this value is used for the password required for the
backup server.
When set, this value is used for the password required for the backup server.
You can also set this to an API token secret.
``PBS_ENCRYPTION_PASSWORD``
When set, this value is used to access the secret encryption key (if
protected by password).
``PBS_FINGERPRINT`` When set, this value is used to verify the server
certificate (only used if the system CA certificates cannot
validate the certificate).
certificate (only used if the system CA certificates cannot validate the
certificate).
Output Format
@ -246,6 +246,8 @@ Restoring this backup will result in:
. .. file2
.. _encryption:
Encryption
----------


@ -1,8 +1,8 @@
Introduction
============
What is Proxmox Backup Server
-----------------------------
What is Proxmox Backup Server?
------------------------------
Proxmox Backup Server is an enterprise-class, client-server backup software
package that backs up :term:`virtual machine`\ s, :term:`container`\ s, and
@ -10,12 +10,14 @@ physical hosts. It is specially optimized for the `Proxmox Virtual Environment`_
platform and allows you to back up your data securely, even between remote
sites, providing easy management with a web-based user interface.
Proxmox Backup Server supports deduplication, compression, and authenticated
It supports deduplication, compression, and authenticated
encryption (AE_). Using :term:`Rust` as the implementation language guarantees high
performance, low resource usage, and a safe, high-quality codebase.
It features strong client-side encryption. Thus, it's possible to
backup data to targets that are not fully trusted.
Proxmox Backup uses state of the art cryptography for client communication and
backup content :ref:`encryption <encryption>`. Encryption is done on the
client side, making it safer to back up data to targets that are not fully
trusted.
Architecture
@ -179,29 +181,28 @@ along with this program. If not, see AGPL3_.
History
-------
Backup is, and always was, as central aspect of IT administration.
The need to recover from data loss is fundamental and increases with
Backup is, and always has been, a central aspect of IT administration.
The need to recover from data loss is fundamental and only increases with
virtualization.
Not surprisingly, we shipped a backup tool with Proxmox VE from the
beginning. The tool is called ``vzdump`` and is able to make
For this reason, we've been shipping a backup tool with Proxmox VE, from the
beginning. This tool is called ``vzdump`` and is able to make
consistent snapshots of running LXC containers and KVM virtual
machines.
But ``vzdump`` only allowed for full backups. While this is perfect
However, ``vzdump`` only allows for full backups. While this is fine
for small backups, it becomes a burden for users with large VMs. Both
backup time and space usage was too large for this case, specially
when Users want to keep many backups of the same VMs. We need
deduplication and incremental backups to solve those problems.
backup duration and storage usage are too high for this case, especially
for users who want to keep many backups of the same VMs. To solve these
problems, we needed to offer deduplication and incremental backups.
Back in October 2018 development started. We had been looking into
Back in October 2018, development started. We investigated
several technologies and frameworks and finally decided to use
:term:`Rust` as implementation language to provide high speed and
memory efficiency. The 2018-edition of Rust seemed to be promising and
useful for our requirements.
:term:`Rust` as the implementation language, in order to provide high speed and
memory efficiency. The 2018-edition of Rust seemed promising for our
requirements.
In July 2020 we released the first beta version of Proxmox Backup
Server, followed by a first stable version in November 2020. With the
support of incremental, fully deduplicated backups, Proxmox Backup
significantly reduces the network load and saves valuable storage
space.
In July 2020, we released the first beta version of Proxmox Backup
Server, followed by the first stable version in November 2020. With support for
incremental, fully deduplicated backups, Proxmox Backup significantly reduces
network load and saves valuable storage space.


@ -79,4 +79,17 @@ either start it manually on the GUI or provide it with a schedule (see
└────────────┴───────┴────────┴──────────────┴───────────┴─────────┘
# proxmox-backup-manager sync-job remove pbs2-local
For setting up sync jobs, the configuring user needs the following permissions:
#. ``Remote.Read`` on the ``/remote/{remote}/{remote-store}`` path
#. at least ``Datastore.Backup`` on the local target datastore (``/datastore/{store}``)
If the ``remove-vanished`` option is set, ``Datastore.Prune`` is required on
the local datastore as well. If the ``owner`` option is not set (defaulting to
``backup@pam``) or set to something other than the configuring user,
``Datastore.Modify`` is required as well.
.. note:: A sync job can only sync backup groups that the configured remote's
user/API token can read. If a remote is configured with a user/API token that
only has ``Datastore.Backup`` privileges, only the limited set of accessible
snapshots owned by that user/API token can be synced.


@ -70,7 +70,7 @@ The resulting user list looks like this:
│ root@pam │ 1 │ │ │ │ │ Superuser │
└──────────┴────────┴────────┴───────────┴──────────┴──────────────────┴──────────────────┘
Newly created users do not have any permissions. Please read the next
Newly created users do not have any permissions. Please read the Access Control
section to learn how to set access permissions.
If you want to disable a user account, you can do that by setting ``--enable`` to ``0``
@ -85,15 +85,69 @@ Or completely remove the user with:
# proxmox-backup-manager user remove john@pbs
.. _user_tokens:
API Tokens
----------
Any authenticated user can generate API tokens which can in turn be used to
configure various clients, instead of directly providing the username and
password.
API tokens serve two purposes:
#. Easy revocation in case a client gets compromised
#. Limiting the permissions of each client/token within the user's permissions
An API token consists of two parts: an identifier consisting of the user name,
the realm and a tokenname (``user@realm!tokenname``), and a secret value. Both
need to be provided to the client in place of the user ID (``user@realm``) and
the user password, respectively.
The API token is passed from the client to the server by setting the
``Authorization`` HTTP header with method ``PBSAPIToken`` to the value
``TOKENID:TOKENSECRET``.
Generating new tokens can be done using ``proxmox-backup-manager`` or the GUI:
.. code-block:: console
# proxmox-backup-manager user generate-token john@pbs client1
Result: {
"tokenid": "john@pbs!client1",
"value": "d63e505a-e3ec-449a-9bc7-1da610d4ccde"
}
.. note:: The displayed secret value needs to be saved, since it cannot be
displayed again after generating the API token.
The ``user list-tokens`` sub-command can be used to display tokens and their
metadata:
.. code-block:: console
# proxmox-backup-manager user list-tokens john@pbs
┌──────────────────┬────────┬────────┬─────────┐
│ tokenid │ enable │ expire │ comment │
╞══════════════════╪════════╪════════╪═════════╡
│ john@pbs!client1 │ 1 │ │ │
└──────────────────┴────────┴────────┴─────────┘
Similarly, the ``user delete-token`` subcommand can be used to delete a token
again.
Newly generated API tokens don't have any permissions. Please read the next
section to learn how to set access permissions.
.. _user_acl:
Access Control
--------------
By default new users do not have any permission. Instead you need to
specify what is allowed and what is not. You can do this by assigning
roles to users on specific objects like datastores or remotes. The
By default new users and API tokens do not have any permission. Instead you
need to specify what is allowed and what is not. You can do this by assigning
roles to users/tokens on specific objects like datastores or remotes. The
following roles exist:
**NoAccess**
@ -148,20 +202,21 @@ The data represented in each field is as follows:
#. The object on which the permission is set. This can be a specific object
(single datastore, remote, etc.) or a top level object, which with
propagation enabled, represents all children of the object also.
#. The user for which the permission is set
#. The user(s)/token(s) for which the permission is set
#. The role being set
You can manage datastore permissions from **Configuration -> Permissions** in the
web interface. Likewise, you can use the ``acl`` subcommand to manage and
monitor user permissions from the command line. For example, the command below
will add the user ``john@pbs`` as a **DatastoreAdmin** for the datastore
``store1``, located at ``/backup/disk1/store1``:
You can manage permissions via **Configuration -> Access Control ->
Permissions** in the web interface. Likewise, you can use the ``acl``
subcommand to manage and monitor user permissions from the command line. For
example, the command below will add the user ``john@pbs`` as a
**DatastoreAdmin** for the datastore ``store1``, located at
``/backup/disk1/store1``:
.. code-block:: console
# proxmox-backup-manager acl update /datastore/store1 DatastoreAdmin --userid john@pbs
# proxmox-backup-manager acl update /datastore/store1 DatastoreAdmin --auth-id john@pbs
You can monitor the roles of each user using the following command:
You can list the ACLs of each user/token using the following command:
.. code-block:: console
@ -172,7 +227,7 @@ You can monitor the roles of each user using the following command:
│ john@pbs │ /datastore/disk1 │ 1 │ DatastoreAdmin │
└──────────┴──────────────────┴───────────┴────────────────┘
A single user can be assigned multiple permission sets for different datastores.
A single user/token can be assigned multiple permission sets for different datastores.
.. Note::
Naming convention is important here. For datastores on the host,
@ -183,4 +238,39 @@ A single user can be assigned multiple permission sets for different datastores.
remote (see `Remote` below) and ``{storename}`` is the name of the datastore on
the remote.
API Token permissions
~~~~~~~~~~~~~~~~~~~~~
API token permissions are calculated based on ACLs containing their ID,
independently of those of their corresponding user. The resulting permission
set on a given path is then intersected with that of the corresponding user;
see the sketch after the list below.
In practice this means:
#. API tokens require their own ACL entries
#. API tokens can never do more than their corresponding user
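
A minimal sketch of that rule (illustrative Rust, with privileges
treated as bitflags):

    /// A token's effective privileges on a path can never exceed those
    /// of its owning user: intersect the two privilege sets.
    fn effective_token_privs(user_privs: u64, token_privs: u64) -> u64 {
        user_privs & token_privs
    }
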
Effective permissions
~~~~~~~~~~~~~~~~~~~~~
To calculate and display the effective permission set of a user or API token
you can use the ``proxmox-backup-manager user permission`` command:
.. code-block:: console
# proxmox-backup-manager user permissions john@pbs --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1
- Datastore.Audit (*)
- Datastore.Backup (*)
- Datastore.Modify (*)
- Datastore.Prune (*)
- Datastore.Read (*)
# proxmox-backup-manager acl update /datastore/store1 DatastoreBackup --auth-id 'john@pbs!client1'
# proxmox-backup-manager user permissions 'john@pbs!client1' --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1
- Datastore.Backup (*)


@ -1,9 +1,11 @@
include ../defines.mk
UNITS :=
UNITS := \
proxmox-backup-daily-update.timer \
DYNAMIC_UNITS := \
proxmox-backup-banner.service \
proxmox-backup-daily-update.service \
proxmox-backup.service \
proxmox-backup-proxy.service


@ -0,0 +1,8 @@
[Unit]
Description=Daily Proxmox Backup Server update and maintenance activities
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=%LIBEXECDIR%/proxmox-backup/proxmox-daily-update


@ -0,0 +1,10 @@
[Unit]
Description=Daily Proxmox Backup Server update and maintenance activities
[Timer]
OnCalendar=*-*-* 1:00
RandomizedDelaySec=5h
Persistent=true
[Install]
WantedBy=timers.target


@ -9,6 +9,7 @@ After=proxmox-backup.service
Type=notify
ExecStart=%LIBEXECDIR%/proxmox-backup/proxmox-backup-proxy
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/proxmox-backup/proxy.pid
Restart=on-failure
User=%PROXY_USER%
Group=%PROXY_USER%


@ -7,6 +7,7 @@ After=network.target
Type=notify
ExecStart=%LIBEXECDIR%/proxmox-backup/proxmox-backup-api
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/proxmox-backup/api.pid
Restart=on-failure
[Install]


@ -2,7 +2,7 @@ use std::io::Write;
use anyhow::{Error};
use proxmox_backup::api2::types::Userid;
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader};
pub struct DummyWriter {
@ -26,13 +26,13 @@ async fn run() -> Result<(), Error> {
let host = "localhost";
let username = Userid::root_userid();
let auth_id = Authid::root_auth_id();
let options = HttpClientOptions::new()
.interactive(true)
.ticket_cache(true);
let client = HttpClient::new(host, 8007, username, options)?;
let client = HttpClient::new(host, 8007, auth_id, options)?;
let backup_time = proxmox::tools::time::parse_rfc3339("2019-06-28T10:49:48Z")?;


@ -1,6 +1,6 @@
use anyhow::{Error};
use proxmox_backup::api2::types::Userid;
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::*;
async fn upload_speed() -> Result<f64, Error> {
@ -8,13 +8,13 @@ async fn upload_speed() -> Result<f64, Error> {
let host = "localhost";
let datastore = "store2";
let username = Userid::root_userid();
let auth_id = Authid::root_auth_id();
let options = HttpClientOptions::new()
.interactive(true)
.ticket_cache(true);
let client = HttpClient::new(host, 8007, username, options)?;
let client = HttpClient::new(host, 8007, auth_id, options)?;
let backup_time = proxmox::tools::time::epoch_i64();


@ -7,6 +7,7 @@ pub mod reader;
pub mod status;
pub mod types;
pub mod version;
pub mod ping;
pub mod pull;
mod helpers;
@ -22,6 +23,7 @@ pub const SUBDIRS: SubdirMap = &[
("backup", &backup::ROUTER),
("config", &config::ROUTER),
("nodes", &NODES_ROUTER),
("ping", &ping::ROUTER),
("pull", &pull::ROUTER),
("reader", &reader::ROUTER),
("status", &status::ROUTER),


@ -1,6 +1,8 @@
use anyhow::{bail, format_err, Error};
use serde_json::{json, Value};
use std::collections::HashMap;
use std::collections::HashSet;
use proxmox::api::{api, RpcEnvironment, Permission};
use proxmox::api::router::{Router, SubdirMap};
@ -11,8 +13,9 @@ use crate::tools::ticket::{self, Empty, Ticket};
use crate::auth_helpers::*;
use crate::api2::types::*;
use crate::config::acl as acl_config;
use crate::config::acl::{PRIVILEGES, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::acl::{PRIVILEGES, PRIV_PERMISSIONS_MODIFY};
pub mod user;
pub mod domain;
@ -30,7 +33,8 @@ fn authenticate_user(
) -> Result<bool, Error> {
let user_info = CachedUserInfo::new()?;
if !user_info.is_active_user(&userid) {
let auth_id = Authid::from(userid.clone());
if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}
@ -68,8 +72,7 @@ fn authenticate_user(
path_vec.push(part);
}
}
user_info.check_privs(userid, &path_vec, *privilege, false)?;
user_info.check_privs(&auth_id, &path_vec, *privilege, false)?;
return Ok(false);
}
}
@ -138,6 +141,7 @@ fn create_ticket(
path: Option<String>,
privs: Option<String>,
port: Option<u16>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
match authenticate_user(&username, &password, path, privs, port) {
Ok(true) => {
@ -145,7 +149,7 @@ fn create_ticket(
let token = assemble_csrf_prevention_token(csrf_secret(), &username);
log::info!("successful auth for user '{}'", username);
crate::server::rest::auth_logger()?.log(format!("successful auth for user '{}'", username));
Ok(json!({
"username": username,
@ -157,8 +161,20 @@ fn create_ticket(
"username": username,
})),
Err(err) => {
let client_ip = "unknown"; // $rpcenv->get_client_ip() || '';
log::error!("authentication failure; rhost={} user={} msg={}", client_ip, username, err.to_string());
let client_ip = match rpcenv.get_client_ip().map(|addr| addr.ip()) {
Some(ip) => format!("{}", ip),
None => "unknown".into(),
};
let msg = format!(
"authentication failure; rhost={} user={} msg={}",
client_ip,
username,
err.to_string()
);
crate::server::rest::auth_logger()?.log(&msg);
log::error!("{}", msg);
Err(http_err!(UNAUTHORIZED, "permission check failed."))
}
}
@ -192,9 +208,10 @@ fn change_password(
) -> Result<Value, Error> {
let current_user: Userid = rpcenv
.get_user()
.get_auth_id()
.ok_or_else(|| format_err!("unknown user"))?
.parse()?;
let current_auth = Authid::from(current_user.clone());
let mut allowed = userid == current_user;
@ -202,7 +219,7 @@ fn change_password(
if !allowed {
let user_info = CachedUserInfo::new()?;
let privs = user_info.lookup_privs(&current_user, &[]);
let privs = user_info.lookup_privs(&current_auth, &[]);
if (privs & PRIV_PERMISSIONS_MODIFY) != 0 { allowed = true; }
}
@ -216,6 +233,128 @@ fn change_password(
Ok(Value::Null)
}
#[api(
input: {
properties: {
"auth-id": {
type: Authid,
optional: true,
},
path: {
schema: ACL_PATH_SCHEMA,
optional: true,
},
},
},
access: {
permission: &Permission::Anybody,
description: "Requires Sys.Audit on '/access', limited to own privileges otherwise.",
},
returns: {
description: "Map of ACL path to Map of privilege to propagate bit",
type: Object,
properties: {},
additional_properties: true,
},
)]
/// List permissions of given or currently authenticated user / API token.
///
/// Optionally limited to specific path.
pub fn list_permissions(
auth_id: Option<Authid>,
path: Option<String>,
rpcenv: &dyn RpcEnvironment,
) -> Result<HashMap<String, HashMap<String, bool>>, Error> {
let current_auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&current_auth_id, &["access"]);
let auth_id = if user_privs & PRIV_SYS_AUDIT == 0 {
match auth_id {
Some(auth_id) => {
if auth_id == current_auth_id {
auth_id
} else if auth_id.is_token()
&& !current_auth_id.is_token()
&& auth_id.user() == current_auth_id.user() {
auth_id
} else {
bail!("not allowed to list permissions of {}", auth_id);
}
},
None => current_auth_id,
}
} else {
match auth_id {
Some(auth_id) => auth_id,
None => current_auth_id,
}
};
fn populate_acl_paths(
mut paths: HashSet<String>,
node: acl_config::AclTreeNode,
path: &str
) -> HashSet<String> {
for (sub_path, child_node) in node.children {
let sub_path = format!("{}/{}", path, &sub_path);
paths = populate_acl_paths(paths, child_node, &sub_path);
paths.insert(sub_path);
}
paths
}
let paths = match path {
Some(path) => {
let mut paths = HashSet::new();
paths.insert(path);
paths
},
None => {
let mut paths = HashSet::new();
let (acl_tree, _) = acl_config::config()?;
paths = populate_acl_paths(paths, acl_tree.root, "");
// default paths, returned even if no ACL exists
paths.insert("/".to_string());
paths.insert("/access".to_string());
paths.insert("/datastore".to_string());
paths.insert("/remote".to_string());
paths.insert("/system".to_string());
paths
},
};
let map = paths
.into_iter()
.fold(HashMap::new(), |mut map: HashMap<String, HashMap<String, bool>>, path: String| {
let split_path = acl_config::split_acl_path(path.as_str());
let (privs, propagated_privs) = user_info.lookup_privs_details(&auth_id, &split_path);
match privs {
0 => map, // Don't leak ACL paths where we don't have any privileges
_ => {
let priv_map = PRIVILEGES
.iter()
.fold(HashMap::new(), |mut priv_map, (name, value)| {
if value & privs != 0 {
priv_map.insert(name.to_string(), value & propagated_privs != 0);
}
priv_map
});
map.insert(path, priv_map);
map
},
}});
Ok(map)
}
#[sortable]
const SUBDIRS: SubdirMap = &sorted!([
("acl", &acl::ROUTER),
@ -223,6 +362,10 @@ const SUBDIRS: SubdirMap = &sorted!([
"password", &Router::new()
.put(&API_METHOD_CHANGE_PASSWORD)
),
(
"permissions", &Router::new()
.get(&API_METHOD_LIST_PERMISSIONS)
),
(
"ticket", &Router::new()
.post(&API_METHOD_CREATE_TICKET)


@ -7,6 +7,7 @@ use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl;
use crate::config::acl::{Role, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
#[api(
properties: {
@ -43,8 +44,23 @@ fn extract_acl_node_data(
path: &str,
list: &mut Vec<AclListItem>,
exact: bool,
token_user: &Option<Authid>,
) {
// tokens can't have tokens, so we can early return
if let Some(token_user) = token_user {
if token_user.is_token() {
return;
}
}
for (user, roles) in &node.users {
if let Some(token_user) = token_user {
if !user.is_token()
|| user.user() != token_user.user() {
continue;
}
}
for (role, propagate) in roles {
list.push(AclListItem {
path: if path.is_empty() { String::from("/") } else { path.to_string() },
@ -56,6 +72,10 @@ fn extract_acl_node_data(
}
}
for (group, roles) in &node.groups {
if let Some(_) = token_user {
continue;
}
for (role, propagate) in roles {
list.push(AclListItem {
path: if path.is_empty() { String::from("/") } else { path.to_string() },
@ -71,7 +91,7 @@ fn extract_acl_node_data(
}
for (comp, child) in &node.children {
let new_path = format!("{}/{}", path, comp);
extract_acl_node_data(child, &new_path, list, exact);
extract_acl_node_data(child, &new_path, list, exact, token_user);
}
}
@ -98,7 +118,8 @@ fn extract_acl_node_data(
}
},
access: {
permission: &Permission::Privilege(&["access", "acl"], PRIV_SYS_AUDIT, false),
permission: &Permission::Anybody,
description: "Returns all ACLs if user has Sys.Audit on '/access/acl', or just the ACLs containing the user's API tokens.",
},
)]
/// Read Access Control List (ACLs).
@ -107,18 +128,26 @@ pub fn read_acl(
exact: bool,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<AclListItem>, Error> {
let auth_id = rpcenv.get_auth_id().unwrap().parse()?;
//let auth_user = rpcenv.get_user().unwrap();
let user_info = CachedUserInfo::new()?;
let top_level_privs = user_info.lookup_privs(&auth_id, &["access", "acl"]);
let auth_id_filter = if (top_level_privs & PRIV_SYS_AUDIT) == 0 {
Some(auth_id)
} else {
None
};
let (mut tree, digest) = acl::config()?;
let mut list: Vec<AclListItem> = Vec::new();
if let Some(path) = &path {
if let Some(node) = &tree.find_node(path) {
extract_acl_node_data(&node, path, &mut list, exact);
extract_acl_node_data(&node, path, &mut list, exact, &auth_id_filter);
}
} else {
extract_acl_node_data(&tree.root, "", &mut list, exact);
extract_acl_node_data(&tree.root, "", &mut list, exact, &auth_id_filter);
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
@ -140,9 +169,9 @@ pub fn read_acl(
optional: true,
schema: ACL_PROPAGATE_SCHEMA,
},
userid: {
"auth-id": {
optional: true,
type: Userid,
type: Authid,
},
group: {
optional: true,
@ -160,7 +189,8 @@ pub fn read_acl(
},
},
access: {
permission: &Permission::Privilege(&["access", "acl"], PRIV_PERMISSIONS_MODIFY, false),
permission: &Permission::Anybody,
description: "Requires Permissions.Modify on '/access/acl', limited to updating ACLs of the user's API tokens otherwise."
},
)]
/// Update Access Control List (ACLs).
@ -168,12 +198,35 @@ pub fn update_acl(
path: String,
role: String,
propagate: Option<bool>,
userid: Option<Userid>,
auth_id: Option<Authid>,
group: Option<String>,
delete: Option<bool>,
digest: Option<String>,
_rpcenv: &mut dyn RpcEnvironment,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let current_auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let top_level_privs = user_info.lookup_privs(&current_auth_id, &["access", "acl"]);
if top_level_privs & PRIV_PERMISSIONS_MODIFY == 0 {
if let Some(_) = group {
bail!("Unprivileged users are not allowed to create group ACL item.");
}
match &auth_id {
Some(auth_id) => {
if current_auth_id.is_token() {
bail!("Unprivileged API tokens can't set ACL items.");
} else if !auth_id.is_token() {
bail!("Unprivileged users can only set ACL items for API tokens.");
} else if auth_id.user() != current_auth_id.user() {
bail!("Unprivileged users can only set ACL items for their own API tokens.");
}
},
None => { bail!("Unprivileged user needs to provide auth_id to update ACL item."); },
};
}
let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -190,11 +243,12 @@ pub fn update_acl(
if let Some(ref _group) = group {
bail!("parameter 'group' - groups are currently not supported.");
} else if let Some(ref userid) = userid {
} else if let Some(ref auth_id) = auth_id {
if !delete { // Note: we allow to delete non-existent users
let user_cfg = crate::config::user::cached_config()?;
if user_cfg.sections.get(&userid.to_string()).is_none() {
bail!("no such user.");
if user_cfg.sections.get(&auth_id.to_string()).is_none() {
bail!(format!("no such {}.",
if auth_id.is_token() { "API token" } else { "user" }));
}
}
} else {
@ -205,11 +259,11 @@ pub fn update_acl(
acl::check_acl_path(&path)?;
}
if let Some(userid) = userid {
if let Some(auth_id) = auth_id {
if delete {
tree.delete_user_role(&path, &userid, &role);
tree.delete_user_role(&path, &auth_id, &role);
} else {
tree.insert_user_role(&path, &userid, &role, propagate);
tree.insert_user_role(&path, &auth_id, &role, propagate);
}
} else if let Some(group) = group {
if delete {


@ -1,12 +1,16 @@
use anyhow::{bail, Error};
use serde_json::Value;
use serde::{Serialize, Deserialize};
use serde_json::{json, Value};
use std::collections::HashMap;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::api::router::SubdirMap;
use proxmox::api::schema::{Schema, StringSchema};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::user;
use crate::config::token_shadow;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
@ -16,14 +20,96 @@ pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.max_length(64)
.schema();
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: user::ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: user::EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: user::FIRST_NAME_SCHEMA,
},
lastname: {
schema: user::LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: user::EMAIL_SCHEMA,
optional: true,
},
tokens: {
type: Array,
optional: true,
description: "List of user's API tokens.",
items: {
type: user::ApiToken
},
},
}
)]
#[derive(Serialize,Deserialize)]
/// User properties with added list of ApiTokens
pub struct UserWithTokens {
pub userid: Userid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub email: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub tokens: Vec<user::ApiToken>,
}
impl UserWithTokens {
fn new(user: user::User) -> Self {
Self {
userid: user.userid,
comment: user.comment,
enable: user.enable,
expire: user.expire,
firstname: user.firstname,
lastname: user.lastname,
email: user.email,
tokens: Vec::new(),
}
}
}
#[api(
input: {
properties: {},
properties: {
include_tokens: {
type: bool,
description: "Include user's API tokens in returned list.",
optional: true,
default: false,
},
},
},
returns: {
description: "List users (with config digest).",
type: Array,
items: { type: user::User },
items: { type: UserWithTokens },
},
access: {
permission: &Permission::Anybody,
@ -32,28 +118,60 @@ pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
)]
/// List users
pub fn list_users(
_param: Value,
include_tokens: bool,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<user::User>, Error> {
) -> Result<Vec<UserWithTokens>, Error> {
let (config, digest) = user::config()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
// intentionally user only for now
let userid: Userid = rpcenv.get_auth_id().unwrap().parse()?;
let auth_id = Authid::from(userid.clone());
let user_info = CachedUserInfo::new()?;
let top_level_privs = user_info.lookup_privs(&userid, &["access", "users"]);
let top_level_privs = user_info.lookup_privs(&auth_id, &["access", "users"]);
let top_level_allowed = (top_level_privs & PRIV_SYS_AUDIT) != 0;
let filter_by_privs = |user: &user::User| {
top_level_allowed || user.userid == userid
};
let list:Vec<user::User> = config.convert_to_typed_array("user")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list.into_iter().filter(filter_by_privs).collect())
let iter = list.into_iter().filter(filter_by_privs);
let list = if include_tokens {
let tokens: Vec<user::ApiToken> = config.convert_to_typed_array("token")?;
let mut user_to_tokens = tokens
.into_iter()
.fold(
HashMap::new(),
|mut map: HashMap<Userid, Vec<user::ApiToken>>, token: user::ApiToken| {
if token.tokenid.is_token() {
map
.entry(token.tokenid.user().clone())
.or_default()
.push(token);
}
map
});
iter
.map(|user: user::User| {
let mut user = UserWithTokens::new(user);
user.tokens = user_to_tokens.remove(&user.userid).unwrap_or_default();
user
})
.collect()
} else {
iter.map(|user: user::User| UserWithTokens::new(user))
.collect()
};
Ok(list)
}
#[api(
@ -304,12 +422,340 @@ pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error>
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
#[api(
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
},
},
returns: {
description: "Get API token metadata (with config digest).",
type: user::ApiToken,
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Read user's API token metadata
pub fn read_token(
userid: Userid,
tokenname: Tokenname,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<user::ApiToken, Error> {
let (config, digest) = user::config()?;
let tokenid = Authid::from((userid, Some(tokenname)));
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
config.lookup("token", &tokenid.to_string())
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
schema: user::ENABLE_USER_SCHEMA,
optional: true,
},
expire: {
schema: user::EXPIRE_USER_SCHEMA,
optional: true,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
returns: {
description: "API token identifier + generated secret.",
properties: {
value: {
type: String,
description: "The API token secret",
},
tokenid: {
type: String,
description: "The API token identifier",
},
},
},
)]
/// Generate a new API token with given metadata
pub fn generate_token(
userid: Userid,
tokenname: Tokenname,
comment: Option<String>,
enable: Option<bool>,
expire: Option<i64>,
digest: Option<String>,
) -> Result<Value, Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let tokenid = Authid::from((userid.clone(), Some(tokenname.clone())));
let tokenid_string = tokenid.to_string();
if let Some(_) = config.sections.get(&tokenid_string) {
bail!("token '{}' for user '{}' already exists.", tokenname.as_str(), userid);
}
let secret = format!("{:x}", proxmox::tools::uuid::Uuid::generate());
token_shadow::set_secret(&tokenid, &secret)?;
let token = user::ApiToken {
tokenid: tokenid.clone(),
comment,
enable,
expire,
};
config.set_data(&tokenid_string, "token", &token)?;
user::save_config(&config)?;
Ok(json!({
"tokenid": tokenid_string,
"value": secret
}))
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
schema: user::ENABLE_USER_SCHEMA,
optional: true,
},
expire: {
schema: user::EXPIRE_USER_SCHEMA,
optional: true,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Update user's API token metadata
pub fn update_token(
userid: Userid,
tokenname: Tokenname,
comment: Option<String>,
enable: Option<bool>,
expire: Option<i64>,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let tokenid = Authid::from((userid, Some(tokenname)));
let tokenid_string = tokenid.to_string();
let mut data: user::ApiToken = config.lookup("token", &tokenid_string)?;
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {
data.comment = None;
} else {
data.comment = Some(comment);
}
}
if let Some(enable) = enable {
data.enable = if enable { None } else { Some(false) };
}
if let Some(expire) = expire {
data.expire = if expire > 0 { Some(expire) } else { None };
}
config.set_data(&tokenid_string, "token", &data)?;
user::save_config(&config)?;
Ok(())
}
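Note how update_token normalizes the optional fields before saving, so the config only stores non-default values (a summary of the branches above):

// enable: Some(true)          -> stored as None (enabled is the default)
// enable: Some(false)         -> stored as Some(false)
// expire: Some(t) with t > 0  -> stored as Some(t)
// expire: Some(0) or negative -> stored as None (never expires)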
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Delete a user's API token
pub fn delete_token(
userid: Userid,
tokenname: Tokenname,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let tokenid = Authid::from((userid.clone(), Some(tokenname.clone())));
let tokenid_string = tokenid.to_string();
if config.sections.remove(&tokenid_string).is_none() {
    bail!("token '{}' of user '{}' does not exist.", tokenname.as_str(), userid);
}
token_shadow::delete_secret(&tokenid)?;
user::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
userid: {
type: Userid,
},
},
},
returns: {
description: "List user's API tokens (with config digest).",
type: Array,
items: { type: user::ApiToken },
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
&Permission::UserParam("userid"),
]),
},
)]
/// List user's API tokens
pub fn list_tokens(
userid: Userid,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<user::ApiToken>, Error> {
let (config, digest) = user::config()?;
let list: Vec<user::ApiToken> = config.convert_to_typed_array("token")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
let filter_by_owner = |token: &user::ApiToken| {
if token.tokenid.is_token() {
token.tokenid.user() == &userid
} else {
false
}
};
Ok(list.into_iter().filter(filter_by_owner).collect())
}
const TOKEN_ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_TOKEN)
.put(&API_METHOD_UPDATE_TOKEN)
.post(&API_METHOD_GENERATE_TOKEN)
.delete(&API_METHOD_DELETE_TOKEN);
const TOKEN_ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_TOKENS)
.match_all("tokenname", &TOKEN_ITEM_ROUTER);
const USER_SUBDIRS: SubdirMap = &[
("token", &TOKEN_ROUTER),
];
const USER_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_USER)
.put(&API_METHOD_UPDATE_USER)
.delete(&API_METHOD_DELETE_USER);
.delete(&API_METHOD_DELETE_USER)
.subdirs(USER_SUBDIRS);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_USERS)
.post(&API_METHOD_CREATE_USER)
.match_all("userid", &ITEM_ROUTER);
.match_all("userid", &USER_ROUTER);


@ -3,10 +3,12 @@ use proxmox::list_subdirs_api_method;
pub mod datastore;
pub mod sync;
pub mod verify;
const SUBDIRS: SubdirMap = &[
("datastore", &datastore::ROUTER),
("sync", &sync::ROUTER)
("sync", &sync::ROUTER),
("verify", &verify::ROUTER)
];
pub const ROUTER: Router = Router::new()


@ -2,6 +2,8 @@ use std::collections::{HashSet, HashMap};
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;
use std::sync::{Arc, Mutex};
use std::path::{Path, PathBuf};
use std::pin::Pin;
use anyhow::{bail, format_err, Error};
use futures::*;
@ -16,10 +18,9 @@ use proxmox::api::{
use proxmox::api::router::SubdirMap;
use proxmox::api::schema::*;
use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox::try_block;
use proxmox::{http_err, identity, list_subdirs_api_method, sortable};
use pxar::accessor::aio::Accessor;
use pxar::accessor::aio::{Accessor, FileContents, FileEntry};
use pxar::EntryKind;
use crate::api2::types::*;
@ -28,24 +29,46 @@ use crate::backup::*;
use crate::config::datastore;
use crate::config::cached_user_info::CachedUserInfo;
use crate::server::WorkerTask;
use crate::tools::{self, AsyncReaderStream, WrappedReaderStream};
use crate::server::{jobstate::Job, WorkerTask};
use crate::tools::{
self,
zip::{ZipEncoder, ZipEntry},
AsyncChannelWriter, AsyncReaderStream, WrappedReaderStream,
};
use crate::config::acl::{
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_READ,
PRIV_DATASTORE_PRUNE,
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_VERIFY,
};
fn check_backup_owner(
fn check_priv_or_backup_owner(
store: &DataStore,
group: &BackupGroup,
userid: &Userid,
auth_id: &Authid,
required_privs: u64,
) -> Result<(), Error> {
let user_info = CachedUserInfo::new()?;
let privs = user_info.lookup_privs(&auth_id, &["datastore", store.name()]);
if privs & required_privs == 0 {
let owner = store.get_owner(group)?;
if &owner != userid {
bail!("backup owner check failed ({} != {})", userid, owner);
check_backup_owner(&owner, auth_id)?;
}
Ok(())
}
fn check_backup_owner(
owner: &Authid,
auth_id: &Authid,
) -> Result<(), Error> {
let correct_owner = owner == auth_id
|| (owner.is_token() && &Authid::from(owner.user().clone()) == auth_id);
if !correct_owner {
bail!("backup owner check failed ({} != {})", auth_id, owner);
}
Ok(())
}
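check_backup_owner accepts either the owner itself or, when the owner is an API token, that token's owning user. A hedged illustration using the string forms Authid parses elsewhere in this diff (values hypothetical):

let owner: Authid = "alice@pbs!backup-token".parse()?;
let auth_id: Authid = "alice@pbs".parse()?;
// owner != auth_id, but the owner is a token belonging to alice@pbs,
// so the second branch of `correct_owner` matches and the check passes
check_backup_owner(&owner, &auth_id)?;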
@ -143,9 +166,9 @@ fn list_groups(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<GroupListItem>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?;
@ -165,8 +188,8 @@ fn list_groups(
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;
if !list_all {
if owner != userid { continue; }
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
let result_item = GroupListItem {
@ -224,16 +247,12 @@ pub fn list_snapshot_files(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<BackupContent>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let datastore = DataStore::lookup_datastore(&store)?;
let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)) != 0;
if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, snapshot.group(), &auth_id, PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)?;
let info = BackupInfo::new(&datastore.base_path(), snapshot)?;
@ -276,16 +295,12 @@ fn delete_snapshot(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;
let datastore = DataStore::lookup_datastore(&store)?;
let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0;
if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, snapshot.group(), &auth_id, PRIV_DATASTORE_MODIFY)?;
datastore.remove_backup_dir(&snapshot, false)?;
@ -332,9 +347,9 @@ pub fn list_snapshots (
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SnapshotListItem>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?;
@ -356,8 +371,8 @@ pub fn list_snapshots (
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;
if !list_all {
if owner != userid { continue; }
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
let mut size = None;
@ -417,6 +432,53 @@ pub fn list_snapshots (
Ok(snapshots)
}
fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
let base_path = store.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?;
let mut groups = HashSet::new();
let mut result = Counts {
ct: None,
host: None,
vm: None,
other: None,
};
for info in backup_list {
let group = info.backup_dir.group();
let id = group.backup_id();
let backup_type = group.backup_type();
let new_id = groups.insert(format!("{}-{}", &backup_type, &id));
let mut counts = match backup_type {
    "ct" => result.ct.take().unwrap_or_default(),
    "host" => result.host.take().unwrap_or_default(),
    "vm" => result.vm.take().unwrap_or_default(),
    _ => result.other.take().unwrap_or_default(),
};
counts.snapshots += 1;
if new_id {
    counts.groups += 1;
}
match backup_type {
"ct" => result.ct = Some(counts),
"host" => result.host = Some(counts),
"vm" => result.vm = Some(counts),
_ => result.other = Some(counts),
}
}
Ok(result)
}
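A worked example of the counting, assuming a datastore holding three snapshots of vm/100 and one of ct/200:

// groups -> {"vm-100", "ct-200"}
// result.vm   == Some(counts) with groups == 1, snapshots == 3
// result.ct   == Some(counts) with groups == 1, snapshots == 1
// result.host == None, result.other == None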
#[api(
input: {
properties: {
@ -426,7 +488,7 @@ pub fn list_snapshots (
},
},
returns: {
type: StorageStatus,
type: DataStoreStatus,
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP, true),
@ -437,9 +499,19 @@ pub fn status(
store: String,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<StorageStatus, Error> {
) -> Result<DataStoreStatus, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
crate::tools::disks::disk_usage(&datastore.base_path())
let storage = crate::tools::disks::disk_usage(&datastore.base_path())?;
let counts = get_snapshots_count(&datastore)?;
let gc_status = datastore.last_gc_status();
Ok(DataStoreStatus {
total: storage.total,
used: storage.used,
avail: storage.avail,
gc_status,
counts,
})
}
#[api(
@ -466,7 +538,7 @@ pub fn status(
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP, true), // fixme
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_VERIFY | PRIV_DATASTORE_BACKUP, true),
},
)]
/// Verify backups.
@ -482,21 +554,31 @@ pub fn verify(
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let worker_id;
let mut backup_dir = None;
let mut backup_group = None;
let mut worker_type = "verify";
match (backup_type, backup_id, backup_time) {
(Some(backup_type), Some(backup_id), Some(backup_time)) => {
worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_time);
worker_id = format!("{}:{}/{}/{:08X}", store, backup_type, backup_id, backup_time);
let dir = BackupDir::new(backup_type, backup_id, backup_time)?;
check_priv_or_backup_owner(&datastore, dir.group(), &auth_id, PRIV_DATASTORE_VERIFY)?;
backup_dir = Some(dir);
worker_type = "verify_snapshot";
}
(Some(backup_type), Some(backup_id), None) => {
worker_id = format!("{}_{}_{}", store, backup_type, backup_id);
worker_id = format!("{}:{}/{}", store, backup_type, backup_id);
let group = BackupGroup::new(backup_type, backup_id);
check_priv_or_backup_owner(&datastore, &group, &auth_id, PRIV_DATASTORE_VERIFY)?;
backup_group = Some(group);
worker_type = "verify_group";
}
(None, None, None) => {
worker_id = store.clone();
@ -504,13 +586,12 @@ pub fn verify(
_ => bail!("parameters do not specify a backup group or snapshot"),
}
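The worker id switched from the '_'-separated form to store:type/id[/time]; for a hypothetical store and guest the three cases above would produce:

// snapshot verify:  format!("{}:{}/{}/{:08X}", ...) -> "store1:vm/100/5F8C6B40"
// group verify:     format!("{}:{}/{}", ...)        -> "store1:vm/100"
// datastore verify: store.clone()                   -> "store1"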
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let upid_str = WorkerTask::new_thread(
"verify",
worker_type,
Some(worker_id.clone()),
userid,
auth_id.clone(),
to_stdout,
move |worker| {
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
@ -525,6 +606,7 @@ pub fn verify(
corrupt_chunks,
worker.clone(),
worker.upid().clone(),
None,
)? {
res.push(backup_dir.to_string());
}
@ -538,10 +620,20 @@ pub fn verify(
None,
worker.clone(),
worker.upid(),
None,
)?;
failed_dirs
} else {
verify_all_backups(datastore, worker.clone(), worker.upid())?
let privs = CachedUserInfo::new()?
.lookup_privs(&auth_id, &["datastore", &store]);
let owner = if privs & PRIV_DATASTORE_VERIFY == 0 {
Some(auth_id)
} else {
None
};
verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)?
};
if !failed_dirs.is_empty() {
worker.log("Failed to verify the following snapshots:");
@ -637,9 +729,7 @@ fn prune(
let backup_type = tools::required_string_param(&param, "backup-type")?;
let backup_id = tools::required_string_param(&param, "backup-id")?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let dry_run = param["dry-run"].as_bool().unwrap_or(false);
@ -647,8 +737,7 @@ fn prune(
let datastore = DataStore::lookup_datastore(&store)?;
let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0;
if !allowed { check_backup_owner(&datastore, &group, &userid)?; }
check_priv_or_backup_owner(&datastore, &group, &auth_id, PRIV_DATASTORE_MODIFY)?;
let prune_options = PruneOptions {
keep_last: param["keep-last"].as_u64(),
@ -659,7 +748,7 @@ fn prune(
keep_yearly: param["keep-yearly"].as_u64(),
};
let worker_id = format!("{}_{}_{}", store, backup_type, backup_id);
let worker_id = format!("{}:{}/{}", store, backup_type, backup_id);
let mut prune_result = Vec::new();
@ -690,9 +779,8 @@ fn prune(
// We use a WorkerTask just to have a task log, but run synchronously
let worker = WorkerTask::new("prune", Some(worker_id), Userid::root_userid().clone(), true)?;
let worker = WorkerTask::new("prune", Some(worker_id), auth_id.clone(), true)?;
let result = try_block! {
if keep_all {
worker.log("No prune selection - keeping all files.");
} else {
@ -727,18 +815,18 @@ fn prune(
}));
if !(dry_run || keep) {
datastore.remove_backup_dir(&info.backup_dir, true)?;
if let Err(err) = datastore.remove_backup_dir(&info.backup_dir, false) {
worker.warn(
format!(
"failed to remove dir {:?}: {}",
info.backup_dir.relative_path(), err
)
);
}
}
}
Ok(())
};
worker.log_result(&result);
if let Err(err) = result {
bail!("prune failed - {}", err);
};
worker.log_result(&Ok(()));
Ok(json!(prune_result))
}
@ -766,21 +854,15 @@ fn start_garbage_collection(
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
println!("Starting garbage collection on store {}", store);
let job = Job::new("garbage_collection", &store)
.map_err(|_| format_err!("garbage collection already running"))?;
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
let upid_str = WorkerTask::new_thread(
"garbage_collection",
Some(store.clone()),
Userid::root_userid().clone(),
to_stdout,
move |worker| {
worker.log(format!("starting garbage collection on store {}", store));
datastore.garbage_collection(&*worker, worker.upid())
},
)?;
let upid_str = crate::server::do_garbage_collection_job(job, datastore, &auth_id, None, to_stdout)
.map_err(|err| format_err!("unable to start garbage collection job on datastore {} - {}", store, err))?;
Ok(json!(upid_str))
}
@ -844,13 +926,13 @@ fn get_datastore_list(
let (config, _digest) = datastore::config()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let mut list = Vec::new();
for (store, (_, data)) in &config.sections {
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0;
if allowed {
let mut entry = json!({ "store": store });
@ -895,9 +977,7 @@ fn download_file(
let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let file_name = tools::required_string_param(&param, "file-name")?.to_owned();
@ -907,8 +987,7 @@ fn download_file(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
println!("Download {} from {} ({}/{})", file_name, store, backup_dir, file_name);
@ -968,9 +1047,7 @@ fn download_file_decoded(
let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let file_name = tools::required_string_param(&param, "file-name")?.to_owned();
@ -980,8 +1057,7 @@ fn download_file_decoded(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
let (manifest, files) = read_backup_index(&datastore, &backup_dir)?;
for file in files {
@ -1093,8 +1169,9 @@ fn upload_backup_log(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
check_backup_owner(&datastore, backup_dir.group(), &userid)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let owner = datastore.get_owner(backup_dir.group())?;
check_backup_owner(&owner, &auth_id)?;
let mut path = datastore.base_path();
path.push(backup_dir.relative_path());
@ -1163,14 +1240,11 @@ fn catalog(
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
let file_name = CATALOG_NAME;
@ -1243,6 +1317,66 @@ fn catalog(
Ok(res.into())
}
fn recurse_files<'a, T, W>(
zip: &'a mut ZipEncoder<W>,
decoder: &'a mut Accessor<T>,
prefix: &'a Path,
file: FileEntry<T>,
) -> Pin<Box<dyn Future<Output = Result<(), Error>> + Send + 'a>>
where
T: Clone + pxar::accessor::ReadAt + Unpin + Send + Sync + 'static,
W: tokio::io::AsyncWrite + Unpin + Send + 'static,
{
Box::pin(async move {
let metadata = file.entry().metadata();
let path = file.entry().path().strip_prefix(&prefix)?.to_path_buf();
match file.kind() {
EntryKind::File { .. } => {
let entry = ZipEntry::new(
path,
metadata.stat.mtime.secs,
metadata.stat.mode as u16,
true,
);
zip.add_entry(entry, Some(file.contents().await?))
.await
.map_err(|err| format_err!("could not send file entry: {}", err))?;
}
EntryKind::Hardlink(_) => {
let realfile = decoder.follow_hardlink(&file).await?;
let entry = ZipEntry::new(
path,
metadata.stat.mtime.secs,
metadata.stat.mode as u16,
true,
);
zip.add_entry(entry, Some(realfile.contents().await?))
.await
.map_err(|err| format_err!("could not send file entry: {}", err))?;
}
EntryKind::Directory => {
let dir = file.enter_directory().await?;
let mut readdir = dir.read_dir();
let entry = ZipEntry::new(
path,
metadata.stat.mtime.secs,
metadata.stat.mode as u16,
false,
);
zip.add_entry::<FileContents<T>>(entry, None).await?;
while let Some(entry) = readdir.next().await {
let entry = entry?.decode_entry().await?;
recurse_files(zip, decoder, prefix, entry).await?;
}
}
_ => {} // ignore all else
};
Ok(())
})
}
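recurse_files returns a manually boxed future instead of being an async fn because it awaits itself for subdirectories; a directly recursive async fn would have a statically unsized state machine. The same pattern in miniature (a sketch, unrelated to the actual traversal):

use std::{future::Future, pin::Pin};

fn countdown(n: u64) -> Pin<Box<dyn Future<Output = u64> + Send>> {
    // Box::pin breaks the infinite type recursion an `async fn` would cause
    Box::pin(async move {
        if n == 0 { 0 } else { countdown(n - 1).await + 1 }
    })
}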
#[sortable]
pub const API_METHOD_PXAR_FILE_DOWNLOAD: ApiMethod = ApiMethod::new(
&ApiHandler::AsyncHttp(&pxar_file_download),
@ -1274,9 +1408,7 @@ fn pxar_file_download(
let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let filepath = tools::required_string_param(&param, "filepath")?.to_owned();
@ -1286,8 +1418,7 @@ fn pxar_file_download(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
let mut components = base64::decode(&filepath)?;
if !components.is_empty() && components[0] == b'/' {
@ -1325,23 +1456,55 @@ fn pxar_file_download(
.lookup(OsStr::from_bytes(file_path)).await?
.ok_or(format_err!("error opening '{:?}'", file_path))?;
let file = match file.kind() {
EntryKind::File { .. } => file,
EntryKind::Hardlink(_) => {
decoder.follow_hardlink(&file).await?
},
// TODO symlink
let body = match file.kind() {
EntryKind::File { .. } => Body::wrap_stream(
AsyncReaderStream::new(file.contents().await?).map_err(move |err| {
eprintln!("error during streaming of file '{:?}' - {}", filepath, err);
err
}),
),
EntryKind::Hardlink(_) => Body::wrap_stream(
AsyncReaderStream::new(decoder.follow_hardlink(&file).await?.contents().await?)
.map_err(move |err| {
eprintln!(
"error during streaming of hardlink '{:?}' - {}",
filepath, err
);
err
}),
),
EntryKind::Directory => {
let (sender, receiver) = tokio::sync::mpsc::channel(100);
let mut prefix = PathBuf::new();
let mut components = file.entry().path().components();
components.next_back(); // discard the last component
for comp in components {
prefix.push(comp);
}
let channelwriter = AsyncChannelWriter::new(sender, 1024 * 1024);
crate::server::spawn_internal_task(async move {
let mut zipencoder = ZipEncoder::new(channelwriter);
let mut decoder = decoder;
recurse_files(&mut zipencoder, &mut decoder, &prefix, file)
.await
.map_err(|err| eprintln!("error during creating of zip: {}", err))?;
zipencoder
.finish()
.await
.map_err(|err| eprintln!("error during finishing of zip: {}", err))
});
Body::wrap_stream(receiver.map_err(move |err| {
eprintln!("error during streaming of zip '{:?}' - {}", filepath, err);
err
}))
}
other => bail!("cannot download file of type {:?}", other),
};
let body = Body::wrap_stream(
AsyncReaderStream::new(file.contents().await?)
.map_err(move |err| {
eprintln!("error during streaming of '{:?}' - {}", filepath, err);
err
})
);
// fixme: set other headers ?
Ok(Response::builder()
.status(StatusCode::OK)
@ -1408,7 +1571,7 @@ fn get_rrd_stats(
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP, true),
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP, true),
},
)]
/// Get "notes" for a specific backup
@ -1421,18 +1584,14 @@ fn get_notes(
) -> Result<String, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_AUDIT)?;
let manifest = datastore.load_manifest_json(&backup_dir)?;
let (manifest, _) = datastore.load_manifest(&backup_dir)?;
let notes = manifest["unprotected"]["notes"]
let notes = manifest.unprotected["notes"]
.as_str()
.unwrap_or("");
@ -1460,7 +1619,9 @@ fn get_notes(
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_MODIFY, true),
permission: &Permission::Privilege(&["datastore", "{store}"],
PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_BACKUP,
true),
},
)]
/// Set "notes" for a specific backup
@ -1474,20 +1635,14 @@ fn set_notes(
) -> Result<(), Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_MODIFY)?;
let mut manifest = datastore.load_manifest_json(&backup_dir)?;
manifest["unprotected"]["notes"] = notes.into();
datastore.store_manifest(&backup_dir, manifest)?;
datastore.update_manifest(&backup_dir,|manifest| {
manifest.unprotected["notes"] = notes.into();
}).map_err(|err| format_err!("unable to update manifest blob - {}", err))?;
Ok(())
}
@ -1505,12 +1660,13 @@ fn set_notes(
schema: BACKUP_ID_SCHEMA,
},
"new-owner": {
type: Userid,
type: Authid,
},
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_MODIFY, true),
permission: &Permission::Anybody,
description: "Datastore.Modify on whole datastore, or changing ownership between user and a user's token for owned backups with Datastore.Backup"
},
)]
/// Change owner of a backup group
@ -1518,18 +1674,69 @@ fn set_backup_owner(
store: String,
backup_type: String,
backup_id: String,
new_owner: Userid,
_rpcenv: &mut dyn RpcEnvironment,
new_owner: Authid,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let backup_group = BackupGroup::new(backup_type, backup_id);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
if !user_info.is_active_user(&new_owner) {
bail!("user '{}' is inactive or non-existent", new_owner);
let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let allowed = if (privs & PRIV_DATASTORE_MODIFY) != 0 {
// High-privilege user/token
true
} else if (privs & PRIV_DATASTORE_BACKUP) != 0 {
let owner = datastore.get_owner(&backup_group)?;
match (owner.is_token(), new_owner.is_token()) {
(true, true) => {
// API token to API token, owned by same user
let owner = owner.user();
let new_owner = new_owner.user();
owner == new_owner && Authid::from(owner.clone()) == auth_id
},
(true, false) => {
// API token to API token owner
Authid::from(owner.user().clone()) == auth_id
&& new_owner == auth_id
},
(false, true) => {
// API token owner to API token
owner == auth_id
&& Authid::from(new_owner.user().clone()) == auth_id
},
(false, false) => {
// User to User, not allowed for unprivileged users
false
},
}
} else {
false
};
if !allowed {
return Err(http_err!(UNAUTHORIZED,
"{} does not have permission to change owner of backup group '{}' to {}",
auth_id,
backup_group,
new_owner,
));
}
if !user_info.is_active_auth_id(&new_owner) {
bail!("{} '{}' is inactive or non-existent",
if new_owner.is_token() {
"API token".to_string()
} else {
"user".to_string()
},
new_owner);
}
datastore.set_owner(&backup_group, &new_owner, true)?;
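Without Datastore.Modify the match above only permits ownership moves within a single user's identity, and the acting auth id must be that plain user. With Datastore.Backup and a hypothetical auth_id = "alice@pbs":

// "alice@pbs!a" -> "alice@pbs!b"  token to sibling token   allowed
// "alice@pbs!a" -> "alice@pbs"    token back to its owner  allowed
// "alice@pbs"   -> "alice@pbs!a"  user to their own token  allowed
// "alice@pbs"   -> "bob@pbs"      user to user             rejected (needs Datastore.Modify)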


@ -1,37 +1,66 @@
use anyhow::{format_err, Error};
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment};
use proxmox::api::router::SubdirMap;
use proxmox::{list_subdirs_api_method, sortable};
use crate::api2::types::*;
use crate::api2::pull::do_sync_job;
use crate::api2::config::sync::{check_sync_job_modify_access, check_sync_job_read_access};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
use crate::server::UPID;
use crate::config::jobstate::{Job, JobState};
use crate::server::jobstate::{Job, JobState};
use crate::tools::systemd::time::{
parse_calendar_event, compute_next_event};
#[api(
input: {
properties: {},
properties: {
store: {
schema: DATASTORE_SCHEMA,
optional: true,
},
},
},
returns: {
description: "List configured jobs and their status.",
type: Array,
items: { type: sync::SyncJobStatus },
},
access: {
description: "Limited to sync jobs where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,
},
)]
/// List all sync jobs
pub fn list_sync_jobs(
store: Option<String>,
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SyncJobStatus>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = sync::config()?;
let mut list: Vec<SyncJobStatus> = config.convert_to_typed_array("sync")?;
let mut list: Vec<SyncJobStatus> = config
.convert_to_typed_array("sync")?
.into_iter()
.filter(|job: &SyncJobStatus| {
if let Some(store) = &store {
&job.store == store
} else {
true
}
})
.filter(|job: &SyncJobStatus| {
let as_config: SyncJobConfig = job.clone().into();
check_sync_job_read_access(&user_info, &auth_id, &as_config)
}).collect();
for job in &mut list {
let last_state = JobState::load("syncjob", &job.id)
@ -74,7 +103,11 @@ pub fn list_sync_jobs(
schema: JOB_ID_SCHEMA,
}
}
}
},
access: {
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
permission: &Permission::Anybody,
},
)]
/// Runs the sync job manually.
fn run_sync_job(
@ -82,15 +115,19 @@ fn run_sync_job(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, _digest) = sync::config()?;
let sync_job: SyncJobConfig = config.lookup("sync", &id)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if !check_sync_job_modify_access(&user_info, &auth_id, &sync_job) {
bail!("permission check failed");
}
let job = Job::new("syncjob", &id)?;
let upid_str = do_sync_job(job, sync_job, &userid, None)?;
let upid_str = do_sync_job(job, sync_job, &auth_id, None)?;
Ok(upid_str)
}

src/api2/admin/verify.rs (new file, 122 lines)

@ -0,0 +1,122 @@
use anyhow::{format_err, Error};
use proxmox::api::router::SubdirMap;
use proxmox::{list_subdirs_api_method, sortable};
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use crate::api2::types::*;
use crate::server::do_verification_job;
use crate::server::jobstate::{Job, JobState};
use crate::config::verify;
use crate::config::verify::{VerificationJobConfig, VerificationJobStatus};
use serde_json::Value;
use crate::tools::systemd::time::{parse_calendar_event, compute_next_event};
use crate::server::UPID;
#[api(
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
optional: true,
},
},
},
returns: {
description: "List configured jobs and their status.",
type: Array,
items: { type: verify::VerificationJobStatus },
},
)]
/// List all verification jobs
pub fn list_verification_jobs(
store: Option<String>,
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<VerificationJobStatus>, Error> {
let (config, digest) = verify::config()?;
let mut list: Vec<VerificationJobStatus> = config
.convert_to_typed_array("verification")?
.into_iter()
.filter(|job: &VerificationJobStatus| {
if let Some(store) = &store {
&job.store == store
} else {
true
}
}).collect();
for job in &mut list {
let last_state = JobState::load("verificationjob", &job.id)
.map_err(|err| format_err!("could not open statefile for {}: {}", &job.id, err))?;
let (upid, endtime, state, starttime) = match last_state {
JobState::Created { time } => (None, None, None, time),
JobState::Started { upid } => {
let parsed_upid: UPID = upid.parse()?;
(Some(upid), None, None, parsed_upid.starttime)
},
JobState::Finished { upid, state } => {
let parsed_upid: UPID = upid.parse()?;
(Some(upid), Some(state.endtime()), Some(state.to_string()), parsed_upid.starttime)
},
};
job.last_run_upid = upid;
job.last_run_state = state;
job.last_run_endtime = endtime;
let last = job.last_run_endtime.unwrap_or(starttime);
job.next_run = (|| -> Option<i64> {
let schedule = job.schedule.as_ref()?;
let event = parse_calendar_event(&schedule).ok()?;
// ignore errors
compute_next_event(&event, last, false).unwrap_or_else(|_| None)
})();
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
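next_run is derived from the configured calendar event relative to the last run. A sketch of that computation in isolation (the schedule string is illustrative):

let event = parse_calendar_event("daily")?;
// next occurrence strictly after `last`; errors are flattened to None,
// matching the closure above
let next_run: Option<i64> = compute_next_event(&event, last, false).unwrap_or(None);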
#[api(
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
}
}
}
)]
/// Runs a verification job manually.
fn run_verification_job(
id: String,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let (config, _digest) = verify::config()?;
let verification_job: VerificationJobConfig = config.lookup("verification", &id)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let job = Job::new("verificationjob", &id)?;
let upid_str = do_verification_job(job, verification_job, &auth_id, None)?;
Ok(upid_str)
}
#[sortable]
const VERIFICATION_INFO_SUBDIRS: SubdirMap = &[("run", &Router::new().post(&API_METHOD_RUN_VERIFICATION_JOB))];
const VERIFICATION_INFO_ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(VERIFICATION_INFO_SUBDIRS))
.subdirs(VERIFICATION_INFO_SUBDIRS);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_VERIFICATION_JOBS)
.match_all("id", &VERIFICATION_INFO_ROUTER);


@ -16,7 +16,7 @@ use crate::backup::*;
use crate::api2::types::*;
use crate::config::acl::PRIV_DATASTORE_BACKUP;
use crate::config::cached_user_info::CachedUserInfo;
use crate::tools::fs::lock_dir_noblock;
use crate::tools::fs::lock_dir_noblock_shared;
mod environment;
use environment::*;
@ -59,12 +59,12 @@ async move {
let debug = param["debug"].as_bool().unwrap_or(false);
let benchmark = param["benchmark"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let store = tools::required_string_param(&param, "store")?.to_owned();
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(&auth_id, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;
let datastore = DataStore::lookup_datastore(&store)?;
@ -86,7 +86,7 @@ async move {
bail!("unexpected http version '{:?}' (expected version < 2)", parts.version);
}
let worker_id = format!("{}_{}_{}", store, backup_type, backup_id);
let worker_id = format!("{}:{}/{}", store, backup_type, backup_id);
let env_type = rpcenv.env_type();
@ -105,12 +105,15 @@ async move {
};
// lock backup group to only allow one backup per group at a time
let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?;
let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &auth_id)?;
// permission check
if owner != userid && worker_type != "benchmark" {
let correct_owner = owner == auth_id
|| (owner.is_token()
&& Authid::from(owner.user().clone()) == auth_id);
if !correct_owner && worker_type != "benchmark" {
// only the owner is allowed to create additional snapshots
bail!("backup owner check failed ({} != {})", userid, owner);
bail!("backup owner check failed ({} != {})", auth_id, owner);
}
let last_backup = {
@ -144,18 +147,18 @@ async move {
// lock last snapshot to prevent forgetting/pruning it during backup
let full_path = datastore.snapshot_path(&last.backup_dir);
Some(lock_dir_noblock(&full_path, "snapshot", "base snapshot is already locked by another operation")?)
Some(lock_dir_noblock_shared(&full_path, "snapshot", "base snapshot is already locked by another operation")?)
} else {
None
};
let (path, is_new, _snap_guard) = datastore.create_locked_backup_dir(&backup_dir)?;
let (path, is_new, snap_guard) = datastore.create_locked_backup_dir(&backup_dir)?;
if !is_new { bail!("backup directory already exists."); }
WorkerTask::spawn(worker_type, Some(worker_id), userid.clone(), true, move |worker| {
WorkerTask::spawn(worker_type, Some(worker_id), auth_id.clone(), true, move |worker| {
let mut env = BackupEnvironment::new(
env_type, userid, worker.clone(), datastore, backup_dir);
env_type, auth_id, worker.clone(), datastore, backup_dir);
env.debug = debug;
env.last_backup = last_backup;
@ -182,8 +185,22 @@ async move {
http.http2_initial_connection_window_size(window_size);
http.http2_max_frame_size(4*1024*1024);
let env3 = env2.clone();
http.serve_connection(conn, service)
.map_err(Error::from)
.map(move |result| {
match result {
Err(err) => {
// Avoid Transport endpoint is not connected (os error 107)
// fixme: find a better way to test for that error
if err.to_string().starts_with("connection error") && env3.finished() {
Ok(())
} else {
Err(Error::from(err))
}
}
Ok(()) => Ok(()),
}
})
});
let mut abort_future = abort_future
.map(|_| Err(format_err!("task aborted")));
@ -191,7 +208,7 @@ async move {
async move {
// keep flock until task ends
let _group_guard = _group_guard;
let _snap_guard = _snap_guard;
let snap_guard = snap_guard;
let _last_guard = _last_guard;
let res = select!{
@ -203,20 +220,32 @@ async move {
tools::runtime::block_in_place(|| env.remove_backup())?;
return Ok(());
}
let verify = |env: BackupEnvironment| {
if let Err(err) = env.verify_after_complete(snap_guard) {
env.log(format!(
"backup finished, but starting the requested verify task failed: {}",
err
));
}
};
match (res, env.ensure_finished()) {
(Ok(_), Ok(())) => {
env.log("backup finished successfully");
verify(env);
Ok(())
},
(Err(err), Ok(())) => {
// ignore errors after finish
env.log(format!("backup had errors but finished: {}", err));
verify(env);
Ok(())
},
(Ok(_), Err(err)) => {
env.log(format!("backup ended and finish failed: {}", err));
env.log("removing unfinished backup");
env.remove_backup()?;
tools::runtime::block_in_place(|| env.remove_backup())?;
Err(err)
},
(Err(err), Err(_)) => {


@ -1,6 +1,7 @@
use anyhow::{bail, format_err, Error};
use std::sync::{Arc, Mutex};
use std::collections::HashMap;
use std::collections::{HashMap, HashSet};
use nix::dir::Dir;
use ::serde::{Serialize};
use serde_json::{json, Value};
@ -9,7 +10,7 @@ use proxmox::tools::digest_to_hex;
use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
use crate::api2::types::Userid;
use crate::api2::types::Authid;
use crate::backup::*;
use crate::server::WorkerTask;
use crate::server::formatter::*;
@ -103,7 +104,7 @@ impl SharedBackupState {
pub struct BackupEnvironment {
env_type: RpcEnvironmentType,
result_attributes: Value,
user: Userid,
auth_id: Authid,
pub debug: bool,
pub formatter: &'static OutputFormatter,
pub worker: Arc<WorkerTask>,
@ -116,7 +117,7 @@ pub struct BackupEnvironment {
impl BackupEnvironment {
pub fn new(
env_type: RpcEnvironmentType,
user: Userid,
auth_id: Authid,
worker: Arc<WorkerTask>,
datastore: Arc<DataStore>,
backup_dir: BackupDir,
@ -136,7 +137,7 @@ impl BackupEnvironment {
Self {
result_attributes: json!({}),
env_type,
user,
auth_id,
worker,
datastore,
debug: false,
@ -472,16 +473,11 @@ impl BackupEnvironment {
bail!("backup does not contain valid files (file count == 0)");
}
// check manifest
let mut manifest = self.datastore.load_manifest_json(&self.backup_dir)
.map_err(|err| format_err!("unable to load manifest blob - {}", err))?;
// check for valid manifest and store stats
let stats = serde_json::to_value(state.backup_stat)?;
manifest["unprotected"]["chunk_upload_stats"] = stats;
self.datastore.store_manifest(&self.backup_dir, manifest)
.map_err(|err| format_err!("unable to store manifest blob - {}", err))?;
self.datastore.update_manifest(&self.backup_dir, |manifest| {
manifest.unprotected["chunk_upload_stats"] = stats;
}).map_err(|err| format_err!("unable to update manifest blob - {}", err))?;
if let Some(base) = &self.last_backup {
let path = self.datastore.snapshot_path(&base.backup_dir);
@ -499,6 +495,55 @@ impl BackupEnvironment {
Ok(())
}
/// If verify-new is set on the datastore, this will run a new verify task
/// for the backup. If not, this will return and also drop the passed lock
/// immediately.
pub fn verify_after_complete(&self, snap_lock: Dir) -> Result<(), Error> {
self.ensure_finished()?;
if !self.datastore.verify_new() {
// no verify requested, do nothing
return Ok(());
}
let worker_id = format!("{}:{}/{}/{:08X}",
self.datastore.name(),
self.backup_dir.group().backup_type(),
self.backup_dir.group().backup_id(),
self.backup_dir.backup_time());
let datastore = self.datastore.clone();
let backup_dir = self.backup_dir.clone();
WorkerTask::new_thread(
"verify",
Some(worker_id),
self.auth_id.clone(),
false,
move |worker| {
worker.log("Automatically verifying newly added snapshot");
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
if !verify_backup_dir_with_lock(
datastore,
&backup_dir,
verified_chunks,
corrupt_chunks,
worker.clone(),
worker.upid().clone(),
None,
snap_lock,
)? {
bail!("verification failed - please check the log for details");
}
Ok(())
},
).map(|_| ())
}
pub fn log<S: AsRef<str>>(&self, msg: S) {
self.worker.log(msg);
}
@ -523,6 +568,12 @@ impl BackupEnvironment {
Ok(())
}
/// Return true if the finished flag is set
pub fn finished(&self) -> bool {
let state = self.state.lock().unwrap();
state.finished
}
/// Remove complete backup
pub fn remove_backup(&self) -> Result<(), Error> {
let mut state = self.state.lock().unwrap();
@ -548,12 +599,12 @@ impl RpcEnvironment for BackupEnvironment {
self.env_type
}
fn set_user(&mut self, _user: Option<String>) {
panic!("unable to change user");
fn set_auth_id(&mut self, _auth_id: Option<String>) {
panic!("unable to change auth_id");
}
fn get_user(&self) -> Option<String> {
Some(self.user.to_string())
fn get_auth_id(&self) -> Option<String> {
Some(self.auth_id.to_string())
}
}


@ -4,11 +4,13 @@ use proxmox::list_subdirs_api_method;
pub mod datastore;
pub mod remote;
pub mod sync;
pub mod verify;
const SUBDIRS: SubdirMap = &[
("datastore", &datastore::ROUTER),
("remote", &remote::ROUTER),
("sync", &sync::ROUTER),
("verify", &verify::ROUTER)
];
pub const ROUTER: Router = Router::new()


@ -12,6 +12,7 @@ use crate::backup::*;
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::datastore::{self, DataStoreConfig, DIR_NAME_SCHEMA};
use crate::config::acl::{PRIV_DATASTORE_ALLOCATE, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_MODIFY};
use crate::server::jobstate;
#[api(
input: {
@ -34,14 +35,14 @@ pub fn list_datastores(
let (config, digest) = datastore::config()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
let list: Vec<DataStoreConfig> = config.convert_to_typed_array("datastore")?;
let filter_by_privs = |store: &DataStoreConfig| {
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store.name]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store.name]);
(user_privs & PRIV_DATASTORE_AUDIT) != 0
};
@ -67,6 +68,14 @@ pub fn list_datastores(
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
type: Notify,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
@ -75,10 +84,6 @@ pub fn list_datastores(
optional: true,
schema: PRUNE_SCHEDULE_SCHEMA,
},
"verify-schedule": {
optional: true,
schema: VERIFY_SCHEDULE_SCHEMA,
},
"keep-last": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST,
@ -131,9 +136,8 @@ pub fn create_datastore(param: Value) -> Result<(), Error> {
datastore::save_config(&config)?;
crate::config::jobstate::create_state_file("prune", &datastore.name)?;
crate::config::jobstate::create_state_file("garbage_collection", &datastore.name)?;
crate::config::jobstate::create_state_file("verify", &datastore.name)?;
jobstate::create_state_file("prune", &datastore.name)?;
jobstate::create_state_file("garbage_collection", &datastore.name)?;
Ok(())
}
@ -179,8 +183,6 @@ pub enum DeletableProperty {
gc_schedule,
/// Delete the prune job schedule.
prune_schedule,
/// Delete the verify schedule property
verify_schedule,
/// Delete the keep-last property
keep_last,
/// Delete the keep-hourly property
@ -193,6 +195,10 @@ pub enum DeletableProperty {
keep_monthly,
/// Delete the keep-yearly property
keep_yearly,
/// Delete the notify-user property
notify_user,
/// Delete the notify property
notify,
}
#[api(
@ -206,6 +212,14 @@ pub enum DeletableProperty {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
type: Notify,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
@ -214,10 +228,6 @@ pub enum DeletableProperty {
optional: true,
schema: PRUNE_SCHEDULE_SCHEMA,
},
"verify-schedule": {
optional: true,
schema: VERIFY_SCHEDULE_SCHEMA,
},
"keep-last": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST,
@ -266,13 +276,14 @@ pub fn update_datastore(
comment: Option<String>,
gc_schedule: Option<String>,
prune_schedule: Option<String>,
verify_schedule: Option<String>,
keep_last: Option<u64>,
keep_hourly: Option<u64>,
keep_daily: Option<u64>,
keep_weekly: Option<u64>,
keep_monthly: Option<u64>,
keep_yearly: Option<u64>,
notify: Option<Notify>,
notify_user: Option<Userid>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<(), Error> {
@ -295,13 +306,14 @@ pub fn update_datastore(
DeletableProperty::comment => { data.comment = None; },
DeletableProperty::gc_schedule => { data.gc_schedule = None; },
DeletableProperty::prune_schedule => { data.prune_schedule = None; },
DeletableProperty::verify_schedule => { data.verify_schedule = None; },
DeletableProperty::keep_last => { data.keep_last = None; },
DeletableProperty::keep_hourly => { data.keep_hourly = None; },
DeletableProperty::keep_daily => { data.keep_daily = None; },
DeletableProperty::keep_weekly => { data.keep_weekly = None; },
DeletableProperty::keep_monthly => { data.keep_monthly = None; },
DeletableProperty::keep_yearly => { data.keep_yearly = None; },
DeletableProperty::notify => { data.notify = None; },
DeletableProperty::notify_user => { data.notify_user = None; },
}
}
}
@ -327,12 +339,6 @@ pub fn update_datastore(
data.prune_schedule = prune_schedule;
}
let mut verify_schedule_changed = false;
if verify_schedule.is_some() {
verify_schedule_changed = data.verify_schedule != verify_schedule;
data.verify_schedule = verify_schedule;
}
if keep_last.is_some() { data.keep_last = keep_last; }
if keep_hourly.is_some() { data.keep_hourly = keep_hourly; }
if keep_daily.is_some() { data.keep_daily = keep_daily; }
@ -340,6 +346,9 @@ pub fn update_datastore(
if keep_monthly.is_some() { data.keep_monthly = keep_monthly; }
if keep_yearly.is_some() { data.keep_yearly = keep_yearly; }
if notify.is_some() { data.notify = notify; }
if notify_user.is_some() { data.notify_user = notify_user; }
config.set_data(&name, "datastore", &data)?;
datastore::save_config(&config)?;
@ -347,15 +356,11 @@ pub fn update_datastore(
// we want to reset the statefiles, to avoid an immediate action in some cases
// (e.g. going from monthly to weekly in the second week of the month)
if gc_schedule_changed {
crate::config::jobstate::create_state_file("garbage_collection", &name)?;
jobstate::create_state_file("garbage_collection", &name)?;
}
if prune_schedule_changed {
crate::config::jobstate::create_state_file("prune", &name)?;
}
if verify_schedule_changed {
crate::config::jobstate::create_state_file("verify", &name)?;
jobstate::create_state_file("prune", &name)?;
}
Ok(())
@ -398,9 +403,8 @@ pub fn delete_datastore(name: String, digest: Option<String>) -> Result<(), Erro
datastore::save_config(&config)?;
// ignore errors
let _ = crate::config::jobstate::remove_state_file("prune", &name);
let _ = crate::config::jobstate::remove_state_file("garbage_collection", &name);
let _ = crate::config::jobstate::remove_state_file("verify", &name);
let _ = jobstate::remove_state_file("prune", &name);
let _ = jobstate::remove_state_file("garbage_collection", &name);
Ok(())
}


@ -1,12 +1,12 @@
use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use base64;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::remote;
use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
@ -23,7 +23,8 @@ use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
},
},
access: {
permission: &Permission::Privilege(&["remote"], PRIV_REMOTE_AUDIT, false),
description: "List configured remotes filtered by Remote.Audit privileges",
permission: &Permission::Anybody,
},
)]
/// List all remotes
@ -32,16 +33,25 @@ pub fn list_remotes(
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<remote::Remote>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = remote::config()?;
let mut list: Vec<remote::Remote> = config.convert_to_typed_array("remote")?;
// don't return password in api
for remote in &mut list {
remote.password = "".to_string();
}
let list = list
.into_iter()
.filter(|remote| {
let privs = user_info.lookup_privs(&auth_id, &["remote", &remote.name]);
privs & PRIV_REMOTE_AUDIT != 0
})
.collect();
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
@ -67,7 +77,7 @@ pub fn list_remotes(
default: 8007,
},
userid: {
type: Userid,
type: Authid,
},
password: {
schema: remote::REMOTE_PASSWORD_SCHEMA,
@ -168,7 +178,7 @@ pub enum DeletableProperty {
},
userid: {
optional: true,
type: Userid,
type: Authid,
},
password: {
optional: true,
@ -202,7 +212,7 @@ pub fn update_remote(
comment: Option<String>,
host: Option<String>,
port: Option<u16>,
userid: Option<Userid>,
userid: Option<Authid>,
password: Option<String>,
fingerprint: Option<String>,
delete: Option<Vec<DeletableProperty>>,


@ -2,13 +2,73 @@ use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment};
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl::{
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_PRUNE,
PRIV_REMOTE_AUDIT,
PRIV_REMOTE_READ,
};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::sync::{self, SyncJobConfig};
// fixme: add access permissions
pub fn check_sync_job_read_access(
user_info: &CachedUserInfo,
auth_id: &Authid,
job: &SyncJobConfig,
) -> bool {
let datastore_privs = user_info.lookup_privs(&auth_id, &["datastore", &job.store]);
if datastore_privs & PRIV_DATASTORE_AUDIT == 0 {
return false;
}
let remote_privs = user_info.lookup_privs(&auth_id, &["remote", &job.remote]);
remote_privs & PRIV_REMOTE_AUDIT != 0
}
// user can run the corresponding pull job
pub fn check_sync_job_modify_access(
user_info: &CachedUserInfo,
auth_id: &Authid,
job: &SyncJobConfig,
) -> bool {
let datastore_privs = user_info.lookup_privs(&auth_id, &["datastore", &job.store]);
if datastore_privs & PRIV_DATASTORE_BACKUP == 0 {
return false;
}
if let Some(true) = job.remove_vanished {
if datastore_privs & PRIV_DATASTORE_PRUNE == 0 {
return false;
}
}
let correct_owner = match job.owner {
Some(ref owner) => {
owner == auth_id
|| (owner.is_token()
&& !auth_id.is_token()
&& owner.user() == auth_id.user())
},
// default sync owner
None => auth_id == Authid::backup_auth_id(),
};
// same permission as changing ownership after syncing
if !correct_owner && datastore_privs & PRIV_DATASTORE_MODIFY == 0 {
return false;
}
let remote_privs = user_info.lookup_privs(&auth_id, &["remote", &job.remote, &job.remote_store]);
remote_privs & PRIV_REMOTE_READ != 0
}
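check_sync_job_modify_access thus encodes: Datastore.Backup on the target store, Datastore.Prune additionally when remove_vanished is set, Remote.Read on the source, and an owner rule mirroring the ownership-change logic for backups:

// job.owner == Some(auth_id)                                  -> ok
// job.owner == Some(token of auth_id's user), auth_id a user  -> ok
// job.owner == None -> ok only for the built-in backup auth id
// any other owner additionally requires Datastore.Modify on the target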
#[api(
input: {
@ -19,12 +79,18 @@ use crate::config::sync::{self, SyncJobConfig};
type: Array,
items: { type: sync::SyncJobConfig },
},
access: {
description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,
},
)]
/// List all sync jobs
pub fn list_sync_jobs(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SyncJobConfig>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = sync::config()?;
@ -32,6 +98,10 @@ pub fn list_sync_jobs(
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
let list = list
.into_iter()
.filter(|sync_job| check_sync_job_read_access(&user_info, &auth_id, &sync_job))
.collect();
Ok(list)
}
@ -45,6 +115,10 @@ pub fn list_sync_jobs(
store: {
schema: DATASTORE_SCHEMA,
},
owner: {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
@ -65,13 +139,25 @@ pub fn list_sync_jobs(
},
},
},
access: {
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
permission: &Permission::Anybody,
},
)]
/// Create a new sync job.
pub fn create_sync_job(param: Value) -> Result<(), Error> {
pub fn create_sync_job(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?;
if !check_sync_job_modify_access(&user_info, &auth_id, &sync_job) {
bail!("permission check failed");
}
let (mut config, _digest) = sync::config()?;
@ -83,7 +169,7 @@ pub fn create_sync_job(param: Value) -> Result<(), Error> {
sync::save_config(&config)?;
crate::config::jobstate::create_state_file("syncjob", &sync_job.id)?;
crate::server::jobstate::create_state_file("syncjob", &sync_job.id)?;
Ok(())
}
@ -100,15 +186,26 @@ pub fn create_sync_job(param: Value) -> Result<(), Error> {
description: "The sync job configuration.",
type: sync::SyncJobConfig,
},
access: {
description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,
},
)]
/// Read a sync job configuration.
pub fn read_sync_job(
id: String,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<SyncJobConfig, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = sync::config()?;
let sync_job = config.lookup("sync", &id)?;
if !check_sync_job_read_access(&user_info, &auth_id, &sync_job) {
bail!("permission check failed");
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(sync_job)
@ -120,6 +217,8 @@ pub fn read_sync_job(
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the owner property.
owner,
/// Delete the comment property.
comment,
/// Delete the job schedule.
@ -139,6 +238,10 @@ pub enum DeletableProperty {
schema: DATASTORE_SCHEMA,
optional: true,
},
owner: {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
optional: true,
@ -173,11 +276,16 @@ pub enum DeletableProperty {
},
},
},
access: {
permission: &Permission::Anybody,
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
},
)]
/// Update sync job config.
pub fn update_sync_job(
id: String,
store: Option<String>,
owner: Option<Authid>,
remote: Option<String>,
remote_store: Option<String>,
remove_vanished: Option<bool>,
@ -185,7 +293,10 @@ pub fn update_sync_job(
schedule: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -202,6 +313,7 @@ pub fn update_sync_job(
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::owner => { data.owner = None; },
DeletableProperty::comment => { data.comment = None; },
DeletableProperty::schedule => { data.schedule = None; },
DeletableProperty::remove_vanished => { data.remove_vanished = None; },
@ -221,11 +333,15 @@ pub fn update_sync_job(
if let Some(store) = store { data.store = store; }
if let Some(remote) = remote { data.remote = remote; }
if let Some(remote_store) = remote_store { data.remote_store = remote_store; }
if let Some(owner) = owner { data.owner = Some(owner); }
if schedule.is_some() { data.schedule = schedule; }
if remove_vanished.is_some() { data.remove_vanished = remove_vanished; }
if !check_sync_job_modify_access(&user_info, &auth_id, &data) {
bail!("permission check failed");
}
config.set_data(&id, "sync", &data)?;
sync::save_config(&config)?;
@ -246,9 +362,19 @@ pub fn update_sync_job(
},
},
},
access: {
permission: &Permission::Anybody,
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
},
)]
/// Remove a sync job configuration
pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error> {
pub fn delete_sync_job(
id: String,
digest: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -259,14 +385,19 @@ pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error>
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config.sections.get(&id) {
Some(_) => { config.sections.remove(&id); },
None => bail!("job '{}' does not exist.", id),
match config.lookup("sync", &id) {
Ok(job) => {
if !check_sync_job_modify_access(&user_info, &auth_id, &job) {
bail!("permission check failed");
}
config.sections.remove(&id);
},
Err(_) => { bail!("job '{}' does not exist.", id) },
};
sync::save_config(&config)?;
crate::config::jobstate::remove_state_file("syncjob", &id)?;
crate::server::jobstate::remove_state_file("syncjob", &id)?;
Ok(())
}
@ -280,3 +411,116 @@ pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_SYNC_JOBS)
.post(&API_METHOD_CREATE_SYNC_JOB)
.match_all("id", &ITEM_ROUTER);
#[test]
fn sync_job_access_test() -> Result<(), Error> {
let (user_cfg, _) = crate::config::user::test_cfg_from_str(r###"
user: noperm@pbs
user: read@pbs
user: write@pbs
"###).expect("test user.cfg is not parsable");
let acl_tree = crate::config::acl::AclTree::from_raw(r###"
acl:1:/datastore/localstore1:read@pbs,write@pbs:DatastoreAudit
acl:1:/datastore/localstore1:write@pbs:DatastoreBackup
acl:1:/datastore/localstore2:write@pbs:DatastorePowerUser
acl:1:/datastore/localstore3:write@pbs:DatastoreAdmin
acl:1:/remote/remote1:read@pbs,write@pbs:RemoteAudit
acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
"###).expect("test acl.cfg is not parsable");
let user_info = CachedUserInfo::test_new(user_cfg, acl_tree);
let root_auth_id = Authid::root_auth_id();
let no_perm_auth_id: Authid = "noperm@pbs".parse()?;
let read_auth_id: Authid = "read@pbs".parse()?;
let write_auth_id: Authid = "write@pbs".parse()?;
let mut job = SyncJobConfig {
id: "regular".to_string(),
remote: "remote0".to_string(),
remote_store: "remotestore1".to_string(),
store: "localstore0".to_string(),
owner: Some(write_auth_id.clone()),
comment: None,
remove_vanished: None,
schedule: None,
};
// should work without ACLs
assert_eq!(check_sync_job_read_access(&user_info, &root_auth_id, &job), true);
assert_eq!(check_sync_job_modify_access(&user_info, &root_auth_id, &job), true);
// user without permissions must fail
assert_eq!(check_sync_job_read_access(&user_info, &no_perm_auth_id, &job), false);
assert_eq!(check_sync_job_modify_access(&user_info, &no_perm_auth_id, &job), false);
// reading without proper read permissions on either remote or local must fail
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), false);
// reading without proper read permissions on local end must fail
job.remote = "remote1".to_string();
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), false);
// reading without proper read permissions on remote end must fail
job.remote = "remote0".to_string();
job.store = "localstore1".to_string();
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), false);
// writing without proper write permissions on either end must fail
job.store = "localstore0".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// writing without proper write permissions on local end must fail
job.remote = "remote1".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// writing without proper write permissions on remote end must fail
job.remote = "remote0".to_string();
job.store = "localstore1".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// reset remote to one where users have access
job.remote = "remote1".to_string();
// user with read permission can only read, but not modify/run
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), true);
job.owner = Some(read_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &read_auth_id, &job), false);
job.owner = None;
assert_eq!(check_sync_job_modify_access(&user_info, &read_auth_id, &job), false);
job.owner = Some(write_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &read_auth_id, &job), false);
// user with simple write permission can modify/run
assert_eq!(check_sync_job_read_access(&user_info, &write_auth_id, &job), true);
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
// but can't modify/run with deletion
job.remove_vanished = Some(true);
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// unless they have Datastore.Prune as well
job.store = "localstore2".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
// changing owner is not possible
job.owner = Some(read_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// also not to the default 'backup@pam'
job.owner = None;
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// unless they have Datastore.Modify as well
job.store = "localstore3".to_string();
job.owner = Some(read_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
job.owner = None;
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
Ok(())
}

src/api2/config/verify.rs (new file, 311 lines)

@ -0,0 +1,311 @@
use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl::{
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_VERIFY,
};
use crate::config::verify::{self, VerificationJobConfig};
#[api(
input: {
properties: {},
},
returns: {
description: "List configured jobs.",
type: Array,
items: { type: verify::VerificationJobConfig },
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_VERIFY,
true),
},
)]
/// List all verification jobs
pub fn list_verification_jobs(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<VerificationJobConfig>, Error> {
let (config, digest) = verify::config()?;
let list = config.convert_to_typed_array("verification")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
protected: true,
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"ignore-verified": {
optional: true,
schema: IGNORE_VERIFIED_BACKUPS_SCHEMA,
},
"outdated-after": {
optional: true,
schema: VERIFICATION_OUTDATED_AFTER_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
}
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Create a new verification job.
pub fn create_verification_job(param: Value) -> Result<(), Error> {
let _lock = open_file_locked(verify::VERIFICATION_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let verification_job: verify::VerificationJobConfig = serde_json::from_value(param.clone())?;
let (mut config, _digest) = verify::config()?;
if let Some(_) = config.sections.get(&verification_job.id) {
bail!("job '{}' already exists.", verification_job.id);
}
config.set_data(&verification_job.id, "verification", &verification_job)?;
verify::save_config(&config)?;
crate::server::jobstate::create_state_file("verificationjob", &verification_job.id)?;
Ok(())
}
#[api(
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
},
},
returns: {
description: "The verification job configuration.",
type: verify::VerificationJobConfig,
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Read a verification job configuration.
pub fn read_verification_job(
id: String,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<VerificationJobConfig, Error> {
let (config, digest) = verify::config()?;
let verification_job = config.lookup("verification", &id)?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(verification_job)
}
#[api()]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the ignore verified property.
IgnoreVerified,
/// Delete the comment property.
Comment,
/// Delete the job schedule.
Schedule,
/// Delete outdated after property.
OutdatedAfter
}
#[api(
protected: true,
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
optional: true,
schema: DATASTORE_SCHEMA,
},
"ignore-verified": {
optional: true,
schema: IGNORE_VERIFIED_BACKUPS_SCHEMA,
},
"outdated-after": {
optional: true,
schema: VERIFICATION_OUTDATED_AFTER_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
delete: {
description: "List of properties to delete.",
type: Array,
optional: true,
items: {
type: DeletableProperty,
}
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Update verification job config.
pub fn update_verification_job(
id: String,
store: Option<String>,
ignore_verified: Option<bool>,
outdated_after: Option<i64>,
comment: Option<String>,
schedule: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(verify::VERIFICATION_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
// pass/compare digest
let (mut config, expected_digest) = verify::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let mut data: verify::VerificationJobConfig = config.lookup("verification", &id)?;
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::IgnoreVerified => { data.ignore_verified = None; },
DeletableProperty::OutdatedAfter => { data.outdated_after = None; },
DeletableProperty::Comment => { data.comment = None; },
DeletableProperty::Schedule => { data.schedule = None; },
}
}
}
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {
data.comment = None;
} else {
data.comment = Some(comment);
}
}
if let Some(store) = store { data.store = store; }
if ignore_verified.is_some() { data.ignore_verified = ignore_verified; }
if outdated_after.is_some() { data.outdated_after = outdated_after; }
if schedule.is_some() { data.schedule = schedule; }
config.set_data(&id, "verification", &data)?;
verify::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Remove a verification job configuration
pub fn delete_verification_job(id: String, digest: Option<String>) -> Result<(), Error> {
let _lock = open_file_locked(verify::VERIFICATION_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = verify::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config.sections.get(&id) {
Some(_) => { config.sections.remove(&id); },
None => bail!("job '{}' does not exist.", id),
}
verify::save_config(&config)?;
crate::server::jobstate::remove_state_file("verificationjob", &id)?;
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_VERIFICATION_JOB)
.put(&API_METHOD_UPDATE_VERIFICATION_JOB)
.delete(&API_METHOD_DELETE_VERIFICATION_JOB);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_VERIFICATION_JOBS)
.post(&API_METHOD_CREATE_VERIFICATION_JOB)
.match_all("id", &ITEM_ROUTER);


@ -24,20 +24,21 @@ use crate::server::WorkerTask;
use crate::tools;
use crate::tools::ticket::{self, Empty, Ticket};
pub mod apt;
pub mod disks;
pub mod dns;
pub mod network;
pub mod tasks;
pub mod subscription;
pub(crate) mod rrd;
mod apt;
mod journal;
mod services;
mod status;
mod subscription;
mod syslog;
mod time;
mod report;
pub const SHELL_CMD_SCHEMA: Schema = StringSchema::new("The command to run.")
.format(&ApiStringFormat::Enum(&[
@ -91,10 +92,12 @@ async fn termproxy(
cmd: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
// intentionally user only for now
let userid: Userid = rpcenv
.get_user()
.get_auth_id()
.ok_or_else(|| format_err!("unknown user"))?
.parse()?;
let auth_id = Authid::from(userid.clone());
if userid.realm() != "pam" {
bail!("only pam users can use the console");
@ -137,7 +140,7 @@ async fn termproxy(
let upid = WorkerTask::spawn(
"termproxy",
None,
userid,
auth_id,
false,
move |worker| async move {
// move inside the worker so that it survives and does not close the port
@ -272,7 +275,8 @@ fn upgrade_to_websocket(
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
// intentionally user only for now
let userid: Userid = rpcenv.get_auth_id().unwrap().parse()?;
let ticket = tools::required_string_param(&param, "vncticket")?;
let port: u16 = tools::required_integer_param(&param, "port")? as u16;
@ -307,6 +311,7 @@ pub const SUBDIRS: SubdirMap = &[
("dns", &dns::ROUTER),
("journal", &journal::ROUTER),
("network", &network::ROUTER),
("report", &report::ROUTER),
("rrd", &rrd::ROUTER),
("services", &services::ROUTER),
("status", &status::ROUTER),


@ -1,186 +1,15 @@
use apt_pkg_native::Cache;
use anyhow::{Error, bail};
use anyhow::{Error, bail, format_err};
use serde_json::{json, Value};
use proxmox::{list_subdirs_api_method, const_regex};
use proxmox::list_subdirs_api_method;
use proxmox::api::{api, RpcEnvironment, RpcEnvironmentType, Permission};
use proxmox::api::router::{Router, SubdirMap};
use crate::server::WorkerTask;
use crate::tools::{apt, http};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::{APTUpdateInfo, NODE_SCHEMA, Userid, UPID_SCHEMA};
const_regex! {
VERSION_EPOCH_REGEX = r"^\d+:";
FILENAME_EXTRACT_REGEX = r"^.*/.*?_(.*)_Packages$";
}
// FIXME: Replace with call to 'apt changelog <pkg> --print-uris'. Currently
// not possible as our packages do not have a URI set in their Release file
fn get_changelog_url(
package: &str,
filename: &str,
source_pkg: &str,
version: &str,
source_version: &str,
origin: &str,
component: &str,
) -> Result<String, Error> {
if origin == "" {
bail!("no origin available for package {}", package);
}
if origin == "Debian" {
let source_version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(source_version, "");
let prefix = if source_pkg.starts_with("lib") {
source_pkg.get(0..4)
} else {
source_pkg.get(0..1)
};
let prefix = match prefix {
Some(p) => p,
None => bail!("cannot get starting characters of package name '{}'", package)
};
// note: security updates seem to not always upload a changelog for
// their package version, so this only works *most* of the time
return Ok(format!("https://metadata.ftp-master.debian.org/changelogs/main/{}/{}/{}_{}_changelog",
prefix, source_pkg, source_pkg, source_version));
} else if origin == "Proxmox" {
let version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(version, "");
let base = match (FILENAME_EXTRACT_REGEX.regex_obj)().captures(filename) {
Some(captures) => {
let base_capture = captures.get(1);
match base_capture {
Some(base_underscore) => base_underscore.as_str().replace("_", "/"),
None => bail!("incompatible filename, cannot find regex group")
}
},
None => bail!("incompatible filename, doesn't match regex")
};
return Ok(format!("http://download.proxmox.com/{}/{}_{}.changelog",
base, package, version));
}
bail!("unknown origin ({}) or component ({})", origin, component)
}
fn list_installed_apt_packages<F: Fn(&str, &str, &str) -> bool>(filter: F)
-> Vec<APTUpdateInfo> {
let mut ret = Vec::new();
// note: this is not an 'apt update', it just re-reads the cache from disk
let mut cache = Cache::get_singleton();
cache.reload();
let mut cache_iter = cache.iter();
loop {
let view = match cache_iter.next() {
Some(view) => view,
None => break
};
let current_version = match view.current_version() {
Some(vers) => vers,
None => continue
};
let candidate_version = match view.candidate_version() {
Some(vers) => vers,
// if there's no candidate (i.e. no update) get info of currently
// installed version instead
None => current_version.clone()
};
let package = view.name();
if filter(&package, &current_version, &candidate_version) {
let mut origin_res = "unknown".to_owned();
let mut section_res = "unknown".to_owned();
let mut priority_res = "unknown".to_owned();
let mut change_log_url = "".to_owned();
let mut short_desc = package.clone();
let mut long_desc = "".to_owned();
// get additional information via nested APT 'iterators'
let mut view_iter = view.versions();
while let Some(ver) = view_iter.next() {
if ver.version() == candidate_version {
if let Some(section) = ver.section() {
section_res = section;
}
if let Some(prio) = ver.priority_type() {
priority_res = prio;
}
// assume every package has only one origin file (not
// origin, but origin *file*, for some reason those seem to
// be different concepts in APT)
let mut origin_iter = ver.origin_iter();
let origin = origin_iter.next();
if let Some(origin) = origin {
if let Some(sd) = origin.short_desc() {
short_desc = sd;
}
if let Some(ld) = origin.long_desc() {
long_desc = ld;
}
// the package files appear in priority order, meaning
// the one for the candidate version is first
let mut pkg_iter = origin.file();
let pkg_file = pkg_iter.next();
if let Some(pkg_file) = pkg_file {
if let Some(origin_name) = pkg_file.origin() {
origin_res = origin_name;
}
let filename = pkg_file.file_name();
let source_pkg = ver.source_package();
let source_ver = ver.source_version();
let component = pkg_file.component();
// build changelog URL from gathered information
// ignore errors, use empty changelog instead
let url = get_changelog_url(&package, &filename, &source_pkg,
&candidate_version, &source_ver, &origin_res, &component);
if let Ok(url) = url {
change_log_url = url;
}
}
}
break;
}
}
let info = APTUpdateInfo {
package,
title: short_desc,
arch: view.arch(),
description: long_desc,
change_log_url,
origin: origin_res,
version: candidate_version,
old_version: current_version,
priority: priority_res,
section: section_res,
};
ret.push(info);
}
}
return ret;
}
use crate::api2::types::{Authid, APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA};
#[api(
input: {
@ -193,16 +22,60 @@ fn list_installed_apt_packages<F: Fn(&str, &str, &str) -> bool>(filter: F)
returns: {
description: "A list of packages with available updates.",
type: Array,
items: { type: APTUpdateInfo },
items: {
type: APTUpdateInfo
},
},
protected: true,
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
},
)]
/// List available APT updates
fn apt_update_available(_param: Value) -> Result<Value, Error> {
let ret = list_installed_apt_packages(|_pkg, cur_ver, can_ver| cur_ver != can_ver);
Ok(json!(ret))
match apt::pkg_cache_expired() {
Ok(false) => {
if let Ok(Some(cache)) = apt::read_pkg_state() {
return Ok(json!(cache.package_status));
}
},
_ => (),
}
let cache = apt::update_cache()?;
return Ok(json!(cache.package_status));
}
fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
if !quiet { worker.log("starting apt-get update") }
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE
let mut command = std::process::Command::new("apt-get");
command.arg("update");
// apt "errors" quite easily, and run_command is a bit rigid, so handle this inline for now.
let output = command.output()
.map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
if !quiet {
worker.log(String::from_utf8(output.stdout)?);
}
// TODO: improve run_command to allow outputting both, stderr and stdout
if !output.status.success() {
if output.status.code().is_some() {
let msg = String::from_utf8(output.stderr)
.map(|m| if m.is_empty() { String::from("no error message") } else { m })
.unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
worker.warn(msg);
} else {
bail!("terminated by signal");
}
}
Ok(())
}
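do_apt_update deliberately distinguishes the two failure shapes of apt-get: a non-zero exit code is downgraded to a task warning (apt "errors" easily), while death by signal aborts the task. A reduced sketch of that policy (hypothetical helper, not part of the patch):

use std::process::Command;

fn run_tolerant(cmd: &mut Command) -> Result<(), String> {
    let output = cmd.output().map_err(|err| format!("failed to execute - {}", err))?;
    match output.status.code() {
        Some(0) => Ok(()),
        // non-zero exit: surface stderr as a warning, do not fail the task
        Some(_) => {
            eprintln!("warning: {}", String::from_utf8_lossy(&output.stderr));
            Ok(())
        }
        // no exit code means the process was killed by a signal
        None => Err("terminated by signal".into()),
    }
}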
#[api(
@ -212,6 +85,13 @@ fn apt_update_available(_param: Value) -> Result<Value, Error> {
node: {
schema: NODE_SCHEMA,
},
notify: {
type: bool,
description: r#"Send notification mail about new package updates availanle to the
email address configured for 'root@pam')."#,
optional: true,
default: false,
},
quiet: {
description: "Only produces output suitable for logging, omitting progress indicators.",
type: bool,
@ -229,26 +109,46 @@ fn apt_update_available(_param: Value) -> Result<Value, Error> {
)]
/// Update the APT database
pub fn apt_update_database(
notify: Option<bool>,
quiet: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
// FIXME: change to non-option in signature and drop below once we have proxmox-api-macro 0.2.3
let quiet = quiet.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_QUIET);
let notify = notify.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_NOTIFY);
let upid_str = WorkerTask::new_thread("aptupdate", None, userid, to_stdout, move |worker| {
if !quiet { worker.log("starting apt-get update") }
let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |worker| {
do_apt_update(&worker, quiet)?;
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE
let mut cache = apt::update_cache()?;
let mut command = std::process::Command::new("apt-get");
command.arg("update");
if notify {
let mut notified = match cache.notified {
Some(notified) => notified,
None => std::collections::HashMap::new(),
};
let mut to_notify: Vec<&APTUpdateInfo> = Vec::new();
let output = crate::tools::run_command(command, None)?;
if !quiet { worker.log(output) }
// TODO: add mail notify for new updates like PVE
for pkg in &cache.package_status {
match notified.insert(pkg.package.to_owned(), pkg.version.to_owned()) {
Some(notified_version) => {
if notified_version != pkg.version {
to_notify.push(pkg);
}
},
None => to_notify.push(pkg),
}
}
if !to_notify.is_empty() {
to_notify.sort_unstable_by_key(|k| &k.package);
crate::server::send_updates_available(&to_notify)?;
}
cache.notified = Some(notified);
apt::write_pkg_cache(&cache)?;
}
Ok(())
})?;
@ -256,7 +156,67 @@ pub fn apt_update_database(
Ok(upid_str)
}
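The notify branch above relies on HashMap::insert returning the previously stored value: a mail is queued only for packages never notified before, or whose version moved since the last notification. The pattern in isolation (invented package data, runnable as-is):

use std::collections::HashMap;

fn needs_notify(notified: &mut HashMap<String, String>, pkg: &str, version: &str) -> bool {
    match notified.insert(pkg.to_owned(), version.to_owned()) {
        Some(old_version) => old_version != version, // version changed since last mail
        None => true,                                // never notified about this package
    }
}

fn main() {
    let mut notified = HashMap::new();
    assert!(needs_notify(&mut notified, "zfsutils-linux", "0.8.4-1")); // first sighting
    assert!(!needs_notify(&mut notified, "zfsutils-linux", "0.8.4-1")); // unchanged
    assert!(needs_notify(&mut notified, "zfsutils-linux", "0.8.5-1")); // version bump
}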
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
name: {
description: "Package name to get changelog of.",
type: String,
},
version: {
description: "Package version to get changelog of. Omit to use candidate version.",
type: String,
optional: true,
},
},
},
returns: {
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
},
)]
/// Retrieve the changelog of the specified package.
fn apt_get_changelog(
param: Value,
) -> Result<Value, Error> {
let name = crate::tools::required_string_param(&param, "name")?.to_owned();
let version = param["version"].as_str();
let pkg_info = apt::list_installed_apt_packages(|data| {
match version {
Some(version) => version == data.active_version,
None => data.active_version == data.candidate_version
}
}, Some(&name));
if pkg_info.len() == 0 {
bail!("Package '{}' not found", name);
}
let changelog_url = &pkg_info[0].change_log_url;
// FIXME: use 'apt-get changelog' for proxmox packages as well, once repo supports it
if changelog_url.starts_with("http://download.proxmox.com/") {
let changelog = crate::tools::runtime::block_on(http::get_string(changelog_url))
.map_err(|err| format_err!("Error downloading changelog from '{}': {}", changelog_url, err))?;
return Ok(json!(changelog));
} else {
let mut command = std::process::Command::new("apt-get");
command.arg("changelog");
command.arg("-qq"); // don't display download progress
command.arg(name);
let output = crate::tools::run_command(command, None)?;
return Ok(json!(output));
}
}
const SUBDIRS: SubdirMap = &[
("changelog", &Router::new().get(&API_METHOD_APT_GET_CHANGELOG)),
("update", &Router::new()
.get(&API_METHOD_APT_UPDATE_AVAILABLE)
.post(&API_METHOD_APT_UPDATE_DATABASE)


@ -13,7 +13,7 @@ use crate::tools::disks::{
};
use crate::server::WorkerTask;
use crate::api2::types::{Userid, UPID_SCHEMA, NODE_SCHEMA, BLOCKDEVICE_NAME_SCHEMA};
use crate::api2::types::{Authid, UPID_SCHEMA, NODE_SCHEMA, BLOCKDEVICE_NAME_SCHEMA};
pub mod directory;
pub mod zfs;
@ -140,7 +140,7 @@ pub fn initialize_disk(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let info = get_disk_usage_info(&disk, true)?;
@ -149,7 +149,7 @@ pub fn initialize_disk(
}
let upid_str = WorkerTask::new_thread(
"diskinit", Some(disk.clone()), userid, to_stdout, move |worker|
"diskinit", Some(disk.clone()), auth_id, to_stdout, move |worker|
{
worker.log(format!("initialize disk {}", disk));


@ -134,7 +134,7 @@ pub fn create_datastore_disk(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let info = get_disk_usage_info(&disk, true)?;
@ -143,7 +143,7 @@ pub fn create_datastore_disk(
}
let upid_str = WorkerTask::new_thread(
"dircreate", Some(name.clone()), userid, to_stdout, move |worker|
"dircreate", Some(name.clone()), auth_id, to_stdout, move |worker|
{
worker.log(format!("create datastore '{}' on disk {}", name, disk));


@ -1,6 +1,6 @@
use anyhow::{bail, Error};
use serde_json::{json, Value};
use ::serde::{Deserialize, Serialize};
use serde::{Deserialize, Serialize};
use proxmox::api::{
api, Permission, RpcEnvironment, RpcEnvironmentType,
@ -256,7 +256,7 @@ pub fn create_zpool(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let add_datastore = add_datastore.unwrap_or(false);
@ -316,7 +316,7 @@ pub fn create_zpool(
}
let upid_str = WorkerTask::new_thread(
"zfscreate", Some(name.clone()), userid, to_stdout, move |worker|
"zfscreate", Some(name.clone()), auth_id, to_stdout, move |worker|
{
worker.log(format!("create {:?} zpool '{}' on devices '{}'", raidlevel, name, devices_text));


@ -684,9 +684,9 @@ pub async fn reload_network_config(
network::assert_ifupdown2_installed()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::spawn("srvreload", Some(String::from("networking")), userid, true, |_worker| async {
let upid_str = WorkerTask::spawn("srvreload", Some(String::from("networking")), auth_id, true, |_worker| async {
let _ = std::fs::rename(network::NETWORK_INTERFACES_NEW_FILENAME, network::NETWORK_INTERFACES_FILENAME);

src/api2/node/report.rs (new file, 35 lines)

@ -0,0 +1,35 @@
use anyhow::Error;
use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment};
use serde_json::{json, Value};
use crate::api2::types::*;
use crate::config::acl::PRIV_SYS_AUDIT;
use crate::server::generate_report;
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
type: String,
description: "Returns report of the node"
},
access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false),
},
)]
/// Generate a report
fn get_report(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
Ok(json!(generate_report()))
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_GET_REPORT);


@ -31,11 +31,9 @@ pub fn create_value_from_rrd(
} else {
result.push(json!({ "time": t }));
}
} else {
if let Some(value) = list[index] {
} else if let Some(value) = list[index] {
result[index][name] = value.into();
}
}
t += reso;
}
}


@ -182,7 +182,7 @@ fn get_service_state(
Ok(json_service_state(&service, status))
}
fn run_service_command(service: &str, cmd: &str, userid: Userid) -> Result<Value, Error> {
fn run_service_command(service: &str, cmd: &str, auth_id: Authid) -> Result<Value, Error> {
let workerid = format!("srv{}", &cmd);
@ -196,7 +196,7 @@ fn run_service_command(service: &str, cmd: &str, userid: Userid) -> Result<Value
let upid = WorkerTask::new_thread(
&workerid,
Some(service.clone()),
userid,
auth_id,
false,
move |_worker| {
@ -244,11 +244,11 @@ fn start_service(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
log::info!("starting service {}", service);
run_service_command(&service, "start", userid)
run_service_command(&service, "start", auth_id)
}
#[api(
@ -274,11 +274,11 @@ fn stop_service(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
log::info!("stopping service {}", service);
run_service_command(&service, "stop", userid)
run_service_command(&service, "stop", auth_id)
}
#[api(
@ -304,15 +304,15 @@ fn restart_service(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
log::info!("re-starting service {}", service);
if &service == "proxmox-backup-proxy" {
// special case, avoid aborting running tasks
run_service_command(&service, "reload", userid)
run_service_command(&service, "reload", auth_id)
} else {
run_service_command(&service, "restart", userid)
run_service_command(&service, "restart", auth_id)
}
}
@ -339,11 +339,11 @@ fn reload_service(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
log::info!("reloading service {}", service);
run_service_command(&service, "reload", userid)
run_service_command(&service, "reload", auth_id)
}


@ -1,12 +1,69 @@
use anyhow::{Error};
use serde_json::{json, Value};
use anyhow::{Error, format_err, bail};
use serde_json::Value;
use proxmox::api::{api, Router, RpcEnvironment, Permission};
use crate::tools;
use crate::config::acl::PRIV_SYS_AUDIT;
use crate::tools::subscription::{self, SubscriptionStatus, SubscriptionInfo};
use crate::config::acl::{PRIV_SYS_AUDIT,PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::types::{NODE_SCHEMA, Userid};
use crate::api2::types::{NODE_SCHEMA, SUBSCRIPTION_KEY_SCHEMA, Authid};
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
force: {
description: "Always connect to server, even if information in cache is up to date.",
type: bool,
optional: true,
default: false,
},
},
},
protected: true,
access: {
permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
},
)]
/// Check and update subscription status.
pub fn check_subscription(
force: bool,
) -> Result<(), Error> {
// FIXME: drop once proxmox-api-macro is bumped to >> 5.0.0-1
let _remove_me = API_METHOD_CHECK_SUBSCRIPTION_PARAM_DEFAULT_FORCE;
let info = match subscription::read_subscription() {
Err(err) => bail!("could not read subscription status: {}", err),
Ok(Some(info)) => info,
Ok(None) => return Ok(()),
};
let server_id = tools::get_hardware_address()?;
let key = if let Some(key) = info.key {
// always update apt auth if we have a key to ensure user can access enterprise repo
subscription::update_apt_auth(Some(key.to_owned()), Some(server_id.to_owned()))?;
key
} else {
String::new()
};
if !force && info.status == SubscriptionStatus::ACTIVE {
let age = proxmox::tools::time::epoch_i64() - info.checktime.unwrap_or(i64::MAX);
if age < subscription::MAX_LOCAL_KEY_AGE {
return Ok(());
}
}
let info = subscription::check_subscription(key, server_id)?;
subscription::write_subscription(info)
.map_err(|e| format_err!("Error writing updated subscription status - {}", e))?;
Ok(())
}
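The early return keeps an ACTIVE key trusted locally until it ages past MAX_LOCAL_KEY_AGE, so the server round-trip only happens on force, on non-active status, or once the cached check goes stale. The gate, reduced to a sketch (the age limit here is an assumption for illustration; the real constant lives in tools::subscription):

// Sketch: is a new server-side subscription check needed?
fn recheck_due(now: i64, last_check: i64, force: bool, active: bool) -> bool {
    // assumption for illustration: locally trust a verified key for ~15 days
    const MAX_LOCAL_KEY_AGE: i64 = 15 * 24 * 3600;
    force || !active || now - last_check >= MAX_LOCAL_KEY_AGE
}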
#[api(
input: {
@ -18,51 +75,103 @@ use crate::api2::types::{NODE_SCHEMA, Userid};
},
returns: {
description: "Subscription status.",
properties: {
status: {
type: String,
description: "'NotFound', 'active' or 'inactive'."
},
message: {
type: String,
description: "Human readable problem description.",
},
serverid: {
type: String,
description: "The unique server ID, if permitted to access.",
},
url: {
type: String,
description: "URL to Web Shop.",
},
},
type: SubscriptionInfo,
},
access: {
permission: &Permission::Anybody,
},
)]
/// Read subscription info.
fn get_subscription(
pub fn get_subscription(
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &[]);
let server_id = if (user_privs & PRIV_SYS_AUDIT) != 0 {
tools::get_hardware_address()?
} else {
"hidden".to_string()
) -> Result<SubscriptionInfo, Error> {
let url = "https://www.proxmox.com/en/proxmox-backup-server/pricing";
let info = match subscription::read_subscription() {
Err(err) => bail!("could not read subscription status: {}", err),
Ok(Some(info)) => info,
Ok(None) => SubscriptionInfo {
status: SubscriptionStatus::NOTFOUND,
message: Some("There is no subscription key".into()),
serverid: Some(tools::get_hardware_address()?),
url: Some(url.into()),
..Default::default()
},
};
let url = "https://www.proxmox.com/en/proxmox-backup-server/pricing";
Ok(json!({
"status": "NotFound",
"message": "There is no subscription key",
"serverid": server_id,
"url": url,
}))
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&auth_id, &[]);
if (user_privs & PRIV_SYS_AUDIT) == 0 {
// not enough privileges for full state
return Ok(SubscriptionInfo {
status: info.status,
message: info.message,
url: info.url,
..Default::default()
});
};
Ok(info)
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
key: {
schema: SUBSCRIPTION_KEY_SCHEMA,
},
},
},
protected: true,
access: {
permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
},
)]
/// Set a subscription key and check it.
pub fn set_subscription(
key: String,
) -> Result<(), Error> {
let server_id = tools::get_hardware_address()?;
let info = subscription::check_subscription(key, server_id.to_owned())?;
subscription::write_subscription(info)
.map_err(|e| format_err!("Error writing subscription status - {}", e))?;
Ok(())
}
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
protected: true,
access: {
permission: &Permission::Privilege(&["system"], PRIV_SYS_MODIFY, false),
},
)]
/// Delete subscription info.
pub fn delete_subscription() -> Result<(), Error> {
subscription::delete_subscription()
.map_err(|err| format_err!("Deleting subscription failed: {}", err))?;
Ok(())
}
pub const ROUTER: Router = Router::new()
.post(&API_METHOD_CHECK_SUBSCRIPTION)
.put(&API_METHOD_SET_SUBSCRIPTION)
.delete(&API_METHOD_DELETE_SUBSCRIPTION)
.get(&API_METHOD_GET_SUBSCRIPTION);


@ -14,6 +14,16 @@ use crate::server::{self, UPID, TaskState, TaskListInfoIterator};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
let task_auth_id = &upid.auth_id;
if auth_id == task_auth_id
|| (task_auth_id.is_token() && &Authid::from(task_auth_id.user().clone()) == auth_id) {
Ok(())
} else {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(auth_id, &["system", "tasks"], PRIV_SYS_AUDIT, false)
}
}
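check_task_access mirrors the ownership rule used elsewhere in this patch set: a token's tasks are visible to the token itself and to the user owning the token; everyone else needs Sys.Audit on /system/tasks. A truth-table style sketch of the same rule (hypothetical helper, privilege lookup elided into a bool):

fn may_see_task(auth_id: &Authid, task_auth_id: &Authid, has_sys_audit: bool) -> bool {
    auth_id == task_auth_id
        || (task_auth_id.is_token() && &Authid::from(task_auth_id.user().clone()) == auth_id)
        || has_sys_audit
}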
#[api(
input: {
@ -27,7 +37,7 @@ use crate::config::cached_user_info::CachedUserInfo;
},
},
returns: {
description: "Task status nformation.",
description: "Task status information.",
properties: {
node: {
schema: NODE_SCHEMA,
@ -57,9 +67,13 @@ use crate::config::cached_user_info::CachedUserInfo;
description: "Worker ID (arbitrary ASCII string)",
},
user: {
type: String,
type: Userid,
description: "The user who started the task.",
},
tokenid: {
type: Tokenname,
optional: true,
},
status: {
type: String,
description: "'running' or 'stopped'",
@ -72,7 +86,7 @@ use crate::config::cached_user_info::CachedUserInfo;
},
},
access: {
description: "Users can access there own tasks, or need Sys.Audit on /system/tasks.",
description: "Users can access their own tasks, or need Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]
@ -84,12 +98,8 @@ async fn get_task_status(
let upid = extract_upid(&param)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if userid != upid.userid {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
}
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
check_task_access(&auth_id, &upid)?;
let mut result = json!({
"upid": param["upid"],
@ -99,9 +109,13 @@ async fn get_task_status(
"starttime": upid.starttime,
"type": upid.worker_type,
"id": upid.worker_id,
"user": upid.userid,
"user": upid.auth_id.user(),
});
if upid.auth_id.is_token() {
result["tokenid"] = Value::from(upid.auth_id.tokenname().unwrap().as_str());
}
if crate::server::worker_is_active(&upid).await? {
result["status"] = Value::from("running");
} else {
@ -161,12 +175,9 @@ async fn read_task_log(
let upid = extract_upid(&param)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
if userid != upid.userid {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
}
check_task_access(&auth_id, &upid)?;
let test_status = param["test-status"].as_bool().unwrap_or(false);
@ -234,11 +245,11 @@ fn stop_task(
let upid = extract_upid(&param)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
if userid != upid.userid {
if auth_id != upid.auth_id {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_MODIFY, false)?;
user_info.check_privs(&auth_id, &["system", "tasks"], PRIV_SYS_MODIFY, false)?;
}
server::abort_worker_async(upid);
@ -260,7 +271,7 @@ fn stop_task(
},
limit: {
type: u64,
description: "Only list this amount of tasks.",
description: "Only list this amount of tasks. (0 means no limit)",
default: 50,
optional: true,
},
@ -285,6 +296,29 @@ fn stop_task(
type: String,
description: "Only list tasks from this user.",
},
since: {
type: i64,
description: "Only list tasks since this UNIX epoch.",
optional: true,
},
until: {
type: i64,
description: "Only list tasks until this UNIX epoch.",
optional: true,
},
typefilter: {
optional: true,
type: String,
description: "Only list tasks whose type contains this.",
},
statusfilter: {
optional: true,
type: Array,
description: "Only list tasks which have any one of the listed status.",
items: {
type: TaskStateType,
},
},
},
},
returns: {
@ -304,32 +338,52 @@ pub fn list_tasks(
errors: bool,
running: bool,
userfilter: Option<String>,
since: Option<i64>,
until: Option<i64>,
typefilter: Option<String>,
statusfilter: Option<Vec<TaskStateType>>,
param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let user_privs = user_info.lookup_privs(&auth_id, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
let store = param["store"].as_str();
let list = TaskListInfoIterator::new(running)?;
let limit = if limit > 0 { limit as usize } else { usize::MAX };
let result: Vec<TaskListItem> = list
.take_while(|info| !info.is_err())
.skip_while(|info| {
match (info, until) {
(Ok(info), Some(until)) => info.upid.starttime > until,
(Ok(_), None) => false,
(Err(_), _) => false,
}
})
.take_while(|info| {
match (info, since) {
(Ok(info), Some(since)) => info.upid.starttime > since,
(Ok(_), None) => true,
(Err(_), _) => false,
}
})
.filter_map(|info| {
let info = match info {
Ok(info) => info,
Err(_) => return None,
};
if !list_all && info.upid.userid != userid { return None; }
if !list_all && check_task_access(&auth_id, &info.upid).is_err() {
return None;
}
if let Some(userid) = &userfilter {
if !info.upid.userid.as_str().contains(userid) { return None; }
if let Some(needle) = &userfilter {
if !info.upid.auth_id.to_string().contains(needle) { return None; }
}
if let Some(store) = store {
@ -342,7 +396,7 @@ pub fn list_tasks(
if info.upid.worker_type == "backup" || info.upid.worker_type == "restore" ||
info.upid.worker_type == "prune"
{
let prefix = format!("{}_", store);
let prefix = format!("{}:", store);
if !worker_id.starts_with(&prefix) { return None; }
} else if info.upid.worker_type == "garbage_collection" {
if worker_id != store { return None; }
@ -351,19 +405,31 @@ pub fn list_tasks(
}
}
match info.state {
Some(_) if running => return None,
Some(crate::server::TaskState::OK { .. }) if errors => return None,
if let Some(typefilter) = &typefilter {
if !info.upid.worker_type.contains(typefilter) {
return None;
}
}
match (&info.state, &statusfilter) {
(Some(_), _) if running => return None,
(Some(crate::server::TaskState::OK { .. }), _) if errors => return None,
(Some(state), Some(filters)) => {
if !filters.contains(&state.tasktype()) {
return None;
}
},
(None, Some(_)) => return None,
_ => {},
}
Some(info.into())
}).skip(start as usize)
.take(limit as usize)
.take(limit)
.collect();
let mut count = result.len() + start as usize;
if result.len() > 0 && result.len() >= limit as usize { // we have a 'virtual' entry as long as we have any new
if result.len() > 0 && result.len() >= limit { // we have a 'virtual' entry as long as we have any new
count += 1;
}

src/api2/ping.rs (new file, 29 lines)

@ -0,0 +1,29 @@
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{api, Router, Permission};
#[api(
returns: {
description: "Dummy ping",
type: Object,
properties: {
pong: {
description: "Always true",
type: bool,
}
}
},
access: {
description: "Anyone can access this, because it's used for a cheap check if the API daemon is online.",
permission: &Permission::World,
}
)]
/// Dummy method which replies with `{ "pong": True }`
fn ping() -> Result<Value, Error> {
Ok(json!({
"pong": true,
}))
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_PING);


@ -7,21 +7,20 @@ use futures::{select, future::FutureExt};
use proxmox::api::api;
use proxmox::api::{ApiMethod, Router, RpcEnvironment, Permission};
use crate::server::{WorkerTask};
use crate::server::{WorkerTask, jobstate::Job};
use crate::backup::DataStore;
use crate::client::{HttpClient, HttpClientOptions, BackupRepository, pull::pull_store};
use crate::api2::types::*;
use crate::config::{
remote,
sync::SyncJobConfig,
jobstate::Job,
acl::{PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ},
cached_user_info::CachedUserInfo,
};
pub fn check_pull_privs(
userid: &Userid,
auth_id: &Authid,
store: &str,
remote: &str,
remote_store: &str,
@ -30,11 +29,11 @@ pub fn check_pull_privs(
let user_info = CachedUserInfo::new()?;
user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(userid, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?;
user_info.check_privs(auth_id, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(auth_id, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?;
if delete {
user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?;
user_info.check_privs(auth_id, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?;
}
Ok(())
@ -57,7 +56,7 @@ pub async fn get_pull_parameters(
let src_repo = BackupRepository::new(Some(remote.userid.clone()), Some(remote.host.clone()), remote.port, remote_store.to_string());
let client = HttpClient::new(&src_repo.host(), src_repo.port(), &src_repo.user(), options)?;
let client = HttpClient::new(&src_repo.host(), src_repo.port(), &src_repo.auth_id(), options)?;
let _auth_info = client.login() // make sure we can auth
.await
.map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
@ -69,27 +68,31 @@ pub async fn get_pull_parameters(
pub fn do_sync_job(
mut job: Job,
sync_job: SyncJobConfig,
userid: &Userid,
auth_id: &Authid,
schedule: Option<String>,
) -> Result<String, Error> {
let job_id = job.jobname().to_string();
let worker_type = job.jobtype().to_string();
let (email, notify) = crate::server::lookup_datastore_notify_settings(&sync_job.store);
let upid_str = WorkerTask::spawn(
&worker_type,
Some(job.jobname().to_string()),
userid.clone(),
auth_id.clone(),
false,
move |worker| async move {
job.start(&worker.upid().to_string())?;
let worker2 = worker.clone();
let sync_job2 = sync_job.clone();
let worker_future = async move {
let delete = sync_job.remove_vanished.unwrap_or(true);
let sync_owner = sync_job.owner.unwrap_or(Authid::backup_auth_id().clone());
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
worker.log(format!("Starting datastore sync job '{}'", job_id));
@ -99,7 +102,7 @@ pub fn do_sync_job(
worker.log(format!("Sync datastore '{}' from '{}/{}'",
sync_job.store, sync_job.remote, sync_job.remote_store));
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, Userid::backup_userid().clone()).await?;
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, sync_owner).await?;
worker.log(format!("sync job '{}' end", &job_id));
@ -108,12 +111,12 @@ pub fn do_sync_job(
let mut abort_future = worker2.abort_future().map(|_| Err(format_err!("sync aborted")));
let res = select!{
let result = select!{
worker = worker_future.fuse() => worker,
abort = abort_future => abort,
};
let status = worker2.create_state(&res);
let status = worker2.create_state(&result);
match job.finish(status) {
Ok(_) => {},
@ -122,7 +125,13 @@ pub fn do_sync_job(
}
}
res
if let Some(email) = email {
if let Err(err) = crate::server::send_sync_status(&email, notify, &sync_job2, &result) {
eprintln!("send sync notification failed: {}", err);
}
}
result
})?;
Ok(upid_str)
@ -165,19 +174,19 @@ async fn pull (
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let delete = remove_vanished.unwrap_or(true);
check_pull_privs(&userid, &store, &remote, &remote_store, delete)?;
check_pull_privs(&auth_id, &store, &remote, &remote_store, delete)?;
let (client, src_repo, tgt_store) = get_pull_parameters(&store, &remote, &remote_store).await?;
// fixme: set to_stdout to false?
let upid_str = WorkerTask::spawn("sync", Some(store.clone()), userid.clone(), true, move |worker| async move {
let upid_str = WorkerTask::spawn("sync", Some(store.clone()), auth_id.clone(), true, move |worker| async move {
worker.log(format!("sync datastore '{}' start", store));
let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, userid);
let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, auth_id);
let future = select!{
success = pull_future.fuse() => success,
abort = worker.abort_future().map(|_| Err(format_err!("pull aborted"))) => abort,


@ -17,6 +17,7 @@ use crate::tools;
use crate::config::acl::{PRIV_DATASTORE_READ, PRIV_DATASTORE_BACKUP};
use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::helpers;
use crate::tools::fs::lock_dir_noblock_shared;
mod environment;
use environment::*;
@ -54,11 +55,11 @@ fn upgrade_to_backup_reader_protocol(
async move {
let debug = param["debug"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let store = tools::required_string_param(&param, "store")?.to_owned();
let user_info = CachedUserInfo::new()?;
let privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let priv_read = privs & PRIV_DATASTORE_READ != 0;
let priv_backup = privs & PRIV_DATASTORE_BACKUP != 0;
@ -93,21 +94,29 @@ fn upgrade_to_backup_reader_protocol(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
if !priv_read {
let owner = datastore.get_owner(backup_dir.group())?;
if owner != userid {
let correct_owner = owner == auth_id
|| (owner.is_token()
&& Authid::from(owner.user().clone()) == auth_id);
if !correct_owner {
bail!("backup owner check failed!");
}
}
let _guard = lock_dir_noblock_shared(
&datastore.snapshot_path(&backup_dir),
"snapshot",
"locked by another operation")?;
let path = datastore.base_path();
//let files = BackupInfo::list_files(&path, &backup_dir)?;
let worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_dir.backup_time());
let worker_id = format!("{}:{}/{}/{:08X}", store, backup_type, backup_id, backup_dir.backup_time());
WorkerTask::spawn("reader", Some(worker_id), userid.clone(), true, move |worker| {
WorkerTask::spawn("reader", Some(worker_id), auth_id.clone(), true, move |worker| {
let mut env = ReaderEnvironment::new(
env_type,
userid,
auth_id,
worker.clone(),
datastore,
backup_dir,
@ -146,11 +155,14 @@ fn upgrade_to_backup_reader_protocol(
use futures::future::Either;
futures::future::select(req_fut, abort_future)
.map(|res| match res {
.map(move |res| {
let _guard = _guard;
match res {
Either::Left((Ok(res), _)) => Ok(res),
Either::Left((Err(err), _)) => Err(err),
Either::Right((Ok(res), _)) => Ok(res),
Either::Right((Err(err), _)) => Err(err),
}
})
.map_ok(move |_| env.log("reader finished successfully"))
})?;


@ -5,7 +5,7 @@ use serde_json::{json, Value};
use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
use crate::api2::types::Userid;
use crate::api2::types::Authid;
use crate::backup::*;
use crate::server::formatter::*;
use crate::server::WorkerTask;
@ -17,7 +17,7 @@ use crate::server::WorkerTask;
pub struct ReaderEnvironment {
env_type: RpcEnvironmentType,
result_attributes: Value,
user: Userid,
auth_id: Authid,
pub debug: bool,
pub formatter: &'static OutputFormatter,
pub worker: Arc<WorkerTask>,
@ -29,7 +29,7 @@ pub struct ReaderEnvironment {
impl ReaderEnvironment {
pub fn new(
env_type: RpcEnvironmentType,
user: Userid,
auth_id: Authid,
worker: Arc<WorkerTask>,
datastore: Arc<DataStore>,
backup_dir: BackupDir,
@ -39,7 +39,7 @@ impl ReaderEnvironment {
Self {
result_attributes: json!({}),
env_type,
user,
auth_id,
worker,
datastore,
debug: false,
@ -82,12 +82,12 @@ impl RpcEnvironment for ReaderEnvironment {
self.env_type
}
fn set_user(&mut self, _user: Option<String>) {
panic!("unable to change user");
fn set_auth_id(&mut self, _auth_id: Option<String>) {
panic!("unable to change auth_id");
}
fn get_user(&self) -> Option<String> {
Some(self.user.to_string())
fn get_auth_id(&self) -> Option<String> {
Some(self.auth_id.to_string())
}
}


@ -16,18 +16,14 @@ use crate::api2::types::{
DATASTORE_SCHEMA,
RRDMode,
RRDTimeFrameResolution,
TaskListItem,
TaskStateType,
Userid,
Authid,
};
use crate::server;
use crate::backup::{DataStore};
use crate::config::datastore;
use crate::tools::statistics::{linear_regression};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::acl::{
PRIV_SYS_AUDIT,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
};
@ -87,13 +83,13 @@ fn datastore_status(
let (config, _digest) = datastore::config()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let mut list = Vec::new();
for (store, (_, _)) in &config.sections {
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0;
if !allowed {
continue;
@ -179,103 +175,8 @@ fn datastore_status(
Ok(list.into())
}
#[api(
input: {
properties: {
since: {
type: i64,
description: "Only list tasks since this UNIX epoch.",
optional: true,
},
typefilter: {
optional: true,
type: String,
description: "Only list tasks, whose type contains this string.",
},
statusfilter: {
optional: true,
type: Array,
description: "Only list tasks which have any one of the listed status.",
items: {
type: TaskStateType,
},
},
},
},
returns: {
description: "A list of tasks.",
type: Array,
items: { type: TaskListItem },
},
access: {
description: "Users can only see there own tasks, unless the have Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]
/// List tasks.
pub fn list_tasks(
since: Option<i64>,
typefilter: Option<String>,
statusfilter: Option<Vec<TaskStateType>>,
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
let since = since.unwrap_or_else(|| 0);
let list: Vec<TaskListItem> = server::TaskListInfoIterator::new(false)?
.take_while(|info| {
match info {
Ok(info) => info.upid.starttime > since,
Err(_) => false
}
})
.filter_map(|info| {
match info {
Ok(info) => {
if list_all || info.upid.userid == userid {
if let Some(filter) = &typefilter {
if !info.upid.worker_type.contains(filter) {
return None;
}
}
if let Some(filters) = &statusfilter {
if let Some(state) = &info.state {
let statetype = match state {
server::TaskState::OK { .. } => TaskStateType::OK,
server::TaskState::Unknown { .. } => TaskStateType::Unknown,
server::TaskState::Error { .. } => TaskStateType::Error,
server::TaskState::Warning { .. } => TaskStateType::Warning,
};
if !filters.contains(&statetype) {
return None;
}
}
}
Some(Ok(TaskListItem::from(info)))
} else {
None
}
}
Err(err) => Some(Err(err))
}
})
.collect::<Result<Vec<TaskListItem>, Error>>()?;
Ok(list.into())
}
const SUBDIRS: SubdirMap = &[
("datastore-usage", &Router::new().get(&API_METHOD_DATASTORE_STATUS)),
("tasks", &Router::new().get(&API_METHOD_LIST_TASKS)),
];
pub const ROUTER: Router = Router::new()

View File

@ -5,7 +5,7 @@ use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::CryptMode;
use crate::backup::{CryptMode, BACKUP_ID_REGEX};
use crate::server::UPID;
#[macro_use]
@ -14,9 +14,11 @@ mod macros;
#[macro_use]
mod userid;
pub use userid::{Realm, RealmRef};
pub use userid::{Tokenname, TokennameRef};
pub use userid::{Username, UsernameRef};
pub use userid::Userid;
pub use userid::PROXMOX_GROUP_ID_SCHEMA;
pub use userid::Authid;
pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GROUP_ID_SCHEMA};
// File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
@ -65,12 +67,14 @@ const_regex!{
pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), "|", APITOKEN_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
pub SUBSCRIPTION_KEY_REGEX = concat!(r"^pbs(?:[cbsp])-[0-9a-f]{10}$");
pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$";
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
@ -97,6 +101,9 @@ pub const CERT_FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const BACKUP_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BACKUP_ID_REGEX);
pub const SINGLE_LINE_COMMENT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SINGLE_LINE_COMMENT_REGEX);
@ -127,6 +134,9 @@ pub const CIDR_V6_FORMAT: ApiStringFormat =
pub const CIDR_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&CIDR_REGEX);
pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SUBSCRIPTION_KEY_REGEX);
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
@ -269,7 +279,7 @@ pub const BACKUP_TYPE_SCHEMA: Schema =
pub const BACKUP_ID_SCHEMA: Schema =
StringSchema::new("Backup ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.format(&BACKUP_ID_FORMAT)
.schema();
pub const BACKUP_TIME_SCHEMA: Schema =
@ -302,7 +312,7 @@ pub const PRUNE_SCHEDULE_SCHEMA: Schema = StringSchema::new(
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
.schema();
pub const VERIFY_SCHEDULE_SCHEMA: Schema = StringSchema::new(
pub const VERIFICATION_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run verify job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
.schema();
@ -324,6 +334,16 @@ pub const REMOVE_VANISHED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
.default(true)
.schema();
pub const IGNORE_VERIFIED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Do not verify backups that are already verified if their verification is not outdated.")
.default(true)
.schema();
pub const VERIFICATION_OUTDATED_AFTER_SCHEMA: Schema = IntegerSchema::new(
"Days after that a verification becomes outdated")
.minimum(1)
.schema();
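Taken together, these two schemas drive the verification job's skip logic; the consumer is the filter callback passed to verify_backup_dir further down. A minimal sketch of such a filter, assuming the last verification result is stored as a SnapshotVerifyState under manifest.unprotected["verify_state"] (as the verify code below suggests):

let outdated_after: i64 = 30; // days, per VERIFICATION_OUTDATED_AFTER_SCHEMA
let now = proxmox::tools::time::epoch_i64();
let filter = move |manifest: &BackupManifest| -> bool {
    // return true to (re-)verify, false to skip
    match serde_json::from_value::<SnapshotVerifyState>(manifest.unprotected["verify_state"].clone()) {
        Ok(last) => now - last.upid.starttime > outdated_after * 24 * 3600,
        Err(_) => true, // never verified (or unreadable state): verify
    }
};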
pub const SINGLE_LINE_COMMENT_SCHEMA: Schema = StringSchema::new("Comment (single line).")
.format(&SINGLE_LINE_COMMENT_FORMAT)
.schema();
@ -336,6 +356,12 @@ pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP addr
.format(&DNS_NAME_OR_IP_FORMAT)
.schema();
pub const SUBSCRIPTION_KEY_SCHEMA: Schema = StringSchema::new("Proxmox Backup Server subscription key.")
.format(&SUBSCRIPTION_KEY_FORMAT)
.min_length(15)
.max_length(16)
.schema();
pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name (/sys/block/<name>).")
.format(&BLOCKDEVICE_NAME_FORMAT)
.min_length(3)
@ -364,7 +390,7 @@ pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name
},
},
owner: {
type: Userid,
type: Authid,
optional: true,
},
},
@ -382,7 +408,7 @@ pub struct GroupListItem {
pub files: Vec<String>,
/// The owner of group
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Userid>,
pub owner: Option<Authid>,
}
#[api()]
@ -440,7 +466,7 @@ pub struct SnapshotVerifyState {
},
},
owner: {
type: Userid,
type: Authid,
optional: true,
},
},
@ -465,7 +491,7 @@ pub struct SnapshotListItem {
pub size: Option<u64>,
/// The owner of the snapshots group
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Userid>,
pub owner: Option<Authid>,
}
#[api(
@ -577,6 +603,8 @@ pub struct GarbageCollectionStatus {
pub pending_chunks: usize,
/// Number of chunks marked as .bad by verify that have been removed by GC.
pub removed_bad: usize,
/// Number of chunks still marked as .bad after garbage collection.
pub still_bad: usize,
}
impl Default for GarbageCollectionStatus {
@ -592,6 +620,7 @@ impl Default for GarbageCollectionStatus {
pending_bytes: 0,
pending_chunks: 0,
removed_bad: 0,
still_bad: 0,
}
}
}
@ -609,10 +638,75 @@ pub struct StorageStatus {
pub avail: u64,
}
#[api()]
#[derive(Serialize, Deserialize, Default)]
/// Backup Type group/snapshot counts.
pub struct TypeCounts {
/// The number of groups of the type.
pub groups: u64,
/// The number of snapshots of the type.
pub snapshots: u64,
}
#[api(
properties: {
ct: {
type: TypeCounts,
optional: true,
},
host: {
type: TypeCounts,
optional: true,
},
vm: {
type: TypeCounts,
optional: true,
},
other: {
type: TypeCounts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
/// Counts of groups/snapshots per BackupType.
pub struct Counts {
/// The counts for CT backups
pub ct: Option<TypeCounts>,
/// The counts for Host backups
pub host: Option<TypeCounts>,
/// The counts for VM backups
pub vm: Option<TypeCounts>,
/// The counts for other backup types
pub other: Option<TypeCounts>,
}
#[api(
properties: {
"gc-status": { type: GarbageCollectionStatus, },
counts: { type: Counts, }
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Overall Datastore status and useful information.
pub struct DataStoreStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
/// Status of last GC
pub gc_status: GarbageCollectionStatus,
/// Group/Snapshot counts
pub counts: Counts,
}
#[api(
properties: {
upid: { schema: UPID_SCHEMA },
user: { type: Userid },
user: { type: Authid },
},
)]
#[derive(Serialize, Deserialize)]
@ -631,8 +725,8 @@ pub struct TaskListItem {
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The user who started the task
pub user: Userid,
/// The authenticated entity who started the task
pub user: Authid,
/// The task end time (Epoch)
#[serde(skip_serializing_if="Option::is_none")]
pub endtime: Option<i64>,
@ -655,7 +749,7 @@ impl From<crate::server::TaskListInfo> for TaskListItem {
starttime: info.upid.starttime,
worker_type: info.upid.worker_type,
worker_id: info.upid.worker_id,
user: info.upid.userid,
user: info.upid.auth_id,
endtime,
status,
}
@ -1035,7 +1129,7 @@ pub enum RRDTimeFrameResolution {
}
#[api()]
#[derive(Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
@ -1060,3 +1154,16 @@ pub struct APTUpdateInfo {
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// When do we send notifications
pub enum Notify {
/// Never send notification
Never,
/// Send notifications for failed and successful jobs
Always,
/// Send notifications for failed jobs only
Error,
}
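How a job outcome maps onto these variants, as a hypothetical helper (the function and its parameters are illustrative, not part of this changeset):

fn should_notify(policy: Notify, job_failed: bool) -> bool {
    match policy {
        Notify::Never => false,
        Notify::Always => true,
        Notify::Error => job_failed,
    }
}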

View File

@ -1,6 +1,7 @@
//! Types for user handling.
//!
//! We have [`Username`]s and [`Realm`]s. To uniquely identify a user, they must be combined into a [`Userid`].
//! We have [`Username`]s, [`Realm`]s and [`Tokenname`]s. To uniquely identify a user/API token, they
//! must be combined into a [`Userid`] or [`Authid`].
//!
//! Since they're all string types, they're organized as follows:
//!
@ -9,13 +10,16 @@
//! with `String`, meaning you can only make references to it.
//! * [`Realm`]: an owned realm (`String` equivalent).
//! * [`RealmRef`]: a borrowed realm (`str` equivalent).
//! * [`Userid`]: an owned user id (`"user@realm"`). Note that this does not have a separate
//! borrowed type.
//! * [`Tokenname`]: an owned API token name (`String` equivalent)
//! * [`TokennameRef`]: a borrowed `Tokenname` (`str` equivalent).
//! * [`Userid`]: an owned user id (`"user@realm"`).
//! * [`Authid`]: an owned Authentication ID (a `Userid` with an optional `Tokenname`).
//! Note that `Userid` and `Authid` do not have a separate borrowed type.
//!
//! Note that `Username`s are not unique, therefore they do not implement `Eq` and cannot be
//! Note that `Username`s and `Tokenname`s are not unique, therefore they do not implement `Eq` and cannot be
//! compared directly. If a direct comparison is really required, they can be compared as strings
//! via the `as_str()` method. [`Realm`]s and [`Userid`]s on the other hand can be compared with
//! each other, as in those two cases the comparison has meaning.
//! via the `as_str()` method. [`Realm`]s, [`Userid`]s and [`Authid`]s on the other
//! hand can be compared with each other, as in those cases the comparison has meaning.
use std::borrow::Borrow;
use std::convert::TryFrom;
@ -36,19 +40,42 @@ use proxmox::const_regex;
// also see "man useradd"
macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
macro_rules! GROUP_NAME_REGEX_STR { () => (USER_NAME_REGEX_STR!()) }
macro_rules! TOKEN_NAME_REGEX_STR { () => (PROXMOX_SAFE_ID_REGEX_STR!()) }
macro_rules! USER_ID_REGEX_STR { () => (concat!(USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!())) }
macro_rules! APITOKEN_ID_REGEX_STR { () => (concat!(USER_ID_REGEX_STR!() , r"!", TOKEN_NAME_REGEX_STR!())) }
const_regex! {
pub PROXMOX_USER_NAME_REGEX = concat!(r"^", USER_NAME_REGEX_STR!(), r"$");
pub PROXMOX_TOKEN_NAME_REGEX = concat!(r"^", TOKEN_NAME_REGEX_STR!(), r"$");
pub PROXMOX_USER_ID_REGEX = concat!(r"^", USER_ID_REGEX_STR!(), r"$");
pub PROXMOX_APITOKEN_ID_REGEX = concat!(r"^", APITOKEN_ID_REGEX_STR!(), r"$");
pub PROXMOX_AUTH_ID_REGEX = concat!(r"^", r"(?:", USER_ID_REGEX_STR!(), r"|", APITOKEN_ID_REGEX_STR!(), r")$");
pub PROXMOX_GROUP_ID_REGEX = concat!(r"^", GROUP_NAME_REGEX_STR!(), r"$");
}
pub const PROXMOX_USER_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_NAME_REGEX);
pub const PROXMOX_TOKEN_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_TOKEN_NAME_REGEX);
pub const PROXMOX_USER_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_ID_REGEX);
pub const PROXMOX_TOKEN_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_APITOKEN_ID_REGEX);
pub const PROXMOX_AUTH_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_AUTH_ID_REGEX);
pub const PROXMOX_TOKEN_ID_SCHEMA: Schema = StringSchema::new("API Token ID")
.format(&PROXMOX_TOKEN_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const PROXMOX_TOKEN_NAME_SCHEMA: Schema = StringSchema::new("API Token name")
.format(&PROXMOX_TOKEN_NAME_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const PROXMOX_GROUP_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_GROUP_ID_REGEX);
@ -91,26 +118,6 @@ pub struct Username(String);
#[derive(Debug, Hash)]
pub struct UsernameRef(str);
#[doc(hidden)]
/// ```compile_fail
/// let a: Username = unsafe { std::mem::zeroed() };
/// let b: Username = unsafe { std::mem::zeroed() };
/// let _ = <Username as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(a, b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(&a, &b);
/// ```
struct _AssertNoEqImpl;
impl UsernameRef {
fn new(s: &str) -> &Self {
unsafe { &*(s as *const str as *const UsernameRef) }
@ -286,7 +293,132 @@ impl PartialEq<Realm> for &RealmRef {
}
}
/// A complete user id consting of a user name and a realm.
#[api(
type: String,
format: &PROXMOX_TOKEN_NAME_FORMAT,
)]
/// The token ID part of an API token authentication id.
///
/// This alone does NOT uniquely identify the API token and therefore does not implement `Eq`. In
/// order to compare token IDs directly, they need to be explicitly compared as strings by calling
/// `.as_str()`.
///
/// ```compile_fail
/// fn test(a: Tokenname, b: Tokenname) -> bool {
/// a == b // illegal and does not compile
/// }
/// ```
#[derive(Clone, Debug, Hash, Deserialize, Serialize)]
pub struct Tokenname(String);
/// A reference to a token name part of an authentication id. This alone does NOT uniquely identify
/// the user.
///
/// This is like a `str` to the `String` of a [`Tokenname`].
#[derive(Debug, Hash)]
pub struct TokennameRef(str);
#[doc(hidden)]
/// ```compile_fail
/// let a: Username = unsafe { std::mem::zeroed() };
/// let b: Username = unsafe { std::mem::zeroed() };
/// let _ = <Username as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(a, b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: Tokenname = unsafe { std::mem::zeroed() };
/// let b: Tokenname = unsafe { std::mem::zeroed() };
/// let _ = <Tokenname as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: &TokennameRef = unsafe { std::mem::zeroed() };
/// let b: &TokennameRef = unsafe { std::mem::zeroed() };
/// let _ = <&TokennameRef as PartialEq>::eq(a, b);
/// ```
///
/// ```compile_fail
/// let a: &TokennameRef = unsafe { std::mem::zeroed() };
/// let b: &TokennameRef = unsafe { std::mem::zeroed() };
/// let _ = <&TokennameRef as PartialEq>::eq(&a, &b);
/// ```
struct _AssertNoEqImpl;
impl TokennameRef {
fn new(s: &str) -> &Self {
unsafe { &*(s as *const str as *const TokennameRef) }
}
pub fn as_str(&self) -> &str {
&self.0
}
}
impl std::ops::Deref for Tokenname {
type Target = TokennameRef;
fn deref(&self) -> &TokennameRef {
self.borrow()
}
}
impl Borrow<TokennameRef> for Tokenname {
fn borrow(&self) -> &TokennameRef {
TokennameRef::new(self.0.as_str())
}
}
impl AsRef<TokennameRef> for Tokenname {
fn as_ref(&self) -> &TokennameRef {
self.borrow()
}
}
impl ToOwned for TokennameRef {
type Owned = Tokenname;
fn to_owned(&self) -> Self::Owned {
Tokenname(self.0.to_owned())
}
}
impl TryFrom<String> for Tokenname {
type Error = Error;
fn try_from(s: String) -> Result<Self, Error> {
if !PROXMOX_TOKEN_NAME_REGEX.is_match(&s) {
bail!("invalid token name");
}
Ok(Self(s))
}
}
impl<'a> TryFrom<&'a str> for &'a TokennameRef {
type Error = Error;
fn try_from(s: &'a str) -> Result<&'a TokennameRef, Error> {
if !PROXMOX_TOKEN_NAME_REGEX.is_match(s) {
bail!("invalid token name in user id");
}
Ok(TokennameRef::new(s))
}
}
/// A complete user id consisting of a user name and a realm
#[derive(Clone, Debug, Hash)]
pub struct Userid {
data: String,
@ -342,6 +474,12 @@ impl PartialEq for Userid {
}
}
impl From<Authid> for Userid {
fn from(authid: Authid) -> Self {
authid.user
}
}
impl From<(Username, Realm)> for Userid {
fn from(parts: (Username, Realm)) -> Self {
Self::from((parts.0.as_ref(), parts.1.as_ref()))
@ -366,10 +504,18 @@ impl std::str::FromStr for Userid {
type Err = Error;
fn from_str(id: &str) -> Result<Self, Error> {
let (name, realm) = match id.as_bytes().iter().rposition(|&b| b == b'@') {
Some(pos) => (&id[..pos], &id[(pos + 1)..]),
None => bail!("not a valid user id"),
};
let name_len = id
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
let name = &id[..name_len];
let realm = &id[(name_len + 1)..];
if !PROXMOX_USER_NAME_REGEX.is_match(name) {
bail!("invalid user name in user id");
}
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(realm)
.map_err(|_| format_err!("invalid realm in user id"))?;
@ -388,6 +534,10 @@ impl TryFrom<String> for Userid {
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
if !PROXMOX_USER_NAME_REGEX.is_match(&data[..name_len]) {
bail!("invalid user name in user id");
}
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(&data[(name_len + 1)..])
.map_err(|_| format_err!("invalid realm in user id"))?;
@ -413,5 +563,182 @@ impl PartialEq<String> for Userid {
}
}
/// A complete authentication id consisting of a user id and an optional token name.
#[derive(Clone, Debug, Hash)]
pub struct Authid {
user: Userid,
tokenname: Option<Tokenname>
}
impl Authid {
pub const API_SCHEMA: Schema = StringSchema::new("Authentication ID")
.format(&PROXMOX_AUTH_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
const fn new(user: Userid, tokenname: Option<Tokenname>) -> Self {
Self { user, tokenname }
}
pub fn user(&self) -> &Userid {
&self.user
}
pub fn is_token(&self) -> bool {
self.tokenname.is_some()
}
pub fn tokenname(&self) -> Option<&TokennameRef> {
match &self.tokenname {
Some(name) => Some(&name),
None => None,
}
}
/// Get the "backup@pam" auth id.
pub fn backup_auth_id() -> &'static Self {
&*BACKUP_AUTHID
}
/// Get the "root@pam" auth id.
pub fn root_auth_id() -> &'static Self {
&*ROOT_AUTHID
}
}
lazy_static! {
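    // the numeric argument is the user-name length, i.e. the byte offset of the
    // '@' separator (6 for "backup", 4 for "root")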
pub static ref BACKUP_AUTHID: Authid = Authid::from(Userid::new("backup@pam".to_string(), 6));
pub static ref ROOT_AUTHID: Authid = Authid::from(Userid::new("root@pam".to_string(), 4));
}
impl Eq for Authid {}
impl PartialEq for Authid {
fn eq(&self, rhs: &Self) -> bool {
self.user == rhs.user && match (&self.tokenname, &rhs.tokenname) {
(Some(ours), Some(theirs)) => ours.as_str() == theirs.as_str(),
(None, None) => true,
_ => false,
}
}
}
impl From<Userid> for Authid {
fn from(parts: Userid) -> Self {
Self::new(parts, None)
}
}
impl From<(Userid, Option<Tokenname>)> for Authid {
fn from(parts: (Userid, Option<Tokenname>)) -> Self {
Self::new(parts.0, parts.1)
}
}
impl fmt::Display for Authid {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match &self.tokenname {
Some(token) => write!(f, "{}!{}", self.user, token.as_str()),
None => self.user.fmt(f),
}
}
}
impl std::str::FromStr for Authid {
type Err = Error;
fn from_str(id: &str) -> Result<Self, Error> {
let name_len = id
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
let realm_end = id
.as_bytes()
.iter()
.rposition(|&b| b == b'!')
.map(|pos| if pos < name_len { id.len() } else { pos })
.unwrap_or(id.len());
if realm_end == id.len() - 1 {
bail!("empty token name in userid");
}
let user = Userid::from_str(&id[..realm_end])?;
if id.len() > realm_end {
let token = Tokenname::try_from(id[(realm_end + 1)..].to_string())?;
Ok(Self::new(user, Some(token)))
} else {
Ok(Self::new(user, None))
}
}
}
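// Worked examples for the '@'/'!' split above (illustrative):
//   "test@pam!bar": last '@' at 4, last '!' at 8 (after the '@'), so
//                   user = "test@pam", token = "bar"
//   "us!er@pam":    last '!' at 2 lies before the '@' at 5, so the '!' is
//                   part of the user name and no token is split off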
impl TryFrom<String> for Authid {
type Error = Error;
fn try_from(mut data: String) -> Result<Self, Error> {
let name_len = data
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
let realm_end = data
.as_bytes()
.iter()
.rposition(|&b| b == b'!')
.map(|pos| if pos < name_len { data.len() } else { pos })
.unwrap_or(data.len());
if realm_end == data.len() - 1 {
bail!("empty token name in userid");
}
let tokenname = if data.len() > realm_end {
Some(Tokenname::try_from(data[(realm_end + 1)..].to_string())?)
} else {
None
};
data.truncate(realm_end);
let user: Userid = data.parse()?;
Ok(Self { user, tokenname })
}
}
#[test]
fn test_token_id() {
let userid: Userid = "test@pam".parse().expect("parsing Userid failed");
assert_eq!(userid.name().as_str(), "test");
assert_eq!(userid.realm(), "pam");
assert_eq!(userid, "test@pam");
let auth_id: Authid = "test@pam".parse().expect("parsing user Authid failed");
assert_eq!(auth_id.to_string(), "test@pam".to_string());
assert!(!auth_id.is_token());
assert_eq!(auth_id.user(), &userid);
let user_auth_id = Authid::from(userid.clone());
assert_eq!(user_auth_id, auth_id);
assert!(!user_auth_id.is_token());
let auth_id: Authid = "test@pam!bar".parse().expect("parsing token Authid failed");
let token_userid = auth_id.user();
assert_eq!(&userid, token_userid);
assert!(auth_id.is_token());
assert_eq!(auth_id.tokenname().expect("Token has tokenname").as_str(), TokennameRef::new("bar").as_str());
assert_eq!(auth_id.to_string(), "test@pam!bar".to_string());
}
proxmox::forward_deserialize_to_from_str!(Userid);
proxmox::forward_serialize_to_display!(Userid);
proxmox::forward_deserialize_to_from_str!(Authid);
proxmox::forward_serialize_to_display!(Authid);

View File

@ -1,107 +1,146 @@
//! This module implements the proxmox backup data storage
//! This module implements the data storage and access layer.
//!
//! Proxmox backup splits large files into chunks, and stores them
//! deduplicated using a content addressable storage format.
//! # Data formats
//!
//! A chunk is simply defined as binary blob, which is stored inside a
//! `ChunkStore`, addressed by the SHA256 digest of the binary blob.
//! PBS splits large files into chunks, and stores them deduplicated using
//! a content addressable storage format.
//!
//! Index files are used to reconstruct the original file. They
//! basically contain a list of SHA256 checksums. The `DynamicIndex*`
//! format is able to deal with dynamic chunk sizes, whereas the
//! `FixedIndex*` format is an optimization to store a list of equal
//! sized chunks.
//! Backup snapshots are stored as folders containing a manifest file and
//! potentially one or more index or blob files.
//!
//! # ChunkStore Locking
//! The manifest contains hashes of all other files and can be signed by
//! the client.
//!
//! We need to be able to restart the proxmox-backup service daemons,
//! so that we can update the software without rebooting the host. But
//! such restarts must not abort running backup jobs, so we need to
//! keep the old service running until those jobs are finished. This
//! implies that we need some kind of locking for the
//! ChunkStore. Please note that it is perfectly valid to have
//! multiple parallel ChunkStore writers, even when they write the
//! same chunk (because the chunk would have the same name and the
//! same data). The only real problem is garbage collection, because
//! we need to avoid deleting chunks which are still referenced.
//! Blob files contain data directly. They are used for config files and
//! the like.
//!
//! * Read Index Files:
//! Index files are used to reconstruct an original file. They contain a
//! list of SHA256 checksums. The `DynamicIndex*` format is able to deal
//! with dynamic chunk sizes (CT and host backups), whereas the
//! `FixedIndex*` format is an optimization to store a list of equal sized
//! chunks (VMs, whole block devices).
//!
//! Acquire shared lock for .idx files.
//!
//!
//! * Delete Index Files:
//!
//! Acquire exclusive lock for .idx files. This makes sure that we do
//! not delete index files while they are still in use.
//!
//!
//! * Create Index Files:
//!
//! Acquire shared lock for ChunkStore (process wide).
//!
//! Note: When creating .idx files, we create temporary a (.tmp) file,
//! then do an atomic rename ...
//!
//!
//! * Garbage Collect:
//!
//! Acquire exclusive lock for ChunkStore (process wide). If we have
//! already a shared lock for the ChunkStore, try to upgrade that
//! lock.
//!
//!
//! * Server Restart
//!
//! Try to abort the running garbage collection to release exclusive
//! ChunkStore locks ASAP. Start the new service with the existing listening
//! socket.
//! A chunk is defined as a binary blob, which is stored inside a
//! [ChunkStore](struct.ChunkStore.html) instead of the backup directory
//! directly, and can be addressed by its SHA256 digest.
//!
//!
//! # Garbage Collection (GC)
//!
//! Deleting backups is as easy as deleting the corresponding .idx
//! files. Unfortunately, this does not free up any storage, because
//! those files just contain references to chunks.
//! Deleting backups is as easy as deleting the corresponding .idx files.
//! However, this does not free up any storage, because those files just
//! contain references to chunks.
//!
//! To free up some storage, we run a garbage collection process at
//! regular intervals. The collector uses a mark and sweep
//! approach. In the first phase, it scans all .idx files to mark used
//! chunks. The second phase then removes all unmarked chunks from the
//! store.
//! regular intervals. The collector uses a mark and sweep approach. In
//! the first phase, it scans all .idx files to mark used chunks. The
//! second phase then removes all unmarked chunks from the store.
//!
//! The above locking mechanism makes sure that we are the only
//! process running GC. But we still want to be able to create backups
//! during GC, so there may be multiple backup threads/tasks
//! running. Either started before GC started, or started while GC is
//! running.
//! The locking mechanisms mentioned below make sure that we are the only
//! process running GC. We still want to be able to create backups during
//! GC, so there may be multiple backup threads/tasks running, either
//! started before GC, or while GC is running.
//!
//! ## `atime` based GC
//!
//! The idea here is to mark chunks by updating the `atime` (access
//! timestamp) on the chunk file. This is quite simple and does not
//! need additional RAM.
//! timestamp) on the chunk file. This is quite simple and does not need
//! additional RAM.
//!
//! One minor problem is that recent Linux versions use the `relatime`
//! mount flag by default for performance reasons (yes, we want
//! that). When enabled, `atime` data is written to the disk only if
//! the file has been modified since the `atime` data was last updated
//! (`mtime`), or if the file was last accessed more than a certain
//! amount of time ago (by default 24h). So we may only delete chunks
//! with `atime` older than 24 hours.
//!
//! Another problem arises from running backups. The mark phase does
//! not find any chunks from those backups, because there is no .idx
//! file for them (created after the backup). Chunks created or
//! touched by those backups may have an `atime` as old as the start
//! time of those backups. Please note that the backup start time may
//! predate the GC start time. So we may only delete chunks older than
//! the start time of those running backup jobs.
//! mount flag by default for performance reasons (and we want that). When
//! enabled, `atime` data is written to the disk only if the file has been
//! modified since the `atime` data was last updated (`mtime`), or if the
//! file was last accessed more than a certain amount of time ago (by
//! default 24h). So we may only delete chunks with `atime` older than 24
//! hours.
//!
//! Another problem arises from running backups. The mark phase does not
//! find any chunks from those backups, because there is no .idx file for
//! them (created after the backup). Chunks created or touched by those
//! backups may have an `atime` as old as the start time of those backups.
//! Please note that the backup start time may predate the GC start time.
//! So we may only delete chunks older than the start time of those
//! running backup jobs, which might be more than 24h back (this is the
//! reason why ProcessLocker exclusive locks only have to be exclusive
//! between processes, since within one we can determine the age of the
//! oldest shared lock).
//!
//! ## Store `marks` in RAM using a HASH
//!
//! Not sure if this is better. TODO
//! Might be better. Under investigation.
//!
//!
//! # Locking
//!
//! Since PBS allows multiple potentially interfering operations at the
//! same time (e.g. garbage collect, prune, multiple backup creations
//! (only in separate groups), forget, ...), these need to lock against
//! each other in certain scenarios. There is no overarching global lock
//! though, instead always the finest grained lock possible is used,
//! because running these operations concurrently is treated as a feature
//! on its own.
//!
//! ## Inter-process Locking
//!
//! We need to be able to restart the proxmox-backup service daemons, so
//! that we can update the software without rebooting the host. But such
//! restarts must not abort running backup jobs, so we need to keep the
//! old service running until those jobs are finished. This implies that
//! we need some kind of locking for modifying chunks and indices in the
//! ChunkStore.
//!
//! Please note that it is perfectly valid to have multiple
//! parallel ChunkStore writers, even when they write the same chunk
//! (because the chunk would have the same name and the same data, and
//! writes are completed atomically via a rename). The only problem is
//! garbage collection, because we need to avoid deleting chunks which are
//! still referenced.
//!
//! To do this we use the
//! [ProcessLocker](../tools/struct.ProcessLocker.html).
//!
//! ### ChunkStore-wide
//!
//! * Create Index Files:
//!
//! Acquire shared lock for ChunkStore.
//!
//! Note: When creating .idx files, we create a temporary .tmp file,
//! then do an atomic rename.
//!
//! * Garbage Collect:
//!
//! Acquire exclusive lock for ChunkStore. If we have
//! already a shared lock for the ChunkStore, try to upgrade that
//! lock.
//!
//! Exclusive locks only work _between processes_. It is valid to have an
//! exclusive and one or more shared locks held within one process. Writing
//! chunks within one process is synchronized using the gc_mutex.
//!
//! On server restart, we stop any running GC in the old process to avoid
//! having the exclusive lock held for too long.
//!
//! ## Locking table
//!
//! Below table shows all operations that play a role in locking, and which
//! mechanisms are used to make their concurrent usage safe.
//!
//! | starting ><br>v during | read index file | create index file | GC mark | GC sweep | update manifest | forget | prune | create backup | verify | reader api |
//! |-|-|-|-|-|-|-|-|-|-|-|
//! | **read index file** | / | / | / | / | / | mmap stays valid, oldest_shared_lock prevents GC | see forget column | / | / | / |
//! | **create index file** | / | / | / | / | / | / | / | /, happens at the end, after all chunks are touched | /, only happens without a manifest | / |
//! | **GC mark** | / | Datastore process-lock shared | gc_mutex, exclusive ProcessLocker | gc_mutex | /, GC only cares about index files, not manifests | tells GC about removed chunks | see forget column | /, index files don't exist yet | / | / |
//! | **GC sweep** | / | Datastore process-lock shared | gc_mutex, exclusive ProcessLocker | gc_mutex | / | /, chunks already marked | see forget column | chunks get touched; chunk_store.mutex; oldest PL lock | / | / |
//! | **update manifest** | / | / | / | / | update_manifest lock | update_manifest lock, remove dir under lock | see forget column | /, “write manifest” happens at the end | /, can call “write manifest”, see that column | / |
//! | **forget** | / | / | removed_during_gc mutex is held during unlink | marking done, doesn't matter if forgotten now | update_manifest lock, forget waits for lock | /, unlink is atomic | causes forget to fail, but that's OK | running backup has snapshot flock | /, potentially detects missing folder | shared snap flock |
//! | **prune** | / | / | see forget row | see forget row | see forget row | causes warn in prune, but no error | see forget column | running and last non-running can't be pruned | see forget row | shared snap flock |
//! | **create backup** | / | only time this happens, thus has snapshot flock | / | chunks get touched; chunk_store.mutex; oldest PL lock | no lock, but cannot exist beforehand | snapshot flock, can't be forgotten | running and last non-running can't be pruned | snapshot group flock, only one running per group | /, won't be verified since manifest missing | / |
//! | **verify** | / | / | / | / | see “update manifest” row | /, potentially detects missing folder | see forget column | / | /, but useless (“update manifest” protects itself) | / |
//! | **reader api** | / | / | / | /, open snap can't be forgotten, so ref must exist | / | prevented by shared snap flock | prevented by shared snap flock | / | / | /, lock is shared |
//! * / = no interaction
//! * shared/exclusive from POV of 'starting' process
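The two constraints above (relatime's 24h window and the oldest running writer) combine into a single sweep cutoff; a sketch with illustrative names, not code from this changeset:

// A chunk may only be swept if its atime is older than both limits:
fn may_sweep(chunk_atime: i64, gc_start: i64, oldest_writer: i64) -> bool {
    let cutoff = std::cmp::min(gc_start - 24 * 3600, oldest_writer);
    chunk_atime < cutoff
}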
use anyhow::{bail, Error};

View File

@ -15,6 +15,17 @@ use super::IndexFile;
use super::read_chunk::AsyncReadChunk;
use super::index::ChunkReadInfo;
// FIXME: This enum may not be required?
// - Put the `WaitForData` case directly into a `read_future: Option<>`
// - make the read loop as follows:
// * if read_buffer is not empty:
// use it
// * else if read_future is there:
// poll it
// if read: move data to read_buffer
// * else
// create read future
#[allow(clippy::enum_variant_names)]
enum AsyncIndexReaderState<S> {
NoData,
WaitForData(Pin<Box<dyn Future<Output = Result<(S, Vec<u8>), Error>> + Send + 'static>>),
@ -118,9 +129,8 @@ where
}
AsyncIndexReaderState::WaitForData(ref mut future) => {
match ready!(future.as_mut().poll(cx)) {
Ok((store, mut chunk_data)) => {
this.read_buffer.clear();
this.read_buffer.append(&mut chunk_data);
Ok((store, chunk_data)) => {
this.read_buffer = chunk_data;
this.state = AsyncIndexReaderState::HaveData;
this.store = Some(store);
}

View File

@ -1,37 +1,31 @@
use crate::tools;
use anyhow::{bail, format_err, Error};
use regex::Regex;
use std::os::unix::io::RawFd;
use std::path::{PathBuf, Path};
use lazy_static::lazy_static;
use proxmox::const_regex;
use super::manifest::MANIFEST_BLOB_NAME;
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9][A-Za-z0-9_-]+") }
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
macro_rules! BACKUP_TIME_RE { () => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z") }
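// Illustrative effect of the widened BACKUP_ID_RE (old vs new pattern above):
//   "a"          rejected before (two chars minimum), accepted now
//   "vm.100"     rejected before (no dots allowed), accepted now
//   "_template"  rejected before (had to start alphanumeric), accepted now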
lazy_static!{
static ref BACKUP_FILE_REGEX: Regex = Regex::new(
r"^.*\.([fd]idx|blob)$").unwrap();
const_regex!{
BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
static ref BACKUP_TYPE_REGEX: Regex = Regex::new(
concat!(r"^(", BACKUP_TYPE_RE!(), r")$")).unwrap();
BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
static ref BACKUP_ID_REGEX: Regex = Regex::new(
concat!(r"^", BACKUP_ID_RE!(), r"$")).unwrap();
pub BACKUP_ID_REGEX = concat!(r"^", BACKUP_ID_RE!(), r"$");
static ref BACKUP_DATE_REGEX: Regex = Regex::new(
concat!(r"^", BACKUP_TIME_RE!() ,r"$")).unwrap();
BACKUP_DATE_REGEX = concat!(r"^", BACKUP_TIME_RE!() ,r"$");
static ref GROUP_PATH_REGEX: Regex = Regex::new(
concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$")).unwrap();
static ref SNAPSHOT_PATH_REGEX: Regex = Regex::new(
concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")$")).unwrap();
GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
SNAPSHOT_PATH_REGEX = concat!(
r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")$");
}
/// BackupGroup is a directory containing a list of BackupDir

View File

@ -78,7 +78,7 @@ pub struct DirEntry {
#[derive(Clone, Debug, PartialEq)]
pub enum DirEntryAttribute {
Directory { start: u64 },
File { size: u64, mtime: u64 },
File { size: u64, mtime: i64 },
Symlink,
Hardlink,
BlockDevice,
@ -89,7 +89,7 @@ pub enum DirEntryAttribute {
impl DirEntry {
fn new(etype: CatalogEntryType, name: Vec<u8>, start: u64, size: u64, mtime:u64) -> Self {
fn new(etype: CatalogEntryType, name: Vec<u8>, start: u64, size: u64, mtime: i64) -> Self {
match etype {
CatalogEntryType::Directory => {
DirEntry { name, attr: DirEntryAttribute::Directory { start } }
@ -184,7 +184,7 @@ impl DirInfo {
catalog_encode_u64(writer, name.len() as u64)?;
writer.write_all(name)?;
catalog_encode_u64(writer, *size)?;
catalog_encode_u64(writer, *mtime)?;
catalog_encode_i64(writer, *mtime)?;
}
DirEntry { name, attr: DirEntryAttribute::Symlink } => {
writer.write_all(&[CatalogEntryType::Symlink as u8])?;
@ -234,7 +234,7 @@ impl DirInfo {
Ok((self.name, data))
}
fn parse<C: FnMut(CatalogEntryType, &[u8], u64, u64, u64) -> Result<bool, Error>>(
fn parse<C: FnMut(CatalogEntryType, &[u8], u64, u64, i64) -> Result<bool, Error>>(
data: &[u8],
mut callback: C,
) -> Result<(), Error> {
@ -265,7 +265,7 @@ impl DirInfo {
}
CatalogEntryType::File => {
let size = catalog_decode_u64(&mut cursor)?;
let mtime = catalog_decode_u64(&mut cursor)?;
let mtime = catalog_decode_i64(&mut cursor)?;
callback(etype, name, 0, size, mtime)?
}
_ => {
@ -362,7 +362,7 @@ impl <W: Write> BackupCatalogWriter for CatalogWriter<W> {
Ok(())
}
fn add_file(&mut self, name: &CStr, size: u64, mtime: u64) -> Result<(), Error> {
fn add_file(&mut self, name: &CStr, size: u64, mtime: i64) -> Result<(), Error> {
let dir = self.dirstack.last_mut().ok_or_else(|| format_err!("outside root"))?;
let name = name.to_bytes().to_vec();
dir.entries.push(DirEntry { name, attr: DirEntryAttribute::File { size, mtime } });
@ -587,14 +587,77 @@ impl <R: Read + Seek> CatalogReader<R> {
}
}
/// Serialize i64 as short, variable length byte sequence
///
/// Stores 7 bits per byte, Bit 8 indicates the end of the sequence (when not set).
/// If the value is negative, we end with a zero byte (0x00).
pub fn catalog_encode_i64<W: Write>(writer: &mut W, v: i64) -> Result<(), Error> {
let mut enc = Vec::new();
let mut d = if v < 0 {
(-1 * (v + 1)) as u64 + 1 // also handles i64::MIN
} else {
v as u64
};
loop {
if d < 128 {
if v < 0 {
enc.push(128 | d as u8);
enc.push(0u8);
} else {
enc.push(d as u8);
}
break;
}
enc.push((128 | (d & 127)) as u8);
d = d >> 7;
}
writer.write_all(&enc)?;
Ok(())
}
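To make the byte format concrete, here are a few layouts the encoder above produces, written as an illustrative test (hand-checked against the code above, not part of the changeset):

#[test]
fn catalog_i64_layout_examples() {
    fn enc(v: i64) -> Vec<u8> {
        let mut buf = Vec::new();
        catalog_encode_i64(&mut buf, v).unwrap();
        buf
    }
    assert_eq!(enc(0), [0x00]);         // fits in 7 bits, end bit clear
    assert_eq!(enc(200), [0xC8, 0x01]); // 200 & 127 = 72, pushed as 72|0x80, then 200 >> 7 = 1
    assert_eq!(enc(-1), [0x81, 0x00]);  // payload |v+1|+1 = 1 with bit 8 set, trailing 0x00 negative marker
}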
/// Deserialize i64 from variable length byte sequence
///
/// We currently read at most 11 bytes, which gives a maximum of 70 bits + sign.
/// This method is compatible with catalog_encode_u64 iff the encoded value is
/// <= 2^63 (values > 2^63 cannot be represented in an i64).
pub fn catalog_decode_i64<R: Read>(reader: &mut R) -> Result<i64, Error> {
let mut v: u64 = 0;
let mut buf = [0u8];
for i in 0..11 { // only allow 11 bytes (70 bits + sign marker)
if buf.is_empty() {
bail!("decode_i64 failed - unexpected EOB");
}
reader.read_exact(&mut buf)?;
let t = buf[0];
if t == 0 {
if v == 0 {
return Ok(0);
}
return Ok(((v - 1) as i64 * -1) - 1); // also handles i64::MIN
} else if t < 128 {
v |= (t as u64) << (i*7);
return Ok(v as i64);
} else {
v |= ((t & 127) as u64) << (i*7);
}
}
bail!("decode_i64 failed - missing end marker");
}
/// Serialize u64 as short, variable length byte sequence
///
/// Stores 7 bits per byte, Bit 8 indicates the end of the sequence (when not set).
/// We limit values to a maximum of 2^63.
pub fn catalog_encode_u64<W: Write>(writer: &mut W, v: u64) -> Result<(), Error> {
let mut enc = Vec::new();
if (v & (1<<63)) != 0 { bail!("catalog_encode_u64 failed - value >= 2^63"); }
let mut d = v;
loop {
if d < 128 {
@ -611,13 +674,14 @@ pub fn catalog_encode_u64<W: Write>(writer: &mut W, v: u64) -> Result<(), Error>
/// Deserialize u64 from variable length byte sequence
///
/// We currently read maximal 9 bytes, which give a maximum of 63 bits.
/// We currently read at most 10 bytes, which gives a maximum of 70 bits,
/// but we currently only encode up to 64 bits.
pub fn catalog_decode_u64<R: Read>(reader: &mut R) -> Result<u64, Error> {
let mut v: u64 = 0;
let mut buf = [0u8];
for i in 0..9 { // only allow 9 bytes (63 bits)
for i in 0..10 { // only allow 10 bytes (70 bits)
if buf.is_empty() {
bail!("decode_u64 failed - unexpected EOB");
}
@ -652,9 +716,58 @@ fn test_catalog_u64_encoder() {
assert!(decoded == value);
}
test_encode_decode(u64::MIN);
test_encode_decode(126);
test_encode_decode((1<<12)-1);
test_encode_decode((1<<20)-1);
test_encode_decode((1<<50)-1);
test_encode_decode((1<<63)-1);
test_encode_decode(u64::MAX);
}
#[test]
fn test_catalog_i64_encoder() {
fn test_encode_decode(value: i64) {
let mut data = Vec::new();
catalog_encode_i64(&mut data, value).unwrap();
let slice = &mut &data[..];
let decoded = catalog_decode_i64(slice).unwrap();
assert!(decoded == value);
}
test_encode_decode(0);
test_encode_decode(-0);
test_encode_decode(126);
test_encode_decode(-126);
test_encode_decode((1<<12)-1);
test_encode_decode(-(1<<12)-1);
test_encode_decode((1<<20)-1);
test_encode_decode(-(1<<20)-1);
test_encode_decode(i64::MIN);
test_encode_decode(i64::MAX);
}
#[test]
fn test_catalog_i64_compatibility() {
fn test_encode_decode(value: u64) {
let mut data = Vec::new();
catalog_encode_u64(&mut data, value).unwrap();
let slice = &mut &data[..];
let decoded = catalog_decode_i64(slice).unwrap() as u64;
assert!(decoded == value);
}
test_encode_decode(u64::MIN);
test_encode_decode(126);
test_encode_decode((1<<12)-1);
test_encode_decode((1<<20)-1);
test_encode_decode((1<<50)-1);
test_encode_decode(u64::MAX);
}

View File

@ -354,9 +354,11 @@ impl ChunkStore {
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
// chunk hasn't been rewritten yet, keep .bad file
status.still_bad += 1;
},
Err(err) => {
// some other error, warn user and keep .bad file around too
status.still_bad += 1;
crate::task_warn!(
worker,
"error during stat on '{:?}' - {}",
@ -378,8 +380,7 @@ impl ChunkStore {
}
status.removed_chunks += 1;
status.removed_bytes += stat.st_size as u64;
} else {
if stat.st_atime < oldest_writer {
} else if stat.st_atime < oldest_writer {
status.pending_chunks += 1;
status.pending_bytes += stat.st_size as u64;
} else {
@ -387,7 +388,6 @@ impl ChunkStore {
status.disk_bytes += stat.st_size as u64;
}
}
}
drop(lock);
}

View File

@ -3,26 +3,27 @@ use std::io::{self, Write};
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use std::convert::TryFrom;
use std::time::Duration;
use std::fs::File;
use anyhow::{bail, format_err, Error};
use lazy_static::lazy_static;
use serde_json::Value;
use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox::tools::fs::{replace_file, file_read_optional_string, CreateOptions, open_file_locked};
use super::backup_info::{BackupGroup, BackupDir};
use super::chunk_store::ChunkStore;
use super::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
use super::fixed_index::{FixedIndexReader, FixedIndexWriter};
use super::manifest::{MANIFEST_BLOB_NAME, CLIENT_LOG_BLOB_NAME, BackupManifest};
use super::manifest::{MANIFEST_BLOB_NAME, MANIFEST_LOCK_NAME, CLIENT_LOG_BLOB_NAME, BackupManifest};
use super::index::*;
use super::{DataBlob, ArchiveType, archive_type};
use crate::config::datastore;
use crate::config::datastore::{self, DataStoreConfig};
use crate::task::TaskState;
use crate::tools;
use crate::tools::format::HumanByte;
use crate::tools::fs::{lock_dir_noblock, DirLockGuard};
use crate::api2::types::{GarbageCollectionStatus, Userid};
use crate::api2::types::{Authid, GarbageCollectionStatus};
use crate::server::UPID;
lazy_static! {
@ -37,6 +38,7 @@ pub struct DataStore {
chunk_store: Arc<ChunkStore>,
gc_mutex: Mutex<bool>,
last_gc_status: Mutex<GarbageCollectionStatus>,
verify_new: bool,
}
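// With the kebab-case rename above, the endpoint's JSON response takes roughly
// this shape (values illustrative):
//   { "total": 500107862016, "used": 125000000000, "avail": 375107862016,
//     "gc-status": { ...GarbageCollectionStatus fields... },
//     "counts": { "vm": { "groups": 3, "snapshots": 12 }, "host": { "groups": 1, "snapshots": 4 } } }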
impl DataStore {
@ -45,17 +47,20 @@ impl DataStore {
let (config, _digest) = datastore::config()?;
let config: datastore::DataStoreConfig = config.lookup("datastore", name)?;
let path = PathBuf::from(&config.path);
let mut map = DATASTORE_MAP.lock().unwrap();
if let Some(datastore) = map.get(name) {
// Compare Config - if changed, create new Datastore object!
if datastore.chunk_store.base == PathBuf::from(&config.path) {
if datastore.chunk_store.base == path &&
datastore.verify_new == config.verify_new.unwrap_or(false)
{
return Ok(datastore.clone());
}
}
let datastore = DataStore::open(name)?;
let datastore = DataStore::open_with_path(name, &path, config)?;
let datastore = Arc::new(datastore);
map.insert(name.to_string(), datastore.clone());
@ -63,26 +68,29 @@ impl DataStore {
Ok(datastore)
}
pub fn open(store_name: &str) -> Result<Self, Error> {
let (config, _digest) = datastore::config()?;
let (_, store_config) = config.sections.get(store_name)
.ok_or(format_err!("no such datastore '{}'", store_name))?;
let path = store_config["path"].as_str().unwrap();
Self::open_with_path(store_name, Path::new(path))
}
pub fn open_with_path(store_name: &str, path: &Path) -> Result<Self, Error> {
fn open_with_path(store_name: &str, path: &Path, config: DataStoreConfig) -> Result<Self, Error> {
let chunk_store = ChunkStore::open(store_name, path)?;
let gc_status = GarbageCollectionStatus::default();
let mut gc_status_path = chunk_store.base_path();
gc_status_path.push(".gc-status");
let gc_status = if let Some(state) = file_read_optional_string(gc_status_path)? {
match serde_json::from_str(&state) {
Ok(state) => state,
Err(err) => {
eprintln!("error reading gc-status: {}", err);
GarbageCollectionStatus::default()
}
}
} else {
GarbageCollectionStatus::default()
};
Ok(Self {
chunk_store: Arc::new(chunk_store),
gc_mutex: Mutex::new(false),
last_gc_status: Mutex::new(gc_status),
verify_new: config.verify_new.unwrap_or(false),
})
}
@ -208,10 +216,17 @@ impl DataStore {
let _guard = tools::fs::lock_dir_noblock(&full_path, "backup group", "possible running backup")?;
log::info!("removing backup group {:?}", full_path);
// remove all individual backup dirs first to ensure nothing is using them
for snap in backup_group.list_backups(&self.base_path())? {
self.remove_backup_dir(&snap.backup_dir, false)?;
}
// no snapshots left, we can now safely remove the empty folder
std::fs::remove_dir_all(&full_path)
.map_err(|err| {
format_err!(
"removing backup group {:?} failed - {}",
"removing backup group directory {:?} failed - {}",
full_path,
err,
)
@ -225,9 +240,10 @@ impl DataStore {
let full_path = self.snapshot_path(backup_dir);
let _guard;
let (_guard, _manifest_guard);
if !force {
_guard = lock_dir_noblock(&full_path, "snapshot", "possibly running or used as base")?;
_guard = lock_dir_noblock(&full_path, "snapshot", "possibly running or in use")?;
_manifest_guard = self.lock_manifest(backup_dir);
}
log::info!("removing backup snapshot {:?}", full_path);
@ -260,8 +276,8 @@ impl DataStore {
/// Returns the backup owner.
///
/// The backup owner is the user who first created the backup group.
pub fn get_owner(&self, backup_group: &BackupGroup) -> Result<Userid, Error> {
/// The backup owner is the entity who first created the backup group.
pub fn get_owner(&self, backup_group: &BackupGroup) -> Result<Authid, Error> {
let mut full_path = self.base_path();
full_path.push(backup_group.group_path());
full_path.push("owner");
@ -273,7 +289,7 @@ impl DataStore {
pub fn set_owner(
&self,
backup_group: &BackupGroup,
userid: &Userid,
auth_id: &Authid,
force: bool,
) -> Result<(), Error> {
let mut path = self.base_path();
@ -293,7 +309,7 @@ impl DataStore {
let mut file = open_options.open(&path)
.map_err(|err| format_err!("unable to create owner file {:?} - {}", path, err))?;
write!(file, "{}\n", userid)
writeln!(file, "{}", auth_id)
.map_err(|err| format_err!("unable to write owner file {:?} - {}", path, err))?;
Ok(())
@ -308,8 +324,8 @@ impl DataStore {
pub fn create_locked_backup_group(
&self,
backup_group: &BackupGroup,
userid: &Userid,
) -> Result<(Userid, DirLockGuard), Error> {
auth_id: &Authid,
) -> Result<(Authid, DirLockGuard), Error> {
// create intermediate path first:
let base_path = self.base_path();
@ -323,7 +339,7 @@ impl DataStore {
match std::fs::create_dir(&full_path) {
Ok(_) => {
let guard = lock_dir_noblock(&full_path, "backup group", "another backup is already running")?;
self.set_owner(backup_group, userid, false)?;
self.set_owner(backup_group, auth_id, false)?;
let owner = self.get_owner(backup_group)?; // just to be sure
Ok((owner, guard))
}
@ -442,27 +458,36 @@ impl DataStore {
) -> Result<(), Error> {
let image_list = self.list_images()?;
let image_count = image_list.len();
let mut done = 0;
let mut last_percentage: usize = 0;
for path in image_list {
for img in image_list {
worker.check_abort()?;
tools::fail_on_shutdown()?;
if let Ok(archive_type) = archive_type(&path) {
let path = self.chunk_store.relative_path(&img);
match std::fs::File::open(&path) {
Ok(file) => {
if let Ok(archive_type) = archive_type(&img) {
if archive_type == ArchiveType::FixedIndex {
let index = self.open_fixed_reader(&path)?;
self.index_mark_used_chunks(index, &path, status, worker)?;
let index = FixedIndexReader::new(file).map_err(|e| {
format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
})?;
self.index_mark_used_chunks(index, &img, status, worker)?;
} else if archive_type == ArchiveType::DynamicIndex {
let index = self.open_dynamic_reader(&path)?;
self.index_mark_used_chunks(index, &path, status, worker)?;
let index = DynamicIndexReader::new(file).map_err(|e| {
format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
})?;
self.index_mark_used_chunks(index, &img, status, worker)?;
}
}
}
Err(err) if err.kind() == io::ErrorKind::NotFound => (), // ignore vanished files
Err(err) => bail!("can't open index {} - {}", path.to_string_lossy(), err),
}
done += 1;
let percentage = done*100/image_count;
@ -531,7 +556,11 @@ impl DataStore {
);
}
if gc_status.removed_bad > 0 {
crate::task_log!(worker, "Removed bad files: {}", gc_status.removed_bad);
crate::task_log!(worker, "Removed bad chunks: {}", gc_status.removed_bad);
}
if gc_status.still_bad > 0 {
crate::task_log!(worker, "Leftover bad chunks: {}", gc_status.still_bad);
}
crate::task_log!(
@ -552,11 +581,36 @@ impl DataStore {
crate::task_log!(worker, "On-Disk chunks: {}", gc_status.disk_chunks);
let deduplication_factor = if gc_status.disk_bytes > 0 {
(gc_status.index_data_bytes as f64)/(gc_status.disk_bytes as f64)
} else {
1.0
};
crate::task_log!(worker, "Deduplication factor: {:.2}", deduplication_factor);
if gc_status.disk_chunks > 0 {
let avg_chunk = gc_status.disk_bytes/(gc_status.disk_chunks as u64);
crate::task_log!(worker, "Average chunk size: {}", HumanByte::from(avg_chunk));
}
if let Ok(serialized) = serde_json::to_string(&gc_status) {
let mut path = self.base_path();
path.push(".gc-status");
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0644);
// set the correct owner/group/permissions while saving file
// owner(rw) = backup, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(backup_user.uid)
.group(backup_user.gid);
// ignore errors
let _ = replace_file(path, serialized.as_bytes(), options);
}
*self.last_gc_status.lock().unwrap() = gc_status;
} else {
@ -613,6 +667,25 @@ impl DataStore {
))
}
fn lock_manifest(
&self,
backup_dir: &BackupDir,
) -> Result<File, Error> {
let mut path = self.base_path();
path.push(backup_dir.relative_path());
path.push(&MANIFEST_LOCK_NAME);
// update_manifest should never take a long time, so if someone else has
// the lock we can simply block a bit and should get it soon
open_file_locked(&path, Duration::from_secs(5), true)
.map_err(|err| {
format_err!(
"unable to acquire manifest lock {:?} - {}", &path, err
)
})
}
/// Load the manifest without a lock. Must not be written back.
pub fn load_manifest(
&self,
backup_dir: &BackupDir,
@ -623,22 +696,20 @@ impl DataStore {
Ok((manifest, raw_size))
}
pub fn load_manifest_json(
/// Update the manifest of the specified snapshot. Never write a manifest directly,
/// only use this method - anything else may break locking guarantees.
pub fn update_manifest(
&self,
backup_dir: &BackupDir,
) -> Result<Value, Error> {
let blob = self.load_blob(backup_dir, MANIFEST_BLOB_NAME)?;
// no expected digest available
let manifest_data = blob.decode(None, None)?;
let manifest: Value = serde_json::from_slice(&manifest_data[..])?;
Ok(manifest)
}
pub fn store_manifest(
&self,
backup_dir: &BackupDir,
manifest: Value,
update_fn: impl FnOnce(&mut BackupManifest),
) -> Result<(), Error> {
let _guard = self.lock_manifest(backup_dir)?;
let (mut manifest, _) = self.load_manifest(&backup_dir)?;
update_fn(&mut manifest);
let manifest = serde_json::to_value(manifest)?;
let manifest = serde_json::to_string_pretty(&manifest)?;
let blob = DataBlob::encode(manifest.as_bytes(), None, true)?;
let raw_data = blob.raw_data();
@ -647,8 +718,13 @@ impl DataStore {
path.push(backup_dir.relative_path());
path.push(MANIFEST_BLOB_NAME);
// atomic replace invalidates flock - no other writes past this point!
replace_file(&path, raw_data, CreateOptions::new())?;
Ok(())
}
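For reference, the intended call pattern, mirroring the verify code further down (verify_state here stands for any serde_json::Value to store):

datastore.update_manifest(&backup_dir, |manifest| {
    manifest.unprotected["verify_state"] = verify_state;
})?;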
pub fn verify_new(&self) -> bool {
self.verify_new
}
}

View File

@ -95,6 +95,18 @@ impl DynamicIndexReader {
let header_size = std::mem::size_of::<DynamicIndexHeader>();
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
let size = stat.st_size as usize;
if size < header_size {
bail!("index too small ({})", stat.st_size);
}
let header: Box<DynamicIndexHeader> = unsafe { file.read_host_value_boxed()? };
if header.magic != super::DYNAMIC_SIZED_CHUNK_INDEX_1_0 {
@ -103,13 +115,7 @@ impl DynamicIndexReader {
let ctime = proxmox::tools::time::epoch_i64();
let rawfd = file.as_raw_fd();
let stat = nix::sys::stat::fstat(rawfd)?;
let size = stat.st_size as usize;
let index_size = size - header_size;
let index_size = stat.st_size as usize - header_size;
let index_count = index_size / 40;
if index_count * 40 != index_size {
bail!("got unexpected file size");

View File

@ -68,6 +68,19 @@ impl FixedIndexReader {
file.seek(SeekFrom::Start(0))?;
let header_size = std::mem::size_of::<FixedIndexHeader>();
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
let size = stat.st_size as usize;
if size < header_size {
bail!("index too small ({})", stat.st_size);
}
let header: Box<FixedIndexHeader> = unsafe { file.read_host_value_boxed()? };
if header.magic != super::FIXED_SIZED_CHUNK_INDEX_1_0 {
@ -81,12 +94,6 @@ impl FixedIndexReader {
let index_length = ((size + chunk_size - 1) / chunk_size) as usize;
let index_size = index_length * 32;
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
let expected_index_size = (stat.st_size as usize) - header_size;
if index_size != expected_index_size {

View File

@ -8,6 +8,7 @@ use ::serde::{Deserialize, Serialize};
use crate::backup::{BackupDir, CryptMode, CryptConfig};
pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
pub const MANIFEST_LOCK_NAME: &str = ".index.json.lck";
pub const CLIENT_LOG_BLOB_NAME: &str = "client.log.blob";
mod hex_csum {

View File

@ -2,6 +2,7 @@ use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use std::sync::atomic::{Ordering, AtomicUsize};
use std::time::Instant;
use nix::dir::Dir;
use anyhow::{bail, format_err, Error};
@ -13,6 +14,7 @@ use crate::{
BackupGroup,
BackupDir,
BackupInfo,
BackupManifest,
IndexFile,
CryptMode,
FileInfo,
@ -23,6 +25,7 @@ use crate::{
task::TaskState,
task_log,
tools::ParallelHandler,
tools::fs::lock_dir_noblock_shared,
};
fn verify_blob(datastore: Arc<DataStore>, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
@ -282,9 +285,48 @@ pub fn verify_backup_dir(
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<bool, Error> {
let snap_lock = lock_dir_noblock_shared(
&datastore.snapshot_path(&backup_dir),
"snapshot",
"locked by another operation");
match snap_lock {
Ok(snap_lock) => verify_backup_dir_with_lock(
datastore,
backup_dir,
verified_chunks,
corrupt_chunks,
worker,
upid,
filter,
snap_lock
),
Err(err) => {
task_log!(
worker,
"SKIPPED: verify {}:{} - could not acquire snapshot lock: {}",
datastore.name(),
backup_dir,
err,
);
Ok(true)
}
}
}
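
The split means a caller that already holds the snapshot lock (for example verifying right after a backup finishes, before the session releases its Dir handle) can call the _with_lock variant directly instead of racing for the lock again. A hedged sketch, with all arguments assumed to be in scope:

// snap_lock: nix::dir::Dir, already acquired via lock_dir_noblock_shared
verify_backup_dir_with_lock(
    datastore, &backup_dir,
    verified_chunks, corrupt_chunks,
    worker, upid,
    None,      // no manifest filter
    snap_lock, // ownership of the lock moves into the verify call
)?;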
let mut manifest = match datastore.load_manifest(&backup_dir) {
/// See verify_backup_dir
pub fn verify_backup_dir_with_lock(
datastore: Arc<DataStore>,
backup_dir: &BackupDir,
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
_snap_lock: Dir,
) -> Result<bool, Error> {
let manifest = match datastore.load_manifest(&backup_dir) {
Ok((manifest, _)) => manifest,
Err(err) => {
task_log!(
@ -298,6 +340,18 @@ pub fn verify_backup_dir(
}
};
if let Some(filter) = filter {
if filter(&manifest) == false {
task_log!(
worker,
"SKIPPED: verify {}:{} (recently verified)",
datastore.name(),
backup_dir,
);
return Ok(true);
}
}
task_log!(worker, "verify {}:{}", datastore.name(), backup_dir);
let mut error_count = 0;
@ -351,9 +405,10 @@ pub fn verify_backup_dir(
state: verify_result,
upid,
};
manifest.unprotected["verify_state"] = serde_json::to_value(verify_state)?;
datastore.store_manifest(&backup_dir, serde_json::to_value(manifest)?)
.map_err(|err| format_err!("unable to store manifest blob - {}", err))?;
let verify_state = serde_json::to_value(verify_state)?;
datastore.update_manifest(&backup_dir, |manifest| {
manifest.unprotected["verify_state"] = verify_state;
}).map_err(|err| format_err!("unable to update manifest blob - {}", err))?;
Ok(error_count == 0)
}
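
The filter hook is what lets callers implement "skip recently verified" policies without teaching the verify code about job semantics. A sketch of one possible filter, using only the BackupManifest::unprotected field seen above:

// Verify only snapshots that carry no verify_state yet; returning false
// makes verify_backup_dir log "SKIPPED: ... (recently verified)".
let filter = |manifest: &BackupManifest| {
    manifest.unprotected["verify_state"].is_null()
};
let failed = verify_all_backups(
    datastore.clone(),
    worker.clone(),
    &upid,
    None,          // verify all owners
    Some(&filter),
)?;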
@ -373,6 +428,7 @@ pub fn verify_backup_group(
progress: Option<(usize, usize)>, // (done, snapshot_count)
worker: Arc<dyn TaskState + Send + Sync>,
upid: &UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<(usize, Vec<String>), Error> {
let mut errors = Vec::new();
@ -398,6 +454,7 @@ pub fn verify_backup_group(
BackupInfo::sort_list(&mut list, false); // newest first
for info in list {
count += 1;
if !verify_backup_dir(
datastore.clone(),
&info.backup_dir,
@ -405,6 +462,7 @@ pub fn verify_backup_group(
corrupt_chunks.clone(),
worker.clone(),
upid.clone(),
filter,
)? {
errors.push(info.backup_dir.to_string());
}
@ -424,7 +482,7 @@ pub fn verify_backup_group(
Ok((count, errors))
}
/// Verify all backups inside a datastore
/// Verify all (owned) backups inside a datastore
///
/// Errors are logged to the worker log.
///
@ -435,13 +493,41 @@ pub fn verify_all_backups(
datastore: Arc<DataStore>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: &UPID,
owner: Option<Authid>,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
if let Some(owner) = &owner {
task_log!(
worker,
"verify datastore {} - limiting to backups owned by {}",
datastore.name(),
owner
);
}
let filter_by_owner = |group: &BackupGroup| {
if let Some(owner) = &owner {
match datastore.get_owner(group) {
Ok(ref group_owner) => {
group_owner == owner
|| (group_owner.is_token()
&& !owner.is_token()
&& group_owner.user() == owner.user())
},
Err(_) => false,
}
} else {
true
}
};
let mut list = match BackupGroup::list_groups(&datastore.base_path()) {
Ok(list) => list
.into_iter()
.filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
.filter(filter_by_owner)
.collect::<Vec<BackupGroup>>(),
Err(err) => {
task_log!(
@ -479,6 +565,7 @@ pub fn verify_all_backups(
Some((done, snapshot_count)),
worker.clone(),
upid,
filter,
)?;
errors.append(&mut group_errors);

View File

@ -37,7 +37,7 @@ async fn run() -> Result<(), Error> {
config::update_self_signed_cert(false)?;
proxmox_backup::rrd::create_rrdb_dir()?;
proxmox_backup::config::jobstate::create_jobstate_dir()?;
proxmox_backup::server::jobstate::create_jobstate_dir()?;
if let Err(err) = generate_auth_key() {
bail!("unable to generate auth key - {}", err);
@ -49,9 +49,13 @@ async fn run() -> Result<(), Error> {
}
let _ = csrf_secret(); // load with lazy_static
let config = server::ApiConfig::new(
let mut config = server::ApiConfig::new(
buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PRIVILEGED)?;
let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN, &mut commando_sock)?;
let rest_server = RestServer::new(config);
// http server future:
@ -74,10 +78,12 @@ async fn run() -> Result<(), Error> {
},
);
server::write_pid(buildcfg::PROXMOX_BACKUP_API_PID_FN)?;
daemon::systemd_notify(daemon::SystemdNotify::Ready)?;
let init_result: Result<(), Error> = try_block!({
server::create_task_control_socket()?;
server::register_task_control_commands(&mut commando_sock)?;
commando_sock.spawn()?;
server::server_state_init()?;
Ok(())
});

View File

@ -36,7 +36,7 @@ use proxmox_backup::api2::types::*;
use proxmox_backup::api2::version;
use proxmox_backup::client::*;
use proxmox_backup::pxar::catalog::*;
use proxmox_backup::config::user::complete_user_name;
use proxmox_backup::config::user::complete_userid;
use proxmox_backup::backup::{
archive_type,
decrypt_key,
@ -193,7 +193,7 @@ pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<
result
}
fn connect(server: &str, port: u16, userid: &Userid) -> Result<HttpClient, Error> {
fn connect(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
@ -212,7 +212,7 @@ fn connect(server: &str, port: u16, userid: &Userid) -> Result<HttpClient, Error
.fingerprint_cache(true)
.ticket_cache(true);
HttpClient::new(server, port, userid, options)
HttpClient::new(server, port, auth_id, options)
}
async fn view_task_result(
@ -366,7 +366,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
@ -425,7 +425,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
description: "Backup group.",
},
"new-owner": {
type: Userid,
type: Authid,
},
}
}
@ -435,7 +435,7 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro
let repo = extract_repository_from_value(&param)?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
param.as_object_mut().unwrap().remove("repository");
@ -478,7 +478,7 @@ async fn list_snapshots(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
@ -510,7 +510,7 @@ async fn list_snapshots(param: Value) -> Result<Value, Error> {
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("size"))
.column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
.column(ColumnConfig::new("files").renderer(render_files))
;
@ -543,7 +543,7 @@ async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
@ -573,7 +573,7 @@ async fn api_login(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
client.login().await?;
record_repository(&repo);
@ -630,7 +630,7 @@ async fn api_version(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param);
if let Ok(repo) = repo {
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
match client.get("api2/json/version", None).await {
Ok(mut result) => version_info["server"] = result["data"].take(),
@ -680,7 +680,7 @@ async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
@ -724,7 +724,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/gc", repo.store());
@ -1036,7 +1036,7 @@ async fn create_backup(
let backup_time = backup_time_opt.unwrap_or_else(|| epoch_i64());
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
record_repository(&repo);
println!("Starting backup: {}/{}/{}", backup_type, backup_id, BackupDir::backup_time_to_string(backup_time)?);
@ -1339,7 +1339,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
let archive_name = tools::required_string_param(&param, "archive-name")?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
record_repository(&repo);
@ -1512,7 +1512,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
@ -1583,7 +1583,7 @@ fn prune<'a>(
async fn prune_async(mut param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/prune", repo.store());
@ -1657,7 +1657,10 @@ async fn prune_async(mut param: Value) -> Result<Value, Error> {
optional: true,
},
}
}
},
returns: {
type: StorageStatus,
},
)]
/// Get repository status.
async fn status(param: Value) -> Result<Value, Error> {
@ -1666,7 +1669,7 @@ async fn status(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/status", repo.store());
@ -1690,7 +1693,7 @@ async fn status(param: Value) -> Result<Value, Error> {
.column(ColumnConfig::new("used").renderer(render_total_percentage))
.column(ColumnConfig::new("avail").renderer(render_total_percentage));
let schema = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_STATUS;
let schema = &API_RETURN_SCHEMA_STATUS;
format_and_print_result_full(&mut data, schema, &output_format, &options);
@ -1711,7 +1714,7 @@ async fn try_get(repo: &BackupRepository, url: &str) -> Value {
.fingerprint_cache(true)
.ticket_cache(true);
let client = match HttpClient::new(repo.host(), repo.port(), repo.user(), options) {
let client = match HttpClient::new(repo.host(), repo.port(), repo.auth_id(), options) {
Ok(v) => v,
_ => return Value::Null,
};
@ -2010,7 +2013,7 @@ fn main() {
let change_owner_cmd_def = CliCommand::new(&API_METHOD_CHANGE_BACKUP_OWNER)
.arg_param(&["group", "new-owner"])
.completion_cb("group", complete_backup_group)
.completion_cb("new-owner", complete_user_name)
.completion_cb("new-owner", complete_userid)
.completion_cb("repository", complete_repository);
let cmd_def = CliCommandMap::new()

View File

@ -1,4 +1,5 @@
use std::collections::HashMap;
use std::io::{self, Write};
use anyhow::{format_err, Error};
use serde_json::{json, Value};
@ -62,10 +63,10 @@ fn connect() -> Result<HttpClient, Error> {
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
options = options.password(Some(ticket));
HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
} else {
options = options.ticket_cache(true).interactive(true);
HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
};
Ok(client)
@ -354,6 +355,14 @@ async fn verify(
Ok(Value::Null)
}
#[api()]
/// System report
async fn report() -> Result<Value, Error> {
let report = proxmox_backup::server::generate_report();
io::stdout().write_all(report.as_bytes())?;
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
@ -368,6 +377,7 @@ fn main() {
.insert("remote", remote_commands())
.insert("garbage-collection", garbage_collection_commands())
.insert("cert", cert_mgmt_cli())
.insert("subscription", subscription_commands())
.insert("sync-job", sync_job_commands())
.insert("task", task_mgmt_cli())
.insert(
@ -383,12 +393,15 @@ fn main() {
CliCommand::new(&API_METHOD_VERIFY)
.arg_param(&["store"])
.completion_cb("store", config::datastore::complete_datastore_name)
)
.insert("report",
CliCommand::new(&API_METHOD_REPORT)
);
let mut rpcenv = CliEnvironment::new();
rpcenv.set_user(Some(String::from("root@pam")));
rpcenv.set_auth_id(Some(String::from("root@pam")));
proxmox_backup::tools::runtime::main(run_async_cli_command(cmd_def, rpcenv));
}

View File

@ -1,5 +1,6 @@
use std::sync::{Arc};
use std::sync::Arc;
use std::path::{Path, PathBuf};
use std::os::unix::io::AsRawFd;
use anyhow::{bail, format_err, Error};
use futures::*;
@ -9,16 +10,46 @@ use openssl::ssl::{SslMethod, SslAcceptor, SslFiletype};
use proxmox::try_block;
use proxmox::api::RpcEnvironmentType;
use proxmox_backup::api2::types::Userid;
use proxmox_backup::{
backup::DataStore,
server::{
WorkerTask,
ApiConfig,
rest::*,
jobstate::{
self,
Job,
},
rotate_task_log_archive,
},
tools::systemd::time::{
parse_calendar_event,
compute_next_event,
},
};
use proxmox_backup::api2::types::Authid;
use proxmox_backup::configdir;
use proxmox_backup::buildcfg;
use proxmox_backup::server;
use proxmox_backup::tools::daemon;
use proxmox_backup::server::{ApiConfig, rest::*};
use proxmox_backup::auth_helpers::*;
use proxmox_backup::tools::disks::{ DiskManage, zfs_pool_stats };
use proxmox_backup::tools::{
daemon,
disks::{
DiskManage,
zfs_pool_stats,
},
logrotate::LogRotate,
socket::{
set_tcp_keepalive,
PROXMOX_BACKUP_TCP_KEEPALIVE_TIME,
},
};
use proxmox_backup::api2::pull::do_sync_job;
use proxmox_backup::server::do_verification_job;
use proxmox_backup::server::do_prune_job;
fn main() -> Result<(), Error> {
proxmox_backup::tools::setup_safe_path_env();
@ -43,6 +74,10 @@ async fn run() -> Result<(), Error> {
bail!("unable to inititialize syslog - {}", err);
}
// Note: To debug early connection errors, use
// PROXMOX_DEBUG=1 ./target/release/proxmox-backup-proxy
let debug = std::env::var("PROXMOX_DEBUG").is_ok();
let _ = public_auth_key(); // load with lazy_static
let _ = csrf_secret(); // load with lazy_static
@ -63,6 +98,10 @@ async fn run() -> Result<(), Error> {
config.register_template("index", &indexpath)?;
config.register_template("console", "/usr/share/pve-xtermjs/index.html.hbs")?;
let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN, &mut commando_sock)?;
let rest_server = RestServer::new(config);
//openssl req -x509 -newkey rsa:4096 -keyout /etc/proxmox-backup/proxy.key -out /etc/proxmox-backup/proxy.pem -nodes
@ -81,19 +120,9 @@ async fn run() -> Result<(), Error> {
let server = daemon::create_daemon(
([0,0,0,0,0,0,0,0], 8007).into(),
|listener, ready| {
let connections = proxmox_backup::tools::async_io::StaticIncoming::from(listener)
.map_err(Error::from)
.try_filter_map(move |(sock, _addr)| {
let acceptor = Arc::clone(&acceptor);
async move {
sock.set_nodelay(true).unwrap();
Ok(tokio_openssl::accept(&acceptor, sock)
.await
.ok() // handshake errors aren't fatal, so return None to filter
)
}
});
let connections = proxmox_backup::tools::async_io::HyperAccept(connections);
let connections = accept_connections(listener, acceptor, debug);
let connections = hyper::server::accept::from_stream(connections);
Ok(ready
.and_then(|_| hyper::Server::builder(connections)
@ -107,10 +136,12 @@ async fn run() -> Result<(), Error> {
},
);
server::write_pid(buildcfg::PROXMOX_BACKUP_PROXY_PID_FN)?;
daemon::systemd_notify(daemon::SystemdNotify::Ready)?;
let init_result: Result<(), Error> = try_block!({
server::create_task_control_socket()?;
server::register_task_control_commands(&mut commando_sock)?;
commando_sock.spawn()?;
server::server_state_init()?;
Ok(())
});
@ -130,6 +161,72 @@ async fn run() -> Result<(), Error> {
Ok(())
}
fn accept_connections(
mut listener: tokio::net::TcpListener,
acceptor: Arc<openssl::ssl::SslAcceptor>,
debug: bool,
) -> tokio::sync::mpsc::Receiver<Result<tokio_openssl::SslStream<tokio::net::TcpStream>, Error>> {
const MAX_PENDING_ACCEPTS: usize = 1024;
let (sender, receiver) = tokio::sync::mpsc::channel(MAX_PENDING_ACCEPTS);
let accept_counter = Arc::new(());
tokio::spawn(async move {
loop {
match listener.accept().await {
Err(err) => {
eprintln!("error accepting tcp connection: {}", err);
}
Ok((sock, _addr)) => {
sock.set_nodelay(true).unwrap();
let _ = set_tcp_keepalive(sock.as_raw_fd(), PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
let acceptor = Arc::clone(&acceptor);
let mut sender = sender.clone();
if Arc::strong_count(&accept_counter) > MAX_PENDING_ACCEPTS {
eprintln!("connection rejected - to many open connections");
continue;
}
let accept_counter = accept_counter.clone();
tokio::spawn(async move {
let accept_future = tokio::time::timeout(
Duration::new(10, 0), tokio_openssl::accept(&acceptor, sock));
let result = accept_future.await;
match result {
Ok(Ok(connection)) => {
if let Err(_) = sender.send(Ok(connection)).await {
if debug {
eprintln!("detect closed connection channel");
}
}
}
Ok(Err(err)) => {
if debug {
eprintln!("https handshake failed - {}", err);
}
}
Err(_) => {
if debug {
eprintln!("https handshake timeout");
}
}
}
drop(accept_counter); // decrease reference count
});
}
}
}
});
receiver
}
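
The Arc<()> here is a zero-sized token used purely for its atomic reference count: every spawned handshake task holds a clone, so Arc::strong_count gives the number of in-flight connections without a separate AtomicUsize. A standalone illustration of the trick:

use std::sync::Arc;

fn main() {
    let accept_counter = Arc::new(());
    assert_eq!(Arc::strong_count(&accept_counter), 1);
    let held_by_task = Arc::clone(&accept_counter); // cloned into a spawned task
    assert_eq!(Arc::strong_count(&accept_counter), 2);
    drop(held_by_task); // task finished - count drops again
    assert_eq!(Arc::strong_count(&accept_counter), 1);
}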
fn start_stat_generator() {
let abort_future = server::shutdown_future();
let future = Box::pin(run_stat_generator());
@ -196,8 +293,8 @@ async fn schedule_tasks() -> Result<(), Error> {
schedule_datastore_garbage_collection().await;
schedule_datastore_prune().await;
schedule_datastore_verification().await;
schedule_datastore_sync_jobs().await;
schedule_datastore_verify_jobs().await;
schedule_task_log_rotate().await;
Ok(())
@ -205,14 +302,12 @@ async fn schedule_tasks() -> Result<(), Error> {
async fn schedule_datastore_garbage_collection() {
use proxmox_backup::backup::DataStore;
use proxmox_backup::server::{UPID, WorkerTask};
use proxmox_backup::config::{
jobstate::{self, Job},
datastore::{self, DataStoreConfig}
datastore::{
self,
DataStoreConfig,
},
};
use proxmox_backup::tools::systemd::time::{
parse_calendar_event, compute_next_event};
let config = match datastore::config() {
Err(err) => {
@ -256,23 +351,12 @@ async fn schedule_datastore_garbage_collection() {
let worker_type = "garbage_collection";
let stat = datastore.last_gc_status();
let last = if let Some(upid_str) = stat.upid {
match upid_str.parse::<UPID>() {
Ok(upid) => upid.starttime,
Err(err) => {
eprintln!("unable to parse upid '{}' - {}", upid_str, err);
continue;
}
}
} else {
match jobstate::last_run_time(worker_type, &store) {
let last = match jobstate::last_run_time(worker_type, &store) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue;
}
}
};
let next = match compute_next_event(&event, last, false) {
@ -288,51 +372,30 @@ async fn schedule_datastore_garbage_collection() {
if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
let job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let store2 = store.clone();
let auth_id = Authid::backup_auth_id();
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(store.clone()),
Userid::backup_userid().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting garbage collection on store {}", store));
worker.log(format!("task triggered by schedule '{}'", event_str));
let result = datastore.garbage_collection(&*worker, worker.upid());
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
result
}
) {
eprintln!("unable to start garbage collection on store {} - {}", store2, err);
if let Err(err) = crate::server::do_garbage_collection_job(job, datastore, auth_id, Some(event_str), false) {
eprintln!("unable to start garbage collection job on datastore {} - {}", store, err);
}
}
}
async fn schedule_datastore_prune() {
use proxmox_backup::backup::{
PruneOptions, DataStore, BackupGroup, compute_prune_info};
use proxmox_backup::server::{WorkerTask};
use proxmox_backup::config::{
jobstate::{self, Job},
datastore::{self, DataStoreConfig}
use proxmox_backup::{
backup::{
PruneOptions,
},
config::datastore::{
self,
DataStoreConfig,
},
};
use proxmox_backup::tools::systemd::time::{
parse_calendar_event, compute_next_event};
let config = match datastore::config() {
Err(err) => {
@ -343,13 +406,6 @@ async fn schedule_datastore_prune() {
};
for (store, (_, store_config)) in config.sections {
let datastore = match DataStore::lookup_datastore(&store) {
Ok(datastore) => datastore,
Err(err) => {
eprintln!("lookup_datastore '{}' failed - {}", store, err);
continue;
}
};
let store_config: DataStoreConfig = match serde_json::from_value(store_config) {
Ok(c) => c,
@ -377,218 +433,26 @@ async fn schedule_datastore_prune() {
continue;
}
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "prune";
let last = match jobstate::last_run_time(worker_type, &store) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
if check_schedule(worker_type, &event_str, &store) {
let job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let store2 = store.clone();
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(store.clone()),
Userid::backup_userid().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
let result = try_block!({
worker.log(format!("Starting datastore prune on store \"{}\"", store));
worker.log(format!("task triggered by schedule '{}'", event_str));
worker.log(format!("retention options: {}", prune_options.cli_options_string()));
let base_path = datastore.base_path();
let groups = BackupGroup::list_groups(&base_path)?;
for group in groups {
let list = group.list_backups(&base_path)?;
let mut prune_info = compute_prune_info(list, &prune_options)?;
prune_info.reverse(); // delete older snapshots first
worker.log(format!("Starting prune on store \"{}\" group \"{}/{}\"",
store, group.backup_type(), group.backup_id()));
for (info, keep) in prune_info {
worker.log(format!(
"{} {}/{}/{}",
if keep { "keep" } else { "remove" },
group.backup_type(), group.backup_id(),
info.backup_dir.backup_time_string()));
if !keep {
datastore.remove_backup_dir(&info.backup_dir, true)?;
}
}
}
Ok(())
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
result
}
) {
eprintln!("unable to start datastore prune on store {} - {}", store2, err);
}
}
}
async fn schedule_datastore_verification() {
use proxmox_backup::backup::{DataStore, verify_all_backups};
use proxmox_backup::server::{WorkerTask};
use proxmox_backup::config::{
jobstate::{self, Job},
datastore::{self, DataStoreConfig}
};
use proxmox_backup::tools::systemd::time::{
parse_calendar_event, compute_next_event};
let config = match datastore::config() {
Err(err) => {
eprintln!("unable to read datastore config - {}", err);
return;
}
Ok((config, _digest)) => config,
};
for (store, (_, store_config)) in config.sections {
let datastore = match DataStore::lookup_datastore(&store) {
Ok(datastore) => datastore,
Err(err) => {
eprintln!("lookup_datastore failed - {}", err);
continue;
let auth_id = Authid::backup_auth_id().clone();
if let Err(err) = do_prune_job(job, prune_options, store.clone(), &auth_id, Some(event_str)) {
eprintln!("unable to start datastore prune job {} - {}", &store, err);
}
};
let store_config: DataStoreConfig = match serde_json::from_value(store_config) {
Ok(c) => c,
Err(err) => {
eprintln!("datastore config from_value failed - {}", err);
continue;
}
};
let event_str = match store_config.verify_schedule {
Some(event_str) => event_str,
None => continue,
};
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "verify";
let last = match jobstate::last_run_time(worker_type, &store) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let worker_id = store.clone();
let store2 = store.clone();
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(worker_id),
Userid::backup_userid().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting verification on store {}", store2));
worker.log(format!("task triggered by schedule '{}'", event_str));
let result = try_block!({
let failed_dirs =
verify_all_backups(datastore, worker.clone(), worker.upid())?;
if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:");
for dir in failed_dirs {
worker.log(format!("\t{}", dir));
}
Err(format_err!("verification failed - please check the log for details"))
} else {
Ok(())
}
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
result
},
) {
eprintln!("unable to start verification on store {} - {}", store, err);
}
}
}
async fn schedule_datastore_sync_jobs() {
use proxmox_backup::{
config::{ sync::{self, SyncJobConfig}, jobstate::{self, Job} },
tools::systemd::time::{ parse_calendar_event, compute_next_event },
use proxmox_backup::config::sync::{
self,
SyncJobConfig,
};
let config = match sync::config() {
@ -613,94 +477,71 @@ async fn schedule_datastore_sync_jobs() {
None => continue,
};
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "syncjob";
let last = match jobstate::last_run_time(worker_type, &job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, job_id, err);
continue;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
if check_schedule(worker_type, &event_str, &job_id) {
let job = match Job::new(worker_type, &job_id) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let userid = Userid::backup_userid().clone();
if let Err(err) = do_sync_job(job, job_config, &userid, Some(event_str)) {
let auth_id = Authid::backup_auth_id().clone();
if let Err(err) = do_sync_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore sync job {} - {}", &job_id, err);
}
};
}
}
async fn schedule_datastore_verify_jobs() {
use proxmox_backup::config::verify::{
self,
VerificationJobConfig,
};
let config = match verify::config() {
Err(err) => {
eprintln!("unable to read verification job config - {}", err);
return;
}
Ok((config, _digest)) => config,
};
for (job_id, (_, job_config)) in config.sections {
let job_config: VerificationJobConfig = match serde_json::from_value(job_config) {
Ok(c) => c,
Err(err) => {
eprintln!("verification job config from_value failed - {}", err);
continue;
}
};
let event_str = match job_config.schedule {
Some(ref event_str) => event_str.clone(),
None => continue,
};
let worker_type = "verificationjob";
let auth_id = Authid::backup_auth_id().clone();
if check_schedule(worker_type, &event_str, &job_id) {
let job = match Job::new(&worker_type, &job_id) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
if let Err(err) = do_verification_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore verification job {} - {}", &job_id, err);
}
};
}
}
async fn schedule_task_log_rotate() {
use proxmox_backup::{
config::jobstate::{self, Job},
server::rotate_task_log_archive,
};
use proxmox_backup::server::WorkerTask;
use proxmox_backup::tools::systemd::time::{
parse_calendar_event, compute_next_event};
let worker_type = "logrotate";
let job_id = "task-archive";
let last = match jobstate::last_run_time(worker_type, job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of task log archive rotation: {}", err);
return;
}
};
let job_id = "access-log_and_task-archive";
// schedule daily at 00:00 like normal logrotate
let schedule = "00:00";
let event = match parse_calendar_event(schedule) {
Ok(event) => event,
Err(err) => {
// should not happen?
eprintln!("unable to parse schedule '{}' - {}", schedule, err);
return;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => return,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", schedule, err);
return;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now {
if !check_schedule(worker_type, schedule, job_id) {
// if we never ran the rotation, schedule instantly
match jobstate::JobState::load(worker_type, job_id) {
Ok(state) => match state {
@ -718,16 +559,16 @@ async fn schedule_task_log_rotate() {
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(job_id.to_string()),
Userid::backup_userid().clone(),
None,
Authid::backup_auth_id().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting task log rotation"));
// one entry has normally about ~100-150 bytes
let max_size = 500000; // at least 5000 entries
let max_files = 20; // at least 100000 entries
let result = try_block!({
let max_size = 512 * 1024 - 1; // an entry has ~ 100b, so > 5000 entries/file
let max_files = 20; // times twenty files gives > 100000 task entries
let has_rotated = rotate_task_log_archive(max_size, true, Some(max_files))?;
if has_rotated {
worker.log(format!("task log archive was rotated"));
@ -735,6 +576,28 @@ async fn schedule_task_log_rotate() {
worker.log(format!("task log archive was not rotated"));
}
let max_size = 32 * 1024 * 1024 - 1;
let max_files = 14;
let mut logrotate = LogRotate::new(buildcfg::API_ACCESS_LOG_FN, true)
.ok_or_else(|| format_err!("could not get API access log file names"))?;
if logrotate.rotate(max_size, None, Some(max_files))? {
println!("rotated access log, telling daemons to re-open log file");
proxmox_backup::tools::runtime::block_on(command_reopen_logfiles())?;
worker.log(format!("API access log was rotated"));
} else {
worker.log(format!("API access log was not rotated"));
}
let mut logrotate = LogRotate::new(buildcfg::API_AUTH_LOG_FN, true)
.ok_or_else(|| format_err!("could not get API auth log file names"))?;
if logrotate.rotate(max_size, None, Some(max_files))? {
worker.log(format!("API access log was rotated"));
} else {
worker.log(format!("API access log was not rotated"));
}
Ok(())
});
@ -752,6 +615,28 @@ async fn schedule_task_log_rotate() {
}
async fn command_reopen_logfiles() -> Result<(), Error> {
// only care about the most recent daemon instance for each of proxy & api, as older ones
// should not respond to new requests anyway; they only finish their current ones and then exit.
let sock = server::our_ctrl_sock();
let f1 = server::send_command(sock, serde_json::json!({
"command": "api-access-log-reopen",
}));
let pid = server::read_pid(buildcfg::PROXMOX_BACKUP_API_PID_FN)?;
let sock = server::ctrl_sock_from_pid(pid);
let f2 = server::send_command(sock, serde_json::json!({
"command": "api-access-log-reopen",
}));
match futures::join!(f1, f2) {
(Err(e1), Err(e2)) => Err(format_err!("reopen commands failed, proxy: {}; api: {}", e1, e2)),
(Err(e1), Ok(_)) => Err(format_err!("reopen commands failed, proxy: {}", e1)),
(Ok(_), Err(e2)) => Err(format_err!("reopen commands failed, api: {}", e2)),
_ => Ok(()),
}
}
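
futures::join! (rather than try_join!) is the right combinator here: both send futures are driven to completion, so a failure to reach one daemon never cancels the command to the other. A reduced illustration of that difference, using anyhow::Error as the code above does:

use anyhow::{format_err, Error};

async fn reopen_both() -> Result<(), Error> {
    let f1 = async { Ok::<(), Error>(()) };                         // proxy reachable
    let f2 = async { Err::<(), _>(format_err!("api: no socket")) }; // api daemon down
    match futures::join!(f1, f2) {
        (Ok(_), Ok(_)) => Ok(()),
        (r1, r2) => Err(format_err!("proxy: {:?}, api: {:?}", r1.err(), r2.err())),
    }
}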
async fn run_stat_generator() {
let mut count = 0;
@ -864,6 +749,36 @@ async fn generate_host_stats(save: bool) {
});
}
fn check_schedule(worker_type: &str, event_str: &str, id: &str) -> bool {
let event = match parse_calendar_event(event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
return false;
}
};
let last = match jobstate::last_run_time(worker_type, &id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, id, err);
return false;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => return false,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
return false;
}
};
let now = proxmox::tools::time::epoch_i64();
next <= now
}
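
With the three early returns collapsed into one helper, every scheduler loop reduces to the same shape. Hypothetical usage, with "mystore" standing in for a configured datastore name:

// true once the next occurrence computed from the job's last run time
// lies in the past; any parse or jobstate error is logged and yields false.
if check_schedule("garbage_collection", "daily", "mystore") {
    // take the job lock and spawn the worker, as in the callers above
}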
fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, rrd_prefix: &str, save: bool) {
match proxmox_backup::tools::disks::disk_usage(path) {

View File

@ -0,0 +1,73 @@
use anyhow::Error;
use serde_json::{json, Value};
use proxmox::api::{cli::*, RpcEnvironment, ApiHandler};
use proxmox_backup::api2;
use proxmox_backup::tools::subscription;
async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {
let upid: proxmox_backup::server::UPID = upid_str.parse()?;
let sleep_duration = core::time::Duration::new(0, 100_000_000);
loop {
if !proxmox_backup::server::worker_is_active_local(&upid) {
break;
}
tokio::time::delay_for(sleep_duration).await;
}
Ok(())
}
/// Daily update
async fn do_update(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let param = json!({});
let method = &api2::node::subscription::API_METHOD_CHECK_SUBSCRIPTION;
let _res = match method.handler {
ApiHandler::Sync(handler) => (handler)(param, method, rpcenv)?,
_ => unreachable!(),
};
let notify = match subscription::read_subscription() {
Ok(Some(subscription)) => subscription.status == subscription::SubscriptionStatus::ACTIVE,
Ok(None) => false,
Err(err) => {
eprintln!("Error reading subscription - {}", err);
false
},
};
let param = json!({
"notify": notify,
});
let method = &api2::node::apt::API_METHOD_APT_UPDATE_DATABASE;
let upid = match method.handler {
ApiHandler::Sync(handler) => (handler)(param, method, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(upid.as_str().unwrap()).await?;
// TODO: certificate checks/renewal/... ?
// TODO: cleanup tasks like in PVE?
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
let mut rpcenv = CliEnvironment::new();
rpcenv.set_auth_id(Some(String::from("root@pam")));
match proxmox_backup::tools::runtime::main(do_update(&mut rpcenv)) {
Err(err) => {
eprintln!("error during update: {}", err);
std::process::exit(1);
},
_ => (),
}
}

View File

@ -225,7 +225,7 @@ async fn test_upload_speed(
let backup_time = proxmox::tools::time::epoch_i64();
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
record_repository(&repo);
if verbose { eprintln!("Connecting to backup server"); }

View File

@ -79,7 +79,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
}
};
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = BackupReader::start(
client,
@ -153,7 +153,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
/// Shell to interactively inspect and restore snapshots.
async fn catalog_shell(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = tools::required_string_param(&param, "snapshot")?;
let archive_name = tools::required_string_param(&param, "archive-name")?;

View File

@ -414,13 +414,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("</p>");
let data = data.join("\n");
let qr_code = generate_qr_code("png", data.as_bytes())?;
let qr_code = generate_qr_code("svg", data.as_bytes())?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
println!("<img");
println!("width=\"{}pt\" height=\"{}pt\"", img_size_pt, img_size_pt);
println!("src=\"data:image/png;base64,{}\"/>", qr_code);
println!("src=\"data:image/svg+xml;base64,{}\"/>", qr_code);
println!("</center>");
println!("</div>");
}
@ -447,13 +447,13 @@ fn paperkey_html(data: &str, subject: Option<String>) -> Result<(), Error> {
println!("</p>");
let qr_code = generate_qr_code("png", key_text.as_bytes())?;
let qr_code = generate_qr_code("svg", key_text.as_bytes())?;
let qr_code = base64::encode_config(&qr_code, base64::STANDARD_NO_PAD);
println!("<center>");
println!("<img");
println!("width=\"{}pt\" height=\"{}pt\"", img_size_pt, img_size_pt);
println!("src=\"data:image/png;base64,{}\"/>", qr_code);
println!("src=\"data:image/svg+xml;base64,{}\"/>", qr_code);
println!("</center>");
println!("</div>");

View File

@ -144,7 +144,7 @@ fn mount(
// Process should be daemonized.
// Make sure to fork before the async runtime is instantiated to avoid troubles.
let pipe = pipe()?;
match fork() {
match unsafe { fork() } {
Ok(ForkResult::Parent { .. }) => {
nix::unistd::close(pipe.1).unwrap();
// Blocks the parent process until we are ready to go in the child
@ -163,7 +163,7 @@ fn mount(
async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let target = param["target"].as_str();

View File

@ -48,7 +48,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false);
@ -57,7 +57,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
"running": running,
"start": 0,
"limit": limit,
"userfilter": repo.user(),
"userfilter": repo.auth_id(),
"store": repo.store(),
});
@ -96,7 +96,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid = tools::required_string_param(&param, "upid")?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
display_task_log(client, upid, true).await?;
@ -122,7 +122,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid_str = tools::required_string_param(&param, "upid")?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let _ = client.delete(&path, None).await?;

View File

@ -60,7 +60,7 @@ pub fn acl_commands() -> CommandLineInterface {
"update",
CliCommand::new(&api2::access::acl::API_METHOD_UPDATE_ACL)
.arg_param(&["path", "role"])
.completion_cb("userid", config::user::complete_user_name)
.completion_cb("auth-id", config::user::complete_authid)
.completion_cb("path", config::datastore::complete_acl_path)
);

View File

@ -14,5 +14,7 @@ mod sync;
pub use sync::*;
mod user;
pub use user::*;
mod subscription;
pub use subscription::*;
mod disk;
pub use disk::*;

View File

@ -0,0 +1,55 @@
use anyhow::Error;
use serde_json::Value;
use proxmox::api::{api, cli::*, RpcEnvironment, ApiHandler};
use proxmox_backup::api2;
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Read subscription info.
fn get(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let info = &api2::node::subscription::API_METHOD_GET_SUBSCRIPTION;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options();
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(Value::Null)
}
pub fn subscription_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
.insert("get", CliCommand::new(&API_METHOD_GET))
.insert("set",
CliCommand::new(&api2::node::subscription::API_METHOD_SET_SUBSCRIPTION)
.fixed_param("node", "localhost".into())
.arg_param(&["key"])
)
.insert("update",
CliCommand::new(&api2::node::subscription::API_METHOD_CHECK_SUBSCRIPTION)
.fixed_param("node", "localhost".into())
)
.insert("remove",
CliCommand::new(&api2::node::subscription::API_METHOD_DELETE_SUBSCRIPTION)
.fixed_param("node", "localhost".into())
)
;
cmd_def.into()
}

View File

@ -1,11 +1,14 @@
use anyhow::Error;
use serde_json::Value;
use std::collections::HashMap;
use proxmox::api::{api, cli::*, RpcEnvironment, ApiHandler};
use proxmox_backup::config;
use proxmox_backup::tools;
use proxmox_backup::api2;
use proxmox_backup::api2::types::{ACL_PATH_SCHEMA, Authid, Userid};
#[api(
input: {
@ -48,6 +51,106 @@ fn list_users(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Er
Ok(Value::Null)
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
userid: {
type: Userid,
}
}
}
)]
/// List tokens associated with user.
fn list_tokens(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let info = &api2::access::user::API_METHOD_LIST_TOKENS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("tokenid"))
.column(
ColumnConfig::new("enable")
.renderer(tools::format::render_bool_with_default_true)
)
.column(
ColumnConfig::new("expire")
.renderer(tools::format::render_epoch)
)
.column(ColumnConfig::new("comment"));
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
"auth-id": {
type: Authid,
},
path: {
schema: ACL_PATH_SCHEMA,
optional: true,
},
}
}
)]
/// List permissions of user/token.
fn list_permissions(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let info = &api2::access::API_METHOD_LIST_PERMISSIONS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
if output_format == "text" {
println!("Privileges with (*) have the propagate flag set\n");
let data:HashMap<String, HashMap<String, bool>> = serde_json::from_value(data)?;
let mut paths:Vec<String> = data.keys().cloned().collect();
paths.sort_unstable();
for path in paths {
println!("Path: {}", path);
let priv_map = data.get(&path).unwrap();
let mut privs:Vec<String> = priv_map.keys().cloned().collect();
if privs.is_empty() {
println!("- NoAccess");
} else {
privs.sort_unstable();
for privilege in privs {
if *priv_map.get(&privilege).unwrap() {
println!("- {} (*)", privilege);
} else {
println!("- {}", privilege);
}
}
}
}
} else {
format_and_print_result(&mut data, &output_format);
}
Ok(Value::Null)
}
pub fn user_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
@ -62,13 +165,39 @@ pub fn user_commands() -> CommandLineInterface {
"update",
CliCommand::new(&api2::access::user::API_METHOD_UPDATE_USER)
.arg_param(&["userid"])
.completion_cb("userid", config::user::complete_user_name)
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"remove",
CliCommand::new(&api2::access::user::API_METHOD_DELETE_USER)
.arg_param(&["userid"])
.completion_cb("userid", config::user::complete_user_name)
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"list-tokens",
CliCommand::new(&&API_METHOD_LIST_TOKENS)
.arg_param(&["userid"])
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"generate-token",
CliCommand::new(&api2::access::user::API_METHOD_GENERATE_TOKEN)
.arg_param(&["userid", "tokenname"])
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"delete-token",
CliCommand::new(&api2::access::user::API_METHOD_DELETE_TOKEN)
.arg_param(&["userid", "tokenname"])
.completion_cb("userid", config::user::complete_userid)
.completion_cb("tokenname", config::user::complete_token_name)
)
.insert(
"permissions",
CliCommand::new(&&API_METHOD_LIST_PERMISSIONS)
.arg_param(&["auth-id"])
.completion_cb("auth-id", config::user::complete_authid)
.completion_cb("path", config::datastore::complete_acl_path)
);
cmd_def.into()

View File

@ -4,6 +4,32 @@
pub const CONFIGDIR: &str = "/etc/proxmox-backup";
pub const JS_DIR: &str = "/usr/share/javascript/proxmox-backup";
#[macro_export]
macro_rules! PROXMOX_BACKUP_RUN_DIR_M { () => ("/run/proxmox-backup") }
#[macro_export]
macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
/// namespaced directory for in-memory (tmpfs) run state
pub const PROXMOX_BACKUP_RUN_DIR: &str = PROXMOX_BACKUP_RUN_DIR_M!();
/// namespaced directory for persistent logging
pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
/// logfile for all API requests handled by the proxy and privileged API daemons. Note that not all
/// failed logins can be logged here with full information, use the auth log for that.
pub const API_ACCESS_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/access.log");
/// logfile for any failed authentication, via ticket or via token, and new successful ticket
/// creations. This file can be useful for fail2ban.
pub const API_AUTH_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/auth.log");
/// the PID filename for the unprivileged proxy daemon
pub const PROXMOX_BACKUP_PROXY_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/proxy.pid");
/// the PID filename for the privileged api daemon
pub const PROXMOX_BACKUP_API_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/api.pid");
/// Prepend configuration directory to a file name
///
/// This is a simple way to get the full path for configuration files.
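
(The helper itself is truncated by the diff context here; a minimal sketch of what such a function plausibly looks like, built on the CONFIGDIR constant defined above - illustrative, not the actual implementation:)

pub fn configdir_join(rel: &str) -> String {
    // e.g. configdir_join("user.cfg") -> "/etc/proxmox-backup/user.cfg"
    format!("{}/{}", CONFIGDIR, rel)
}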

View File

@ -16,7 +16,7 @@ pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_RE
#[derive(Debug)]
pub struct BackupRepository {
/// The user name used for Authentication
user: Option<Userid>,
auth_id: Option<Authid>,
/// The host name or IP address
host: Option<String>,
/// The port
@ -27,20 +27,29 @@ pub struct BackupRepository {
impl BackupRepository {
pub fn new(user: Option<Userid>, host: Option<String>, port: Option<u16>, store: String) -> Self {
pub fn new(auth_id: Option<Authid>, host: Option<String>, port: Option<u16>, store: String) -> Self {
let host = match host {
Some(host) if (IP_V6_REGEX.regex_obj)().is_match(&host) => {
Some(format!("[{}]", host))
},
other => other,
};
Self { user, host, port, store }
Self { auth_id, host, port, store }
}
pub fn auth_id(&self) -> &Authid {
if let Some(ref auth_id) = self.auth_id {
return auth_id;
}
&Authid::root_auth_id()
}
pub fn user(&self) -> &Userid {
if let Some(ref user) = self.user {
return &user;
if let Some(auth_id) = &self.auth_id {
return auth_id.user();
}
Userid::root_userid()
}
@ -65,8 +74,8 @@ impl BackupRepository {
impl fmt::Display for BackupRepository {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match (&self.user, &self.host, self.port) {
(Some(user), _, _) => write!(f, "{}@{}:{}:{}", user, self.host(), self.port(), self.store),
match (&self.auth_id, &self.host, self.port) {
(Some(auth_id), _, _) => write!(f, "{}@{}:{}:{}", auth_id, self.host(), self.port(), self.store),
(None, Some(host), None) => write!(f, "{}:{}", host, self.store),
(None, _, Some(port)) => write!(f, "{}:{}:{}", self.host(), port, self.store),
(None, None, None) => write!(f, "{}", self.store),
@ -88,7 +97,7 @@ impl std::str::FromStr for BackupRepository {
.ok_or_else(|| format_err!("unable to parse repository url '{}'", url))?;
Ok(Self {
user: cap.get(1).map(|m| Userid::try_from(m.as_str().to_owned())).transpose()?,
auth_id: cap.get(1).map(|m| Authid::try_from(m.as_str().to_owned())).transpose()?,
host: cap.get(2).map(|m| m.as_str().to_owned()),
port: cap.get(3).map(|m| m.as_str().parse::<u16>()).transpose()?,
store: cap[4].to_owned(),
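
Taken together, the constructor, accessors, Display and FromStr keep the repository string round-trippable in its usual [auth_id@]host[:port]:store shape. A hedged sketch of the accepted forms (the exact grammar lives in BACKUP_REPO_REGEX, which this hunk does not show):

// All of these should parse into a BackupRepository:
//   "store1"                                   store only
//   "backup.example.com:store1"                host + store
//   "user@pbs@backup.example.com:8007:store1"  auth_id + host + port + store
let repo: BackupRepository = "backup.example.com:store1".parse()?;
println!("{}", repo); // Display prints "backup.example.com:store1" again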

View File

@ -38,6 +38,9 @@ pub struct BackupStats {
pub csum: [u8; 32],
}
type UploadQueueSender = mpsc::Sender<(MergedChunkInfo, Option<h2::client::ResponseFuture>)>;
type UploadResultReceiver = oneshot::Receiver<Result<(), Error>>;
impl BackupWriter {
fn new(h2: H2Client, abort: AbortHandle, crypt_config: Option<Arc<CryptConfig>>, verbose: bool) -> Arc<Self> {
@ -262,7 +265,7 @@ impl BackupWriter {
let archive = if self.verbose {
archive_name.to_string()
} else {
crate::tools::format::strip_server_file_extension(archive_name.clone())
crate::tools::format::strip_server_file_extension(archive_name)
};
if archive_name != CATALOG_NAME {
let speed: HumanByte = ((uploaded * 1_000_000) / (duration.as_micros() as usize)).into();
@ -335,15 +338,15 @@ impl BackupWriter {
(verify_queue_tx, verify_result_rx)
}
fn append_chunk_queue(h2: H2Client, wid: u64, path: String, verbose: bool) -> (
mpsc::Sender<(MergedChunkInfo, Option<h2::client::ResponseFuture>)>,
oneshot::Receiver<Result<(), Error>>,
) {
fn append_chunk_queue(
h2: H2Client,
wid: u64,
path: String,
verbose: bool,
) -> (UploadQueueSender, UploadResultReceiver) {
let (verify_queue_tx, verify_queue_rx) = mpsc::channel(64);
let (verify_result_tx, verify_result_rx) = oneshot::channel();
let h2_2 = h2.clone();
// FIXME: async-block-ify this code!
tokio::spawn(
verify_queue_rx
@ -381,7 +384,7 @@ impl BackupWriter {
let request = H2Client::request_builder("localhost", "PUT", &path, None, Some("application/json")).unwrap();
let param_data = bytes::Bytes::from(param.to_string().into_bytes());
let upload_data = Some(param_data);
h2_2.send_request(request, upload_data)
h2.send_request(request, upload_data)
.and_then(move |response| {
response
.map_err(Error::from)
@ -489,6 +492,10 @@ impl BackupWriter {
Ok(manifest)
}
// We have no `self` here for `h2` and `verbose`, and the only other arg in common with one
// other function in the same path is `wid`, so those three could be put into a struct, but
// there's no real use since this is a private method.
#[allow(clippy::too_many_arguments)]
fn upload_chunk_info_stream(
h2: H2Client,
wid: u64,
@ -515,7 +522,7 @@ impl BackupWriter {
let is_fixed_chunk_size = prefix == "fixed";
let (upload_queue, upload_result) =
Self::append_chunk_queue(h2.clone(), wid, append_chunk_path.to_owned(), verbose);
Self::append_chunk_queue(h2.clone(), wid, append_chunk_path, verbose);
let start_time = std::time::Instant::now();
@ -574,10 +581,12 @@ impl BackupWriter {
let digest = chunk_info.digest;
let digest_str = digest_to_hex(&digest);
if false && verbose { // too verbose, needs finer verbosity setting granularity
/* too verbose, needs finer verbosity setting granularity
if verbose {
println!("upload new chunk {} ({} bytes, offset {})", digest_str,
chunk_info.chunk_len, offset);
}
*/
let chunk_data = chunk_info.chunk.into_inner();
let param = json!({

View File

@ -1,5 +1,4 @@
use std::io::Write;
use std::task::{Context, Poll};
use std::sync::{Arc, Mutex, RwLock};
use std::time::Duration;
@ -18,19 +17,21 @@ use xdg::BaseDirectories;
use proxmox::{
api::error::HttpError,
sys::linux::tty,
tools::{
fs::{file_get_json, replace_file, CreateOptions},
}
tools::fs::{file_get_json, replace_file, CreateOptions},
};
use super::pipe_to_stream::PipeToSendStream;
use crate::api2::types::Userid;
use crate::tools::async_io::EitherStream;
use crate::tools::{self, BroadcastFuture, DEFAULT_ENCODE_SET};
use crate::api2::types::{Authid, Userid};
use crate::tools::{
self,
BroadcastFuture,
DEFAULT_ENCODE_SET,
http::HttpsConnector,
};
#[derive(Clone)]
pub struct AuthInfo {
pub userid: Userid,
pub auth_id: Authid,
pub ticket: String,
pub token: String,
}
@ -101,7 +102,7 @@ pub struct HttpClient {
server: String,
port: u16,
fingerprint: Arc<Mutex<Option<String>>>,
first_auth: BroadcastFuture<()>,
first_auth: Option<BroadcastFuture<()>>,
auth: Arc<RwLock<AuthInfo>>,
ticket_abort: futures::future::AbortHandle,
_options: HttpClientOptions,
@ -181,12 +182,10 @@ fn load_fingerprint(prefix: &str, server: &str) -> Option<String> {
for line in raw.split('\n') {
let items: Vec<String> = line.split_whitespace().map(String::from).collect();
if items.len() == 2 {
if &items[0] == server {
if items.len() == 2 && &items[0] == server {
return Some(items[1].clone());
}
}
}
None
}
@ -212,11 +211,11 @@ fn store_ticket_info(prefix: &str, server: &str, username: &str, ticket: &str, t
let empty = serde_json::map::Map::new();
for (server, info) in data.as_object().unwrap_or(&empty) {
for (_user, uinfo) in info.as_object().unwrap_or(&empty) {
for (user, uinfo) in info.as_object().unwrap_or(&empty) {
if let Some(timestamp) = uinfo["timestamp"].as_i64() {
let age = now - timestamp;
if age < ticket_lifetime {
new_data[server][username] = uinfo.clone();
new_data[server][user] = uinfo.clone();
}
}
}
@ -252,7 +251,7 @@ impl HttpClient {
pub fn new(
server: &str,
port: u16,
userid: &Userid,
auth_id: &Authid,
mut options: HttpClientOptions,
) -> Result<Self, Error> {
@ -294,10 +293,11 @@ impl HttpClient {
ssl_connector_builder.set_verify(openssl::ssl::SslVerifyMode::NONE);
}
let mut httpc = hyper::client::HttpConnector::new();
let mut httpc = HttpConnector::new();
httpc.set_nodelay(true); // important for h2 download performance!
httpc.enforce_http(false); // we want https...
httpc.set_connect_timeout(Some(std::time::Duration::new(10, 0)));
let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());
let client = Client::builder()
@ -311,6 +311,11 @@ impl HttpClient {
let password = if let Some(password) = password {
password
} else {
let userid = if auth_id.is_token() {
bail!("API token secret must be provided!");
} else {
auth_id.user()
};
let mut ticket_info = None;
if use_ticket_cache {
ticket_info = load_ticket_info(options.prefix.as_ref().unwrap(), server, userid);
@ -323,7 +328,7 @@ impl HttpClient {
};
let auth = Arc::new(RwLock::new(AuthInfo {
userid: userid.clone(),
auth_id: auth_id.clone(),
ticket: password.clone(),
token: "".to_string(),
}));
@ -336,14 +341,14 @@ impl HttpClient {
let renewal_future = async move {
loop {
tokio::time::delay_for(Duration::new(60*15, 0)).await; // 15 minutes
let (userid, ticket) = {
let (auth_id, ticket) = {
let authinfo = auth2.read().unwrap().clone();
(authinfo.userid, authinfo.ticket)
(authinfo.auth_id, authinfo.ticket)
};
match Self::credentials(client2.clone(), server2.clone(), port, userid, ticket).await {
match Self::credentials(client2.clone(), server2.clone(), port, auth_id.user().clone(), ticket).await {
Ok(auth) => {
if use_ticket_cache && prefix2.is_some() {
let _ = store_ticket_info(prefix2.as_ref().unwrap(), &server2, &auth.userid.to_string(), &auth.ticket, &auth.token);
let _ = store_ticket_info(prefix2.as_ref().unwrap(), &server2, &auth.auth_id.to_string(), &auth.ticket, &auth.token);
}
*auth2.write().unwrap() = auth;
},
@ -361,7 +366,7 @@ impl HttpClient {
client.clone(),
server.to_owned(),
port,
userid.to_owned(),
auth_id.user().clone(),
password.to_owned(),
).map_ok({
let server = server.to_string();
@ -370,13 +375,20 @@ impl HttpClient {
move |auth| {
if use_ticket_cache && prefix.is_some() {
let _ = store_ticket_info(prefix.as_ref().unwrap(), &server, &auth.userid.to_string(), &auth.ticket, &auth.token);
let _ = store_ticket_info(prefix.as_ref().unwrap(), &server, &auth.auth_id.to_string(), &auth.ticket, &auth.token);
}
*authinfo.write().unwrap() = auth;
tokio::spawn(renewal_future);
}
});
let first_auth = if auth_id.is_token() {
// TODO check access here?
None
} else {
Some(BroadcastFuture::new(Box::new(login_future)))
};
Ok(Self {
client,
server: String::from(server),
@ -384,7 +396,7 @@ impl HttpClient {
fingerprint: verified_fingerprint,
auth,
ticket_abort,
first_auth: BroadcastFuture::new(Box::new(login_future)),
first_auth,
_options: options,
})
}
@ -393,8 +405,14 @@ impl HttpClient {
///
/// Login is done on demand, so this is only required if you need
/// access to authentication data in 'AuthInfo'.
///
/// Note: tickets are periodically renewed, so one can use this
/// to query the changed ticket.
pub async fn login(&self) -> Result<AuthInfo, Error> {
self.first_auth.listen().await?;
if let Some(future) = &self.first_auth {
future.listen().await?;
}
let authinfo = self.auth.read().unwrap();
Ok(authinfo.clone())
}
@ -477,10 +495,14 @@ impl HttpClient {
let client = self.client.clone();
let auth = self.login().await?;
if auth.auth_id.is_token() {
let enc_api_token = format!("PBSAPIToken {}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Authorization", HeaderValue::from_str(&enc_api_token).unwrap());
} else {
let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Cookie", HeaderValue::from_str(&enc_ticket).unwrap());
req.headers_mut().insert("CSRFPreventionToken", HeaderValue::from_str(&auth.token).unwrap());
}
Self::api_request(client, req).await
}
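For reference, the two authentication schemes branched on above differ only in
the headers they produce. A minimal sketch of that mapping (the helper below is
hypothetical, not part of the patch; the real code additionally percent-encodes
the secret with DEFAULT_ENCODE_SET):

// Hypothetical helper illustrating the header layout used above.
fn auth_headers(is_token: bool, auth_id: &str, secret: &str, csrf: &str) -> Vec<(&'static str, String)> {
    if is_token {
        // API tokens: a single Authorization header, no CSRF token needed.
        vec![("Authorization", format!("PBSAPIToken {}:{}", auth_id, secret))]
    } else {
        // Ticket logins: auth cookie plus CSRF prevention token.
        vec![
            ("Cookie", format!("PBSAuthCookie={}", secret)),
            ("CSRFPreventionToken", csrf.to_string()),
        ]
    }
}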
@ -579,11 +601,18 @@ impl HttpClient {
protocol_name: String,
) -> Result<(H2Client, futures::future::AbortHandle), Error> {
let auth = self.login().await?;
let client = self.client.clone();
let auth = self.login().await?;
if auth.auth_id.is_token() {
let enc_api_token = format!("PBSAPIToken {}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Authorization", HeaderValue::from_str(&enc_api_token).unwrap());
} else {
let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Cookie", HeaderValue::from_str(&enc_ticket).unwrap());
req.headers_mut().insert("CSRFPreventionToken", HeaderValue::from_str(&auth.token).unwrap());
}
req.headers_mut().insert("UPGRADE", HeaderValue::from_str(&protocol_name).unwrap());
let resp = client.request(req).await?;
@ -609,7 +638,7 @@ impl HttpClient {
.await?;
let connection = connection
.map_err(|_| panic!("HTTP/2.0 connection failed"));
.map_err(|_| eprintln!("HTTP/2.0 connection failed"));
let (connection, abort) = futures::future::abortable(connection);
// A cancellable future returns an Option which is None when cancelled and
@ -636,7 +665,7 @@ impl HttpClient {
let req = Self::request_builder(&server, port, "POST", "/api2/json/access/ticket", Some(data))?;
let cred = Self::api_request(client, req).await?;
let auth = AuthInfo {
userid: cred["data"]["username"].as_str().unwrap().parse()?,
auth_id: cred["data"]["username"].as_str().unwrap().parse()?,
ticket: cred["data"]["ticket"].as_str().unwrap().to_owned(),
token: cred["data"]["CSRFPreventionToken"].as_str().unwrap().to_owned(),
};
@ -921,61 +950,3 @@ impl H2Client {
}
}
}
#[derive(Clone)]
pub struct HttpsConnector {
http: HttpConnector,
ssl_connector: std::sync::Arc<SslConnector>,
}
impl HttpsConnector {
pub fn with_connector(mut http: HttpConnector, ssl_connector: SslConnector) -> Self {
http.enforce_http(false);
Self {
http,
ssl_connector: std::sync::Arc::new(ssl_connector),
}
}
}
type MaybeTlsStream = EitherStream<
tokio::net::TcpStream,
tokio_openssl::SslStream<tokio::net::TcpStream>,
>;
impl hyper::service::Service<Uri> for HttpsConnector {
type Response = MaybeTlsStream;
type Error = Error;
type Future = std::pin::Pin<Box<
dyn Future<Output = Result<Self::Response, Self::Error>> + Send + 'static
>>;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
// This connector is always ready, but others might not be.
Poll::Ready(Ok(()))
}
fn call(&mut self, dst: Uri) -> Self::Future {
let mut this = self.clone();
async move {
let is_https = dst
.scheme()
.ok_or_else(|| format_err!("missing URL scheme"))?
== "https";
let host = dst
.host()
.ok_or_else(|| format_err!("missing hostname in destination url?"))?
.to_string();
let config = this.ssl_connector.configure();
let conn = this.http.call(dst).await?;
if is_https {
let conn = tokio_openssl::connect(config?, &host, conn).await?;
Ok(MaybeTlsStream::Right(conn))
} else {
Ok(MaybeTlsStream::Left(conn))
}
}.boxed()
}
}

View File

@ -103,7 +103,7 @@ async fn pull_index_chunks<I: IndexFile>(
let bytes = bytes.load(Ordering::SeqCst);
worker.log(format!("downloaded {} bytes ({} MiB/s)", bytes, (bytes as f64)/(1024.0*1024.0*elapsed)));
worker.log(format!("downloaded {} bytes ({:.2} MiB/s)", bytes, (bytes as f64)/(1024.0*1024.0*elapsed)));
Ok(())
}
@ -410,7 +410,8 @@ pub async fn pull_group(
list.sort_unstable_by(|a, b| a.backup_time.cmp(&b.backup_time));
let auth_info = client.login().await?;
client.login().await?; // make sure auth is complete
let fingerprint = client.fingerprint();
let last_sync = tgt_store.last_successful_backup(group)?;
@ -447,11 +448,14 @@ pub async fn pull_group(
if last_sync_time > backup_time { continue; }
}
// get updated auth_info (new tickets)
let auth_info = client.login().await?;
let options = HttpClientOptions::new()
.password(Some(auth_info.ticket.clone()))
.fingerprint(fingerprint.clone());
let new_client = HttpClient::new(src_repo.host(), src_repo.port(), src_repo.user(), options)?;
let new_client = HttpClient::new(src_repo.host(), src_repo.port(), src_repo.auth_id(), options)?;
let reader = BackupReader::start(
new_client,
@ -491,7 +495,7 @@ pub async fn pull_store(
src_repo: &BackupRepository,
tgt_store: Arc<DataStore>,
delete: bool,
userid: Userid,
auth_id: Authid,
) -> Result<(), Error> {
// explicitly create a shared lock to prevent GC on newly created chunks
@ -524,16 +528,14 @@ pub async fn pull_store(
for (groups_done, item) in list.into_iter().enumerate() {
let group = BackupGroup::new(&item.backup_type, &item.backup_id);
let (owner, _lock_guard) = tgt_store.create_locked_backup_group(&group, &userid)?;
let (owner, _lock_guard) = tgt_store.create_locked_backup_group(&group, &auth_id)?;
// permission check
if userid != owner { // only the owner is allowed to create additional snapshots
if auth_id != owner { // only the owner is allowed to create additional snapshots
worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})",
item.backup_type, item.backup_id, userid, owner));
item.backup_type, item.backup_id, auth_id, owner));
errors = true; // do not stop here, instead continue
} else {
if let Err(err) = pull_group(
} else if let Err(err) = pull_group(
worker,
client,
src_repo,
@ -542,11 +544,15 @@ pub async fn pull_store(
delete,
Some((groups_done, group_count)),
).await {
worker.log(format!("sync group {}/{} failed - {}", item.backup_type, item.backup_id, err));
worker.log(format!(
"sync group {}/{} failed - {}",
item.backup_type,
item.backup_id,
err,
));
errors = true; // do not stop here, instead continue
}
}
}
if delete {
let result: Result<(), Error> = proxmox::try_block!({

View File

@ -43,8 +43,8 @@ pub async fn display_task_log(
} else {
break;
}
} else {
if lines != limit { bail!("got wrong number of lines from server ({} != {})", lines, limit); }
} else if lines != limit {
bail!("got wrong number of lines from server ({} != {})", lines, limit);
}
}

View File

@ -18,11 +18,12 @@ use crate::buildcfg;
pub mod acl;
pub mod cached_user_info;
pub mod datastore;
pub mod jobstate;
pub mod network;
pub mod remote;
pub mod sync;
pub mod token_shadow;
pub mod user;
pub mod verify;
/// Check configuration directory permissions
///

View File

@ -1,5 +1,5 @@
use std::io::Write;
use std::collections::{HashMap, HashSet, BTreeMap, BTreeSet};
use std::collections::{HashMap, BTreeMap, BTreeSet};
use std::path::{PathBuf, Path};
use std::sync::{Arc, RwLock};
use std::str::FromStr;
@ -15,7 +15,7 @@ use proxmox::tools::{fs::replace_file, fs::CreateOptions};
use proxmox::constnamedbitmap;
use proxmox::api::{api, schema::*};
use crate::api2::types::Userid;
use crate::api2::types::{Authid,Userid};
// define Privilege bitfield
@ -26,14 +26,23 @@ constnamedbitmap! {
PRIV_SYS_MODIFY("Sys.Modify");
PRIV_SYS_POWER_MANAGEMENT("Sys.PowerManagement");
/// Datastore.Audit allows knowing about a datastore,
/// including reading the configuration entry and listing its contents
PRIV_DATASTORE_AUDIT("Datastore.Audit");
/// Datastore.Allocate allows creating or deleting datastores
PRIV_DATASTORE_ALLOCATE("Datastore.Allocate");
/// Datastore.Modify allows modifying a datastore and its contents
PRIV_DATASTORE_MODIFY("Datastore.Modify");
/// Datastore.Read allows reading arbitrary backup contents
PRIV_DATASTORE_READ("Datastore.Read");
/// Allows verifying a datastore
PRIV_DATASTORE_VERIFY("Datastore.Verify");
/// Datastore.Backup also requires backup ownership
/// Datastore.Backup allows Datastore.Read|Verify and creating new snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_BACKUP("Datastore.Backup");
/// Datastore.Prune also requires backup ownership
/// Datastore.Prune allows deleting snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_PRUNE("Datastore.Prune");
PRIV_PERMISSIONS_MODIFY("Permissions.Modify");
@ -41,7 +50,6 @@ constnamedbitmap! {
PRIV_REMOTE_AUDIT("Remote.Audit");
PRIV_REMOTE_MODIFY("Remote.Modify");
PRIV_REMOTE_READ("Remote.Read");
PRIV_REMOTE_PRUNE("Remote.Prune");
PRIV_SYS_CONSOLE("Sys.Console");
}
@ -64,12 +72,14 @@ pub const ROLE_DATASTORE_ADMIN: u64 =
PRIV_DATASTORE_AUDIT |
PRIV_DATASTORE_MODIFY |
PRIV_DATASTORE_READ |
PRIV_DATASTORE_VERIFY |
PRIV_DATASTORE_BACKUP |
PRIV_DATASTORE_PRUNE;
/// Datastore.Reader can read datastore content an do restore
/// Datastore.Reader can read/verify datastore content and do restore
pub const ROLE_DATASTORE_READER: u64 =
PRIV_DATASTORE_AUDIT |
PRIV_DATASTORE_VERIFY |
PRIV_DATASTORE_READ;
/// Datastore.Backup can do backup and restore, but no prune.
@ -93,14 +103,12 @@ PRIV_REMOTE_AUDIT;
pub const ROLE_REMOTE_ADMIN: u64 =
PRIV_REMOTE_AUDIT |
PRIV_REMOTE_MODIFY |
PRIV_REMOTE_READ |
PRIV_REMOTE_PRUNE;
PRIV_REMOTE_READ;
/// Remote.SyncOperator can do read and prune on the remote.
pub const ROLE_REMOTE_SYNC_OPERATOR: u64 =
PRIV_REMOTE_AUDIT |
PRIV_REMOTE_READ |
PRIV_REMOTE_PRUNE;
PRIV_REMOTE_READ;
pub const ROLE_NAME_NO_ACCESS: &str = "NoAccess";
@ -231,7 +239,7 @@ pub struct AclTree {
}
pub struct AclTreeNode {
pub users: HashMap<Userid, HashMap<String, bool>>,
pub users: HashMap<Authid, HashMap<String, bool>>,
pub groups: HashMap<String, HashMap<String, bool>>,
pub children: BTreeMap<String, AclTreeNode>,
}
@ -246,43 +254,43 @@ impl AclTreeNode {
}
}
pub fn extract_roles(&self, user: &Userid, all: bool) -> HashSet<String> {
let user_roles = self.extract_user_roles(user, all);
if !user_roles.is_empty() {
pub fn extract_roles(&self, auth_id: &Authid, all: bool) -> HashMap<String, bool> {
let user_roles = self.extract_user_roles(auth_id, all);
if !user_roles.is_empty() || auth_id.is_token() {
// user privs always override group privs
return user_roles
};
self.extract_group_roles(user, all)
self.extract_group_roles(auth_id.user(), all)
}
pub fn extract_user_roles(&self, user: &Userid, all: bool) -> HashSet<String> {
pub fn extract_user_roles(&self, auth_id: &Authid, all: bool) -> HashMap<String, bool> {
let mut set = HashSet::new();
let mut map = HashMap::new();
let roles = match self.users.get(user) {
let roles = match self.users.get(auth_id) {
Some(m) => m,
None => return set,
None => return map,
};
for (role, propagate) in roles {
if *propagate || all {
if role == ROLE_NAME_NO_ACCESS {
// return a set with a single role 'NoAccess'
let mut set = HashSet::new();
set.insert(role.to_string());
return set;
// return a map with a single role 'NoAccess'
let mut map = HashMap::new();
map.insert(role.to_string(), false);
return map;
}
set.insert(role.to_string());
map.insert(role.to_string(), *propagate);
}
}
set
map
}
pub fn extract_group_roles(&self, _user: &Userid, all: bool) -> HashSet<String> {
pub fn extract_group_roles(&self, _user: &Userid, all: bool) -> HashMap<String, bool> {
let mut set = HashSet::new();
let mut map = HashMap::new();
for (_group, roles) in &self.groups {
let is_member = false; // fixme: check if user is member of the group
@ -291,17 +299,17 @@ impl AclTreeNode {
for (role, propagate) in roles {
if *propagate || all {
if role == ROLE_NAME_NO_ACCESS {
// return a set with a single role 'NoAccess'
let mut set = HashSet::new();
set.insert(role.to_string());
return set;
// return a map with a single role 'NoAccess'
let mut map = HashMap::new();
map.insert(role.to_string(), false);
return map;
}
set.insert(role.to_string());
map.insert(role.to_string(), *propagate);
}
}
}
set
map
}
pub fn delete_group_role(&mut self, group: &str, role: &str) {
@ -312,8 +320,8 @@ impl AclTreeNode {
roles.remove(role);
}
pub fn delete_user_role(&mut self, userid: &Userid, role: &str) {
let roles = match self.users.get_mut(userid) {
pub fn delete_user_role(&mut self, auth_id: &Authid, role: &str) {
let roles = match self.users.get_mut(auth_id) {
Some(r) => r,
None => return,
};
@ -331,8 +339,8 @@ impl AclTreeNode {
}
}
pub fn insert_user_role(&mut self, user: Userid, role: String, propagate: bool) {
let map = self.users.entry(user).or_insert_with(|| HashMap::new());
pub fn insert_user_role(&mut self, auth_id: Authid, role: String, propagate: bool) {
let map = self.users.entry(auth_id).or_insert_with(|| HashMap::new());
if role == ROLE_NAME_NO_ACCESS {
map.clear();
map.insert(role, propagate);
@ -346,7 +354,9 @@ impl AclTreeNode {
impl AclTree {
pub fn new() -> Self {
Self { root: AclTreeNode::new() }
Self {
root: AclTreeNode::new(),
}
}
pub fn find_node(&mut self, path: &str) -> Option<&mut AclTreeNode> {
@ -383,13 +393,13 @@ impl AclTree {
node.delete_group_role(group, role);
}
pub fn delete_user_role(&mut self, path: &str, userid: &Userid, role: &str) {
pub fn delete_user_role(&mut self, path: &str, auth_id: &Authid, role: &str) {
let path = split_acl_path(path);
let node = match self.get_node(&path) {
Some(n) => n,
None => return,
};
node.delete_user_role(userid, role);
node.delete_user_role(auth_id, role);
}
pub fn insert_group_role(&mut self, path: &str, group: &str, role: &str, propagate: bool) {
@ -398,10 +408,10 @@ impl AclTree {
node.insert_group_role(group.to_string(), role.to_string(), propagate);
}
pub fn insert_user_role(&mut self, path: &str, user: &Userid, role: &str, propagate: bool) {
pub fn insert_user_role(&mut self, path: &str, auth_id: &Authid, role: &str, propagate: bool) {
let path = split_acl_path(path);
let node = self.get_or_insert_node(&path);
node.insert_user_role(user.to_owned(), role.to_string(), propagate);
node.insert_user_role(auth_id.to_owned(), role.to_string(), propagate);
}
fn write_node_config(
@ -413,18 +423,18 @@ impl AclTree {
let mut role_ug_map0 = HashMap::new();
let mut role_ug_map1 = HashMap::new();
for (user, roles) in &node.users {
for (auth_id, roles) in &node.users {
// no need to save, because root is always 'Administrator'
if user == "root@pam" { continue; }
if !auth_id.is_token() && auth_id.user() == "root@pam" { continue; }
for (role, propagate) in roles {
let role = role.as_str();
let user = user.to_string();
let auth_id = auth_id.to_string();
if *propagate {
role_ug_map1.entry(role).or_insert_with(|| BTreeSet::new())
.insert(user);
.insert(auth_id);
} else {
role_ug_map0.entry(role).or_insert_with(|| BTreeSet::new())
.insert(user);
.insert(auth_id);
}
}
}
@ -512,7 +522,8 @@ impl AclTree {
bail!("expected '0' or '1' for propagate flag.");
};
let path = split_acl_path(items[2]);
let path_str = items[2];
let path = split_acl_path(path_str);
let node = self.get_or_insert_node(&path);
let uglist: Vec<&str> = items[3].split(',').map(|v| v.trim()).collect();
@ -576,25 +587,26 @@ impl AclTree {
Ok(tree)
}
pub fn roles(&self, userid: &Userid, path: &[&str]) -> HashSet<String> {
pub fn roles(&self, auth_id: &Authid, path: &[&str]) -> HashMap<String, bool> {
let mut node = &self.root;
let mut role_set = node.extract_roles(userid, path.is_empty());
let mut role_map = node.extract_roles(auth_id, path.is_empty());
for (pos, comp) in path.iter().enumerate() {
let last_comp = (pos + 1) == path.len();
node = match node.children.get(*comp) {
Some(n) => n,
None => return role_set, // path not found
None => return role_map, // path not found
};
let new_set = node.extract_roles(userid, last_comp);
if !new_set.is_empty() {
// overwrite previous settings
role_set = new_set;
let new_map = node.extract_roles(auth_id, last_comp);
if !new_map.is_empty() {
// overwrite previous mappings
role_map = new_map;
}
}
role_set
role_map
}
}
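The override semantics implemented by roles() are worth spelling out: a
non-empty role map found on a deeper path component replaces the inherited map
wholesale rather than merging with it. A small illustrative sketch (standalone,
not part of the patch):

use std::collections::HashMap;

// Walk the role maps collected along an ACL path; deeper non-empty maps win.
fn effective_roles(maps_along_path: Vec<HashMap<String, bool>>) -> HashMap<String, bool> {
    let mut current = HashMap::new();
    for map in maps_along_path {
        if !map.is_empty() {
            current = map; // overwrite, don't merge
        }
    }
    current
}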
@ -675,22 +687,22 @@ mod test {
use anyhow::{Error};
use super::AclTree;
use crate::api2::types::Userid;
use crate::api2::types::Authid;
fn check_roles(
tree: &AclTree,
user: &Userid,
auth_id: &Authid,
path: &str,
expected_roles: &str,
) {
let path_vec = super::split_acl_path(path);
let mut roles = tree.roles(user, &path_vec)
.iter().map(|v| v.clone()).collect::<Vec<String>>();
let mut roles = tree.roles(auth_id, &path_vec)
.iter().map(|(v, _)| v.clone()).collect::<Vec<String>>();
roles.sort();
let roles = roles.join(",");
assert_eq!(roles, expected_roles, "\nat check_roles for '{}' on '{}'", user, path);
assert_eq!(roles, expected_roles, "\nat check_roles for '{}' on '{}'", auth_id, path);
}
#[test]
@ -721,13 +733,13 @@ acl:1:/storage:user1@pbs:Admin
acl:1:/storage/store1:user1@pbs:DatastoreBackup
acl:1:/storage/store2:user2@pbs:DatastoreBackup
"###)?;
let user1: Userid = "user1@pbs".parse()?;
let user1: Authid = "user1@pbs".parse()?;
check_roles(&tree, &user1, "/", "");
check_roles(&tree, &user1, "/storage", "Admin");
check_roles(&tree, &user1, "/storage/store1", "DatastoreBackup");
check_roles(&tree, &user1, "/storage/store2", "Admin");
let user2: Userid = "user2@pbs".parse()?;
let user2: Authid = "user2@pbs".parse()?;
check_roles(&tree, &user2, "/", "");
check_roles(&tree, &user2, "/storage", "");
check_roles(&tree, &user2, "/storage/store1", "");
@ -744,7 +756,7 @@ acl:1:/:user1@pbs:Admin
acl:1:/storage:user1@pbs:NoAccess
acl:1:/storage/store1:user1@pbs:DatastoreBackup
"###)?;
let user1: Userid = "user1@pbs".parse()?;
let user1: Authid = "user1@pbs".parse()?;
check_roles(&tree, &user1, "/", "Admin");
check_roles(&tree, &user1, "/storage", "NoAccess");
check_roles(&tree, &user1, "/storage/store1", "DatastoreBackup");
@ -770,7 +782,7 @@ acl:1:/storage/store1:user1@pbs:DatastoreBackup
let mut tree = AclTree::new();
let user1: Userid = "user1@pbs".parse()?;
let user1: Authid = "user1@pbs".parse()?;
tree.insert_user_role("/", &user1, "Admin", true);
tree.insert_user_role("/", &user1, "Audit", true);
@ -794,7 +806,7 @@ acl:1:/storage/store1:user1@pbs:DatastoreBackup
let mut tree = AclTree::new();
let user1: Userid = "user1@pbs".parse()?;
let user1: Authid = "user1@pbs".parse()?;
tree.insert_user_role("/storage", &user1, "NoAccess", true);

View File

@ -9,10 +9,10 @@ use lazy_static::lazy_static;
use proxmox::api::UserInformation;
use super::acl::{AclTree, ROLE_NAMES, ROLE_ADMIN};
use super::user::User;
use crate::api2::types::Userid;
use super::user::{ApiToken, User};
use crate::api2::types::{Authid, Userid};
/// Cache User/Group/Acl configuration data for fast permission tests
/// Cache User/Group/Token/Acl configuration data for fast permission tests
pub struct CachedUserInfo {
user_cfg: Arc<SectionConfigData>,
acl_tree: Arc<AclTree>,
@ -57,18 +57,40 @@ impl CachedUserInfo {
Ok(config)
}
/// Test if a user account is enabled and not expired
pub fn is_active_user(&self, userid: &Userid) -> bool {
#[cfg(test)]
pub(crate) fn test_new(user_cfg: SectionConfigData, acl_tree: AclTree) -> Self {
Self {
user_cfg: Arc::new(user_cfg),
acl_tree: Arc::new(acl_tree),
}
}
/// Test if a authentication id is enabled and not expired
pub fn is_active_auth_id(&self, auth_id: &Authid) -> bool {
let userid = auth_id.user();
if let Ok(info) = self.user_cfg.lookup::<User>("user", userid.as_str()) {
if !info.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = info.expire {
if expire > 0 {
if expire <= now() {
if expire > 0 && expire <= now() {
return false;
}
}
} else {
return false;
}
if auth_id.is_token() {
if let Ok(info) = self.user_cfg.lookup::<ApiToken>("token", &auth_id.to_string()) {
if !info.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = info.expire {
if expire > 0 && expire <= now() {
return false;
}
}
return true;
} else {
@ -76,18 +98,21 @@ impl CachedUserInfo {
}
}
return true;
}
pub fn check_privs(
&self,
userid: &Userid,
auth_id: &Authid,
path: &[&str],
required_privs: u64,
partial: bool,
) -> Result<(), Error> {
let user_privs = self.lookup_privs(&userid, path);
let privs = self.lookup_privs(&auth_id, path);
let allowed = if partial {
(user_privs & required_privs) != 0
(privs & required_privs) != 0
} else {
(user_privs & required_privs) == required_privs
(privs & required_privs) == required_privs
};
if !allowed {
// printing the path doesn't leak any information as long as we
@ -97,29 +122,48 @@ impl CachedUserInfo {
Ok(())
}
pub fn is_superuser(&self, userid: &Userid) -> bool {
userid == "root@pam"
pub fn is_superuser(&self, auth_id: &Authid) -> bool {
!auth_id.is_token() && auth_id.user() == "root@pam"
}
pub fn is_group_member(&self, _userid: &Userid, _group: &str) -> bool {
false
}
pub fn lookup_privs(&self, userid: &Userid, path: &[&str]) -> u64 {
if self.is_superuser(userid) {
return ROLE_ADMIN;
pub fn lookup_privs(&self, auth_id: &Authid, path: &[&str]) -> u64 {
let (privs, _) = self.lookup_privs_details(auth_id, path);
privs
}
let roles = self.acl_tree.roles(userid, path);
pub fn lookup_privs_details(&self, auth_id: &Authid, path: &[&str]) -> (u64, u64) {
if self.is_superuser(auth_id) {
return (ROLE_ADMIN, ROLE_ADMIN);
}
let roles = self.acl_tree.roles(auth_id, path);
let mut privs: u64 = 0;
for role in roles {
let mut propagated_privs: u64 = 0;
for (role, propagate) in roles {
if let Some((role_privs, _)) = ROLE_NAMES.get(role.as_str()) {
if propagate {
propagated_privs |= role_privs;
}
privs |= role_privs;
}
}
privs
if auth_id.is_token() {
// limit privs to that of owning user
let user_auth_id = Authid::from(auth_id.user().clone());
privs &= self.lookup_privs(&user_auth_id, path);
let (owner_privs, owner_propagated_privs) = self.lookup_privs_details(&user_auth_id, path);
privs &= owner_privs;
propagated_privs &= owner_propagated_privs;
}
(privs, propagated_privs)
}
}
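The token handling above boils down to a bitwise intersection: whatever a token
is granted directly is clamped to what its owning user holds on the same path,
and likewise for the propagated privileges. Schematically (illustrative only,
with the PRIV_* constants from acl.rs standing in for concrete bits):

// A token granted Datastore.Read on a path where its owner only has
// Datastore.Audit ends up with no privileges there:
// token_privs & owner_privs == PRIV_DATASTORE_READ & PRIV_DATASTORE_AUDIT == 0
fn clamp_token_privs(token_privs: u64, owner_privs: u64) -> u64 {
    token_privs & owner_privs
}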
impl UserInformation for CachedUserInfo {
@ -131,9 +175,9 @@ impl UserInformation for CachedUserInfo {
false
}
fn lookup_privs(&self, userid: &str, path: &[&str]) -> u64 {
match userid.parse::<Userid>() {
Ok(userid) => Self::lookup_privs(self, &userid, path),
fn lookup_privs(&self, auth_id: &str, path: &[&str]) -> u64 {
match auth_id.parse::<Authid>() {
Ok(auth_id) => Self::lookup_privs(self, &auth_id, path),
Err(_) => 0,
}
}

View File

@ -32,6 +32,14 @@ pub const DIR_NAME_SCHEMA: Schema = StringSchema::new("Directory name").schema()
path: {
schema: DIR_NAME_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
type: Notify,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
@ -44,10 +52,6 @@ pub const DIR_NAME_SCHEMA: Schema = StringSchema::new("Directory name").schema()
optional: true,
schema: PRUNE_SCHEDULE_SCHEMA,
},
"verify-schedule": {
optional: true,
schema: VERIFY_SCHEDULE_SCHEMA,
},
"keep-last": {
optional: true,
schema: PRUNE_SCHEMA_KEEP_LAST,
@ -72,6 +76,10 @@ pub const DIR_NAME_SCHEMA: Schema = StringSchema::new("Directory name").schema()
optional: true,
schema: PRUNE_SCHEMA_KEEP_YEARLY,
},
"verify-new": {
optional: true,
type: bool,
},
}
)]
#[serde(rename_all="kebab-case")]
@ -87,8 +95,6 @@ pub struct DataStoreConfig {
#[serde(skip_serializing_if="Option::is_none")]
pub prune_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub verify_schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_last: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_hourly: Option<u64>,
@ -100,6 +106,15 @@ pub struct DataStoreConfig {
pub keep_monthly: Option<u64>,
#[serde(skip_serializing_if="Option::is_none")]
pub keep_yearly: Option<u64>,
/// If enabled, all backups will be verified right after completion.
#[serde(skip_serializing_if="Option::is_none")]
pub verify_new: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if="Option::is_none")]
pub notify_user: Option<Userid>,
/// Send notification only for job errors
#[serde(skip_serializing_if="Option::is_none")]
pub notify: Option<Notify>,
}
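Put together, a datastore section using the new keys might look roughly like
the following (a hedged sketch: the layout follows the usual PBS section-config
format, and the exact notify value name is an assumption):

datastore: store1
    path /mnt/datastore/store1
    notify-user admin@pbs
    notify error
    verify-new true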
fn init() -> SectionConfig {
@ -157,12 +172,12 @@ pub fn complete_acl_path(_arg: &str, _param: &HashMap<String, String>) -> Vec<St
let mut list = Vec::new();
list.push(String::from("/"));
list.push(String::from("/storage"));
list.push(String::from("/storage/"));
list.push(String::from("/datastore"));
list.push(String::from("/datastore/"));
if let Ok((data, _digest)) = config() {
for id in data.sections.keys() {
list.push(format!("/storage/{}", id));
list.push(format!("/datastore/{}", id));
}
}

View File

@ -289,8 +289,12 @@ impl Interface {
if let Some(method6) = self.method6 {
let mut skip_v6 = false; // avoid empty inet6 manual entry
if self.method.is_some() && method6 == NetworkConfigMethod::Manual {
if self.comments6.is_none() && self.options6.is_empty() { skip_v6 = true; }
if self.method.is_some()
&& method6 == NetworkConfigMethod::Manual
&& self.comments6.is_none()
&& self.options6.is_empty()
{
skip_v6 = true;
}
if !skip_v6 {

View File

@ -10,7 +10,7 @@ use regex::Regex;
use proxmox::*; // for IP macros
pub static IPV4_REVERSE_MASK: &[&'static str] = &[
pub static IPV4_REVERSE_MASK: &[&str] = &[
"0.0.0.0",
"128.0.0.0",
"192.0.0.0",
@ -57,38 +57,57 @@ lazy_static! {
}
pub fn parse_cidr(cidr: &str) -> Result<(String, u8, bool), Error> {
let (address, mask, is_v6) = parse_address_or_cidr(cidr)?;
if let Some(mask) = mask {
return Ok((address, mask, is_v6));
} else {
bail!("missing netmask in '{}'", cidr);
}
}
pub fn check_netmask(mask: u8, is_v6: bool) -> Result<(), Error> {
if is_v6 {
if !(mask >= 1 && mask <= 128) {
bail!("IPv6 mask '{}' is out of range (1..128).", mask);
}
} else {
if !(mask > 0 && mask <= 32) {
bail!("IPv4 mask '{}' is out of range (1..32).", mask);
}
}
Ok(())
}
// parse ip address with optional cidr mask
pub fn parse_address_or_cidr(cidr: &str) -> Result<(String, Option<u8>, bool), Error> {
lazy_static! {
pub static ref CIDR_V4_REGEX: Regex = Regex::new(
concat!(r"^(", IPV4RE!(), r")(?:/(\d{1,2}))$")
concat!(r"^(", IPV4RE!(), r")(?:/(\d{1,2}))?$")
).unwrap();
pub static ref CIDR_V6_REGEX: Regex = Regex::new(
concat!(r"^(", IPV6RE!(), r")(?:/(\d{1,3}))$")
concat!(r"^(", IPV6RE!(), r")(?:/(\d{1,3}))?$")
).unwrap();
}
if let Some(caps) = CIDR_V4_REGEX.captures(&cidr) {
let address = &caps[1];
let mask = &caps[2];
let mask = u8::from_str_radix(mask, 10)
.map(|mask| {
if !(mask > 0 && mask <= 32) {
bail!("IPv4 mask '{}' is out of range (1..32).", mask);
if let Some(mask) = caps.get(2) {
let mask = u8::from_str_radix(mask.as_str(), 10)?;
check_netmask(mask, false)?;
return Ok((address.to_string(), Some(mask), false));
} else {
return Ok((address.to_string(), None, false));
}
Ok(mask)
})?;
return Ok((address.to_string(), mask.unwrap(), false));
} else if let Some(caps) = CIDR_V6_REGEX.captures(&cidr) {
let address = &caps[1];
let mask = &caps[2];
let mask = u8::from_str_radix(mask, 10)
.map(|mask| {
if !(mask >= 1 && mask <= 128) {
bail!("IPv6 mask '{}' is out of range (1..128).", mask);
if let Some(mask) = caps.get(2) {
let mask = u8::from_str_radix(mask.as_str(), 10)?;
check_netmask(mask, true)?;
return Ok((address.to_string(), Some(mask), true));
} else {
return Ok((address.to_string(), None, true));
}
Ok(mask)
})?;
return Ok((address.to_string(), mask.unwrap(), true));
} else {
bail!("invalid address/mask '{}'", cidr);
}
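Assuming these helpers stay exported from the network module, the relaxed
parsing behaves roughly like this (usage sketch, not part of the patch):

fn demo() -> Result<(), anyhow::Error> {
    // CIDR notation still yields an explicit mask ...
    let (addr, mask, is_v6) = parse_address_or_cidr("192.168.0.2/24")?;
    assert_eq!((addr.as_str(), mask, is_v6), ("192.168.0.2", Some(24), false));

    // ... while a bare address is now accepted with mask == None,
    // to be combined with a deprecated `netmask` line by the caller.
    let (_, mask, _) = parse_address_or_cidr("192.168.0.2")?;
    assert_eq!(mask, None);

    // parse_cidr() keeps the strict behaviour and rejects a missing mask.
    assert!(parse_cidr("192.168.0.2").is_err());
    Ok(())
}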

View File

@ -86,20 +86,35 @@ impl <R: BufRead> NetworkParser<R> {
Ok(())
}
fn parse_iface_address(&mut self, interface: &mut Interface) -> Result<(), Error> {
self.eat(Token::Address)?;
let cidr = self.next_text()?;
fn parse_netmask(&mut self) -> Result<u8, Error> {
self.eat(Token::Netmask)?;
let netmask = self.next_text()?;
let (_address, _mask, ipv6) = parse_cidr(&cidr)?;
if ipv6 {
interface.set_cidr_v6(cidr)?;
let mask = if let Some(mask) = IPV4_MASK_HASH_LOCALNET.get(netmask.as_str()) {
*mask
} else {
interface.set_cidr_v4(cidr)?;
match u8::from_str_radix(netmask.as_str(), 10) {
Ok(mask) => mask,
Err(err) => {
bail!("unable to parse netmask '{}' - {}", netmask, err);
}
}
};
self.eat(Token::Newline)?;
Ok(())
Ok(mask)
}
fn parse_iface_address(&mut self) -> Result<(String, Option<u8>, bool), Error> {
self.eat(Token::Address)?;
let cidr = self.next_text()?;
let (_address, mask, ipv6) = parse_address_or_cidr(&cidr)?;
self.eat(Token::Newline)?;
Ok((cidr, mask, ipv6))
}
fn parse_iface_gateway(&mut self, interface: &mut Interface) -> Result<(), Error> {
@ -191,6 +206,9 @@ impl <R: BufRead> NetworkParser<R> {
address_family_v6: bool,
) -> Result<(), Error> {
let mut netmask = None;
let mut address_list = Vec::new();
loop {
match self.peek()? {
Token::Attribute => { self.eat(Token::Attribute)?; },
@ -214,8 +232,15 @@ impl <R: BufRead> NetworkParser<R> {
}
match self.peek()? {
Token::Address => self.parse_iface_address(interface)?,
Token::Address => {
let (cidr, mask, is_v6) = self.parse_iface_address()?;
address_list.push((cidr, mask, is_v6));
}
Token::Gateway => self.parse_iface_gateway(interface)?,
Token::Netmask => {
// Note: netmask is deprecated, but we try to do our best
netmask = Some(self.parse_netmask()?);
}
Token::MTU => {
let mtu = self.parse_iface_mtu()?;
interface.mtu = Some(mtu);
@ -255,8 +280,6 @@ impl <R: BufRead> NetworkParser<R> {
interface.bond_xmit_hash_policy = Some(policy);
self.eat(Token::Newline)?;
}
Token::Netmask => bail!("netmask is deprecated and no longer supported"),
_ => { // parse addon attributes
let option = self.parse_to_eol()?;
if !option.is_empty() {
@ -270,6 +293,38 @@ impl <R: BufRead> NetworkParser<R> {
}
}
if let Some(netmask) = netmask {
if address_list.len() > 1 {
bail!("unable to apply netmask to multiple addresses (please use cidr notation)");
} else if address_list.len() == 1 {
let (mut cidr, mask, is_v6) = address_list.pop().unwrap();
if mask.is_some() {
// address already has a mask - ignore netmask
} else {
check_netmask(netmask, is_v6)?;
cidr.push_str(&format!("/{}", netmask));
}
if is_v6 {
interface.set_cidr_v6(cidr)?;
} else {
interface.set_cidr_v4(cidr)?;
}
} else {
// no address - simply ignore useless netmask
}
} else {
for (cidr, mask, is_v6) in address_list {
if mask.is_none() {
bail!("missing netmask in '{}'", cidr);
}
if is_v6 {
interface.set_cidr_v6(cidr)?;
} else {
interface.set_cidr_v4(cidr)?;
}
}
}
Ok(())
}
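For illustration, this is the kind of legacy ifupdown stanza the fallback
handles (a sketch; the parser folds the deprecated netmask keyword into CIDR
form):

// Legacy stanza with a separate netmask line ...
const LEGACY_STANZA: &str = "\
iface eth0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
";
// ... which the code above turns into the equivalent of
// interface.set_cidr_v4("192.168.0.2/24".to_string()), while an address that
// already carries a /mask simply ignores the netmask line.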

View File

@ -45,7 +45,7 @@ pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth t
type: u16,
},
userid: {
type: Userid,
type: Authid,
},
password: {
schema: REMOTE_PASSWORD_SCHEMA,
@ -65,7 +65,7 @@ pub struct Remote {
pub host: String,
#[serde(skip_serializing_if="Option::is_none")]
pub port: Option<u16>,
pub userid: Userid,
pub userid: Authid,
#[serde(skip_serializing_if="String::is_empty")]
#[serde(with = "proxmox::tools::serde::string_as_base64")]
pub password: String,

View File

@ -21,7 +21,6 @@ lazy_static! {
static ref CONFIG: SectionConfig = init();
}
#[api(
properties: {
id: {
@ -30,6 +29,10 @@ lazy_static! {
store: {
schema: DATASTORE_SCHEMA,
},
"owner": {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
@ -51,11 +54,13 @@ lazy_static! {
}
)]
#[serde(rename_all="kebab-case")]
#[derive(Serialize,Deserialize)]
#[derive(Serialize,Deserialize,Clone)]
/// Sync Job
pub struct SyncJobConfig {
pub id: String,
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Authid>,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]
@ -66,6 +71,21 @@ pub struct SyncJobConfig {
pub schedule: Option<String>,
}
impl From<&SyncJobStatus> for SyncJobConfig {
fn from(job_status: &SyncJobStatus) -> Self {
Self {
id: job_status.id.clone(),
store: job_status.store.clone(),
owner: job_status.owner.clone(),
remote: job_status.remote.clone(),
remote_store: job_status.remote_store.clone(),
remove_vanished: job_status.remove_vanished.clone(),
comment: job_status.comment.clone(),
schedule: job_status.schedule.clone(),
}
}
}
// FIXME: generate duplicate schemas/structs from one listing?
#[api(
properties: {
@ -75,6 +95,10 @@ pub struct SyncJobConfig {
store: {
schema: DATASTORE_SCHEMA,
},
owner: {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
@ -121,6 +145,8 @@ pub struct SyncJobConfig {
pub struct SyncJobStatus {
pub id: String,
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Authid>,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]

View File

@ -0,0 +1,91 @@
use std::collections::HashMap;
use std::time::Duration;
use anyhow::{bail, format_err, Error};
use serde::{Serialize, Deserialize};
use serde_json::{from_value, Value};
use proxmox::tools::fs::{open_file_locked, CreateOptions};
use crate::api2::types::Authid;
use crate::auth;
const LOCK_FILE: &str = configdir!("/token.shadow.lock");
const CONF_FILE: &str = configdir!("/token.shadow");
const LOCK_TIMEOUT: Duration = Duration::from_secs(5);
#[serde(rename_all="kebab-case")]
#[derive(Serialize, Deserialize)]
/// ApiToken id / secret pair
pub struct ApiTokenSecret {
pub tokenid: Authid,
pub secret: String,
}
fn read_file() -> Result<HashMap<Authid, String>, Error> {
let json = proxmox::tools::fs::file_get_json(CONF_FILE, Some(Value::Null))?;
if json == Value::Null {
Ok(HashMap::new())
} else {
// swallow serde error which might contain sensitive data
from_value(json).map_err(|_err| format_err!("unable to parse '{}'", CONF_FILE))
}
}
fn write_file(data: HashMap<Authid, String>) -> Result<(), Error> {
let backup_user = crate::backup::backup_user()?;
let options = CreateOptions::new()
.perm(nix::sys::stat::Mode::from_bits_truncate(0o0640))
.owner(backup_user.uid)
.group(backup_user.gid);
let json = serde_json::to_vec(&data)?;
proxmox::tools::fs::replace_file(CONF_FILE, &json, options)
}
/// Verifies that an entry for given tokenid / API token secret exists
pub fn verify_secret(tokenid: &Authid, secret: &str) -> Result<(), Error> {
if !tokenid.is_token() {
bail!("not an API token ID");
}
let data = read_file()?;
match data.get(tokenid) {
Some(hashed_secret) => {
auth::verify_crypt_pw(secret, &hashed_secret)
},
None => bail!("invalid API token"),
}
}
/// Adds a new entry for the given tokenid / API token secret. The secret is stored as salted hash.
pub fn set_secret(tokenid: &Authid, secret: &str) -> Result<(), Error> {
if !tokenid.is_token() {
bail!("not an API token ID");
}
let _guard = open_file_locked(LOCK_FILE, LOCK_TIMEOUT, true)?;
let mut data = read_file()?;
let hashed_secret = auth::encrypt_pw(secret)?;
data.insert(tokenid.clone(), hashed_secret);
write_file(data)?;
Ok(())
}
/// Deletes the entry for the given tokenid.
pub fn delete_secret(tokenid: &Authid) -> Result<(), Error> {
if !tokenid.is_token() {
bail!("not an API token ID");
}
let _guard = open_file_locked(LOCK_FILE, LOCK_TIMEOUT, true)?;
let mut data = read_file()?;
data.remove(tokenid);
write_file(data)?;
Ok(())
}
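A hedged usage sketch of this module's API, using a hypothetical token id
(user sync@pbs, token name backup):

use anyhow::Error;
use crate::api2::types::Authid;
use crate::config::token_shadow;

fn rotate_token_secret() -> Result<(), Error> {
    let tokenid: Authid = "sync@pbs!backup".parse()?;

    // Store a new salted hash, verify the plaintext against it, then drop it.
    token_shadow::set_secret(&tokenid, "s3cr3t")?;
    token_shadow::verify_secret(&tokenid, "s3cr3t")?;
    token_shadow::delete_secret(&tokenid)?;
    Ok(())
}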

View File

@ -52,6 +52,36 @@ pub const EMAIL_SCHEMA: Schema = StringSchema::new("E-Mail Address.")
.max_length(64)
.schema();
#[api(
properties: {
tokenid: {
schema: PROXMOX_TOKEN_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize)]
/// ApiToken properties.
pub struct ApiToken {
pub tokenid: Authid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
}
#[api(
properties: {
@ -103,15 +133,21 @@ pub struct User {
}
fn init() -> SectionConfig {
let obj_schema = match User::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
let mut config = SectionConfig::new(&Authid::API_SCHEMA);
let user_schema = match User::API_SCHEMA {
Schema::Object(ref user_schema) => user_schema,
_ => unreachable!(),
};
let user_plugin = SectionConfigPlugin::new("user".to_string(), Some("userid".to_string()), user_schema);
config.register_plugin(user_plugin);
let plugin = SectionConfigPlugin::new("user".to_string(), Some("userid".to_string()), obj_schema);
let mut config = SectionConfig::new(&Userid::API_SCHEMA);
config.register_plugin(plugin);
let token_schema = match ApiToken::API_SCHEMA {
Schema::Object(ref token_schema) => token_schema,
_ => unreachable!(),
};
let token_plugin = SectionConfigPlugin::new("token".to_string(), Some("tokenid".to_string()), token_schema);
config.register_plugin(token_plugin);
config
}
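With both plugins registered, user.cfg can now mix user and token sections; a
pair of entries might look like this (layout sketch only; key names follow the
schemas above, the section-config indentation is assumed):

user: sync@pbs
    enable true

token: sync@pbs!backup
    comment nightly sync token
    enable true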
@ -205,10 +241,66 @@ pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
Ok(())
}
#[cfg(test)]
pub(crate) fn test_cfg_from_str(raw: &str) -> Result<(SectionConfigData, [u8;32]), Error> {
let cfg = init();
let parsed = cfg.parse("test_user_cfg", raw)?;
Ok((parsed, [0;32]))
}
// shell completion helper
pub fn complete_user_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
pub fn complete_userid(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Ok((data, _digest)) => {
data.sections.iter()
.filter_map(|(id, (section_type, _))| {
if section_type == "user" {
Some(id.to_string())
} else {
None
}
}).collect()
},
Err(_) => return vec![],
}
}
// shell completion helper
pub fn complete_authid(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => vec![],
}
}
// shell completion helper
pub fn complete_token_name(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
let data = match config() {
Ok((data, _digest)) => data,
Err(_) => return Vec::new(),
};
match param.get("userid") {
Some(userid) => {
let user = data.lookup::<User>("user", userid);
let tokens = data.convert_to_typed_array("token");
match (user, tokens) {
(Ok(_), Ok(tokens)) => {
tokens
.into_iter()
.filter_map(|token: ApiToken| {
let tokenid = token.tokenid;
if tokenid.is_token() && tokenid.user() == userid {
Some(tokenid.tokenname().unwrap().as_str().to_string())
} else {
None
}
}).collect()
},
_ => vec![],
}
},
None => vec![],
}
}

src/config/verify.rs (new file, 205 lines)
View File

@ -0,0 +1,205 @@
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use serde::{Serialize, Deserialize};
use proxmox::api::{
api,
schema::*,
section_config::{
SectionConfig,
SectionConfigData,
SectionConfigPlugin,
}
};
use proxmox::tools::{fs::replace_file, fs::CreateOptions};
use crate::api2::types::*;
lazy_static! {
static ref CONFIG: SectionConfig = init();
}
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"ignore-verified": {
optional: true,
schema: IGNORE_VERIFIED_BACKUPS_SCHEMA,
},
"outdated-after": {
optional: true,
schema: VERIFICATION_OUTDATED_AFTER_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
}
)]
#[serde(rename_all="kebab-case")]
#[derive(Serialize,Deserialize)]
/// Verification Job
pub struct VerificationJobConfig {
/// unique ID to address this job
pub id: String,
/// the datastore ID this verification job affects
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
/// if not set to false, check the age of the last snapshot verification to filter
/// out recent ones, depending on 'outdated_after' configuration.
pub ignore_verified: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
/// Reverify snapshots after X days, never if 0. Ignored if 'ignore_verified' is false.
pub outdated_after: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// when to schedule this job in calendar event notation
pub schedule: Option<String>,
}
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
"ignore-verified": {
optional: true,
schema: IGNORE_VERIFIED_BACKUPS_SCHEMA,
},
"outdated-after": {
optional: true,
schema: VERIFICATION_OUTDATED_AFTER_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
"next-run": {
description: "Estimated time of the next run (UNIX epoch).",
optional: true,
type: Integer,
},
"last-run-state": {
description: "Result of the last run.",
optional: true,
type: String,
},
"last-run-upid": {
description: "Task UPID of the last run.",
optional: true,
type: String,
},
"last-run-endtime": {
description: "Endtime of the last run.",
optional: true,
type: Integer,
},
}
)]
#[serde(rename_all="kebab-case")]
#[derive(Serialize,Deserialize)]
/// Status of Verification Job
pub struct VerificationJobStatus {
/// unique ID to address this job
pub id: String,
/// the datastore ID this verification job affects
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
/// if not set to false, check the age of the last snapshot verification to filter
/// out recent ones, depending on 'outdated_after' configuration.
pub ignore_verified: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
/// Reverify snapshots after X days, never if 0. Ignored if 'ignore_verified' is false.
pub outdated_after: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// when to schedule this job in calendar event notation
pub schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// The timestamp when this job runs the next time.
pub next_run: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
/// The state of the last scheduled run, if any
pub last_run_state: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// The task UPID of the last scheduled run, if any
pub last_run_upid: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
/// When the last run was finished, combined with UPID.starttime one can calculate the duration
pub last_run_endtime: Option<i64>,
}
fn init() -> SectionConfig {
let obj_schema = match VerificationJobConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("verification".to_string(), Some(String::from("id")), obj_schema);
let mut config = SectionConfig::new(&JOB_ID_SCHEMA);
config.register_plugin(plugin);
config
}
pub const VERIFICATION_CFG_FILENAME: &str = "/etc/proxmox-backup/verification.cfg";
pub const VERIFICATION_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.verification.lck";
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = proxmox::tools::fs::file_read_optional_string(VERIFICATION_CFG_FILENAME)?;
let content = content.unwrap_or_else(String::new);
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(VERIFICATION_CFG_FILENAME, &content)?;
Ok((data, digest))
}
pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
let raw = CONFIG.write(VERIFICATION_CFG_FILENAME, &config)?;
let backup_user = crate::backup::backup_user()?;
let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
// set the correct owner/group/permissions while saving file
// owner(rw) = root, group(r)= backup
let options = CreateOptions::new()
.perm(mode)
.owner(nix::unistd::ROOT)
.group(backup_user.gid);
replace_file(VERIFICATION_CFG_FILENAME, raw.as_bytes(), options)?;
Ok(())
}
// shell completion helper
pub fn complete_verification_job_id(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => return vec![],
}
}

View File

@ -1,3 +1,8 @@
//! See the different modules for documentation on their usage.
//!
//! The [backup](backup/index.html) module contains some detailed information
//! on the inner workings of the backup server regarding data storage.
pub mod task;
#[macro_use]

View File

@ -9,7 +9,7 @@ use std::ffi::CStr;
pub trait BackupCatalogWriter {
fn start_directory(&mut self, name: &CStr) -> Result<(), Error>;
fn end_directory(&mut self) -> Result<(), Error>;
fn add_file(&mut self, name: &CStr, size: u64, mtime: u64) -> Result<(), Error>;
fn add_file(&mut self, name: &CStr, size: u64, mtime: i64) -> Result<(), Error>;
fn add_symlink(&mut self, name: &CStr) -> Result<(), Error>;
fn add_hardlink(&mut self, name: &CStr) -> Result<(), Error>;
fn add_block_device(&mut self, name: &CStr) -> Result<(), Error>;

View File

@ -12,7 +12,7 @@ use nix::errno::Errno;
use nix::fcntl::OFlag;
use nix::sys::stat::{FileStat, Mode};
use pathpatterns::{MatchEntry, MatchList, MatchType, PatternFlag};
use pathpatterns::{MatchEntry, MatchFlag, MatchList, MatchType, PatternFlag};
use pxar::Metadata;
use pxar::encoder::LinkOffset;
@ -291,13 +291,15 @@ impl<'a, 'b> Archiver<'a, 'b> {
}
fn read_pxar_excludes(&mut self, parent: RawFd) -> Result<(), Error> {
let fd = self.open_file(parent, c_str!(".pxarexclude"), OFlag::O_RDONLY, false)?;
let fd = match self.open_file(parent, c_str!(".pxarexclude"), OFlag::O_RDONLY, false)? {
Some(fd) => fd,
None => return Ok(()),
};
let old_pattern_count = self.patterns.len();
let path_bytes = self.path.as_os_str().as_bytes();
if let Some(fd) = fd {
let file = unsafe { std::fs::File::from_raw_fd(fd.into_raw_fd()) };
use io::BufRead;
@ -323,29 +325,36 @@ impl<'a, 'b> Archiver<'a, 'b> {
}
let mut buf;
let (line, mode) = if line[0] == b'/' {
let (line, mode, anchored) = if line[0] == b'/' {
buf = Vec::with_capacity(path_bytes.len() + 1 + line.len());
buf.extend(path_bytes);
buf.extend(line);
(&buf[..], MatchType::Exclude)
(&buf[..], MatchType::Exclude, true)
} else if line.starts_with(b"!/") {
// inverted case with absolute path
buf = Vec::with_capacity(path_bytes.len() + line.len());
buf.extend(path_bytes);
buf.extend(&line[1..]); // without the '!'
(&buf[..], MatchType::Include)
(&buf[..], MatchType::Include, true)
} else if line.starts_with(b"!") {
(&line[1..], MatchType::Include, false)
} else {
(line, MatchType::Exclude)
(line, MatchType::Exclude, false)
};
match MatchEntry::parse_pattern(line, PatternFlag::PATH_NAME, mode) {
Ok(pattern) => self.patterns.push(pattern),
Ok(pattern) => {
if anchored {
self.patterns.push(pattern.add_flags(MatchFlag::ANCHORED));
} else {
self.patterns.push(pattern);
}
}
Err(err) => {
let _ = writeln!(self.errors, "bad pattern in {:?}: {}", self.path, err);
}
}
}
}
Ok(())
}
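In short, a leading '/' (or '!/') anchors the pattern to the directory holding
the .pxarexclude file, while a bare '!' only flips the match type. A condensed
sketch of that classification (standalone; the real code above additionally
prefixes the current archive path to anchored patterns and skips empty lines,
which this sketch assumes have been filtered out already):

use pathpatterns::{MatchEntry, MatchFlag, MatchType, PatternFlag};

fn classify_exclude_line(line: &[u8]) -> Result<MatchEntry, anyhow::Error> {
    let (pat, mode, anchored) = if line.starts_with(b"!/") {
        (&line[1..], MatchType::Include, true)   // inverted, absolute
    } else if line.starts_with(b"!") {
        (&line[1..], MatchType::Include, false)  // inverted, relative
    } else {
        (line, MatchType::Exclude, line[0] == b'/')
    };
    let entry = MatchEntry::parse_pattern(pat, PatternFlag::PATH_NAME, mode)?;
    Ok(if anchored { entry.add_flags(MatchFlag::ANCHORED) } else { entry })
}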
@ -526,7 +535,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
let file_size = stat.st_size as u64;
if let Some(ref mut catalog) = self.catalog {
catalog.add_file(c_file_name, file_size, stat.st_mtime as u64)?;
catalog.add_file(c_file_name, file_size, stat.st_mtime)?;
}
let offset: LinkOffset =
@ -997,7 +1006,7 @@ fn process_acl(
/// Since we are generating an *exclude* list, we need to invert this, so includes get a `'!'`
/// prefix.
fn generate_pxar_excludes_cli(patterns: &[MatchEntry]) -> Vec<u8> {
use pathpatterns::{MatchFlag, MatchPattern};
use pathpatterns::MatchPattern;
let mut content = Vec::new();

View File

@ -266,11 +266,9 @@ impl SessionImpl {
) {
let final_result = match err.downcast::<io::Error>() {
Ok(err) => {
if err.kind() == io::ErrorKind::Other {
if self.verbose {
if err.kind() == io::ErrorKind::Other && self.verbose {
eprintln!("an IO error occurred: {}", err);
}
}
// fail the request
request.io_fail(err).map_err(Error::from)

View File

@ -4,6 +4,47 @@
//! services. We want async IO, so this is built on top of
//! tokio/hyper.
use anyhow::{format_err, Error};
use lazy_static::lazy_static;
use nix::unistd::Pid;
use proxmox::sys::linux::procfs::PidStat;
use crate::buildcfg;
lazy_static! {
static ref PID: i32 = unsafe { libc::getpid() };
static ref PSTART: u64 = PidStat::read_from_pid(Pid::from_raw(*PID)).unwrap().starttime;
}
pub fn pid() -> i32 {
*PID
}
pub fn pstart() -> u64 {
*PSTART
}
pub fn write_pid(pid_fn: &str) -> Result<(), Error> {
let pid_str = format!("{}\n", *PID);
let opts = proxmox::tools::fs::CreateOptions::new();
proxmox::tools::fs::replace_file(pid_fn, pid_str.as_bytes(), opts)
}
pub fn read_pid(pid_fn: &str) -> Result<i32, Error> {
let pid = proxmox::tools::fs::file_get_contents(pid_fn)?;
let pid = std::str::from_utf8(&pid)?.trim();
pid.parse().map_err(|err| format_err!("could not parse pid - {}", err))
}
pub fn ctrl_sock_from_pid(pid: i32) -> String {
format!("\0{}/control-{}.sock", buildcfg::PROXMOX_BACKUP_RUN_DIR, pid)
}
pub fn our_ctrl_sock() -> String {
ctrl_sock_from_pid(*PID)
}
mod environment;
pub use environment::*;
@ -30,3 +71,19 @@ pub mod formatter;
#[macro_use]
pub mod rest;
pub mod jobstate;
mod verify_job;
pub use verify_job::*;
mod prune_job;
pub use prune_job::*;
mod gc_job;
pub use gc_job::*;
mod email_notifications;
pub use email_notifications::*;
mod report;
pub use report::*;

Some files were not shown because too many files have changed in this diff.