Compare commits

...

160 Commits

Author SHA1 Message Date
9d79cec4d5 bump version to 0.9.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:13:04 +01:00
4935681cf4 ui: sync jobs: add tooltip for remove vanished
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:07:07 +01:00
669fa672d9 ui: sync jobs: reorder fields
group local ones together on the left side, and source + schedule
on the right side.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:05:48 +01:00
a797583535 ui: sync jobs: fix originalValue of owner and improve label
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:04:42 +01:00
54ed1b2a71 ui: sync jobs: only set default schedule when creating new jobs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:04:06 +01:00
8e12e86f0b ui: add shell panel under administration
some users prefer an inline console
we still have the pop-out console in 'Administration'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-04 18:16:49 +01:00
fe7bdc9d29 proxy: also rotate auth.log file
no need for triggering re-open here, we always re-open that file.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
546b6a23df proxy: logrotate: do not serialize sending async log-reopen commands
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
4fdf13f95f api: factor out auth logger and use for all API authentication failures
we have information here not available in the access log, especially
if the /api2/extjs formatter is used, which encapsulates errors in a
200 response.

So keep the auth log for now, but extend its use from create ticket
calls to all authentication failures for API calls; this ensures one
can also fail2ban tokens.

Do that logging in a central place, which makes it simple but means
that we do not have the user ID information available to include in
the log.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
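
A minimal sketch of the central auth logger described above, modeled on the FileLogOptions/FileLogger usage visible further down in this diff; the helper name matches the crate::server::rest::auth_logger() calls shown there:

    // sketch only - options copied from the create_ticket hunk in this diff
    pub fn auth_logger() -> Result<FileLogger, Error> {
        let logger_options = FileLogOptions {
            append: true,      // keep appending across invocations
            prefix_time: true, // prefix each entry with a timestamp
            ..Default::default()
        };
        FileLogger::new("/var/log/proxmox-backup/api/auth.log", logger_options)
    }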
385681c9ab worker task: fix passing upid to send command
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:16:55 +01:00
be99df2767 log rotate: only add .zst to new file after second rotation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:16:55 +01:00
30200b5c4a ui: fix task description for log rotate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 14:20:44 +01:00
f47c1d3a2f proxy: use new datastore notify settings 2020-11-04 11:54:29 +01:00
6e545d0058 config: allow to configure who receives job notify emails 2020-11-04 11:54:29 +01:00
84006f98b2 ui: SyncJobEdit: fix sending 'delete' values on SyncJob creation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-04 11:39:52 +01:00
42ca9e918a sync: improve log format 2020-11-04 09:10:56 +01:00
ea93bea7bf proxy: log if there are too many open connections 2020-11-04 08:49:35 +01:00
0081903f7c fix bug #2870: use updated tickets 2020-11-04 08:20:36 +01:00
c53797f627 ui: set default deduplication factor to 1.0 2020-11-04 07:12:55 +01:00
e1d367df47 proxy: use env PROXMOX_DEBUG to enable/disable debug output
We only print early connection errors when this env var is set.
2020-11-04 06:55:57 +01:00
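
A minimal sketch of such an env-var gate (function name illustrative):

    // PROXMOX_DEBUG only needs to be set; its value is not interpreted here
    fn log_early_connection_error(err: &dyn std::error::Error) {
        if std::env::var("PROXMOX_DEBUG").is_ok() {
            eprintln!("early connection error: {}", err);
        }
    }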
71f413cd27 cleanup: use Arc to count open connections 2020-11-04 06:35:44 +01:00
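
A sketch of counting open connections via Arc reference counts instead of a separate counter (names illustrative; std::thread stands in for the real async accept loop):

    use std::sync::Arc;

    // each accepted connection holds a clone of this marker
    fn spawn_connection(marker: &Arc<()>) {
        let guard = Arc::clone(marker);
        std::thread::spawn(move || {
            let _guard = guard; // dropped when the handler returns
            // ... handle the connection ...
        });
    }

    fn open_connections(marker: &Arc<()>) -> usize {
        // subtract the listener's own handle
        Arc::strong_count(marker) - 1
    }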
48aa2b93b7 fix #3106: correctly queue incoming connections 2020-11-04 06:24:42 +01:00
641862ddad bump version to 0.9.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:41:26 +01:00
2f08ee1fe3 report: add more commands/files to check
add all of our configuration files in /etc/proxmox-backup/; further,
call some ZFS tools to get their status.

Also, use the subscription command from manager, as we often require
more info than the status, and adapt formatting a bit.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:33:16 +01:00
93f077c5cf report: avoid lazy_static for command/files/.. definitions
those are not in a hot code path, and it is not really much work to
build them on the go..

It may not matter much, but it is unnecessary. Rust will probably
inline most of it anyway..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:27:16 +01:00
941342f70e manager: report: call method directly, avoid HTTPS request
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:23:43 +01:00
9a556c8a30 manager: add report cli command
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-11-03 15:16:42 +01:00
46dce62be6 report: add webui button for system report
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-11-03 15:16:42 +01:00
b0ef9631e6 report: add api endpoint and function to generate report
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-11-03 15:16:42 +01:00
fb0d9833af ui: task filter: add button icons
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 14:49:04 +01:00
bfe4b7d782 ui: task filter: reorder to avoid wasting vertical space
Includes some eslint fixes and label changes as well; it was too much
work to split that out into its own commit.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 14:48:20 +01:00
185dab7678 ui: add panel/Tasks and use it for the node tasks
this is a panel that is heavily inspired by the widget-toolkit's
node/Tasks panel, but is adapted to use the extended api calls of
pbs (e.g. since/until filter)

has a 'filter' panel (like pmg's log tracker gui), but it is collapsible

if we extend the api calls of the other projects, we can merge this
again into the widget-toolkit one and use that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
c1fa057cce api2/node/tasks: add optional until filter
so that users can select specific time ranges with 'since' and 'until'
(e.g. a single day)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
f66565203a api2/status: remove list_task api call
we do not need it anymore, we can do everything with nodes/NODE/tasks
instead

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
a2a7dd1535 api2/node/tasks: add optional since/typefilter/statusfilter
and change all users of the /status/tasks api call to this

with this change we can now delete the /status/tasks api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
e7dd169fdf api2/node/tasks: change limit behaviour when it is 0
instead of returning 0 elements (which does not really make sense anyway),
change it so that there is no limit anymore (besides usize::MAX)

this is technically a breaking change for the api, but i guess
no one is using limit=0 for anything sensible anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
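
As a sketch, the changed limit handling described above amounts to (helper name illustrative):

    fn apply_limit<T>(items: Vec<T>, limit: usize) -> Vec<T> {
        // limit == 0 now means "no limit" instead of "return nothing"
        let limit = if limit == 0 { usize::MAX } else { limit };
        items.into_iter().take(limit).collect()
    }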
fa31f4c54c server/worker_task: add tasktype to return the api type of a taskstate
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
038ee59960 cleanup: use const_regex, use BACKUP_ID_REGEX for api too 2020-11-03 06:36:50 +01:00
e1c1533790 fix #3039: use the same ID regex for info and api
in the api we use PROXMOX_SAFE_ID_REGEX for backup ids, but here
(where we use it to list them) we use a local regex

since the first is a superset of the one used here, simply extend
the local one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 06:25:06 +01:00
9de7c71a81 docs: extend managing remotes
with information about required privileges and limitations

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 21:13:24 +01:00
aa64e06540 sync: add access check tests
should cover all the current scenarios. remote server-side checks can't
be meaningfully unit-tested, but they are simple enough so should
hopefully never break.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 21:13:24 +01:00
18077ac633 user.cfg/user info: add test constructors
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 21:13:24 +01:00
a71a009313 proxy: drop now unused UPID import
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 21:08:38 +01:00
b6ba5acd29 proxmox-backup-proxy: use only jobstate for garbage_collection schedule
in case the garbage_collection errors out, we never set the in-memory
state, so if it failed, the last 'good' starttime was considered
for the schedule

this could lead to the job running every minute instead of the
correct schedule

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
4fdf5ddf5b api2/admin/datastore: start the garbage_collection task with our helper
instead of manually, this has the advantage that we now set
the jobstate correctly and can return with an error if it is
currently running (instead of failing in the task)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
c724f65805 server/gc_job: add 'to_stdout'
we will use this for the manual api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
79c9bf55b9 backup/{dynamic, fixed}_index: improve error message for small index files
index files that were smaller than their respective header size
would fail with

"failed to fill whole buffer"

instead, now check explicitly for the size and fail with
"index too small (size)"

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
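
A sketch of the explicit size check described above (error text taken from the commit message, everything else assumed):

    use anyhow::{bail, format_err, Error};

    fn check_index_size(path: &std::path::Path, header_size: u64) -> Result<(), Error> {
        let size = std::fs::metadata(path)
            .map_err(|err| format_err!("unable to stat {:?} - {}", path, err))?
            .len();
        if size < header_size {
            // explicit error instead of the opaque "failed to fill whole buffer"
            bail!("index too small ({})", size);
        }
        Ok(())
    }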
788d82d9b7 gc: mark_used_chunks: reduce implementation noise
try to reduce some unnecessary lines and make match arms more precise,
so one can see faster what's actually happening.

Also, avoid
> return Err(format_err!(...))
stuff, just use bail!()

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 21:08:38 +01:00
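
For illustration, the two equivalent forms (bail! is the anyhow shorthand for returning a formatted error):

    use anyhow::{bail, format_err, Error};

    fn verbose_form() -> Result<(), Error> {
        return Err(format_err!("chunk not found"));
    }

    fn concise_form() -> Result<(), Error> {
        bail!("chunk not found"); // expands to the return above
    }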
2f0b92352d garbage collect: improve index error messages
so that in case of a broken index file, the user knows which it is

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 20:08:50 +01:00
b7f2be5137 log rotate task: make task archive limits be binary based
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
72aa1834dc log rotate task: adapt internal jobstate ID, set worker one to None for now
as we have only one logrotate task currently..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
fe4cc5b1a1 server: implement access log rotation with re-open via command socket
re-use the future we already have for task log rotation to trigger
it.

Move the FileLogger in ApiConfig into an Arc, so that we can actually
update it and have REST use the new one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
04b053d87e server: write main daemon PID to run directory
so that we can easily get the main PID of the most recently launched
daemon. Will be used to get the control socket of that one for access
log rotation in a future patch.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:50:24 +01:00
b469011fd1 command socket: make create_control_socket private
this is internal for now; use the commando socket struct
implementation, and ideally not a new one but the existing ones
created in the proxy and api daemons.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:50:24 +01:00
a68768cf31 server: use generalized commando socket for worker tasks commands
Allows extending the use of that socket in the future, e.g., for log
rotate re-open signaling.

To reflect this we use a more general name, and change the commandos
to a more clear namespace.

Both are actually somewhat of a breaking change, but the single real
world issue it should be able to cause is that one won't be able to
stop tasks from older daemons, which still use the older abstract
socket name format.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:48:04 +01:00
f3df613cb7 server: add CommandoSocket where multiple users can register commands
This is a preparatory step to replace the task control socket with it
and provide a "reopen log file" command for the rest server.

Kept it simple by disallowing registration of new commands after the
socket gets spawned; this avoids the need for locking.

If we really need that, we can always wrap it in an Arc<RwLock<..>> or
something like that, or, even nicer, register at compile time.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
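
A minimal sketch of such a register-before-spawn design (names and signatures illustrative, not the actual implementation):

    use std::collections::HashMap;
    use anyhow::{bail, Error};
    use serde_json::Value;

    type CommandFn = Box<dyn Fn(Option<&Value>) -> Result<Value, Error> + Send + Sync>;

    struct CommandoSocket {
        commands: HashMap<String, CommandFn>,
        spawned: bool,
    }

    impl CommandoSocket {
        fn register_command(&mut self, name: String, handler: CommandFn) -> Result<(), Error> {
            if self.spawned {
                // refusing late registration means the map never needs a lock
                bail!("socket already spawned - cannot register '{}'", name);
            }
            self.commands.insert(name, handler);
            Ok(())
        }
    }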
056ee78567 config: network: use error message when parsing netmask failed
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
3cd529ea51 tools: file logger: avoid some possible unwraps in log method
writing to a file can explode quite easily.
time formatting to rfc3339 should be more robust, but it has a few
conditions where it could fail, so catch that too (and only really
do it if required).

The writes to stdout are left as is; stdout normally is redirected to
the journal, which is in memory and thus breaks later than most stuff,
and at that point we probably do not care anymore anyway.

It could make sense to actually return a result here..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
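
A sketch of the hedged write this describes (signature illustrative):

    use std::io::Write;

    // a failed write to the log file must not panic the daemon
    fn log_line(file: &mut std::fs::File, line: &str) {
        if let Err(err) = writeln!(file, "{}", line) {
            eprintln!("writing to log file failed: {}", err);
        }
    }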
3aade17125 tools: log rotate: compressing rotated files
We always renamed the last one to a file without compression
extension, even if it was .zst previously. So always add the correct
ending to the new last one, if compress was true.

Further, we cannot detect whether compression would be required if we
already rotated (renamed) it to the file with .zst included.

So check on rotation itself if it would be a "no .zst" -> ".zst"
transition, and call compress there.

it really should be OK now *knocking wood*

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 18:35:13 +01:00
1dc2fe20dd tools: log rotate: fix file ending for compressed files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 18:35:13 +01:00
645a47ff6e config: support netmask when parsing interfaces file 2020-11-02 14:32:35 +01:00
b1456a8ea7 ui: fix verificationjob task description 2020-11-02 10:15:52 +01:00
a9fcbec9dc file logger: allow reopening file
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 10:03:10 +01:00
346a488e35 pull out /run and /var/log directory constants to buildcfg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 10:03:10 +01:00
3066f56481 notify: add link to server GUI 2020-11-02 09:12:14 +01:00
07ca4e3609 gc: remove extra empty lines in email notification template 2020-11-02 09:12:14 +01:00
dcd75edb72 ui: fix dashboard subscription
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 08:08:44 +01:00
59af9ca98e sync: allow sync for non-superusers
by requiring
- Datastore.Backup permission for target datastore
- Remote.Read permission for source remote/datastore
- Datastore.Prune if vanished snapshots should be removed
- Datastore.Modify if another user should own the freshly synced
snapshots

reading a sync job entry only requires knowing about both the source
remote and the target datastore.

note that this does not affect the Authid used to authenticate with the
remote, which of course also needs permissions to access the source
datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 07:10:12 +01:00
f1694b062d fix #2864: add owner option to sync
instead of hard-coding 'backup@pam'. this allows a bit more flexibility
(e.g., syncing to a datastore that can directly be used as restore
source) without overly complicating things.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 07:08:05 +01:00
fa7aceeb15 manager: subscription commands s/delete/remove/
no idea why I added it as "delete", for all other such operations we
use the "remove" sub-command...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-01 13:19:30 +01:00
0e16f57e37 apt: sort packages for update notification mail
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 22:58:52 +01:00
bc00289bce add daily update and maintenance task
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 22:51:26 +01:00
86d602457a api: apt: implement support to send notification email on new updates
again, base idea copied off PVE, but we save the information about
which pending version we already sent a mail for in a separate
object, to keep the api return type APTUpdateInfo clean.

This also makes a few things a bit easier, as we can update the
package status without saving/restoring the notify information.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 22:51:26 +01:00
33508b1237 api: implement apt pkg cache
based on the idea of PVE

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:42:49 +01:00
b282557563 api: apt: factor out and improve calling apt update
apt changes some of its state/cache even if it errors out, most of
the time, so we actually want to print both stderr and stdout.

Further, only warn if its exit code is non-zero; for the same
rationale, it may make updates available even if it errors (e.g.,
because a future pbs-enterprise repo is additionally configured but
not accessible).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:59 +01:00
e6513bd5de api/tools: split out apt helpers from api to own module
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
5911f74096 api types: derive Debug for APTUpdateInfo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
0bb74e54b1 worker task: drop debug prints
they are not useful anymore, rather noisy

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
f254a27071 tools: do not unnecessarily prefix module path
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
d0abba3397 trivial: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
54adea366c ui: ACL view: do not save grid state
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 11:36:48 +01:00
ba2e4b15da ui: improve ACL view layout
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 11:33:31 +01:00
0ccdd1b6a4 ui: bump sync/verify grid stateid
so that people get the improved view by default

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 10:58:57 +01:00
fb66c85363 ui: improve sync job view layout
Avoid overuse of flex; that is as bad as having everything at fixed widths.

In spirit similar to the previous commit for the verify panel, see
that for some rationale.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 10:56:51 +01:00
aae4c30ceb ui: improve verify job view layout, show job-id
Avoid overuse of flex; that is as bad as having everything at fixed widths.

* Set date-time fields to 150 px as they are fixed width text.
* Duration is maximal 3 units, so it can be made fixed too.
* Schedule is flex with lower and upper limits, this is useful as
  it's a field which can be either quite short (daily) or long
  (mon..fri *-10..12-1..7 02:00/30:30)
* Status and comment are flex; this way we always get a filled grid

Move status after the last verify date and duration fields; this
increases information density at the left of the grid, reducing the
need for eye movement, and it also groups together the "information
about last job" nicer.

Show job-ids by default even if they are auto-generated when adding
over the gui, as they can help to find the respective job faster when
getting a mail with an error.

Reported-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 10:56:51 +01:00
0656344ae4 ui: administration: set icons for tabs
orient on PVE; the ones for Updates and ServerStatus should be
self-explanatory.

Services is named "System" in PVE, but reusing that cogs icon makes
similar sense here too, and seems in line with the search results of
a "service icons" query.

Syslog is the same as our general log icon, but as we also use this
normally for worker task logs and that is present here too, I
changed the worker task log icon to the alternative list, which
resembles a task view window - so IMO even better than before.

Sync that change also into the always present tasks button at the top
right.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 09:11:11 +01:00
1143f6ca93 cleanup: fix wording in GC status emails 2020-10-31 07:56:42 +01:00
90e94aa280 docs: client: avoid that repo gets detected as email address
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 17:08:08 +01:00
c0af05e143 docs: fixup bad RST table format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 17:05:49 +01:00
4aef06f1b6 docs: add token example to client, and reformat a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 17:01:22 +01:00
034cf70b72 docs: add API tokens to documentation
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:46:19 +01:00
8b600f9965 api: replace auth_id with auth-id
in parameters, and fix up the completion for the ACL update parameter.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:46:19 +01:00
e4e280183e privs: add some more comments explaining privileges
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:42:30 +01:00
2fc45a97a9 privs: remove PRIV_REMOVE_PRUNE
it's not used anywhere, and not needed either until the day we might
implement push syncs.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:42:26 +01:00
b7ce2e575f verify jobs: add permissions
equivalent to verifying a whole datastore, except for reading job
(entries), which is accessible to regular Datastore.Audit/Backup users
as well.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
09f6a24078 verify: introduce & use new Datastore.Verify privilege
for verifying a whole datastore. Datastore.Backup now allows verifying
only backups owned by the triggering user.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
b728a69e7d privs: use Datastore.Modify|Backup to set backup notes
Datastore.Backup is limited to owned groups, as usual.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
1401f4be5f privs: allow reading notes with Datastore.Audit
they are returned when reading the manifest, which just requires
Datastore.Audit as well. Datastore.Read is for reading backup contents,
not metadata.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
fdb4416bae ui: permission path selector: cbind typeAhead to editable
ExtJS throws an exception if 'typeAhead' is true but 'editable' is
false.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 16:31:53 +01:00
abe1edfc95 update d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-30 16:11:50 +01:00
e4a864bd21 impl From<Authid> for Userid
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-30 15:19:07 +01:00
7a7368ee08 bump proxmox dependency to 0.7.0 for totp updates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-30 15:19:07 +01:00
e707fd2b3b ui: Utils: add product specific task descriptions
and sort them alphabetically

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-30 14:05:17 +01:00
625a56b75e server/rest: accept also = as token separator
Like we do in Proxmox VE

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:34:26 +01:00
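
Presumably this mirrors PVE's "PVEAPIToken=" header form, i.e. accepting "PBSAPIToken=" in addition to "PBSAPIToken " before the credentials; the commit itself does not show the code, so the following parsing sketch is an assumption:

    // accept both "PBSAPIToken <id>:<secret>" and "PBSAPIToken=<id>:<secret>"
    fn extract_token_credentials(auth_header: &str) -> Option<&str> {
        auth_header
            .strip_prefix("PBSAPIToken ")
            .or_else(|| auth_header.strip_prefix("PBSAPIToken="))
    }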
6d8a1ac9e4 server/rest: use constants for HTTP headers
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:33:36 +01:00
362739054e api tokens: add authorization method
and properly decode secret (which is a no-op with the current scheme).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 13:15:14 +01:00
2762481cc8 proxmox-backup-manager: add subscription commands
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:03:58 +01:00
652506e6b8 api: define subscription module and methods as public
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:03:58 +01:00
926d253126 api: define subscription key schema and use it
nicer to have the correct regex checked in parameter verification
already

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 12:57:14 +01:00
1cd951c93e proxy: fix warnings
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 12:49:43 +01:00
3b707fbb8f proxy: split out code to run garbage collection job 2020-10-30 11:01:45 +01:00
b15751bf55 check_schedule cleanup: use &str instead of String
This way we can avoid many clone() calls.
2020-10-30 09:49:50 +01:00
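
For illustration, the kind of signature change meant here (the real check_schedule arguments may differ):

    // before: forced callers to clone() into owned Strings
    fn check_schedule_owned(worker_type: String, event_str: String, id: String) {}

    // after: borrows suffice, so callers can pass &job.schedule etc. as-is
    fn check_schedule(worker_type: &str, event_str: &str, id: &str) {}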
82c05b41fa proxy: extract commonly used logic for scheduling into new function
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-30 09:49:50 +01:00
b8d9079835 proxy: move prune logic into new file
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-30 09:49:50 +01:00
f8a682a873 ui: user menu: allow changing language while logged in
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 09:46:04 +01:00
b03a19b6e8 bump version to 0.9.4-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 20:25:37 +01:00
603a6bd183 d/postinst: followup: grep and sed use different regex escaping ..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 20:25:37 +01:00
83b039af35 d/postinst: make more resilient
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 19:58:41 +01:00
c9299e76fc bump version to 0.9.3-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 17:20:04 +01:00
2f1a46f748 ui: move user, token and permissions into an access control tab panel
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 16:47:18 +01:00
2b38dfb456 d/control: update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 16:18:40 +01:00
f487a622ce ui: datastore summary: handle missing snapshots of a type
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 15:52:53 +01:00
906ef6c5bd api2/access/user: fix return type schema
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:20:10 +01:00
ea1853a17b api2/access/user: drop Option, treat empty Vec as None
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:17:54 +01:00
221177ba41 fixup hardcoded paths
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:15:17 +01:00
184a37635b gui: add API token ACLs
and the needed API token selector.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
b2da7fbd1c acls: allow viewing/editing user's token ACLs
even for otherwise unprivileged users.

since effective privileges of an API token are always intersected with
those of their owning user, this does not allow an unprivileged user to
elevate their privileges in practice, but avoids the need to involve a
privileged user to deploy API tokens.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
7fe76d3491 gui: add API token UI
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
e6b5bf69a3 gui: add permissions button to user view
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
4615325f9e manager: add user permissions command
useful for debugging complex ACL setups.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
2156dec5a9 manager: add token commands
to generate, list and delete tokens. adding them to ACLs already works
out of the box.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
16245d540c tasks: allow unpriv users to read their tokens' tasks
and tighten down the return schema while we're at it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
bff8557298 owner checks: handle backups owned by API tokens
a user should be allowed to read/list/overwrite backups owned by their
own tokens, but a token should not be able to read/list/overwrite
backups owned by their owning user.

when changing ownership of a backup group, a user should be able to
transfer ownership to/from their own tokens if the backup is owned by
them (or one of their tokens).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
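
A sketch of the ownership rule spelled out above (Authid layout and method names assumed, modeled on the Authid commit later in this log):

    #[derive(PartialEq)]
    struct Authid { user: String, tokenname: Option<String> }

    impl Authid {
        fn is_token(&self) -> bool { self.tokenname.is_some() }
        fn user(&self) -> &str { &self.user }
    }

    // allowed: same auth id, or a user accessing a backup owned by one of
    // their own tokens - but never a token accessing its owner's backups
    fn check_backup_owner(auth_id: &Authid, owner: &Authid) -> bool {
        auth_id == owner
            || (owner.is_token()
                && !auth_id.is_token()
                && auth_id.user() == owner.user())
    }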
34aa8e13b6 client/remote: allow using ApiToken + secret
in place of user + password.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
babab85b56 api: add permissions endpoint
and adapt privilege calculation to return propagate flag

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
6746bbb1a2 api: allow listing users + tokens
since it's not possible to extend existing structs, UserWithTokens
duplicates most of user::User.. to avoid duplicating user::ApiToken as
well, this returns full API token IDs, not just the token name part.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
942078c40b api: add API token endpoints
beneath the user endpoint.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
c30816c1f8 REST: extract and handle API tokens
and refactor handling of headers in the REST server while we're at it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
e6dc35acb8 replace Userid with Authid
in most generic places. this is accompanied by a change in
RpcEnvironment to purposefully break existing call sites.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
e10c5c74f6 bump proxmox dependency to 0.6.0 for api tokens and tfa
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:11:39 +01:00
f8adf8f83f config: add token.shadow file
containing pairs of token ids and hashed secret values.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
e0538349e2 api: add Authid as wrapper around Userid
with an optional Tokenname, appended with '!' as delimiter in the string
representation like for PVE.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
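
A sketch of that string representation (field layout assumed; only the '!' delimiter is given by the commit message):

    use std::fmt;

    struct Authid { user: String, tokenname: Option<String> }

    impl fmt::Display for Authid {
        // "user@realm" for plain users, "user@realm!tokenname" for API tokens
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            match &self.tokenname {
                Some(token) => write!(f, "{}!{}", self.user, token),
                None => write!(f, "{}", self.user),
            }
        }
    }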
0903403ce7 bump version to 0.9.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:58:21 +01:00
b6563f48ad GC: improve task logs
Make it more clear that removed files are chunks (not indexes or
something like that, user cannot know that we do not touch them here)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:47:39 +01:00
932390bd46 GC: fix logging leftover bad chunks
fixes commit b4fb262335, which copied
over the "Removed bad files:" block, but only adapted the log text,
not the actual variable.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:40:29 +01:00
6b7688aa98 ui: datastore: fix sync/verify job removal prompt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:34:31 +01:00
ab0cf7e6a1 ui: drop id field from verify/sync add window
the config is shared between multiple datastores with the ID as, well,
the unique ID, but we only show those of a single datastore.

So if a user adds a new one with a fixed ID "12345" but a job with
that ID exists already on another store, they get an error about
duplicate IDs, but cannot relate, as that duplicate job is not
visible (filtered away)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:22:43 +01:00
264779e704 server/worker_task: simplify task log writing
instead of prerotating 1000 tasks
(which resulted in 2 writes each time an active worker was finished)
simply append finished tasks to the archive (which will be rotated)

page cache should be good enough so that we can get the task logs fast

since existing installations might have an 'index' file, we
still have to read tasks from there, but only if it exists

this simplifies the TaskListInfoIterator a good amount

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-29 12:41:20 +01:00
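
A sketch of the append-instead-of-prerotate approach (the archive path is an assumption):

    use std::fs::OpenOptions;
    use std::io::Write;

    // one append per finished task; rotation of the archive happens separately
    fn append_finished_task(line: &str) -> std::io::Result<()> {
        let mut file = OpenOptions::new()
            .create(true)
            .append(true)
            .open("/var/log/proxmox-backup/tasks/archive")?;
        writeln!(file, "{}", line)
    }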
7f3d91003c worker task: remove debug print, faster modulo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 12:35:33 +01:00
14e0862509 api: datastore status: introduce proper structs and restore compatibility
by moving the properties of the storage status out again to the top
level object

also introduce proper structs for the types used, to get type-safety
and better documentation for the api calls

this changes the backup counts from an array of [groups,snapshots] to
an object/struct with { groups, snapshots } and includes 'other' types
(though we do not have any at this moment)

this way it is better documented

this also adapts the ui code to cope with the api changes

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-29 12:31:27 +01:00
9e733dae48 send sync job status emails 2020-10-29 12:22:50 +01:00
bfea476be2 schedule_datastore_sync_jobs: remove unneccessary clone() 2020-10-29 12:22:41 +01:00
385cf2bd9d send_job_status_mail: correctly escape html characters 2020-10-29 11:22:08 +01:00
d6373f3525 garbage_collection: log deduplication factor 2020-10-29 11:13:01 +01:00
01f37e01c3 ui: datastore: use pointer cursor for edit notes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 10:45:37 +01:00
b4fb262335 garbage_collection: log bad chunks (still_bad value) 2020-10-29 10:24:31 +01:00
5499bd3dee fix #2998: encode mtime as i64 instead of u64
saves files' mtime as i64 instead of u64, which enables backup of
files with negative mtime

the catalog_decode_i64 is compatible with encoded u64 values (if < 2^63)
but not the reverse, so all "old" catalogs can be read with the new
decoder, but catalogs that contain negative mtimes will decode wrongly
on older clients

also remove the arbitrary maximum value of 2^63 - 1 for
encode_u64 (we just use up to 10 bytes now) and correctly
decode them and update the comments accordingly

also adds tests for i64 encode/decode and for compatibility between
u64 encode and i64 decode

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-29 08:51:10 +01:00
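
One encoding with the stated compatibility property, as a sketch (a 7-bit little-endian varint, up to 10 bytes for a full 64-bit value, with i64 reinterpreted as u64; the actual catalog scheme may differ):

    // 7-bit groups, least significant first; high bit = continuation
    fn catalog_encode_u64(mut v: u64, out: &mut Vec<u8>) {
        loop {
            let byte = (v & 0x7f) as u8;
            v >>= 7;
            if v == 0 {
                out.push(byte);
                break;
            }
            out.push(byte | 0x80);
        }
    }

    // values < 2^63 encode identically either way; negative mtimes map to
    // huge u64 values (10 bytes), which only the new decoder handles
    fn catalog_encode_i64(v: i64, out: &mut Vec<u8>) {
        catalog_encode_u64(v as u64, out);
    }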
d771a608f5 verify: directly pass manifest to filter function
In order to avoid loading the manifest twice during verify.
2020-10-29 07:59:19 +01:00
227a39b34b bump version to 0.9.2-2
re-use the changelog as this was not released publicly and it's just
a small fix

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 23:05:58 +01:00
f9beae9cc9 client: adapt to changed datastore status return schema
fixes commit 16f9f244cf, which extended
the return schema of the status API but did not adapt the client
status command to that.

Simply define our own tiny return schema and use that.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 22:59:40 +01:00
124 changed files with 6224 additions and 1826 deletions


@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "0.9.2"
version = "0.9.6"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"
@ -38,7 +38,7 @@ pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pathpatterns = "0.1.2"
proxmox = { version = "0.5.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox = { version = "0.7.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0"


@ -19,7 +19,8 @@ USR_SBIN := \
SERVICE_BIN := \
proxmox-backup-api \
proxmox-backup-banner \
proxmox-backup-proxy
proxmox-backup-proxy \
proxmox-daily-update \
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release

debian/changelog

@ -1,4 +1,107 @@
rust-proxmox-backup (0.9.2-1) unstable; urgency=medium
rust-proxmox-backup (0.9.6-1) unstable; urgency=medium
* fix #3106: improve queueing new incoming connections
* fix #2870: sync: ensure an updated ticket is used, if available
* proxy: log if there are too many open connections
* ui: SyncJobEdit: fix sending 'delete' values on SyncJob creation
* datastore config: allow to configure who receives job notify emails
* ui: fix task description for log rotate
* proxy: also rotate auth.log file
* ui: add shell panel under administration
* ui: sync jobs: only set default schedule when creating new jobs and some
other small fixes
-- Proxmox Support Team <support@proxmox.com> Wed, 04 Nov 2020 19:12:57 +0100
rust-proxmox-backup (0.9.5-1) unstable; urgency=medium
* ui: user menu: allow one to change the language while staying logged in
* proxmox-backup-manager: add subscription commands
* server/rest: also accept = as token separator
* privs: allow reading snapshot notes with Datastore.Audit
* privs: enforce Datastore.Modify|Backup to set backup notes
* verify: introduce and use new Datastore.Verify privilege
* docs: add API tokens to documentation
* ui: various smaller layout and icon improvements
* api: implement apt pkg cache for caching pending updates
* api: apt: implement support to send notification email on new updates
* add daily update and maintenance task
* fix #2864: add owner option to sync
* sync: allow sync for non-superusers under special conditions
* config: support deprecated netmask when parsing interfaces file
* server: implement access log rotation with re-open via command socket
* garbage collect: improve index error messages
* fix #3039: use the same ID regex for info and api
* ui: administration: allow extensive filtering of the worker tasks
* report: add api endpoint and function to generate report
-- Proxmox Support Team <support@proxmox.com> Tue, 03 Nov 2020 17:41:17 +0100
rust-proxmox-backup (0.9.4-2) unstable; urgency=medium
* make postinst (update) script more resilient
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Oct 2020 20:09:30 +0100
rust-proxmox-backup (0.9.4-1) unstable; urgency=medium
* implement API-token
* client/remote: allow using API-token + secret
* ui/cli: implement API-token management interface and commands
* ui: add widget to view the effective permissions of a user or token
* ui: datastore summary: handle error when having zero snapshots of any type
* ui: move user, token and permissions into an access control tab panel
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Oct 2020 17:19:13 +0100
rust-proxmox-backup (0.9.3-1) unstable; urgency=medium
* fix #2998: encode mtime as i64 instead of u64
* GC: log the number of leftover bad chunks we could not yet clean up, as no
valid one replaced them. Also log deduplication factor.
* send sync job status emails
* api: datastore status: introduce proper structs and restore compatibility
to 0.9.1
* ui: drop id field from verify/sync add window, they are now seen as internal
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Oct 2020 14:58:13 +0100
rust-proxmox-backup (0.9.2-2) unstable; urgency=medium
* rework server web-interface, move more datastore related panels as tabs
inside the datastore view
@ -76,7 +179,7 @@ rust-proxmox-backup (0.9.2-1) unstable; urgency=medium
* ui: datastore: show snapshot manifest comment and allow to edit them
-- Proxmox Support Team <support@proxmox.com> Wed, 28 Oct 2020 21:27:02 +0100
-- Proxmox Support Team <support@proxmox.com> Wed, 28 Oct 2020 23:05:41 +0100
rust-proxmox-backup (0.9.1-1) unstable; urgency=medium

debian/control

@ -34,10 +34,10 @@ Build-Depends: debhelper (>= 11),
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.5+api-macro-dev,
librust-proxmox-0.5+default-dev,
librust-proxmox-0.5+sortable-macro-dev,
librust-proxmox-0.5+websocket-dev,
librust-proxmox-0.7+api-macro-dev,
librust-proxmox-0.7+default-dev,
librust-proxmox-0.7+sortable-macro-dev,
librust-proxmox-0.7+websocket-dev,
librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.6+default-dev (>= 0.6.1-~~),
librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),

debian/postinst

@ -15,12 +15,24 @@ case "$1" in
fi
deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true
flock -w 30 /etc/proxmox-backup/.datastore.lck sed -i '/^\s\+verify-schedule /d' /etc/proxmox-backup/datastore.cfg
# FIXME: Remove with 1.1
if test -n "$2"; then
if dpkg --compare-versions "$2" 'lt' '0.9.4-1'; then
if grep -s -q -P -e '^\s+verify-schedule ' /etc/proxmox-backup/datastore.cfg; then
echo "NOTE: drop all verify schedules from datastore config."
echo "You can now add more flexible verify jobs"
flock -w 30 /etc/proxmox-backup/.datastore.lck \
sed -i '/^\s\+verify-schedule /d' /etc/proxmox-backup/datastore.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '0.9.5-1'; then
chown --quiet backup:backup /var/log/proxmox-backup/api/auth.log || true
fi
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
fi
;;

debian/prerm

@ -6,5 +6,6 @@ set -e
# modeled after dh_systemd_start output
if [ -d /run/systemd/system ] && [ "$1" = remove ]; then
deb-systemd-invoke stop 'proxmox-backup-banner.service' 'proxmox-backup-proxy.service' 'proxmox-backup.service' >/dev/null || true
deb-systemd-invoke stop 'proxmox-backup-banner.service' 'proxmox-backup-proxy.service' \
'proxmox-backup.service' 'proxmox-backup-daily-update.timer' >/dev/null || true
fi


@ -1,10 +1,13 @@
etc/proxmox-backup-proxy.service /lib/systemd/system/
etc/proxmox-backup.service /lib/systemd/system/
etc/proxmox-backup-banner.service /lib/systemd/system/
etc/proxmox-backup-daily-update.service /lib/systemd/system/
etc/proxmox-backup-daily-update.timer /lib/systemd/system/
etc/pbstest-beta.list /etc/apt/sources.list.d/
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
usr/sbin/proxmox-backup-manager
usr/share/javascript/proxmox-backup/index.hbs
usr/share/javascript/proxmox-backup/css/ext6-pbs.css

debian/rules

@ -38,6 +38,7 @@ override_dh_auto_install:
LIBDIR=/usr/lib/$(DEB_HOST_MULTIARCH)
override_dh_installsystemd:
dh_installsystemd -pproxmox-backup-server proxmox-backup-daily-update.timer
# note: we start/try-reload-restart services manually in postinst
dh_installsystemd --no-start --no-restart-after-upgrade


@ -18,25 +18,25 @@ the default is the local host (``localhost``).
You can specify a port if your backup server is only reachable on a different
port (e.g. with NAT and port forwarding).
Note that if the server is an IPv6 address, you have to write it with
square brackets (e.g. [fe80::01]).
Note that if the server is an IPv6 address, you have to write it with square
brackets (for example, `[fe80::01]`).
You can pass the repository with the ``--repository`` command
line option, or by setting the ``PBS_REPOSITORY`` environment
variable.
You can pass the repository with the ``--repository`` command line option, or
by setting the ``PBS_REPOSITORY`` environment variable.
Here are some examples of valid repositories and the real values
================================ ============ ================== ===========
================================ ================== ================== ===========
Example User Host:Port Datastore
================================ ============ ================== ===========
================================ ================== ================== ===========
mydatastore ``root@pam`` localhost:8007 mydatastore
myhostname:mydatastore ``root@pam`` myhostname:8007 mydatastore
user@pbs@myhostname:mydatastore ``user@pbs`` myhostname:8007 mydatastore
user\@pbs!token@host:store ``user@pbs!token`` myhostname:8007 mydatastore
192.168.55.55:1234:mydatastore ``root@pam`` 192.168.55.55:1234 mydatastore
[ff80::51]:mydatastore ``root@pam`` [ff80::51]:8007 mydatastore
[ff80::51]:1234:mydatastore ``root@pam`` [ff80::51]:1234 mydatastore
================================ ============ ================== ===========
================================ ================== ================== ===========
Environment Variables
---------------------
@ -45,16 +45,16 @@ Environment Variables
The default backup repository.
``PBS_PASSWORD``
When set, this value is used for the password required for the
backup server.
When set, this value is used for the password required for the backup server.
You can also set this to an API token secret.
``PBS_ENCRYPTION_PASSWORD``
When set, this value is used to access the secret encryption key (if
protected by password).
``PBS_FINGERPRINT`` When set, this value is used to verify the server
certificate (only used if the system CA certificates cannot
validate the certificate).
certificate (only used if the system CA certificates cannot validate the
certificate).
Output Format


@ -79,4 +79,17 @@ either start it manually on the GUI or provide it with a schedule (see
└────────────┴───────┴────────┴──────────────┴───────────┴─────────┘
# proxmox-backup-manager sync-job remove pbs2-local
For setting up sync jobs, the configuring user needs the following permissions:
#. ``Remote.Read`` on the ``/remote/{remote}/{remote-store}`` path
#. at least ``Datastore.Backup`` on the local target datastore (``/datastore/{store}``)
If the ``remove-vanished`` option is set, ``Datastore.Prune`` is required on
the local datastore as well. If the ``owner`` option is not set (defaulting to
``backup@pam``) or set to something other than the configuring user,
``Datastore.Modify`` is required as well.
.. note:: A sync job can only sync backup groups that the configured remote's
user/API token can read. If a remote is configured with a user/API token that
only has ``Datastore.Backup`` privileges, only the limited set of accessible
snapshots owned by that user/API token can be synced.


@ -70,7 +70,7 @@ The resulting user list looks like this:
│ root@pam │ 1 │ │ │ │ │ Superuser │
└──────────┴────────┴────────┴───────────┴──────────┴──────────────────┴──────────────────┘
Newly created users do not have any permissions. Please read the next
Newly created users do not have any permissions. Please read the Access Control
section to learn how to set access permissions.
If you want to disable a user account, you can do that by setting ``--enable`` to ``0``
@ -85,15 +85,69 @@ Or completely remove the user with:
# proxmox-backup-manager user remove john@pbs
.. _user_tokens:
API Tokens
----------
Any authenticated user can generate API tokens which can in turn be used to
configure various clients, instead of directly providing the username and
password.
API tokens serve two purposes:
#. Easy revocation in case a client gets compromised
#. Limiting permissions for each client/token within the user's permissions
An API token consists of two parts: an identifier consisting of the user name,
the realm and a tokenname (``user@realm!tokenname``), and a secret value. Both
need to be provided to the client in place of the user ID (``user@realm``) and
the user password, respectively.
The API token is passed from the client to the server by setting the
``Authorization`` HTTP header with method ``PBSAPIToken`` to the value
``TOKENID:TOKENSECRET``.
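
As a small Rust sketch of assembling that header value (token id and secret taken from the example below):

    fn main() {
        let tokenid = "john@pbs!client1";
        let secret = "d63e505a-e3ec-449a-9bc7-1da610d4ccde";
        // Authorization header: method "PBSAPIToken", value TOKENID:TOKENSECRET
        println!("Authorization: PBSAPIToken {}:{}", tokenid, secret);
    }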
Generating new tokens can be done using ``proxmox-backup-manager`` or the GUI:
.. code-block:: console
# proxmox-backup-manager user generate-token john@pbs client1
Result: {
"tokenid": "john@pbs!client1",
"value": "d63e505a-e3ec-449a-9bc7-1da610d4ccde"
}
.. note:: The displayed secret value needs to be saved, since it cannot be
displayed again after generating the API token.
The ``user list-tokens`` sub-command can be used to display tokens and their
metadata:
.. code-block:: console
# proxmox-backup-manager user list-tokens john@pbs
┌──────────────────┬────────┬────────┬─────────┐
│ tokenid │ enable │ expire │ comment │
╞══════════════════╪════════╪════════╪═════════╡
│ john@pbs!client1 │ 1 │ │ │
└──────────────────┴────────┴────────┴─────────┘
Similarly, the ``user delete-token`` subcommand can be used to delete a token
again.
Newly generated API tokens don't have any permissions. Please read the next
section to learn how to set access permissions.
.. _user_acl:
Access Control
--------------
By default new users do not have any permission. Instead you need to
specify what is allowed and what is not. You can do this by assigning
roles to users on specific objects like datastores or remotes. The
By default new users and API tokens do not have any permission. Instead you
need to specify what is allowed and what is not. You can do this by assigning
roles to users/tokens on specific objects like datastores or remotes. The
following roles exist:
**NoAccess**
@ -148,20 +202,21 @@ The data represented in each field is as follows:
#. The object on which the permission is set. This can be a specific object
(single datastore, remote, etc.) or a top level object, which with
propagation enabled, represents all children of the object also.
#. The user for which the permission is set
#. The user(s)/token(s) for which the permission is set
#. The role being set
You can manage datastore permissions from **Configuration -> Permissions** in the
web interface. Likewise, you can use the ``acl`` subcommand to manage and
monitor user permissions from the command line. For example, the command below
will add the user ``john@pbs`` as a **DatastoreAdmin** for the datastore
``store1``, located at ``/backup/disk1/store1``:
You can manage permissions via **Configuration -> Access Control ->
Permissions** in the web interface. Likewise, you can use the ``acl``
subcommand to manage and monitor user permissions from the command line. For
example, the command below will add the user ``john@pbs`` as a
**DatastoreAdmin** for the datastore ``store1``, located at
``/backup/disk1/store1``:
.. code-block:: console
# proxmox-backup-manager acl update /datastore/store1 DatastoreAdmin --userid john@pbs
# proxmox-backup-manager acl update /datastore/store1 DatastoreAdmin --auth-id john@pbs
You can monitor the roles of each user using the following command:
You can list the ACLs of each user/token using the following command:
.. code-block:: console
@ -172,7 +227,7 @@ You can monitor the roles of each user using the following command:
│ john@pbs │ /datastore/disk1 │ 1 │ DatastoreAdmin │
└──────────┴──────────────────┴───────────┴────────────────┘
A single user can be assigned multiple permission sets for different datastores.
A single user/token can be assigned multiple permission sets for different datastores.
.. Note::
Naming convention is important here. For datastores on the host,
@ -183,4 +238,39 @@ A single user can be assigned multiple permission sets for different datastores.
remote (see `Remote` below) and ``{storename}`` is the name of the datastore on
the remote.
API Token permissions
~~~~~~~~~~~~~~~~~~~~~
API token permissions are calculated based on ACLs containing their ID
independent of those of their corresponding user. The resulting permission set
on a given path is then intersected with that of the corresponding user.
In practice this means:
#. API tokens require their own ACL entries
#. API tokens can never do more than their corresponding user
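
As a sketch, with privileges represented as bit flags as elsewhere in this codebase:

    // a token's effective privileges can never exceed its owner's
    fn effective_token_privs(token_acl_privs: u64, user_privs: u64) -> u64 {
        token_acl_privs & user_privs
    }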
Effective permissions
~~~~~~~~~~~~~~~~~~~~~
To calculate and display the effective permission set of a user or API token
you can use the ``proxmox-backup-manager user permission`` command:
.. code-block:: console
# proxmox-backup-manager user permissions john@pbs --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1
- Datastore.Audit (*)
- Datastore.Backup (*)
- Datastore.Modify (*)
- Datastore.Prune (*)
- Datastore.Read (*)
# proxmox-backup-manager acl update /datastore/store1 DatastoreBackup --auth-id 'john@pbs!client1'
# proxmox-backup-manager user permissions 'john@pbs!client1' --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1
- Datastore.Backup (*)


@ -1,9 +1,11 @@
include ../defines.mk
UNITS :=
UNITS := \
proxmox-backup-daily-update.timer \
DYNAMIC_UNITS := \
proxmox-backup-banner.service \
proxmox-backup-daily-update.service \
proxmox-backup.service \
proxmox-backup-proxy.service


@ -0,0 +1,8 @@
[Unit]
Description=Daily Proxmox Backup Server update and maintenance activities
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=%LIBEXECDIR%/proxmox-backup/proxmox-daily-update


@ -0,0 +1,10 @@
[Unit]
Description=Daily Proxmox Backup Server update and maintenance activities
[Timer]
OnCalendar=*-*-* 1:00
RandomizedDelaySec=5h
Persistent=true
[Install]
WantedBy=timers.target


@ -9,6 +9,7 @@ After=proxmox-backup.service
Type=notify
ExecStart=%LIBEXECDIR%/proxmox-backup/proxmox-backup-proxy
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/proxmox-backup/proxy.pid
Restart=on-failure
User=%PROXY_USER%
Group=%PROXY_USER%


@ -7,6 +7,7 @@ After=network.target
Type=notify
ExecStart=%LIBEXECDIR%/proxmox-backup/proxmox-backup-api
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/proxmox-backup/api.pid
Restart=on-failure
[Install]


@ -2,7 +2,7 @@ use std::io::Write;
use anyhow::{Error};
use proxmox_backup::api2::types::Userid;
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader};
pub struct DummyWriter {
@ -26,13 +26,13 @@ async fn run() -> Result<(), Error> {
let host = "localhost";
let username = Userid::root_userid();
let auth_id = Authid::root_auth_id();
let options = HttpClientOptions::new()
.interactive(true)
.ticket_cache(true);
let client = HttpClient::new(host, 8007, username, options)?;
let client = HttpClient::new(host, 8007, auth_id, options)?;
let backup_time = proxmox::tools::time::parse_rfc3339("2019-06-28T10:49:48Z")?;


@ -1,6 +1,6 @@
use anyhow::{Error};
use proxmox_backup::api2::types::Userid;
use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::*;
async fn upload_speed() -> Result<f64, Error> {
@ -8,13 +8,13 @@ async fn upload_speed() -> Result<f64, Error> {
let host = "localhost";
let datastore = "store2";
let username = Userid::root_userid();
let auth_id = Authid::root_auth_id();
let options = HttpClientOptions::new()
.interactive(true)
.ticket_cache(true);
let client = HttpClient::new(host, 8007, username, options)?;
let client = HttpClient::new(host, 8007, auth_id, options)?;
let backup_time = proxmox::tools::time::epoch_i64();


@ -1,6 +1,8 @@
use anyhow::{bail, format_err, Error};
use serde_json::{json, Value};
use std::collections::HashMap;
use std::collections::HashSet;
use proxmox::api::{api, RpcEnvironment, Permission};
use proxmox::api::router::{Router, SubdirMap};
@ -10,10 +12,10 @@ use proxmox::{http_err, list_subdirs_api_method};
use crate::tools::ticket::{self, Empty, Ticket};
use crate::auth_helpers::*;
use crate::api2::types::*;
use crate::tools::{FileLogOptions, FileLogger};
use crate::config::acl as acl_config;
use crate::config::acl::{PRIVILEGES, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::acl::{PRIVILEGES, PRIV_PERMISSIONS_MODIFY};
pub mod user;
pub mod domain;
@ -31,7 +33,8 @@ fn authenticate_user(
) -> Result<bool, Error> {
let user_info = CachedUserInfo::new()?;
if !user_info.is_active_user(&userid) {
let auth_id = Authid::from(userid.clone());
if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}
@ -69,8 +72,7 @@ fn authenticate_user(
path_vec.push(part);
}
}
user_info.check_privs(userid, &path_vec, *privilege, false)?;
user_info.check_privs(&auth_id, &path_vec, *privilege, false)?;
return Ok(false);
}
}
@ -141,20 +143,13 @@ fn create_ticket(
port: Option<u16>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let logger_options = FileLogOptions {
append: true,
prefix_time: true,
..Default::default()
};
let mut auth_log = FileLogger::new("/var/log/proxmox-backup/api/auth.log", logger_options)?;
match authenticate_user(&username, &password, path, privs, port) {
Ok(true) => {
let ticket = Ticket::new("PBS", &username)?.sign(private_auth_key(), None)?;
let token = assemble_csrf_prevention_token(csrf_secret(), &username);
auth_log.log(format!("successful auth for user '{}'", username));
crate::server::rest::auth_logger()?.log(format!("successful auth for user '{}'", username));
Ok(json!({
"username": username,
@ -177,7 +172,7 @@ fn create_ticket(
username,
err.to_string()
);
auth_log.log(&msg);
crate::server::rest::auth_logger()?.log(&msg);
log::error!("{}", msg);
Err(http_err!(UNAUTHORIZED, "permission check failed."))
@ -213,9 +208,10 @@ fn change_password(
) -> Result<Value, Error> {
let current_user: Userid = rpcenv
.get_user()
.get_auth_id()
.ok_or_else(|| format_err!("unknown user"))?
.parse()?;
let current_auth = Authid::from(current_user.clone());
let mut allowed = userid == current_user;
@ -223,7 +219,7 @@ fn change_password(
if !allowed {
let user_info = CachedUserInfo::new()?;
let privs = user_info.lookup_privs(&current_user, &[]);
let privs = user_info.lookup_privs(&current_auth, &[]);
if (privs & PRIV_PERMISSIONS_MODIFY) != 0 { allowed = true; }
}
@ -237,6 +233,128 @@ fn change_password(
Ok(Value::Null)
}
#[api(
input: {
properties: {
"auth-id": {
type: Authid,
optional: true,
},
path: {
schema: ACL_PATH_SCHEMA,
optional: true,
},
},
},
access: {
permission: &Permission::Anybody,
description: "Requires Sys.Audit on '/access', limited to own privileges otherwise.",
},
returns: {
description: "Map of ACL path to Map of privilege to propagate bit",
type: Object,
properties: {},
additional_properties: true,
},
)]
/// List permissions of given or currently authenticated user / API token.
///
/// Optionally limited to specific path.
pub fn list_permissions(
auth_id: Option<Authid>,
path: Option<String>,
rpcenv: &dyn RpcEnvironment,
) -> Result<HashMap<String, HashMap<String, bool>>, Error> {
let current_auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&current_auth_id, &["access"]);
let auth_id = if user_privs & PRIV_SYS_AUDIT == 0 {
match auth_id {
Some(auth_id) => {
if auth_id == current_auth_id {
auth_id
} else if auth_id.is_token()
&& !current_auth_id.is_token()
&& auth_id.user() == current_auth_id.user() {
auth_id
} else {
bail!("not allowed to list permissions of {}", auth_id);
}
},
None => current_auth_id,
}
} else {
match auth_id {
Some(auth_id) => auth_id,
None => current_auth_id,
}
};
fn populate_acl_paths(
mut paths: HashSet<String>,
node: acl_config::AclTreeNode,
path: &str
) -> HashSet<String> {
for (sub_path, child_node) in node.children {
let sub_path = format!("{}/{}", path, &sub_path);
paths = populate_acl_paths(paths, child_node, &sub_path);
paths.insert(sub_path);
}
paths
}
let paths = match path {
Some(path) => {
let mut paths = HashSet::new();
paths.insert(path);
paths
},
None => {
let mut paths = HashSet::new();
let (acl_tree, _) = acl_config::config()?;
paths = populate_acl_paths(paths, acl_tree.root, "");
// default paths, returned even if no ACL exists
paths.insert("/".to_string());
paths.insert("/access".to_string());
paths.insert("/datastore".to_string());
paths.insert("/remote".to_string());
paths.insert("/system".to_string());
paths
},
};
let map = paths
.into_iter()
.fold(HashMap::new(), |mut map: HashMap<String, HashMap<String, bool>>, path: String| {
let split_path = acl_config::split_acl_path(path.as_str());
let (privs, propagated_privs) = user_info.lookup_privs_details(&auth_id, &split_path);
match privs {
0 => map, // Don't leak ACL paths where we don't have any privileges
_ => {
let priv_map = PRIVILEGES
.iter()
.fold(HashMap::new(), |mut priv_map, (name, value)| {
if value & privs != 0 {
priv_map.insert(name.to_string(), value & propagated_privs != 0);
}
priv_map
});
map.insert(path, priv_map);
map
},
}});
Ok(map)
}
#[sortable]
const SUBDIRS: SubdirMap = &sorted!([
("acl", &acl::ROUTER),
@ -244,6 +362,10 @@ const SUBDIRS: SubdirMap = &sorted!([
"password", &Router::new()
.put(&API_METHOD_CHANGE_PASSWORD)
),
(
"permissions", &Router::new()
.get(&API_METHOD_LIST_PERMISSIONS)
),
(
"ticket", &Router::new()
.post(&API_METHOD_CREATE_TICKET)
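
The interesting part of the new list_permissions is the final fold: for each ACL path it projects the caller's privilege bitmask, together with the separately tracked propagated bits, into a privilege-name → propagate-bit map, and paths with zero privileges are dropped so the endpoint never leaks ACL structure. A minimal, self-contained sketch of that projection, with a two-entry stand-in for the real PRIVILEGES table:

use std::collections::HashMap;

// Stand-in for the real PRIVILEGES table from config::acl.
const PRIVILEGES: &[(&str, u64)] = &[
    ("Sys.Audit", 1 << 0),
    ("Datastore.Backup", 1 << 1),
];

// Build "privilege name -> propagate bit" for one ACL path, mirroring
// the fold at the end of list_permissions above.
fn priv_map(privs: u64, propagated_privs: u64) -> HashMap<String, bool> {
    PRIVILEGES.iter().fold(HashMap::new(), |mut map, (name, value)| {
        if value & privs != 0 {
            map.insert(name.to_string(), value & propagated_privs != 0);
        }
        map
    })
}

fn main() {
    // Sys.Audit granted and propagated, Datastore.Backup granted but not.
    let map = priv_map(0b11, 0b01);
    assert_eq!(map["Sys.Audit"], true);
    assert_eq!(map["Datastore.Backup"], false);
}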

View File

@ -7,6 +7,7 @@ use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl;
use crate::config::acl::{Role, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
#[api(
properties: {
@ -43,8 +44,23 @@ fn extract_acl_node_data(
path: &str,
list: &mut Vec<AclListItem>,
exact: bool,
token_user: &Option<Authid>,
) {
// tokens can't have tokens, so we can early return
if let Some(token_user) = token_user {
if token_user.is_token() {
return;
}
}
for (user, roles) in &node.users {
if let Some(token_user) = token_user {
if !user.is_token()
|| user.user() != token_user.user() {
continue;
}
}
for (role, propagate) in roles {
list.push(AclListItem {
path: if path.is_empty() { String::from("/") } else { path.to_string() },
@ -56,6 +72,10 @@ fn extract_acl_node_data(
}
}
for (group, roles) in &node.groups {
if let Some(_) = token_user {
continue;
}
for (role, propagate) in roles {
list.push(AclListItem {
path: if path.is_empty() { String::from("/") } else { path.to_string() },
@ -71,7 +91,7 @@ fn extract_acl_node_data(
}
for (comp, child) in &node.children {
let new_path = format!("{}/{}", path, comp);
extract_acl_node_data(child, &new_path, list, exact);
extract_acl_node_data(child, &new_path, list, exact, token_user);
}
}
@ -98,7 +118,8 @@ fn extract_acl_node_data(
}
},
access: {
permission: &Permission::Privilege(&["access", "acl"], PRIV_SYS_AUDIT, false),
permission: &Permission::Anybody,
description: "Returns all ACLs if user has Sys.Audit on '/access/acl', or just the ACLs containing the user's API tokens.",
},
)]
/// Read Access Control List (ACLs).
@ -107,18 +128,26 @@ pub fn read_acl(
exact: bool,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<AclListItem>, Error> {
let auth_id = rpcenv.get_auth_id().unwrap().parse()?;
//let auth_user = rpcenv.get_user().unwrap();
let user_info = CachedUserInfo::new()?;
let top_level_privs = user_info.lookup_privs(&auth_id, &["access", "acl"]);
let auth_id_filter = if (top_level_privs & PRIV_SYS_AUDIT) == 0 {
Some(auth_id)
} else {
None
};
let (mut tree, digest) = acl::config()?;
let mut list: Vec<AclListItem> = Vec::new();
if let Some(path) = &path {
if let Some(node) = &tree.find_node(path) {
extract_acl_node_data(&node, path, &mut list, exact);
extract_acl_node_data(&node, path, &mut list, exact, &auth_id_filter);
}
} else {
extract_acl_node_data(&tree.root, "", &mut list, exact);
extract_acl_node_data(&tree.root, "", &mut list, exact, &auth_id_filter);
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
@ -140,9 +169,9 @@ pub fn read_acl(
optional: true,
schema: ACL_PROPAGATE_SCHEMA,
},
userid: {
"auth-id": {
optional: true,
type: Userid,
type: Authid,
},
group: {
optional: true,
@ -160,7 +189,8 @@ pub fn read_acl(
},
},
access: {
permission: &Permission::Privilege(&["access", "acl"], PRIV_PERMISSIONS_MODIFY, false),
permission: &Permission::Anybody,
description: "Requires Permissions.Modify on '/access/acl', limited to updating ACLs of the user's API tokens otherwise."
},
)]
/// Update Access Control List (ACLs).
@ -168,12 +198,35 @@ pub fn update_acl(
path: String,
role: String,
propagate: Option<bool>,
userid: Option<Userid>,
auth_id: Option<Authid>,
group: Option<String>,
delete: Option<bool>,
digest: Option<String>,
_rpcenv: &mut dyn RpcEnvironment,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let current_auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let top_level_privs = user_info.lookup_privs(&current_auth_id, &["access", "acl"]);
if top_level_privs & PRIV_PERMISSIONS_MODIFY == 0 {
if let Some(_) = group {
bail!("Unprivileged users are not allowed to create group ACL item.");
}
match &auth_id {
Some(auth_id) => {
if current_auth_id.is_token() {
bail!("Unprivileged API tokens can't set ACL items.");
} else if !auth_id.is_token() {
bail!("Unprivileged users can only set ACL items for API tokens.");
} else if auth_id.user() != current_auth_id.user() {
bail!("Unprivileged users can only set ACL items for their own API tokens.");
}
},
None => { bail!("Unprivileged user needs to provide auth_id to update ACL item."); },
};
}
let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -190,11 +243,12 @@ pub fn update_acl(
if let Some(ref _group) = group {
bail!("parameter 'group' - groups are currently not supported.");
} else if let Some(ref userid) = userid {
} else if let Some(ref auth_id) = auth_id {
if !delete { // Note: we allow to delete non-existent users
let user_cfg = crate::config::user::cached_config()?;
if user_cfg.sections.get(&userid.to_string()).is_none() {
bail!("no such user.");
if user_cfg.sections.get(&auth_id.to_string()).is_none() {
bail!(format!("no such {}.",
if auth_id.is_token() { "API token" } else { "user" }));
}
}
} else {
@ -205,11 +259,11 @@ pub fn update_acl(
acl::check_acl_path(&path)?;
}
if let Some(userid) = userid {
if let Some(auth_id) = auth_id {
if delete {
tree.delete_user_role(&path, &userid, &role);
tree.delete_user_role(&path, &auth_id, &role);
} else {
tree.insert_user_role(&path, &userid, &role, propagate);
tree.insert_user_role(&path, &auth_id, &role, propagate);
}
} else if let Some(group) = group {
if delete {
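
The unprivileged branch of update_acl above encodes three rules: API tokens may never edit ACLs, users without Permissions.Modify may only target auth-ids that are tokens, and only tokens belonging to themselves. Restated as a stand-alone check, with a simplified Authid stub in place of the real type:

// Simplified stand-in for the real Authid type.
struct Authid { user: String, token: Option<String> }

impl Authid {
    fn is_token(&self) -> bool { self.token.is_some() }
    fn user(&self) -> &str { &self.user }
}

// Mirrors the unprivileged checks in update_acl: callers without
// Permissions.Modify may only manage ACL entries for their own tokens.
fn may_update_acl(current: &Authid, target: Option<&Authid>) -> Result<(), String> {
    match target {
        Some(t) => {
            if current.is_token() {
                Err("unprivileged API tokens can't set ACL items".into())
            } else if !t.is_token() {
                Err("unprivileged users can only set ACL items for API tokens".into())
            } else if t.user() != current.user() {
                Err("unprivileged users can only set ACL items for their own API tokens".into())
            } else {
                Ok(())
            }
        }
        None => Err("unprivileged user needs to provide an auth-id".into()),
    }
}

fn main() {
    let alice = Authid { user: "alice@pbs".into(), token: None };
    let her_token = Authid { user: "alice@pbs".into(), token: Some("ci".into()) };
    assert!(may_update_acl(&alice, Some(&her_token)).is_ok());
    assert!(may_update_acl(&alice, Some(&alice)).is_err());     // plain user target
    assert!(may_update_acl(&her_token, Some(&her_token)).is_err()); // token caller
}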

View File

@ -1,12 +1,16 @@
use anyhow::{bail, Error};
use serde_json::Value;
use serde::{Serialize, Deserialize};
use serde_json::{json, Value};
use std::collections::HashMap;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::api::router::SubdirMap;
use proxmox::api::schema::{Schema, StringSchema};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::user;
use crate::config::token_shadow;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
@ -16,14 +20,96 @@ pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.max_length(64)
.schema();
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: user::ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: user::EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: user::FIRST_NAME_SCHEMA,
},
lastname: {
schema: user::LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: user::EMAIL_SCHEMA,
optional: true,
},
tokens: {
type: Array,
optional: true,
description: "List of user's API tokens.",
items: {
type: user::ApiToken
},
},
}
)]
#[derive(Serialize,Deserialize)]
/// User properties with added list of ApiTokens
pub struct UserWithTokens {
pub userid: Userid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub email: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub tokens: Vec<user::ApiToken>,
}
impl UserWithTokens {
fn new(user: user::User) -> Self {
Self {
userid: user.userid,
comment: user.comment,
enable: user.enable,
expire: user.expire,
firstname: user.firstname,
lastname: user.lastname,
email: user.email,
tokens: Vec::new(),
}
}
}
#[api(
input: {
properties: {},
properties: {
include_tokens: {
type: bool,
description: "Include user's API tokens in returned list.",
optional: true,
default: false,
},
},
},
returns: {
description: "List users (with config digest).",
type: Array,
items: { type: user::User },
items: { type: UserWithTokens },
},
access: {
permission: &Permission::Anybody,
@ -32,28 +118,60 @@ pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
)]
/// List users
pub fn list_users(
_param: Value,
include_tokens: bool,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<user::User>, Error> {
) -> Result<Vec<UserWithTokens>, Error> {
let (config, digest) = user::config()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
// intentionally user-only for now
let userid: Userid = rpcenv.get_auth_id().unwrap().parse()?;
let auth_id = Authid::from(userid.clone());
let user_info = CachedUserInfo::new()?;
let top_level_privs = user_info.lookup_privs(&userid, &["access", "users"]);
let top_level_privs = user_info.lookup_privs(&auth_id, &["access", "users"]);
let top_level_allowed = (top_level_privs & PRIV_SYS_AUDIT) != 0;
let filter_by_privs = |user: &user::User| {
top_level_allowed || user.userid == userid
};
let list:Vec<user::User> = config.convert_to_typed_array("user")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list.into_iter().filter(filter_by_privs).collect())
let iter = list.into_iter().filter(filter_by_privs);
let list = if include_tokens {
let tokens: Vec<user::ApiToken> = config.convert_to_typed_array("token")?;
let mut user_to_tokens = tokens
.into_iter()
.fold(
HashMap::new(),
|mut map: HashMap<Userid, Vec<user::ApiToken>>, token: user::ApiToken| {
if token.tokenid.is_token() {
map
.entry(token.tokenid.user().clone())
.or_default()
.push(token);
}
map
});
iter
.map(|user: user::User| {
let mut user = UserWithTokens::new(user);
user.tokens = user_to_tokens.remove(&user.userid).unwrap_or_default();
user
})
.collect()
} else {
iter.map(|user: user::User| UserWithTokens::new(user))
.collect()
};
Ok(list)
}
#[api(
@ -304,12 +422,340 @@ pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error>
Ok(())
}
const ITEM_ROUTER: Router = Router::new()
#[api(
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
},
},
returns: {
description: "Get API token metadata (with config digest).",
type: user::ApiToken,
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Read user's API token metadata
pub fn read_token(
userid: Userid,
tokenname: Tokenname,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<user::ApiToken, Error> {
let (config, digest) = user::config()?;
let tokenid = Authid::from((userid, Some(tokenname)));
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
config.lookup("token", &tokenid.to_string())
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
schema: user::ENABLE_USER_SCHEMA,
optional: true,
},
expire: {
schema: user::EXPIRE_USER_SCHEMA,
optional: true,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
returns: {
description: "API token identifier + generated secret.",
properties: {
value: {
type: String,
description: "The API token secret",
},
tokenid: {
type: String,
description: "The API token identifier",
},
},
},
)]
/// Generate a new API token with given metadata
pub fn generate_token(
userid: Userid,
tokenname: Tokenname,
comment: Option<String>,
enable: Option<bool>,
expire: Option<i64>,
digest: Option<String>,
) -> Result<Value, Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let tokenid = Authid::from((userid.clone(), Some(tokenname.clone())));
let tokenid_string = tokenid.to_string();
if let Some(_) = config.sections.get(&tokenid_string) {
bail!("token '{}' for user '{}' already exists.", tokenname.as_str(), userid);
}
let secret = format!("{:x}", proxmox::tools::uuid::Uuid::generate());
token_shadow::set_secret(&tokenid, &secret)?;
let token = user::ApiToken {
tokenid: tokenid.clone(),
comment,
enable,
expire,
};
config.set_data(&tokenid_string, "token", &token)?;
user::save_config(&config)?;
Ok(json!({
"tokenid": tokenid_string,
"value": secret
}))
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
schema: user::ENABLE_USER_SCHEMA,
optional: true,
},
expire: {
schema: user::EXPIRE_USER_SCHEMA,
optional: true,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Update user's API token metadata
pub fn update_token(
userid: Userid,
tokenname: Tokenname,
comment: Option<String>,
enable: Option<bool>,
expire: Option<i64>,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let tokenid = Authid::from((userid, Some(tokenname)));
let tokenid_string = tokenid.to_string();
let mut data: user::ApiToken = config.lookup("token", &tokenid_string)?;
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {
data.comment = None;
} else {
data.comment = Some(comment);
}
}
if let Some(enable) = enable {
data.enable = if enable { None } else { Some(false) };
}
if let Some(expire) = expire {
data.expire = if expire > 0 { Some(expire) } else { None };
}
config.set_data(&tokenid_string, "token", &data)?;
user::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Delete a user's API token
pub fn delete_token(
userid: Userid,
tokenname: Tokenname,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let tokenid = Authid::from((userid.clone(), Some(tokenname.clone())));
let tokenid_string = tokenid.to_string();
match config.sections.get(&tokenid_string) {
Some(_) => { config.sections.remove(&tokenid_string); },
None => bail!("token '{}' of user '{}' does not exist.", tokenname.as_str(), userid),
}
token_shadow::delete_secret(&tokenid)?;
user::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
userid: {
type: Userid,
},
},
},
returns: {
description: "List user's API tokens (with config digest).",
type: Array,
items: { type: user::ApiToken },
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
&Permission::UserParam("userid"),
]),
},
)]
/// List user's API tokens
pub fn list_tokens(
userid: Userid,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<user::ApiToken>, Error> {
let (config, digest) = user::config()?;
let list:Vec<user::ApiToken> = config.convert_to_typed_array("token")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
let filter_by_owner = |token: &user::ApiToken| {
if token.tokenid.is_token() {
token.tokenid.user() == &userid
} else {
false
}
};
Ok(list.into_iter().filter(filter_by_owner).collect())
}
const TOKEN_ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_TOKEN)
.put(&API_METHOD_UPDATE_TOKEN)
.post(&API_METHOD_GENERATE_TOKEN)
.delete(&API_METHOD_DELETE_TOKEN);
const TOKEN_ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_TOKENS)
.match_all("tokenname", &TOKEN_ITEM_ROUTER);
const USER_SUBDIRS: SubdirMap = &[
("token", &TOKEN_ROUTER),
];
const USER_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_USER)
.put(&API_METHOD_UPDATE_USER)
.delete(&API_METHOD_DELETE_USER);
.delete(&API_METHOD_DELETE_USER)
.subdirs(USER_SUBDIRS);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_USERS)
.post(&API_METHOD_CREATE_USER)
.match_all("userid", &ITEM_ROUTER);
.match_all("userid", &USER_ROUTER);

View File

@ -29,7 +29,7 @@ use crate::backup::*;
use crate::config::datastore;
use crate::config::cached_user_info::CachedUserInfo;
use crate::server::WorkerTask;
use crate::server::{jobstate::Job, WorkerTask};
use crate::tools::{
self,
zip::{ZipEncoder, ZipEntry},
@ -42,16 +42,33 @@ use crate::config::acl::{
PRIV_DATASTORE_READ,
PRIV_DATASTORE_PRUNE,
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_VERIFY,
};
fn check_backup_owner(
fn check_priv_or_backup_owner(
store: &DataStore,
group: &BackupGroup,
userid: &Userid,
auth_id: &Authid,
required_privs: u64,
) -> Result<(), Error> {
let user_info = CachedUserInfo::new()?;
let privs = user_info.lookup_privs(&auth_id, &["datastore", store.name()]);
if privs & required_privs == 0 {
let owner = store.get_owner(group)?;
if &owner != userid {
bail!("backup owner check failed ({} != {})", userid, owner);
check_backup_owner(&owner, auth_id)?;
}
Ok(())
}
fn check_backup_owner(
owner: &Authid,
auth_id: &Authid,
) -> Result<(), Error> {
let correct_owner = owner == auth_id
|| (owner.is_token() && &Authid::from(owner.user().clone()) == auth_id);
if !correct_owner {
bail!("backup owner check failed ({} != {})", auth_id, owner);
}
Ok(())
}
@ -149,9 +166,9 @@ fn list_groups(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<GroupListItem>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?;
@ -171,7 +188,7 @@ fn list_groups(
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;
if !list_all && owner != userid {
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
@ -230,16 +247,12 @@ pub fn list_snapshot_files(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<BackupContent>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let datastore = DataStore::lookup_datastore(&store)?;
let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)) != 0;
if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, snapshot.group(), &auth_id, PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)?;
let info = BackupInfo::new(&datastore.base_path(), snapshot)?;
@ -282,16 +295,12 @@ fn delete_snapshot(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;
let datastore = DataStore::lookup_datastore(&store)?;
let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0;
if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, snapshot.group(), &auth_id, PRIV_DATASTORE_MODIFY)?;
datastore.remove_backup_dir(&snapshot, false)?;
@ -338,9 +347,9 @@ pub fn list_snapshots (
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SnapshotListItem>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let datastore = DataStore::lookup_datastore(&store)?;
@ -362,7 +371,7 @@ pub fn list_snapshots (
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;
if !list_all && owner != userid {
if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
@ -423,12 +432,18 @@ pub fn list_snapshots (
Ok(snapshots)
}
// returns a map from type to (group_count, snapshot_count)
fn get_snaphots_count(store: &DataStore) -> Result<HashMap<String, (usize, usize)>, Error> {
fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
let base_path = store.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?;
let mut groups = HashSet::new();
let mut result: HashMap<String, (usize, usize)> = HashMap::new();
let mut result = Counts {
ct: None,
host: None,
vm: None,
other: None,
};
for info in backup_list {
let group = info.backup_dir.group();
@ -441,13 +456,23 @@ fn get_snaphots_count(store: &DataStore) -> Result<HashMap<String, (usize, usize)>, Error> {
new_id = true;
}
if let Some(mut counts) = result.get_mut(backup_type) {
counts.1 += 1;
let mut counts = match backup_type {
"ct" => result.ct.take().unwrap_or(Default::default()),
"host" => result.host.take().unwrap_or(Default::default()),
"vm" => result.vm.take().unwrap_or(Default::default()),
_ => result.other.take().unwrap_or(Default::default()),
};
counts.snapshots += 1;
if new_id {
counts.0 +=1;
counts.groups +=1;
}
} else {
result.insert(backup_type.to_string(), (1, 1));
match backup_type {
"ct" => result.ct = Some(counts),
"host" => result.host = Some(counts),
"vm" => result.vm = Some(counts),
_ => result.other = Some(counts),
}
}
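
From this hunk the shape of the new Counts return type can be inferred: one optional counter per backup type, each tracking groups and snapshots. A sketch under that assumption (the inner type's real name is not visible in the diff, so TypeCounts is a placeholder):

// Inferred from the hunk above; the real inner type's name is not
// shown, so `TypeCounts` here is a placeholder.
#[derive(Default)]
struct TypeCounts { groups: usize, snapshots: usize }

#[derive(Default)]
struct Counts {
    ct: Option<TypeCounts>,
    host: Option<TypeCounts>,
    vm: Option<TypeCounts>,
    other: Option<TypeCounts>,
}

// Count one snapshot of the given backup type, creating the per-type
// counter on first use, as get_snapshots_count does.
fn bump(result: &mut Counts, backup_type: &str, new_id: bool) {
    let slot = match backup_type {
        "ct" => &mut result.ct,
        "host" => &mut result.host,
        "vm" => &mut result.vm,
        _ => &mut result.other,
    };
    let counts = slot.get_or_insert_with(Default::default);
    counts.snapshots += 1;
    if new_id {
        counts.groups += 1;
    }
}

fn main() {
    let mut result = Counts::default();
    bump(&mut result, "vm", true);
    assert_eq!(result.vm.as_ref().map(|c| (c.groups, c.snapshots)), Some((1, 1)));
}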
@ -463,21 +488,7 @@ fn get_snaphots_count(store: &DataStore) -> Result<HashMap<String, (usize, usize)>, Error> {
},
},
returns: {
description: "The overall Datastore status and information.",
type: Object,
properties: {
storage: {
type: StorageStatus,
},
counts: {
description: "Group and Snapshot counts per Type",
type: Object,
properties: { },
},
"gc-status": {
type: GarbageCollectionStatus,
},
},
type: DataStoreStatus,
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP, true),
@ -488,19 +499,19 @@ pub fn status(
store: String,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
) -> Result<DataStoreStatus, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let storage_status = crate::tools::disks::disk_usage(&datastore.base_path())?;
let counts = get_snaphots_count(&datastore)?;
let storage = crate::tools::disks::disk_usage(&datastore.base_path())?;
let counts = get_snapshots_count(&datastore)?;
let gc_status = datastore.last_gc_status();
let res = json!({
"storage": storage_status,
"counts": counts,
"gc-status": gc_status,
});
Ok(res)
Ok(DataStoreStatus {
total: storage.total,
used: storage.used,
avail: storage.avail,
gc_status,
counts,
})
}
#[api(
@ -527,7 +538,7 @@ pub fn status(
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP, true), // fixme
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_VERIFY | PRIV_DATASTORE_BACKUP, true),
},
)]
/// Verify backups.
@ -543,6 +554,7 @@ pub fn verify(
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let worker_id;
let mut backup_dir = None;
@ -553,12 +565,18 @@ pub fn verify(
(Some(backup_type), Some(backup_id), Some(backup_time)) => {
worker_id = format!("{}:{}/{}/{:08X}", store, backup_type, backup_id, backup_time);
let dir = BackupDir::new(backup_type, backup_id, backup_time)?;
check_priv_or_backup_owner(&datastore, dir.group(), &auth_id, PRIV_DATASTORE_VERIFY)?;
backup_dir = Some(dir);
worker_type = "verify_snapshot";
}
(Some(backup_type), Some(backup_id), None) => {
worker_id = format!("{}:{}/{}", store, backup_type, backup_id);
let group = BackupGroup::new(backup_type, backup_id);
check_priv_or_backup_owner(&datastore, &group, &auth_id, PRIV_DATASTORE_VERIFY)?;
backup_group = Some(group);
worker_type = "verify_group";
}
@ -568,18 +586,16 @@ pub fn verify(
_ => bail!("parameters do not specify a backup group or snapshot"),
}
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let upid_str = WorkerTask::new_thread(
worker_type,
Some(worker_id.clone()),
userid,
auth_id.clone(),
to_stdout,
move |worker| {
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
let filter = |_backup_info: &BackupInfo| { true };
let failed_dirs = if let Some(backup_dir) = backup_dir {
let mut res = Vec::new();
@ -590,6 +606,7 @@ pub fn verify(
corrupt_chunks,
worker.clone(),
worker.upid().clone(),
None,
)? {
res.push(backup_dir.to_string());
}
@ -603,11 +620,20 @@ pub fn verify(
None,
worker.clone(),
worker.upid(),
&filter,
None,
)?;
failed_dirs
} else {
verify_all_backups(datastore, worker.clone(), worker.upid(), &filter)?
let privs = CachedUserInfo::new()?
.lookup_privs(&auth_id, &["datastore", &store]);
let owner = if privs & PRIV_DATASTORE_VERIFY == 0 {
Some(auth_id)
} else {
None
};
verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)?
};
if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:");
@ -703,9 +729,7 @@ fn prune(
let backup_type = tools::required_string_param(&param, "backup-type")?;
let backup_id = tools::required_string_param(&param, "backup-id")?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let dry_run = param["dry-run"].as_bool().unwrap_or(false);
@ -713,8 +737,7 @@ fn prune(
let datastore = DataStore::lookup_datastore(&store)?;
let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0;
if !allowed { check_backup_owner(&datastore, &group, &userid)?; }
check_priv_or_backup_owner(&datastore, &group, &auth_id, PRIV_DATASTORE_MODIFY)?;
let prune_options = PruneOptions {
keep_last: param["keep-last"].as_u64(),
@ -756,7 +779,7 @@ fn prune(
// We use a WorkerTask just to have a task log, but run synchronously
let worker = WorkerTask::new("prune", Some(worker_id), Userid::root_userid().clone(), true)?;
let worker = WorkerTask::new("prune", Some(worker_id), auth_id.clone(), true)?;
if keep_all {
worker.log("No prune selection - keeping all files.");
@ -831,21 +854,15 @@ fn start_garbage_collection(
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
println!("Starting garbage collection on store {}", store);
let job = Job::new("garbage_collection", &store)
.map_err(|_| format_err!("garbage collection already running"))?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let upid_str = WorkerTask::new_thread(
"garbage_collection",
Some(store.clone()),
Userid::root_userid().clone(),
to_stdout,
move |worker| {
worker.log(format!("starting garbage collection on store {}", store));
datastore.garbage_collection(&*worker, worker.upid())
},
)?;
let upid_str = crate::server::do_garbage_collection_job(job, datastore, &auth_id, None, to_stdout)
.map_err(|err| format_err!("unable to start garbage collection job on datastore {} - {}", store, err))?;
Ok(json!(upid_str))
}
@ -909,13 +926,13 @@ fn get_datastore_list(
let (config, _digest) = datastore::config()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let mut list = Vec::new();
for (store, (_, data)) in &config.sections {
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0;
if allowed {
let mut entry = json!({ "store": store });
@ -960,9 +977,7 @@ fn download_file(
let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let file_name = tools::required_string_param(&param, "file-name")?.to_owned();
@ -972,8 +987,7 @@ fn download_file(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
println!("Download {} from {} ({}/{})", file_name, store, backup_dir, file_name);
@ -1033,9 +1047,7 @@ fn download_file_decoded(
let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let file_name = tools::required_string_param(&param, "file-name")?.to_owned();
@ -1045,8 +1057,7 @@ fn download_file_decoded(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
let (manifest, files) = read_backup_index(&datastore, &backup_dir)?;
for file in files {
@ -1158,8 +1169,9 @@ fn upload_backup_log(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
check_backup_owner(&datastore, backup_dir.group(), &userid)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let owner = datastore.get_owner(backup_dir.group())?;
check_backup_owner(&owner, &auth_id)?;
let mut path = datastore.base_path();
path.push(backup_dir.relative_path());
@ -1228,14 +1240,11 @@ fn catalog(
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
let file_name = CATALOG_NAME;
@ -1399,9 +1408,7 @@ fn pxar_file_download(
let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let filepath = tools::required_string_param(&param, "filepath")?.to_owned();
@ -1411,8 +1418,7 @@ fn pxar_file_download(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
let mut components = base64::decode(&filepath)?;
if components.len() > 0 && components[0] == '/' as u8 {
@ -1565,7 +1571,7 @@ fn get_rrd_stats(
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP, true),
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP, true),
},
)]
/// Get "notes" for a specific backup
@ -1578,14 +1584,10 @@ fn get_notes(
) -> Result<String, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_AUDIT)?;
let (manifest, _) = datastore.load_manifest(&backup_dir)?;
@ -1617,7 +1619,9 @@ fn get_notes(
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_MODIFY, true),
permission: &Permission::Privilege(&["datastore", "{store}"],
PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_BACKUP,
true),
},
)]
/// Set "notes" for a specific backup
@ -1631,14 +1635,10 @@ fn set_notes(
) -> Result<(), Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_MODIFY)?;
datastore.update_manifest(&backup_dir,|manifest| {
manifest.unprotected["notes"] = notes.into();
@ -1660,12 +1660,13 @@ fn set_notes(
schema: BACKUP_ID_SCHEMA,
},
"new-owner": {
type: Userid,
type: Authid,
},
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_MODIFY, true),
permission: &Permission::Anybody,
description: "Datastore.Modify on whole datastore, or changing ownership between user and a user's token for owned backups with Datastore.Backup"
},
)]
/// Change owner of a backup group
@ -1673,18 +1674,69 @@ fn set_backup_owner(
store: String,
backup_type: String,
backup_id: String,
new_owner: Userid,
_rpcenv: &mut dyn RpcEnvironment,
new_owner: Authid,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let backup_group = BackupGroup::new(backup_type, backup_id);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
if !user_info.is_active_user(&new_owner) {
bail!("user '{}' is inactive or non-existent", new_owner);
let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let allowed = if (privs & PRIV_DATASTORE_MODIFY) != 0 {
// High-privilege user/token
true
} else if (privs & PRIV_DATASTORE_BACKUP) != 0 {
let owner = datastore.get_owner(&backup_group)?;
match (owner.is_token(), new_owner.is_token()) {
(true, true) => {
// API token to API token, owned by same user
let owner = owner.user();
let new_owner = new_owner.user();
owner == new_owner && Authid::from(owner.clone()) == auth_id
},
(true, false) => {
// API token to API token owner
Authid::from(owner.user().clone()) == auth_id
&& new_owner == auth_id
},
(false, true) => {
// API token owner to API token
owner == auth_id
&& Authid::from(new_owner.user().clone()) == auth_id
},
(false, false) => {
// User to User, not allowed for unprivileged users
false
},
}
} else {
false
};
if !allowed {
return Err(http_err!(UNAUTHORIZED,
"{} does not have permission to change owner of backup group '{}' to {}",
auth_id,
backup_group,
new_owner,
));
}
if !user_info.is_active_auth_id(&new_owner) {
bail!("{} '{}' is inactive or non-existent",
if new_owner.is_token() {
"API token".to_string()
} else {
"user".to_string()
},
new_owner);
}
datastore.set_owner(&backup_group, &new_owner, true)?;
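
Most of the rewritten call sites in this file reduce to one owner test: an auth-id owns a backup group if it equals the stored owner, or if the owner is one of its own API tokens, as check_backup_owner at the top of the file spells out. The rule in isolation, with a simplified Authid stub:

// Simplified stand-in for the real Authid type.
#[derive(PartialEq)]
struct Authid { user: String, token: Option<String> }

impl Authid {
    fn is_token(&self) -> bool { self.token.is_some() }
    fn user_authid(&self) -> Authid {
        Authid { user: self.user.clone(), token: None }
    }
}

// Mirrors check_backup_owner: the stored owner matches either the
// acting auth-id itself, or -- when the owner is an API token -- the
// token's owning user.
fn owner_matches(owner: &Authid, auth_id: &Authid) -> bool {
    owner == auth_id || (owner.is_token() && owner.user_authid() == *auth_id)
}

fn main() {
    let user = Authid { user: "root@pam".into(), token: None };
    let token = Authid { user: "root@pam".into(), token: Some("ci".into()) };
    assert!(owner_matches(&token, &user));  // user may act on its token's backups
    assert!(!owner_matches(&user, &token)); // but not the other way around
}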

View File

@ -1,12 +1,15 @@
use anyhow::{format_err, Error};
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment};
use proxmox::api::router::SubdirMap;
use proxmox::{list_subdirs_api_method, sortable};
use crate::api2::types::*;
use crate::api2::pull::do_sync_job;
use crate::api2::config::sync::{check_sync_job_modify_access, check_sync_job_read_access};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
use crate::server::UPID;
use crate::server::jobstate::{Job, JobState};
@ -27,6 +30,10 @@ use crate::tools::systemd::time::{
type: Array,
items: { type: sync::SyncJobStatus },
},
access: {
description: "Limited to sync jobs where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,
},
)]
/// List all sync jobs
pub fn list_sync_jobs(
@ -35,6 +42,9 @@ pub fn list_sync_jobs(
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SyncJobStatus>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = sync::config()?;
let mut list: Vec<SyncJobStatus> = config
@ -46,6 +56,10 @@ pub fn list_sync_jobs(
} else {
true
}
})
.filter(|job: &SyncJobStatus| {
let as_config: SyncJobConfig = job.clone().into();
check_sync_job_read_access(&user_info, &auth_id, &as_config)
}).collect();
for job in &mut list {
@ -89,7 +103,11 @@ pub fn list_sync_jobs(
schema: JOB_ID_SCHEMA,
}
}
}
},
access: {
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
permission: &Permission::Anybody,
},
)]
/// Runs the sync jobs manually.
fn run_sync_job(
@ -97,15 +115,19 @@ fn run_sync_job(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, _digest) = sync::config()?;
let sync_job: SyncJobConfig = config.lookup("sync", &id)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if !check_sync_job_modify_access(&user_info, &auth_id, &sync_job) {
bail!("permission check failed");
}
let job = Job::new("syncjob", &id)?;
let upid_str = do_sync_job(job, sync_job, &userid, None)?;
let upid_str = do_sync_job(job, sync_job, &auth_id, None)?;
Ok(upid_str)
}
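
Manually triggering a sync job is now gated on the same helper that guards config changes, so a user can only run jobs they could also create or modify. A minimal sketch of that guard pattern (the check itself stubbed out; the real one is check_sync_job_modify_access from api2::config::sync, shown further down):

use anyhow::{bail, Error};

// Stand-in for check_sync_job_modify_access from api2::config::sync.
fn check_sync_job_modify_access(_auth_id: &str, _job_id: &str) -> bool { true }

// Mirrors run_sync_job: refuse to schedule the job unless the caller
// could also have created or modified it.
fn guard_manual_run(auth_id: &str, job_id: &str) -> Result<(), Error> {
    if !check_sync_job_modify_access(auth_id, job_id) {
        bail!("permission check failed");
    }
    Ok(())
}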

View File

@ -101,11 +101,11 @@ fn run_verification_job(
let (config, _digest) = verify::config()?;
let verification_job: VerificationJobConfig = config.lookup("verification", &id)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let job = Job::new("verificationjob", &id)?;
let upid_str = do_verification_job(job, verification_job, &userid, None)?;
let upid_str = do_verification_job(job, verification_job, &auth_id, None)?;
Ok(upid_str)
}

View File

@ -59,12 +59,12 @@ async move {
let debug = param["debug"].as_bool().unwrap_or(false);
let benchmark = param["benchmark"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let store = tools::required_string_param(&param, "store")?.to_owned();
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(&auth_id, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;
let datastore = DataStore::lookup_datastore(&store)?;
@ -105,12 +105,15 @@ async move {
};
// lock backup group to only allow one backup per group at a time
let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?;
let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &auth_id)?;
// permission check
if owner != userid && worker_type != "benchmark" {
let correct_owner = owner == auth_id
|| (owner.is_token()
&& Authid::from(owner.user().clone()) == auth_id);
if !correct_owner && worker_type != "benchmark" {
// only the owner is allowed to create additional snapshots
bail!("backup owner check failed ({} != {})", userid, owner);
bail!("backup owner check failed ({} != {})", auth_id, owner);
}
let last_backup = {
@ -153,9 +156,9 @@ async move {
if !is_new { bail!("backup directory already exists."); }
WorkerTask::spawn(worker_type, Some(worker_id), userid.clone(), true, move |worker| {
WorkerTask::spawn(worker_type, Some(worker_id), auth_id.clone(), true, move |worker| {
let mut env = BackupEnvironment::new(
env_type, userid, worker.clone(), datastore, backup_dir);
env_type, auth_id, worker.clone(), datastore, backup_dir);
env.debug = debug;
env.last_backup = last_backup;

View File

@ -10,7 +10,7 @@ use proxmox::tools::digest_to_hex;
use proxmox::tools::fs::{replace_file, CreateOptions};
use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
use crate::api2::types::Userid;
use crate::api2::types::Authid;
use crate::backup::*;
use crate::server::WorkerTask;
use crate::server::formatter::*;
@ -104,7 +104,7 @@ impl SharedBackupState {
pub struct BackupEnvironment {
env_type: RpcEnvironmentType,
result_attributes: Value,
user: Userid,
auth_id: Authid,
pub debug: bool,
pub formatter: &'static OutputFormatter,
pub worker: Arc<WorkerTask>,
@ -117,7 +117,7 @@ pub struct BackupEnvironment {
impl BackupEnvironment {
pub fn new(
env_type: RpcEnvironmentType,
user: Userid,
auth_id: Authid,
worker: Arc<WorkerTask>,
datastore: Arc<DataStore>,
backup_dir: BackupDir,
@ -137,7 +137,7 @@ impl BackupEnvironment {
Self {
result_attributes: json!({}),
env_type,
user,
auth_id,
worker,
datastore,
debug: false,
@ -518,7 +518,7 @@ impl BackupEnvironment {
WorkerTask::new_thread(
"verify",
Some(worker_id),
self.user.clone(),
self.auth_id.clone(),
false,
move |worker| {
worker.log("Automatically verifying newly added snapshot");
@ -533,6 +533,7 @@ impl BackupEnvironment {
corrupt_chunks,
worker.clone(),
worker.upid().clone(),
None,
snap_lock,
)? {
bail!("verification failed - please check the log for details");
@ -598,12 +599,12 @@ impl RpcEnvironment for BackupEnvironment {
self.env_type
}
fn set_user(&mut self, _user: Option<String>) {
panic!("unable to change user");
fn set_auth_id(&mut self, _auth_id: Option<String>) {
panic!("unable to change auth_id");
}
fn get_user(&self) -> Option<String> {
Some(self.user.to_string())
fn get_auth_id(&self) -> Option<String> {
Some(self.auth_id.to_string())
}
}

View File

@ -35,14 +35,14 @@ pub fn list_datastores(
let (config, digest) = datastore::config()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
let list:Vec<DataStoreConfig> = config.convert_to_typed_array("datastore")?;
let filter_by_privs = |store: &DataStoreConfig| {
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store.name]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store.name]);
(user_privs & PRIV_DATASTORE_AUDIT) != 0
};
@ -68,6 +68,14 @@ pub fn list_datastores(
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
type: Notify,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
@ -187,6 +195,10 @@ pub enum DeletableProperty {
keep_monthly,
/// Delete the keep-yearly property
keep_yearly,
/// Delete the notify-user property
notify_user,
/// Delete the notify property
notify,
}
#[api(
@ -200,6 +212,14 @@ pub enum DeletableProperty {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
type: Notify,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
@ -262,6 +282,8 @@ pub fn update_datastore(
keep_weekly: Option<u64>,
keep_monthly: Option<u64>,
keep_yearly: Option<u64>,
notify: Option<Notify>,
notify_user: Option<Userid>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<(), Error> {
@ -290,6 +312,8 @@ pub fn update_datastore(
DeletableProperty::keep_weekly => { data.keep_weekly = None; },
DeletableProperty::keep_monthly => { data.keep_monthly = None; },
DeletableProperty::keep_yearly => { data.keep_yearly = None; },
DeletableProperty::notify => { data.notify = None; },
DeletableProperty::notify_user => { data.notify_user = None; },
}
}
}
@ -322,6 +346,9 @@ pub fn update_datastore(
if keep_monthly.is_some() { data.keep_monthly = keep_monthly; }
if keep_yearly.is_some() { data.keep_yearly = keep_yearly; }
if notify.is_some() { data.notify = notify; }
if notify_user.is_some() { data.notify_user = notify_user; }
config.set_data(&name, "datastore", &data)?;
datastore::save_config(&config)?;
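
The notify handling follows the same delete-then-overwrite pattern as the rest of the update handler: deletions from the delete list are applied first, then any newly supplied values win. Condensed to just the two new properties (field and variant names taken from the hunks above, everything else stubbed with strings):

// Stubbed-down version of the update flow for the two new properties.
#[derive(Default)]
struct DataStoreConfig {
    notify: Option<String>,      // stand-in for the real Notify type
    notify_user: Option<String>, // stand-in for Userid
}

enum DeletableProperty { Notify, NotifyUser }

fn apply_update(
    data: &mut DataStoreConfig,
    notify: Option<String>,
    notify_user: Option<String>,
    delete: Option<Vec<DeletableProperty>>,
) {
    // deletions first, so a simultaneously supplied value wins afterwards
    if let Some(delete) = delete {
        for prop in delete {
            match prop {
                DeletableProperty::Notify => data.notify = None,
                DeletableProperty::NotifyUser => data.notify_user = None,
            }
        }
    }
    if notify.is_some() { data.notify = notify; }
    if notify_user.is_some() { data.notify_user = notify_user; }
}

fn main() {
    let mut data = DataStoreConfig::default();
    apply_update(&mut data, Some("error".into()), None, None);
    assert_eq!(data.notify.as_deref(), Some("error"));
}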

View File

@ -6,6 +6,7 @@ use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::remote;
use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
@ -22,7 +23,8 @@ use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
},
},
access: {
permission: &Permission::Privilege(&["remote"], PRIV_REMOTE_AUDIT, false),
description: "List configured remotes filtered by Remote.Audit privileges",
permission: &Permission::Anybody,
},
)]
/// List all remotes
@ -31,16 +33,25 @@ pub fn list_remotes(
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<remote::Remote>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = remote::config()?;
let mut list: Vec<remote::Remote> = config.convert_to_typed_array("remote")?;
// don't return password in api
for remote in &mut list {
remote.password = "".to_string();
}
let list = list
.into_iter()
.filter(|remote| {
let privs = user_info.lookup_privs(&auth_id, &["remote", &remote.name]);
privs & PRIV_REMOTE_AUDIT != 0
})
.collect();
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
@ -66,7 +77,7 @@ pub fn list_remotes(
default: 8007,
},
userid: {
type: Userid,
type: Authid,
},
password: {
schema: remote::REMOTE_PASSWORD_SCHEMA,
@ -167,7 +178,7 @@ pub enum DeletableProperty {
},
userid: {
optional: true,
type: Userid,
type: Authid,
},
password: {
optional: true,
@ -201,7 +212,7 @@ pub fn update_remote(
comment: Option<String>,
host: Option<String>,
port: Option<u16>,
userid: Option<Userid>,
userid: Option<Authid>,
password: Option<String>,
fingerprint: Option<String>,
delete: Option<Vec<DeletableProperty>>,
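
list_remotes now blanks the stored password before anything else and then keeps only remotes the caller holds Remote.Audit on. The same shape as a plain filter over the typed array, with the privilege lookup stubbed:

struct Remote { name: String, password: String }

// Stand-in for CachedUserInfo::lookup_privs on ["remote", name].
fn can_audit(_name: &str) -> bool { true }

// Mirrors list_remotes: never return stored passwords, and drop
// remotes the caller has no Remote.Audit privilege on.
fn filter_remotes(mut list: Vec<Remote>) -> Vec<Remote> {
    for remote in &mut list {
        remote.password = String::new();
    }
    list.into_iter()
        .filter(|remote| can_audit(&remote.name))
        .collect()
}

fn main() {
    let list = vec![Remote { name: "pbs2".into(), password: "secret".into() }];
    let filtered = filter_remotes(list);
    assert!(filtered.iter().all(|r| r.password.is_empty()));
}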

View File

@ -2,13 +2,73 @@ use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment};
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl::{
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_PRUNE,
PRIV_REMOTE_AUDIT,
PRIV_REMOTE_READ,
};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::sync::{self, SyncJobConfig};
// fixme: add access permissions
pub fn check_sync_job_read_access(
user_info: &CachedUserInfo,
auth_id: &Authid,
job: &SyncJobConfig,
) -> bool {
let datastore_privs = user_info.lookup_privs(&auth_id, &["datastore", &job.store]);
if datastore_privs & PRIV_DATASTORE_AUDIT == 0 {
return false;
}
let remote_privs = user_info.lookup_privs(&auth_id, &["remote", &job.remote]);
remote_privs & PRIV_REMOTE_AUDIT != 0
}
// user can run the corresponding pull job
pub fn check_sync_job_modify_access(
user_info: &CachedUserInfo,
auth_id: &Authid,
job: &SyncJobConfig,
) -> bool {
let datastore_privs = user_info.lookup_privs(&auth_id, &["datastore", &job.store]);
if datastore_privs & PRIV_DATASTORE_BACKUP == 0 {
return false;
}
if let Some(true) = job.remove_vanished {
if datastore_privs & PRIV_DATASTORE_PRUNE == 0 {
return false;
}
}
let correct_owner = match job.owner {
Some(ref owner) => {
owner == auth_id
|| (owner.is_token()
&& !auth_id.is_token()
&& owner.user() == auth_id.user())
},
// default sync owner
None => auth_id == Authid::backup_auth_id(),
};
// same permission as changing ownership after syncing
if !correct_owner && datastore_privs & PRIV_DATASTORE_MODIFY == 0 {
return false;
}
let remote_privs = user_info.lookup_privs(&auth_id, &["remote", &job.remote, &job.remote_store]);
remote_privs & PRIV_REMOTE_READ != 0
}
#[api(
input: {
@ -19,12 +79,18 @@ use crate::config::sync::{self, SyncJobConfig};
type: Array,
items: { type: sync::SyncJobConfig },
},
access: {
description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,
},
)]
/// List all sync jobs
pub fn list_sync_jobs(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SyncJobConfig>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = sync::config()?;
@ -32,6 +98,10 @@ pub fn list_sync_jobs(
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
let list = list
.into_iter()
.filter(|sync_job| check_sync_job_read_access(&user_info, &auth_id, &sync_job))
.collect();
Ok(list)
}
@ -45,6 +115,10 @@ pub fn list_sync_jobs(
store: {
schema: DATASTORE_SCHEMA,
},
owner: {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
@ -65,13 +139,25 @@ pub fn list_sync_jobs(
},
},
},
access: {
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
permission: &Permission::Anybody,
},
)]
/// Create a new sync job.
pub fn create_sync_job(param: Value) -> Result<(), Error> {
pub fn create_sync_job(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?;
if !check_sync_job_modify_access(&user_info, &auth_id, &sync_job) {
bail!("permission check failed");
}
let (mut config, _digest) = sync::config()?;
@ -100,15 +186,26 @@ pub fn create_sync_job(param: Value) -> Result<(), Error> {
description: "The sync job configuration.",
type: sync::SyncJobConfig,
},
access: {
description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,
},
)]
/// Read a sync job configuration.
pub fn read_sync_job(
id: String,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<SyncJobConfig, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = sync::config()?;
let sync_job = config.lookup("sync", &id)?;
if !check_sync_job_read_access(&user_info, &auth_id, &sync_job) {
bail!("permission check failed");
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(sync_job)
@ -120,6 +217,8 @@ pub fn read_sync_job(
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the owner property.
owner,
/// Delete the comment property.
comment,
/// Delete the job schedule.
@ -139,6 +238,10 @@ pub enum DeletableProperty {
schema: DATASTORE_SCHEMA,
optional: true,
},
owner: {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
optional: true,
@ -173,11 +276,16 @@ pub enum DeletableProperty {
},
},
},
access: {
permission: &Permission::Anybody,
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
},
)]
/// Update sync job config.
pub fn update_sync_job(
id: String,
store: Option<String>,
owner: Option<Authid>,
remote: Option<String>,
remote_store: Option<String>,
remove_vanished: Option<bool>,
@ -185,7 +293,10 @@ pub fn update_sync_job(
schedule: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -202,6 +313,7 @@ pub fn update_sync_job(
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::owner => { data.owner = None; },
DeletableProperty::comment => { data.comment = None; },
DeletableProperty::schedule => { data.schedule = None; },
DeletableProperty::remove_vanished => { data.remove_vanished = None; },
@ -221,11 +333,15 @@ pub fn update_sync_job(
if let Some(store) = store { data.store = store; }
if let Some(remote) = remote { data.remote = remote; }
if let Some(remote_store) = remote_store { data.remote_store = remote_store; }
if let Some(owner) = owner { data.owner = Some(owner); }
if schedule.is_some() { data.schedule = schedule; }
if remove_vanished.is_some() { data.remove_vanished = remove_vanished; }
if !check_sync_job_modify_access(&user_info, &auth_id, &data) {
bail!("permission check failed");
}
config.set_data(&id, "sync", &data)?;
sync::save_config(&config)?;
@ -246,9 +362,19 @@ pub fn update_sync_job(
},
},
},
access: {
permission: &Permission::Anybody,
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
},
)]
/// Remove a sync job configuration
pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error> {
pub fn delete_sync_job(
id: String,
digest: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -259,10 +385,15 @@ pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error>
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config.sections.get(&id) {
Some(_) => { config.sections.remove(&id); },
None => bail!("job '{}' does not exist.", id),
match config.lookup("sync", &id) {
Ok(job) => {
if !check_sync_job_modify_access(&user_info, &auth_id, &job) {
bail!("permission check failed");
}
config.sections.remove(&id);
},
Err(_) => { bail!("job '{}' does not exist.", id) },
};
sync::save_config(&config)?;
@ -280,3 +411,116 @@ pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_SYNC_JOBS)
.post(&API_METHOD_CREATE_SYNC_JOB)
.match_all("id", &ITEM_ROUTER);
#[test]
fn sync_job_access_test() -> Result<(), Error> {
let (user_cfg, _) = crate::config::user::test_cfg_from_str(r###"
user: noperm@pbs
user: read@pbs
user: write@pbs
"###).expect("test user.cfg is not parsable");
let acl_tree = crate::config::acl::AclTree::from_raw(r###"
acl:1:/datastore/localstore1:read@pbs,write@pbs:DatastoreAudit
acl:1:/datastore/localstore1:write@pbs:DatastoreBackup
acl:1:/datastore/localstore2:write@pbs:DatastorePowerUser
acl:1:/datastore/localstore3:write@pbs:DatastoreAdmin
acl:1:/remote/remote1:read@pbs,write@pbs:RemoteAudit
acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
"###).expect("test acl.cfg is not parsable");
let user_info = CachedUserInfo::test_new(user_cfg, acl_tree);
let root_auth_id = Authid::root_auth_id();
let no_perm_auth_id: Authid = "noperm@pbs".parse()?;
let read_auth_id: Authid = "read@pbs".parse()?;
let write_auth_id: Authid = "write@pbs".parse()?;
let mut job = SyncJobConfig {
id: "regular".to_string(),
remote: "remote0".to_string(),
remote_store: "remotestore1".to_string(),
store: "localstore0".to_string(),
owner: Some(write_auth_id.clone()),
comment: None,
remove_vanished: None,
schedule: None,
};
// should work without ACLs
assert_eq!(check_sync_job_read_access(&user_info, &root_auth_id, &job), true);
assert_eq!(check_sync_job_modify_access(&user_info, &root_auth_id, &job), true);
// user without permissions must fail
assert_eq!(check_sync_job_read_access(&user_info, &no_perm_auth_id, &job), false);
assert_eq!(check_sync_job_modify_access(&user_info, &no_perm_auth_id, &job), false);
// reading without proper read permissions on either remote or local must fail
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), false);
// reading without proper read permissions on local end must fail
job.remote = "remote1".to_string();
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), false);
// reading without proper read permissions on remote end must fail
job.remote = "remote0".to_string();
job.store = "localstore1".to_string();
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), false);
// writing without proper write permissions on either end must fail
job.store = "localstore0".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// writing without proper write permissions on local end must fail
job.remote = "remote1".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// writing without proper write permissions on remote end must fail
job.remote = "remote0".to_string();
job.store = "localstore1".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// reset remote to one where users have access
job.remote = "remote1".to_string();
// user with read permission can only read, but not modify/run
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), true);
job.owner = Some(read_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &read_auth_id, &job), false);
job.owner = None;
assert_eq!(check_sync_job_modify_access(&user_info, &read_auth_id, &job), false);
job.owner = Some(write_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &read_auth_id, &job), false);
// user with simple write permission can modify/run
assert_eq!(check_sync_job_read_access(&user_info, &write_auth_id, &job), true);
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
// but can't modify/run with deletion
job.remove_vanished = Some(true);
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// unless they have Datastore.Prune as well
job.store = "localstore2".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
// changing owner is not possible
job.owner = Some(read_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// also not to the default 'backup@pam'
job.owner = None;
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// unless they have Datastore.Modify as well
job.store = "localstore3".to_string();
job.owner = Some(read_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
job.owner = None;
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
Ok(())
}

View File

@ -2,10 +2,17 @@ use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment};
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl::{
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_VERIFY,
};
use crate::config::verify::{self, VerificationJobConfig};
#[api(
@ -17,6 +24,12 @@ use crate::config::verify::{self, VerificationJobConfig};
type: Array,
items: { type: verify::VerificationJobConfig },
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_VERIFY,
true),
},
)]
/// List all verification jobs
pub fn list_verification_jobs(
@ -61,7 +74,13 @@ pub fn list_verification_jobs(
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
}
}
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Create a new verification job.
pub fn create_verification_job(param: Value) -> Result<(), Error> {
@ -97,6 +116,12 @@ pub fn create_verification_job(param: Value) -> Result<(), Error> {
description: "The verification job configuration.",
type: verify::VerificationJobConfig,
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Read a verification job configuration.
pub fn read_verification_job(
@ -167,6 +192,12 @@ pub enum DeletableProperty {
},
},
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Update verification job config.
pub fn update_verification_job(
@ -238,6 +269,12 @@ pub fn update_verification_job(
},
},
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Remove a verification job configuration
pub fn delete_verification_job(id: String, digest: Option<String>) -> Result<(), Error> {

View File

@ -24,20 +24,21 @@ use crate::server::WorkerTask;
use crate::tools;
use crate::tools::ticket::{self, Empty, Ticket};
pub mod apt;
pub mod disks;
pub mod dns;
pub mod network;
pub mod tasks;
pub mod subscription;
pub(crate) mod rrd;
mod apt;
mod journal;
mod services;
mod status;
mod subscription;
mod syslog;
mod time;
mod report;
pub const SHELL_CMD_SCHEMA: Schema = StringSchema::new("The command to run.")
.format(&ApiStringFormat::Enum(&[
@ -91,10 +92,12 @@ async fn termproxy(
cmd: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
// intentionally restricted to plain users (not API tokens) for now
let userid: Userid = rpcenv
.get_user()
.get_auth_id()
.ok_or_else(|| format_err!("unknown user"))?
.parse()?;
let auth_id = Authid::from(userid.clone());
if userid.realm() != "pam" {
bail!("only pam users can use the console");
@ -137,7 +140,7 @@ async fn termproxy(
let upid = WorkerTask::spawn(
"termproxy",
None,
userid,
auth_id,
false,
move |worker| async move {
// move inside the worker so that it survives and does not close the port
@ -272,7 +275,8 @@ fn upgrade_to_websocket(
rpcenv: Box<dyn RpcEnvironment>,
) -> ApiResponseFuture {
async move {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
// intentionally restricted to plain users (not API tokens) for now
let userid: Userid = rpcenv.get_auth_id().unwrap().parse()?;
let ticket = tools::required_string_param(&param, "vncticket")?;
let port: u16 = tools::required_integer_param(&param, "port")? as u16;
@ -307,6 +311,7 @@ pub const SUBDIRS: SubdirMap = &[
("dns", &dns::ROUTER),
("journal", &journal::ROUTER),
("network", &network::ROUTER),
("report", &report::ROUTER),
("rrd", &rrd::ROUTER),
("services", &services::ROUTER),
("status", &status::ROUTER),

View File

@ -1,301 +1,15 @@
use std::collections::HashSet;
use apt_pkg_native::Cache;
use anyhow::{Error, bail, format_err};
use serde_json::{json, Value};
use proxmox::{list_subdirs_api_method, const_regex};
use proxmox::list_subdirs_api_method;
use proxmox::api::{api, RpcEnvironment, RpcEnvironmentType, Permission};
use proxmox::api::router::{Router, SubdirMap};
use crate::server::WorkerTask;
use crate::tools::http;
use crate::tools::{apt, http};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::{APTUpdateInfo, NODE_SCHEMA, Userid, UPID_SCHEMA};
const_regex! {
VERSION_EPOCH_REGEX = r"^\d+:";
FILENAME_EXTRACT_REGEX = r"^.*/.*?_(.*)_Packages$";
}
// FIXME: once the 'changelog' API call switches over to 'apt-get changelog' only,
// consider removing this function entirely, as its value is never used anywhere
// then (widget-toolkit doesn't use the value either)
fn get_changelog_url(
package: &str,
filename: &str,
version: &str,
origin: &str,
component: &str,
) -> Result<String, Error> {
if origin == "" {
bail!("no origin available for package {}", package);
}
if origin == "Debian" {
let mut command = std::process::Command::new("apt-get");
command.arg("changelog");
command.arg("--print-uris");
command.arg(package);
let output = crate::tools::run_command(command, None)?; // format: 'http://foo/bar' package.changelog
let output = match output.splitn(2, ' ').next() {
Some(output) => {
if output.len() < 2 {
bail!("invalid output (URI part too short) from 'apt-get changelog --print-uris': {}", output)
}
output[1..output.len()-1].to_owned()
},
None => bail!("invalid output from 'apt-get changelog --print-uris': {}", output)
};
return Ok(output);
} else if origin == "Proxmox" {
// FIXME: Use above call to 'apt changelog <pkg> --print-uris' as well.
// Currently not possible as our packages do not have a URI set in their Release file.
let version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(version, "");
let base = match (FILENAME_EXTRACT_REGEX.regex_obj)().captures(filename) {
Some(captures) => {
let base_capture = captures.get(1);
match base_capture {
Some(base_underscore) => base_underscore.as_str().replace("_", "/"),
None => bail!("incompatible filename, cannot find regex group")
}
},
None => bail!("incompatible filename, doesn't match regex")
};
return Ok(format!("http://download.proxmox.com/{}/{}_{}.changelog",
base, package, version));
}
bail!("unknown origin ({}) or component ({})", origin, component)
}
struct FilterData<'a> {
// this is version info returned by APT
installed_version: Option<&'a str>,
candidate_version: &'a str,
// this is the version info the filter is supposed to check
active_version: &'a str,
}
enum PackagePreSelect {
OnlyInstalled,
OnlyNew,
All,
}
fn list_installed_apt_packages<F: Fn(FilterData) -> bool>(
filter: F,
only_versions_for: Option<&str>,
) -> Vec<APTUpdateInfo> {
let mut ret = Vec::new();
let mut depends = HashSet::new();
// note: this is not an 'apt update', it just re-reads the cache from disk
let mut cache = Cache::get_singleton();
cache.reload();
let mut cache_iter = match only_versions_for {
Some(name) => cache.find_by_name(name),
None => cache.iter()
};
loop {
match cache_iter.next() {
Some(view) => {
let di = if only_versions_for.is_some() {
query_detailed_info(
PackagePreSelect::All,
&filter,
view,
None
)
} else {
query_detailed_info(
PackagePreSelect::OnlyInstalled,
&filter,
view,
Some(&mut depends)
)
};
if let Some(info) = di {
ret.push(info);
}
if only_versions_for.is_some() {
break;
}
},
None => {
drop(cache_iter);
// also loop through missing dependencies, as they would be installed
for pkg in depends.iter() {
let mut iter = cache.find_by_name(&pkg);
let view = match iter.next() {
Some(view) => view,
None => continue // package not found, ignore
};
let di = query_detailed_info(
PackagePreSelect::OnlyNew,
&filter,
view,
None
);
if let Some(info) = di {
ret.push(info);
}
}
break;
}
}
}
return ret;
}
fn query_detailed_info<'a, F, V>(
pre_select: PackagePreSelect,
filter: F,
view: V,
depends: Option<&mut HashSet<String>>,
) -> Option<APTUpdateInfo>
where
F: Fn(FilterData) -> bool,
V: std::ops::Deref<Target = apt_pkg_native::sane::PkgView<'a>>
{
let current_version = view.current_version();
let candidate_version = view.candidate_version();
let (current_version, candidate_version) = match pre_select {
PackagePreSelect::OnlyInstalled => match (current_version, candidate_version) {
(Some(cur), Some(can)) => (Some(cur), can), // package installed and there is an update
(Some(cur), None) => (Some(cur.clone()), cur), // package installed and up-to-date
(None, Some(_)) => return None, // package could be installed
(None, None) => return None, // broken
},
PackagePreSelect::OnlyNew => match (current_version, candidate_version) {
(Some(_), Some(_)) => return None,
(Some(_), None) => return None,
(None, Some(can)) => (None, can),
(None, None) => return None,
},
PackagePreSelect::All => match (current_version, candidate_version) {
(Some(cur), Some(can)) => (Some(cur), can),
(Some(cur), None) => (Some(cur.clone()), cur),
(None, Some(can)) => (None, can),
(None, None) => return None,
},
};
// get additional information via nested APT 'iterators'
let mut view_iter = view.versions();
while let Some(ver) = view_iter.next() {
let package = view.name();
let version = ver.version();
let mut origin_res = "unknown".to_owned();
let mut section_res = "unknown".to_owned();
let mut priority_res = "unknown".to_owned();
let mut change_log_url = "".to_owned();
let mut short_desc = package.clone();
let mut long_desc = "".to_owned();
let fd = FilterData {
installed_version: current_version.as_deref(),
candidate_version: &candidate_version,
active_version: &version,
};
if filter(fd) {
if let Some(section) = ver.section() {
section_res = section;
}
if let Some(prio) = ver.priority_type() {
priority_res = prio;
}
// assume every package has only one origin file (not
// origin, but origin *file*, for some reason those seem to
// be different concepts in APT)
let mut origin_iter = ver.origin_iter();
let origin = origin_iter.next();
if let Some(origin) = origin {
if let Some(sd) = origin.short_desc() {
short_desc = sd;
}
if let Some(ld) = origin.long_desc() {
long_desc = ld;
}
// the package files appear in priority order, meaning
// the one for the candidate version is first - this is fine
// however, as the source package should be the same for all
// versions anyway
let mut pkg_iter = origin.file();
let pkg_file = pkg_iter.next();
if let Some(pkg_file) = pkg_file {
if let Some(origin_name) = pkg_file.origin() {
origin_res = origin_name;
}
let filename = pkg_file.file_name();
let component = pkg_file.component();
// build changelog URL from gathered information
// ignore errors, use empty changelog instead
let url = get_changelog_url(&package, &filename,
&version, &origin_res, &component);
if let Ok(url) = url {
change_log_url = url;
}
}
}
if let Some(depends) = depends {
let mut dep_iter = ver.dep_iter();
loop {
let dep = match dep_iter.next() {
Some(dep) if dep.dep_type() != "Depends" => continue,
Some(dep) => dep,
None => break
};
let dep_pkg = dep.target_pkg();
let name = dep_pkg.name();
depends.insert(name);
}
}
return Some(APTUpdateInfo {
package,
title: short_desc,
arch: view.arch(),
description: long_desc,
change_log_url,
origin: origin_res,
version: candidate_version.clone(),
old_version: match current_version {
Some(vers) => vers,
None => "".to_owned()
},
priority: priority_res,
section: section_res,
});
}
}
return None;
}
use crate::api2::types::{Authid, APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA};
#[api(
input: {
@ -308,19 +22,60 @@ where
returns: {
description: "A list of packages with available updates.",
type: Array,
items: { type: APTUpdateInfo },
items: {
type: APTUpdateInfo
},
},
protected: true,
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
},
)]
/// List available APT updates
fn apt_update_available(_param: Value) -> Result<Value, Error> {
let all_upgradeable = list_installed_apt_packages(|data| {
data.candidate_version == data.active_version &&
data.installed_version != Some(data.candidate_version)
}, None);
Ok(json!(all_upgradeable))
match apt::pkg_cache_expired() {
Ok(false) => {
if let Ok(Some(cache)) = apt::read_pkg_state() {
return Ok(json!(cache.package_status));
}
},
_ => (),
}
let cache = apt::update_cache()?;
return Ok(json!(cache.package_status));
}
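apt_update_available thus serves the on-disk package cache while it is still fresh and only falls back to a full cache update on expiry or read failure. A minimal sketch of that serve-from-cache pattern, with hypothetical helpers standing in for tools::apt (names are illustrative only):

// Hypothetical helpers standing in for tools::apt.
fn cache_expired() -> Result<bool, ()> { Ok(true) }
fn read_cached() -> Result<Option<Vec<String>>, ()> { Ok(None) }
fn refresh() -> Result<Vec<String>, ()> { Ok(vec!["zsh 5.8-1".to_string()]) }

// Serve the cached package list while still fresh; fall back to a refresh on
// expiry or on any read error, mirroring the flow in apt_update_available.
fn package_status() -> Result<Vec<String>, ()> {
    if let Ok(false) = cache_expired() {
        if let Ok(Some(cached)) = read_cached() {
            return Ok(cached);
        }
    }
    refresh()
}

fn main() {
    println!("{:?}", package_status());
}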
fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
if !quiet { worker.log("starting apt-get update") }
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE
let mut command = std::process::Command::new("apt-get");
command.arg("update");
// apt "errors" quite easily, and run_command is a bit rigid, so handle this inline for now.
let output = command.output()
.map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
if !quiet {
worker.log(String::from_utf8(output.stdout)?);
}
// TODO: improve run_command to allow outputting both, stderr and stdout
if !output.status.success() {
if output.status.code().is_some() {
let msg = String::from_utf8(output.stderr)
.map(|m| if m.is_empty() { String::from("no error message") } else { m })
.unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
worker.warn(msg);
} else {
bail!("terminated by signal");
}
}
Ok(())
}
#[api(
@ -330,6 +85,13 @@ fn apt_update_available(_param: Value) -> Result<Value, Error> {
node: {
schema: NODE_SCHEMA,
},
notify: {
type: bool,
description: r#"Send notification mail about new package updates availanle to the
email address configured for 'root@pam')."#,
optional: true,
default: false,
},
quiet: {
description: "Only produces output suitable for logging, omitting progress indicators.",
type: bool,
@ -347,26 +109,46 @@ fn apt_update_available(_param: Value) -> Result<Value, Error> {
)]
/// Update the APT database
pub fn apt_update_database(
notify: Option<bool>,
quiet: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
// FIXME: change to non-option in signature and drop below once we have proxmox-api-macro 0.2.3
let quiet = quiet.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_QUIET);
let notify = notify.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_NOTIFY);
let upid_str = WorkerTask::new_thread("aptupdate", None, userid, to_stdout, move |worker| {
if !quiet { worker.log("starting apt-get update") }
let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |worker| {
do_apt_update(&worker, quiet)?;
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE
let mut cache = apt::update_cache()?;
let mut command = std::process::Command::new("apt-get");
command.arg("update");
if notify {
let mut notified = match cache.notified {
Some(notified) => notified,
None => std::collections::HashMap::new(),
};
let mut to_notify: Vec<&APTUpdateInfo> = Vec::new();
let output = crate::tools::run_command(command, None)?;
if !quiet { worker.log(output) }
// TODO: add mail notify for new updates like PVE
for pkg in &cache.package_status {
match notified.insert(pkg.package.to_owned(), pkg.version.to_owned()) {
Some(notified_version) => {
if notified_version != pkg.version {
to_notify.push(pkg);
}
},
None => to_notify.push(pkg),
}
}
if !to_notify.is_empty() {
to_notify.sort_unstable_by_key(|k| &k.package);
crate::server::send_updates_available(&to_notify)?;
}
cache.notified = Some(notified);
apt::write_pkg_cache(&cache)?;
}
Ok(())
})?;
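The notify branch only mails packages whose candidate version changed since the last notification, by replaying the package list into a persisted package-to-version map. A self-contained sketch of that deduplication, using a plain HashMap in place of the on-disk cache:

use std::collections::HashMap;

// Returns the packages to notify about and updates the notified-version map,
// mirroring the loop above: notify on first sight or on a version change.
fn updates_to_notify<'a>(
    notified: &mut HashMap<String, String>,
    status: &'a [(String, String)], // (package, candidate version)
) -> Vec<&'a (String, String)> {
    let mut to_notify = Vec::new();
    for pkg in status {
        match notified.insert(pkg.0.clone(), pkg.1.clone()) {
            Some(old) if old == pkg.1 => {} // already notified for this version
            _ => to_notify.push(pkg),
        }
    }
    to_notify.sort_unstable_by_key(|pkg| &pkg.0);
    to_notify
}

fn main() {
    let mut notified = HashMap::new();
    let status = vec![("zsh".to_string(), "5.8-1".to_string())];
    assert_eq!(updates_to_notify(&mut notified, &status).len(), 1);
    assert_eq!(updates_to_notify(&mut notified, &status).len(), 0); // same version, no re-notify
}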
@ -406,7 +188,7 @@ fn apt_get_changelog(
let name = crate::tools::required_string_param(&param, "name")?.to_owned();
let version = param["version"].as_str();
let pkg_info = list_installed_apt_packages(|data| {
let pkg_info = apt::list_installed_apt_packages(|data| {
match version {
Some(version) => version == data.active_version,
None => data.active_version == data.candidate_version

View File

@ -13,7 +13,7 @@ use crate::tools::disks::{
};
use crate::server::WorkerTask;
use crate::api2::types::{Userid, UPID_SCHEMA, NODE_SCHEMA, BLOCKDEVICE_NAME_SCHEMA};
use crate::api2::types::{Authid, UPID_SCHEMA, NODE_SCHEMA, BLOCKDEVICE_NAME_SCHEMA};
pub mod directory;
pub mod zfs;
@ -140,7 +140,7 @@ pub fn initialize_disk(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let info = get_disk_usage_info(&disk, true)?;
@ -149,7 +149,7 @@ pub fn initialize_disk(
}
let upid_str = WorkerTask::new_thread(
"diskinit", Some(disk.clone()), userid, to_stdout, move |worker|
"diskinit", Some(disk.clone()), auth_id, to_stdout, move |worker|
{
worker.log(format!("initialize disk {}", disk));

View File

@ -134,7 +134,7 @@ pub fn create_datastore_disk(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let info = get_disk_usage_info(&disk, true)?;
@ -143,7 +143,7 @@ pub fn create_datastore_disk(
}
let upid_str = WorkerTask::new_thread(
"dircreate", Some(name.clone()), userid, to_stdout, move |worker|
"dircreate", Some(name.clone()), auth_id, to_stdout, move |worker|
{
worker.log(format!("create datastore '{}' on disk {}", name, disk));

View File

@ -256,7 +256,7 @@ pub fn create_zpool(
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let add_datastore = add_datastore.unwrap_or(false);
@ -316,7 +316,7 @@ pub fn create_zpool(
}
let upid_str = WorkerTask::new_thread(
"zfscreate", Some(name.clone()), userid, to_stdout, move |worker|
"zfscreate", Some(name.clone()), auth_id, to_stdout, move |worker|
{
worker.log(format!("create {:?} zpool '{}' on devices '{}'", raidlevel, name, devices_text));

View File

@ -684,9 +684,9 @@ pub async fn reload_network_config(
network::assert_ifupdown2_installed()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let upid_str = WorkerTask::spawn("srvreload", Some(String::from("networking")), userid, true, |_worker| async {
let upid_str = WorkerTask::spawn("srvreload", Some(String::from("networking")), auth_id, true, |_worker| async {
let _ = std::fs::rename(network::NETWORK_INTERFACES_NEW_FILENAME, network::NETWORK_INTERFACES_FILENAME);

src/api2/node/report.rs Normal file
View File

@ -0,0 +1,35 @@
use anyhow::Error;
use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment};
use serde_json::{json, Value};
use crate::api2::types::*;
use crate::config::acl::PRIV_SYS_AUDIT;
use crate::server::generate_report;
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
type: String,
description: "Returns report of the node"
},
access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false),
},
)]
/// Generate a report
fn get_report(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
Ok(json!(generate_report()))
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_GET_REPORT);

View File

@ -182,7 +182,7 @@ fn get_service_state(
Ok(json_service_state(&service, status))
}
fn run_service_command(service: &str, cmd: &str, userid: Userid) -> Result<Value, Error> {
fn run_service_command(service: &str, cmd: &str, auth_id: Authid) -> Result<Value, Error> {
let workerid = format!("srv{}", &cmd);
@ -196,7 +196,7 @@ fn run_service_command(service: &str, cmd: &str, userid: Userid) -> Result<Value
let upid = WorkerTask::new_thread(
&workerid,
Some(service.clone()),
userid,
auth_id,
false,
move |_worker| {
@ -244,11 +244,11 @@ fn start_service(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
log::info!("starting service {}", service);
run_service_command(&service, "start", userid)
run_service_command(&service, "start", auth_id)
}
#[api(
@ -274,11 +274,11 @@ fn stop_service(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
log::info!("stopping service {}", service);
run_service_command(&service, "stop", userid)
run_service_command(&service, "stop", auth_id)
}
#[api(
@ -304,15 +304,15 @@ fn restart_service(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
log::info!("re-starting service {}", service);
if &service == "proxmox-backup-proxy" {
// special case, avoid aborting running tasks
run_service_command(&service, "reload", userid)
run_service_command(&service, "reload", auth_id)
} else {
run_service_command(&service, "restart", userid)
run_service_command(&service, "restart", auth_id)
}
}
@ -339,11 +339,11 @@ fn reload_service(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
log::info!("reloading service {}", service);
run_service_command(&service, "reload", userid)
run_service_command(&service, "reload", auth_id)
}

View File

@ -7,7 +7,7 @@ use crate::tools;
use crate::tools::subscription::{self, SubscriptionStatus, SubscriptionInfo};
use crate::config::acl::{PRIV_SYS_AUDIT,PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::types::{NODE_SCHEMA, Userid};
use crate::api2::types::{NODE_SCHEMA, SUBSCRIPTION_KEY_SCHEMA, Authid};
#[api(
input: {
@ -29,7 +29,7 @@ use crate::api2::types::{NODE_SCHEMA, Userid};
},
)]
/// Check and update subscription status.
fn check_subscription(
pub fn check_subscription(
force: bool,
) -> Result<(), Error> {
// FIXME: drop once proxmox-api-macro is bumped to >> 5.0.0-1
@ -82,7 +82,7 @@ fn check_subscription(
},
)]
/// Read subscription info.
fn get_subscription(
pub fn get_subscription(
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<SubscriptionInfo, Error> {
@ -100,9 +100,9 @@ fn get_subscription(
},
};
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &[]);
let user_privs = user_info.lookup_privs(&auth_id, &[]);
if (user_privs & PRIV_SYS_AUDIT) == 0 {
// not enough privileges for full state
@ -124,9 +124,7 @@ fn get_subscription(
schema: NODE_SCHEMA,
},
key: {
description: "Proxmox Backup Server subscription key",
type: String,
max_length: 32,
schema: SUBSCRIPTION_KEY_SCHEMA,
},
},
},
@ -136,7 +134,7 @@ fn get_subscription(
},
)]
/// Set a subscription key and check it.
fn set_subscription(
pub fn set_subscription(
key: String,
) -> Result<(), Error> {
@ -164,7 +162,7 @@ fn set_subscription(
},
)]
/// Delete subscription info.
fn delete_subscription() -> Result<(), Error> {
pub fn delete_subscription() -> Result<(), Error> {
subscription::delete_subscription()
.map_err(|err| format_err!("Deleting subscription failed: {}", err))?;

View File

@ -14,6 +14,16 @@ use crate::server::{self, UPID, TaskState, TaskListInfoIterator};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
let task_auth_id = &upid.auth_id;
if auth_id == task_auth_id
|| (task_auth_id.is_token() && &Authid::from(task_auth_id.user().clone()) == auth_id) {
Ok(())
} else {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(auth_id, &["system", "tasks"], PRIV_SYS_AUDIT, false)
}
}
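check_task_access encodes the same one-way relationship for tasks: a task is visible to its own auth id, to the user owning the task's API token, or to anyone holding Sys.Audit on /system/tasks. A string-based sketch of that rule (hypothetical stand-ins, not the real UPID/Authid types):

// Minimal sketch of the access rule above, with a plain boolean standing in
// for the CachedUserInfo privilege lookup.
fn may_access_task(auth_id: &str, task_owner: &str, has_sys_audit: bool) -> bool {
    // a token's tasks ("user@realm!token") are also visible to the owning user
    let token_owner = task_owner.split('!').next().unwrap_or(task_owner);
    auth_id == task_owner || auth_id == token_owner || has_sys_audit
}

fn main() {
    assert!(may_access_task("a@pbs", "a@pbs!sync", false)); // user sees its token's task
    assert!(!may_access_task("b@pbs", "a@pbs", false));     // other users need Sys.Audit
    assert!(may_access_task("b@pbs", "a@pbs", true));
}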
#[api(
input: {
@ -57,9 +67,13 @@ use crate::config::cached_user_info::CachedUserInfo;
description: "Worker ID (arbitrary ASCII string)",
},
user: {
type: String,
type: Userid,
description: "The user who started the task.",
},
tokenid: {
type: Tokenname,
optional: true,
},
status: {
type: String,
description: "'running' or 'stopped'",
@ -84,12 +98,8 @@ async fn get_task_status(
let upid = extract_upid(&param)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
if userid != upid.userid {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
}
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
check_task_access(&auth_id, &upid)?;
let mut result = json!({
"upid": param["upid"],
@ -99,9 +109,13 @@ async fn get_task_status(
"starttime": upid.starttime,
"type": upid.worker_type,
"id": upid.worker_id,
"user": upid.userid,
"user": upid.auth_id.user(),
});
if upid.auth_id.is_token() {
result["tokenid"] = Value::from(upid.auth_id.tokenname().unwrap().as_str());
}
if crate::server::worker_is_active(&upid).await? {
result["status"] = Value::from("running");
} else {
@ -161,12 +175,9 @@ async fn read_task_log(
let upid = extract_upid(&param)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
if userid != upid.userid {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
}
check_task_access(&auth_id, &upid)?;
let test_status = param["test-status"].as_bool().unwrap_or(false);
@ -234,11 +245,11 @@ fn stop_task(
let upid = extract_upid(&param)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
if userid != upid.userid {
if auth_id != upid.auth_id {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_MODIFY, false)?;
user_info.check_privs(&auth_id, &["system", "tasks"], PRIV_SYS_MODIFY, false)?;
}
server::abort_worker_async(upid);
@ -260,7 +271,7 @@ fn stop_task(
},
limit: {
type: u64,
description: "Only list this amount of tasks.",
description: "Only list this amount of tasks. (0 means no limit)",
default: 50,
optional: true,
},
@ -285,6 +296,29 @@ fn stop_task(
type: String,
description: "Only list tasks from this user.",
},
since: {
type: i64,
description: "Only list tasks since this UNIX epoch.",
optional: true,
},
until: {
type: i64,
description: "Only list tasks until this UNIX epoch.",
optional: true,
},
typefilter: {
optional: true,
type: String,
description: "Only list tasks whose type contains this.",
},
statusfilter: {
optional: true,
type: Array,
description: "Only list tasks which have any one of the listed status.",
items: {
type: TaskStateType,
},
},
},
},
returns: {
@ -304,32 +338,52 @@ pub fn list_tasks(
errors: bool,
running: bool,
userfilter: Option<String>,
since: Option<i64>,
until: Option<i64>,
typefilter: Option<String>,
statusfilter: Option<Vec<TaskStateType>>,
param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let user_privs = user_info.lookup_privs(&auth_id, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
let store = param["store"].as_str();
let list = TaskListInfoIterator::new(running)?;
let limit = if limit > 0 { limit as usize } else { usize::MAX };
let result: Vec<TaskListItem> = list
.take_while(|info| !info.is_err())
.skip_while(|info| {
match (info, until) {
(Ok(info), Some(until)) => info.upid.starttime > until,
(Ok(_), None) => false,
(Err(_), _) => false,
}
})
.take_while(|info| {
match (info, since) {
(Ok(info), Some(since)) => info.upid.starttime > since,
(Ok(_), None) => true,
(Err(_), _) => false,
}
})
.filter_map(|info| {
let info = match info {
Ok(info) => info,
Err(_) => return None,
};
if !list_all && info.upid.userid != userid { return None; }
if !list_all && check_task_access(&auth_id, &info.upid).is_err() {
return None;
}
if let Some(userid) = &userfilter {
if !info.upid.userid.as_str().contains(userid) { return None; }
if let Some(needle) = &userfilter {
if !info.upid.auth_id.to_string().contains(needle) { return None; }
}
if let Some(store) = store {
@ -351,19 +405,31 @@ pub fn list_tasks(
}
}
match info.state {
Some(_) if running => return None,
Some(crate::server::TaskState::OK { .. }) if errors => return None,
if let Some(typefilter) = &typefilter {
if !info.upid.worker_type.contains(typefilter) {
return None;
}
}
match (&info.state, &statusfilter) {
(Some(_), _) if running => return None,
(Some(crate::server::TaskState::OK { .. }), _) if errors => return None,
(Some(state), Some(filters)) => {
if !filters.contains(&state.tasktype()) {
return None;
}
},
(None, Some(_)) => return None,
_ => {},
}
Some(info.into())
}).skip(start as usize)
.take(limit as usize)
.take(limit)
.collect();
let mut count = result.len() + start as usize;
if result.len() > 0 && result.len() >= limit as usize { // we have a 'virtual' entry as long as we have any new
if result.len() > 0 && result.len() >= limit { // we have a 'virtual' entry as long as we have any new
count += 1;
}
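TaskListInfoIterator yields tasks newest first, so `until` is enforced by skipping entries that are still too new, while `since` ends the scan once entries are too old. A toy sketch of that windowing over bare start times, assuming the same descending order:

// Toy model of the since/until window above: start times in descending order.
fn window(starttimes: &[i64], since: Option<i64>, until: Option<i64>) -> Vec<i64> {
    starttimes
        .iter()
        .copied()
        .skip_while(|t| matches!(until, Some(u) if *t > u)) // drop entries newer than `until`
        .take_while(|t| match since { Some(s) => *t > s, None => true }) // stop at `since`
        .collect()
}

fn main() {
    let times = [50, 40, 30, 20, 10]; // newest first
    assert_eq!(window(&times, Some(15), Some(45)), vec![40, 30, 20]);
}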

View File

@ -20,7 +20,7 @@ use crate::config::{
pub fn check_pull_privs(
userid: &Userid,
auth_id: &Authid,
store: &str,
remote: &str,
remote_store: &str,
@ -29,11 +29,11 @@ pub fn check_pull_privs(
let user_info = CachedUserInfo::new()?;
user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(userid, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?;
user_info.check_privs(auth_id, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(auth_id, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?;
if delete {
user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?;
user_info.check_privs(auth_id, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?;
}
Ok(())
@ -56,7 +56,7 @@ pub async fn get_pull_parameters(
let src_repo = BackupRepository::new(Some(remote.userid.clone()), Some(remote.host.clone()), remote.port, remote_store.to_string());
let client = HttpClient::new(&src_repo.host(), src_repo.port(), &src_repo.user(), options)?;
let client = HttpClient::new(&src_repo.host(), src_repo.port(), &src_repo.auth_id(), options)?;
let _auth_info = client.login() // make sure we can auth
.await
.map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
@ -68,27 +68,31 @@ pub async fn get_pull_parameters(
pub fn do_sync_job(
mut job: Job,
sync_job: SyncJobConfig,
userid: &Userid,
auth_id: &Authid,
schedule: Option<String>,
) -> Result<String, Error> {
let job_id = job.jobname().to_string();
let worker_type = job.jobtype().to_string();
let (email, notify) = crate::server::lookup_datastore_notify_settings(&sync_job.store);
let upid_str = WorkerTask::spawn(
&worker_type,
Some(job.jobname().to_string()),
userid.clone(),
auth_id.clone(),
false,
move |worker| async move {
job.start(&worker.upid().to_string())?;
let worker2 = worker.clone();
let sync_job2 = sync_job.clone();
let worker_future = async move {
let delete = sync_job.remove_vanished.unwrap_or(true);
let sync_owner = sync_job.owner.unwrap_or(Authid::backup_auth_id().clone());
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
worker.log(format!("Starting datastore sync job '{}'", job_id));
@ -98,7 +102,7 @@ pub fn do_sync_job(
worker.log(format!("Sync datastore '{}' from '{}/{}'",
sync_job.store, sync_job.remote, sync_job.remote_store));
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, Userid::backup_userid().clone()).await?;
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, sync_owner).await?;
worker.log(format!("sync job '{}' end", &job_id));
@ -107,12 +111,12 @@ pub fn do_sync_job(
let mut abort_future = worker2.abort_future().map(|_| Err(format_err!("sync aborted")));
let res = select!{
let result = select!{
worker = worker_future.fuse() => worker,
abort = abort_future => abort,
};
let status = worker2.create_state(&res);
let status = worker2.create_state(&result);
match job.finish(status) {
Ok(_) => {},
@ -121,7 +125,13 @@ pub fn do_sync_job(
}
}
res
if let Some(email) = email {
if let Err(err) = crate::server::send_sync_status(&email, notify, &sync_job2, &result) {
eprintln!("send sync notification failed: {}", err);
}
}
result
})?;
Ok(upid_str)
@ -164,19 +174,19 @@ async fn pull (
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let delete = remove_vanished.unwrap_or(true);
check_pull_privs(&userid, &store, &remote, &remote_store, delete)?;
check_pull_privs(&auth_id, &store, &remote, &remote_store, delete)?;
let (client, src_repo, tgt_store) = get_pull_parameters(&store, &remote, &remote_store).await?;
// fixme: set to_stdout to false?
let upid_str = WorkerTask::spawn("sync", Some(store.clone()), userid.clone(), true, move |worker| async move {
let upid_str = WorkerTask::spawn("sync", Some(store.clone()), auth_id.clone(), true, move |worker| async move {
worker.log(format!("sync datastore '{}' start", store));
let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, userid);
let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, auth_id);
let future = select!{
success = pull_future.fuse() => success,
abort = worker.abort_future().map(|_| Err(format_err!("pull aborted"))) => abort,

View File

@ -55,11 +55,11 @@ fn upgrade_to_backup_reader_protocol(
async move {
let debug = param["debug"].as_bool().unwrap_or(false);
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let store = tools::required_string_param(&param, "store")?.to_owned();
let user_info = CachedUserInfo::new()?;
let privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let priv_read = privs & PRIV_DATASTORE_READ != 0;
let priv_backup = privs & PRIV_DATASTORE_BACKUP != 0;
@ -94,7 +94,10 @@ fn upgrade_to_backup_reader_protocol(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
if !priv_read {
let owner = datastore.get_owner(backup_dir.group())?;
if owner != userid {
let correct_owner = owner == auth_id
|| (owner.is_token()
&& Authid::from(owner.user().clone()) == auth_id);
if !correct_owner {
bail!("backup owner check failed!");
}
}
@ -110,10 +113,10 @@ fn upgrade_to_backup_reader_protocol(
let worker_id = format!("{}:{}/{}/{:08X}", store, backup_type, backup_id, backup_dir.backup_time());
WorkerTask::spawn("reader", Some(worker_id), userid.clone(), true, move |worker| {
WorkerTask::spawn("reader", Some(worker_id), auth_id.clone(), true, move |worker| {
let mut env = ReaderEnvironment::new(
env_type,
userid,
auth_id,
worker.clone(),
datastore,
backup_dir,

View File

@ -5,7 +5,7 @@ use serde_json::{json, Value};
use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
use crate::api2::types::Userid;
use crate::api2::types::Authid;
use crate::backup::*;
use crate::server::formatter::*;
use crate::server::WorkerTask;
@ -17,7 +17,7 @@ use crate::server::WorkerTask;
pub struct ReaderEnvironment {
env_type: RpcEnvironmentType,
result_attributes: Value,
user: Userid,
auth_id: Authid,
pub debug: bool,
pub formatter: &'static OutputFormatter,
pub worker: Arc<WorkerTask>,
@ -29,7 +29,7 @@ pub struct ReaderEnvironment {
impl ReaderEnvironment {
pub fn new(
env_type: RpcEnvironmentType,
user: Userid,
auth_id: Authid,
worker: Arc<WorkerTask>,
datastore: Arc<DataStore>,
backup_dir: BackupDir,
@ -39,7 +39,7 @@ impl ReaderEnvironment {
Self {
result_attributes: json!({}),
env_type,
user,
auth_id,
worker,
datastore,
debug: false,
@ -82,12 +82,12 @@ impl RpcEnvironment for ReaderEnvironment {
self.env_type
}
fn set_user(&mut self, _user: Option<String>) {
panic!("unable to change user");
fn set_auth_id(&mut self, _auth_id: Option<String>) {
panic!("unable to change auth_id");
}
fn get_user(&self) -> Option<String> {
Some(self.user.to_string())
fn get_auth_id(&self) -> Option<String> {
Some(self.auth_id.to_string())
}
}

View File

@ -16,18 +16,14 @@ use crate::api2::types::{
DATASTORE_SCHEMA,
RRDMode,
RRDTimeFrameResolution,
TaskListItem,
TaskStateType,
Userid,
Authid,
};
use crate::server;
use crate::backup::{DataStore};
use crate::config::datastore;
use crate::tools::statistics::{linear_regression};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::acl::{
PRIV_SYS_AUDIT,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
};
@ -87,13 +83,13 @@ fn datastore_status(
let (config, _digest) = datastore::config()?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let mut list = Vec::new();
for (store, (_, _)) in &config.sections {
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0;
if !allowed {
continue;
@ -179,103 +175,8 @@ fn datastore_status(
Ok(list.into())
}
#[api(
input: {
properties: {
since: {
type: i64,
description: "Only list tasks since this UNIX epoch.",
optional: true,
},
typefilter: {
optional: true,
type: String,
description: "Only list tasks, whose type contains this string.",
},
statusfilter: {
optional: true,
type: Array,
description: "Only list tasks which have any one of the listed status.",
items: {
type: TaskStateType,
},
},
},
},
returns: {
description: "A list of tasks.",
type: Array,
items: { type: TaskListItem },
},
access: {
description: "Users can only see there own tasks, unless the have Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]
/// List tasks.
pub fn list_tasks(
since: Option<i64>,
typefilter: Option<String>,
statusfilter: Option<Vec<TaskStateType>>,
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
let since = since.unwrap_or_else(|| 0);
let list: Vec<TaskListItem> = server::TaskListInfoIterator::new(false)?
.take_while(|info| {
match info {
Ok(info) => info.upid.starttime > since,
Err(_) => false
}
})
.filter_map(|info| {
match info {
Ok(info) => {
if list_all || info.upid.userid == userid {
if let Some(filter) = &typefilter {
if !info.upid.worker_type.contains(filter) {
return None;
}
}
if let Some(filters) = &statusfilter {
if let Some(state) = &info.state {
let statetype = match state {
server::TaskState::OK { .. } => TaskStateType::OK,
server::TaskState::Unknown { .. } => TaskStateType::Unknown,
server::TaskState::Error { .. } => TaskStateType::Error,
server::TaskState::Warning { .. } => TaskStateType::Warning,
};
if !filters.contains(&statetype) {
return None;
}
}
}
Some(Ok(TaskListItem::from(info)))
} else {
None
}
}
Err(err) => Some(Err(err))
}
})
.collect::<Result<Vec<TaskListItem>, Error>>()?;
Ok(list.into())
}
const SUBDIRS: SubdirMap = &[
("datastore-usage", &Router::new().get(&API_METHOD_DATASTORE_STATUS)),
("tasks", &Router::new().get(&API_METHOD_LIST_TASKS)),
];
pub const ROUTER: Router = Router::new()

View File

@ -5,7 +5,7 @@ use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::CryptMode;
use crate::backup::{CryptMode, BACKUP_ID_REGEX};
use crate::server::UPID;
#[macro_use]
@ -14,9 +14,11 @@ mod macros;
#[macro_use]
mod userid;
pub use userid::{Realm, RealmRef};
pub use userid::{Tokenname, TokennameRef};
pub use userid::{Username, UsernameRef};
pub use userid::Userid;
pub use userid::PROXMOX_GROUP_ID_SCHEMA;
pub use userid::Authid;
pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GROUP_ID_SCHEMA};
// File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
@ -65,12 +67,14 @@ const_regex!{
pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), "|", APITOKEN_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
pub SUBSCRIPTION_KEY_REGEX = concat!(r"^pbs(?:[cbsp])-[0-9a-f]{10}$");
pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$";
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
@ -97,6 +101,9 @@ pub const CERT_FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const BACKUP_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BACKUP_ID_REGEX);
pub const SINGLE_LINE_COMMENT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SINGLE_LINE_COMMENT_REGEX);
@ -127,6 +134,9 @@ pub const CIDR_V6_FORMAT: ApiStringFormat =
pub const CIDR_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&CIDR_REGEX);
pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SUBSCRIPTION_KEY_REGEX);
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
@ -269,7 +279,7 @@ pub const BACKUP_TYPE_SCHEMA: Schema =
pub const BACKUP_ID_SCHEMA: Schema =
StringSchema::new("Backup ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.format(&BACKUP_ID_FORMAT)
.schema();
pub const BACKUP_TIME_SCHEMA: Schema =
@ -346,6 +356,12 @@ pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP addr
.format(&DNS_NAME_OR_IP_FORMAT)
.schema();
pub const SUBSCRIPTION_KEY_SCHEMA: Schema = StringSchema::new("Proxmox Backup Server subscription key.")
.format(&SUBSCRIPTION_KEY_FORMAT)
.min_length(15)
.max_length(16)
.schema();
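The pattern fixes the key shape to `pbs` plus one of `c`, `b`, `s` or `p`, a dash, and ten lowercase hex digits; together with the 15/16 length bounds this pins the accepted format down. A quick check, assuming the regex crate in place of the const_regex! wrapper used by the server:

use regex::Regex; // assumes the `regex` crate; the server wraps this in const_regex!

fn main() {
    let key_re = Regex::new(r"^pbs(?:[cbsp])-[0-9a-f]{10}$").unwrap();
    assert!(key_re.is_match("pbsb-0123456789"));    // valid: 15 chars, hex suffix
    assert!(!key_re.is_match("pbsx-0123456789"));   // invalid product letter
    assert!(!key_re.is_match("pbsb-0123456789a"));  // 11 hex digits, too long
}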
pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name (/sys/block/<name>).")
.format(&BLOCKDEVICE_NAME_FORMAT)
.min_length(3)
@ -374,7 +390,7 @@ pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name
},
},
owner: {
type: Userid,
type: Authid,
optional: true,
},
},
@ -392,7 +408,7 @@ pub struct GroupListItem {
pub files: Vec<String>,
/// The owner of group
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Userid>,
pub owner: Option<Authid>,
}
#[api()]
@ -450,7 +466,7 @@ pub struct SnapshotVerifyState {
},
},
owner: {
type: Userid,
type: Authid,
optional: true,
},
},
@ -475,7 +491,7 @@ pub struct SnapshotListItem {
pub size: Option<u64>,
/// The owner of the snapshots group
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Userid>,
pub owner: Option<Authid>,
}
#[api(
@ -622,10 +638,75 @@ pub struct StorageStatus {
pub avail: u64,
}
#[api()]
#[derive(Serialize, Deserialize, Default)]
/// Backup Type group/snapshot counts.
pub struct TypeCounts {
/// The number of groups of the type.
pub groups: u64,
/// The number of snapshots of the type.
pub snapshots: u64,
}
#[api(
properties: {
ct: {
type: TypeCounts,
optional: true,
},
host: {
type: TypeCounts,
optional: true,
},
vm: {
type: TypeCounts,
optional: true,
},
other: {
type: TypeCounts,
optional: true,
},
},
)]
#[derive(Serialize, Deserialize)]
/// Counts of groups/snapshots per BackupType.
pub struct Counts {
/// The counts for CT backups
pub ct: Option<TypeCounts>,
/// The counts for Host backups
pub host: Option<TypeCounts>,
/// The counts for VM backups
pub vm: Option<TypeCounts>,
/// The counts for other backup types
pub other: Option<TypeCounts>,
}
#[api(
properties: {
"gc-status": { type: GarbageCollectionStatus, },
counts: { type: Counts, }
},
)]
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
/// Overall Datastore status and useful information.
pub struct DataStoreStatus {
/// Total space (bytes).
pub total: u64,
/// Used space (bytes).
pub used: u64,
/// Available space (bytes).
pub avail: u64,
/// Status of last GC
pub gc_status: GarbageCollectionStatus,
/// Group/Snapshot counts
pub counts: Counts,
}
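Because of `rename_all = "kebab-case"`, the `gc_status` field serializes as `gc-status`, matching the property name declared in the #[api] block above. A minimal serde sketch of that renaming (reduced field set, for illustration only):

use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "kebab-case")]
struct Status {
    total: u64,
    gc_status: String, // serializes as "gc-status"
}

fn main() {
    let s = Status { total: 1, gc_status: "ok".into() };
    assert_eq!(
        serde_json::to_string(&s).unwrap(),
        r#"{"total":1,"gc-status":"ok"}"#
    );
}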
#[api(
properties: {
upid: { schema: UPID_SCHEMA },
user: { type: Userid },
user: { type: Authid },
},
)]
#[derive(Serialize, Deserialize)]
@ -644,8 +725,8 @@ pub struct TaskListItem {
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The user who started the task
pub user: Userid,
/// The authenticated entity who started the task
pub user: Authid,
/// The task end time (Epoch)
#[serde(skip_serializing_if="Option::is_none")]
pub endtime: Option<i64>,
@ -668,7 +749,7 @@ impl From<crate::server::TaskListInfo> for TaskListItem {
starttime: info.upid.starttime,
worker_type: info.upid.worker_type,
worker_id: info.upid.worker_id,
user: info.upid.userid,
user: info.upid.auth_id,
endtime,
status,
}
@ -1048,7 +1129,7 @@ pub enum RRDTimeFrameResolution {
}
#[api()]
#[derive(Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
@ -1073,3 +1154,16 @@ pub struct APTUpdateInfo {
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// When do we send notifications
pub enum Notify {
/// Never send notification
Never,
/// Send notifications for failed and successful jobs
Always,
/// Send notifications for failed jobs only
Error,
}
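A sketch of how such a setting could gate notification mails, assuming a plain boolean for whether the finished job failed (hypothetical; the real wiring lives in the server's notify helpers):

// Hypothetical gate: whether a finished job warrants a notification mail.
enum Notify { Never, Always, Error }

fn should_notify(setting: &Notify, job_failed: bool) -> bool {
    match setting {
        Notify::Never => false,
        Notify::Always => true,
        Notify::Error => job_failed, // only failures trigger a mail
    }
}

fn main() {
    assert!(!should_notify(&Notify::Error, false)); // successful job, Error-only setting
    assert!(should_notify(&Notify::Always, false));
}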

View File

@ -1,6 +1,7 @@
//! Types for user handling.
//!
//! We have [`Username`]s and [`Realm`]s. To uniquely identify a user, they must be combined into a [`Userid`].
//! We have [`Username`]s, [`Realm`]s and [`Tokenname`]s. To uniquely identify a user/API token, they
//! must be combined into a [`Userid`] or [`Authid`].
//!
//! Since they're all string types, they're organized as follows:
//!
@ -9,13 +10,16 @@
//! with `String`, meaning you can only make references to it.
//! * [`Realm`]: an owned realm (`String` equivalent).
//! * [`RealmRef`]: a borrowed realm (`str` equivalent).
//! * [`Userid`]: an owned user id (`"user@realm"`). Note that this does not have a separate
//! borrowed type.
//! * [`Tokenname`]: an owned API token name (`String` equivalent)
//! * [`TokennameRef`]: a borrowed `Tokenname` (`str` equivalent).
//! * [`Userid`]: an owned user id (`"user@realm"`).
//! * [`Authid`]: an owned Authentication ID (a `Userid` with an optional `Tokenname`).
//! Note that `Userid` and `Authid` do not have a separate borrowed type.
//!
//! Note that `Username`s are not unique, therefore they do not implement `Eq` and cannot be
//! Note that `Username`s and `Tokenname`s are not unique, therefore they do not implement `Eq` and cannot be
//! compared directly. If a direct comparison is really required, they can be compared as strings
//! via the `as_str()` method. [`Realm`]s and [`Userid`]s on the other hand can be compared with
//! each other, as in those two cases the comparison has meaning.
//! via the `as_str()` method. [`Realm`]s, [`Userid`]s and [`Authid`]s on the other
//! hand can be compared with each other, as in those cases the comparison has meaning.
use std::borrow::Borrow;
use std::convert::TryFrom;
@ -36,19 +40,42 @@ use proxmox::const_regex;
// also see "man useradd"
macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
macro_rules! GROUP_NAME_REGEX_STR { () => (USER_NAME_REGEX_STR!()) }
macro_rules! TOKEN_NAME_REGEX_STR { () => (PROXMOX_SAFE_ID_REGEX_STR!()) }
macro_rules! USER_ID_REGEX_STR { () => (concat!(USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!())) }
macro_rules! APITOKEN_ID_REGEX_STR { () => (concat!(USER_ID_REGEX_STR!() , r"!", TOKEN_NAME_REGEX_STR!())) }
const_regex! {
pub PROXMOX_USER_NAME_REGEX = concat!(r"^", USER_NAME_REGEX_STR!(), r"$");
pub PROXMOX_TOKEN_NAME_REGEX = concat!(r"^", TOKEN_NAME_REGEX_STR!(), r"$");
pub PROXMOX_USER_ID_REGEX = concat!(r"^", USER_ID_REGEX_STR!(), r"$");
pub PROXMOX_APITOKEN_ID_REGEX = concat!(r"^", APITOKEN_ID_REGEX_STR!(), r"$");
pub PROXMOX_AUTH_ID_REGEX = concat!(r"^", r"(?:", USER_ID_REGEX_STR!(), r"|", APITOKEN_ID_REGEX_STR!(), r")$");
pub PROXMOX_GROUP_ID_REGEX = concat!(r"^", GROUP_NAME_REGEX_STR!(), r"$");
}
pub const PROXMOX_USER_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_NAME_REGEX);
pub const PROXMOX_TOKEN_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_TOKEN_NAME_REGEX);
pub const PROXMOX_USER_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_USER_ID_REGEX);
pub const PROXMOX_TOKEN_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_APITOKEN_ID_REGEX);
pub const PROXMOX_AUTH_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_AUTH_ID_REGEX);
pub const PROXMOX_TOKEN_ID_SCHEMA: Schema = StringSchema::new("API Token ID")
.format(&PROXMOX_TOKEN_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const PROXMOX_TOKEN_NAME_SCHEMA: Schema = StringSchema::new("API Token name")
.format(&PROXMOX_TOKEN_NAME_FORMAT)
.min_length(3)
.max_length(64)
.schema();
pub const PROXMOX_GROUP_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_GROUP_ID_REGEX);
@ -91,26 +118,6 @@ pub struct Username(String);
#[derive(Debug, Hash)]
pub struct UsernameRef(str);
#[doc(hidden)]
/// ```compile_fail
/// let a: Username = unsafe { std::mem::zeroed() };
/// let b: Username = unsafe { std::mem::zeroed() };
/// let _ = <Username as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(a, b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(&a, &b);
/// ```
struct _AssertNoEqImpl;
impl UsernameRef {
fn new(s: &str) -> &Self {
unsafe { &*(s as *const str as *const UsernameRef) }
@ -286,7 +293,132 @@ impl PartialEq<Realm> for &RealmRef {
}
}
/// A complete user id consting of a user name and a realm.
#[api(
type: String,
format: &PROXMOX_TOKEN_NAME_FORMAT,
)]
/// The token ID part of an API token authentication id.
///
/// This alone does NOT uniquely identify the API token and therefore does not implement `Eq`. In
/// order to compare token IDs directly, they need to be explicitly compared as strings by calling
/// `.as_str()`.
///
/// ```compile_fail
/// fn test(a: Tokenname, b: Tokenname) -> bool {
/// a == b // illegal and does not compile
/// }
/// ```
#[derive(Clone, Debug, Hash, Deserialize, Serialize)]
pub struct Tokenname(String);
/// A reference to the token name part of an authentication id. This alone does NOT uniquely
/// identify the API token.
///
/// This is like a `str` to the `String` of a [`Tokenname`].
#[derive(Debug, Hash)]
pub struct TokennameRef(str);
#[doc(hidden)]
/// ```compile_fail
/// let a: Username = unsafe { std::mem::zeroed() };
/// let b: Username = unsafe { std::mem::zeroed() };
/// let _ = <Username as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(a, b);
/// ```
///
/// ```compile_fail
/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: Tokenname = unsafe { std::mem::zeroed() };
/// let b: Tokenname = unsafe { std::mem::zeroed() };
/// let _ = <Tokenname as PartialEq>::eq(&a, &b);
/// ```
///
/// ```compile_fail
/// let a: &TokennameRef = unsafe { std::mem::zeroed() };
/// let b: &TokennameRef = unsafe { std::mem::zeroed() };
/// let _ = <&TokennameRef as PartialEq>::eq(a, b);
/// ```
///
/// ```compile_fail
/// let a: &TokennameRef = unsafe { std::mem::zeroed() };
/// let b: &TokennameRef = unsafe { std::mem::zeroed() };
/// let _ = <&TokennameRef as PartialEq>::eq(&a, &b);
/// ```
struct _AssertNoEqImpl;
impl TokennameRef {
fn new(s: &str) -> &Self {
unsafe { &*(s as *const str as *const TokennameRef) }
}
pub fn as_str(&self) -> &str {
&self.0
}
}
impl std::ops::Deref for Tokenname {
type Target = TokennameRef;
fn deref(&self) -> &TokennameRef {
self.borrow()
}
}
impl Borrow<TokennameRef> for Tokenname {
fn borrow(&self) -> &TokennameRef {
TokennameRef::new(self.0.as_str())
}
}
impl AsRef<TokennameRef> for Tokenname {
fn as_ref(&self) -> &TokennameRef {
self.borrow()
}
}
impl ToOwned for TokennameRef {
type Owned = Tokenname;
fn to_owned(&self) -> Self::Owned {
Tokenname(self.0.to_owned())
}
}
impl TryFrom<String> for Tokenname {
type Error = Error;
fn try_from(s: String) -> Result<Self, Error> {
if !PROXMOX_TOKEN_NAME_REGEX.is_match(&s) {
bail!("invalid token name");
}
Ok(Self(s))
}
}
impl<'a> TryFrom<&'a str> for &'a TokennameRef {
type Error = Error;
fn try_from(s: &'a str) -> Result<&'a TokennameRef, Error> {
if !PROXMOX_TOKEN_NAME_REGEX.is_match(s) {
bail!("invalid token name in user id");
}
Ok(TokennameRef::new(s))
}
}
/// A complete user id consisting of a user name and a realm
#[derive(Clone, Debug, Hash)]
pub struct Userid {
data: String,
@ -342,6 +474,12 @@ impl PartialEq for Userid {
}
}
impl From<Authid> for Userid {
fn from(authid: Authid) -> Self {
authid.user
}
}
impl From<(Username, Realm)> for Userid {
fn from(parts: (Username, Realm)) -> Self {
Self::from((parts.0.as_ref(), parts.1.as_ref()))
@ -366,10 +504,18 @@ impl std::str::FromStr for Userid {
type Err = Error;
fn from_str(id: &str) -> Result<Self, Error> {
let (name, realm) = match id.as_bytes().iter().rposition(|&b| b == b'@') {
Some(pos) => (&id[..pos], &id[(pos + 1)..]),
None => bail!("not a valid user id"),
};
let name_len = id
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
let name = &id[..name_len];
let realm = &id[(name_len + 1)..];
if !PROXMOX_USER_NAME_REGEX.is_match(name) {
bail!("invalid user name in user id");
}
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(realm)
.map_err(|_| format_err!("invalid realm in user id"))?;
@ -388,6 +534,10 @@ impl TryFrom<String> for Userid {
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
if !PROXMOX_USER_NAME_REGEX.is_match(&data[..name_len]) {
bail!("invalid user name in user id");
}
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(&data[(name_len + 1)..])
.map_err(|_| format_err!("invalid realm in user id"))?;
@ -413,5 +563,182 @@ impl PartialEq<String> for Userid {
}
}
/// A complete authentication id consisting of a user id and an optional token name.
#[derive(Clone, Debug, Hash)]
pub struct Authid {
user: Userid,
tokenname: Option<Tokenname>
}
impl Authid {
pub const API_SCHEMA: Schema = StringSchema::new("Authentication ID")
.format(&PROXMOX_AUTH_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
const fn new(user: Userid, tokenname: Option<Tokenname>) -> Self {
Self { user, tokenname }
}
pub fn user(&self) -> &Userid {
&self.user
}
pub fn is_token(&self) -> bool {
self.tokenname.is_some()
}
pub fn tokenname(&self) -> Option<&TokennameRef> {
match &self.tokenname {
Some(name) => Some(&name),
None => None,
}
}
/// Get the "backup@pam" auth id.
pub fn backup_auth_id() -> &'static Self {
&*BACKUP_AUTHID
}
/// Get the "root@pam" auth id.
pub fn root_auth_id() -> &'static Self {
&*ROOT_AUTHID
}
}
lazy_static! {
pub static ref BACKUP_AUTHID: Authid = Authid::from(Userid::new("backup@pam".to_string(), 6));
pub static ref ROOT_AUTHID: Authid = Authid::from(Userid::new("root@pam".to_string(), 4));
}
impl Eq for Authid {}
impl PartialEq for Authid {
fn eq(&self, rhs: &Self) -> bool {
self.user == rhs.user && match (&self.tokenname, &rhs.tokenname) {
(Some(ours), Some(theirs)) => ours.as_str() == theirs.as_str(),
(None, None) => true,
_ => false,
}
}
}
impl From<Userid> for Authid {
fn from(parts: Userid) -> Self {
Self::new(parts, None)
}
}
impl From<(Userid, Option<Tokenname>)> for Authid {
fn from(parts: (Userid, Option<Tokenname>)) -> Self {
Self::new(parts.0, parts.1)
}
}
impl fmt::Display for Authid {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match &self.tokenname {
Some(token) => write!(f, "{}!{}", self.user, token.as_str()),
None => self.user.fmt(f),
}
}
}
impl std::str::FromStr for Authid {
type Err = Error;
fn from_str(id: &str) -> Result<Self, Error> {
let name_len = id
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
let realm_end = id
.as_bytes()
.iter()
.rposition(|&b| b == b'!')
.map(|pos| if pos < name_len { id.len() } else { pos })
.unwrap_or(id.len());
if realm_end == id.len() - 1 {
bail!("empty token name in userid");
}
let user = Userid::from_str(&id[..realm_end])?;
if id.len() > realm_end {
let token = Tokenname::try_from(id[(realm_end + 1)..].to_string())?;
Ok(Self::new(user, Some(token)))
} else {
Ok(Self::new(user, None))
}
}
}
impl TryFrom<String> for Authid {
type Error = Error;
fn try_from(mut data: String) -> Result<Self, Error> {
let name_len = data
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
let realm_end = data
.as_bytes()
.iter()
.rposition(|&b| b == b'!')
.map(|pos| if pos < name_len { data.len() } else { pos })
.unwrap_or(data.len());
if realm_end == data.len() - 1 {
bail!("empty token name in userid");
}
let tokenname = if data.len() > realm_end {
Some(Tokenname::try_from(data[(realm_end + 1)..].to_string())?)
} else {
None
};
data.truncate(realm_end);
let user: Userid = data.parse()?;
Ok(Self { user, tokenname })
}
}
#[test]
fn test_token_id() {
let userid: Userid = "test@pam".parse().expect("parsing Userid failed");
assert_eq!(userid.name().as_str(), "test");
assert_eq!(userid.realm(), "pam");
assert_eq!(userid, "test@pam");
let auth_id: Authid = "test@pam".parse().expect("parsing user Authid failed");
assert_eq!(auth_id.to_string(), "test@pam".to_string());
assert!(!auth_id.is_token());
assert_eq!(auth_id.user(), &userid);
let user_auth_id = Authid::from(userid.clone());
assert_eq!(user_auth_id, auth_id);
assert!(!user_auth_id.is_token());
let auth_id: Authid = "test@pam!bar".parse().expect("parsing token Authid failed");
let token_userid = auth_id.user();
assert_eq!(&userid, token_userid);
assert!(auth_id.is_token());
assert_eq!(auth_id.tokenname().expect("Token has tokenname").as_str(), TokennameRef::new("bar").as_str());
assert_eq!(auth_id.to_string(), "test@pam!bar".to_string());
}
proxmox::forward_deserialize_to_from_str!(Userid);
proxmox::forward_serialize_to_display!(Userid);
proxmox::forward_deserialize_to_from_str!(Authid);
proxmox::forward_serialize_to_display!(Authid);

View File

@ -1,37 +1,31 @@
use crate::tools;
use anyhow::{bail, format_err, Error};
use regex::Regex;
use std::os::unix::io::RawFd;
use std::path::{PathBuf, Path};
use lazy_static::lazy_static;
use proxmox::const_regex;
use super::manifest::MANIFEST_BLOB_NAME;
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9][A-Za-z0-9_-]+") }
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
macro_rules! BACKUP_TIME_RE { () => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z") }
lazy_static!{
static ref BACKUP_FILE_REGEX: Regex = Regex::new(
r"^.*\.([fd]idx|blob)$").unwrap();
const_regex!{
BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
static ref BACKUP_TYPE_REGEX: Regex = Regex::new(
concat!(r"^(", BACKUP_TYPE_RE!(), r")$")).unwrap();
BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
static ref BACKUP_ID_REGEX: Regex = Regex::new(
concat!(r"^", BACKUP_ID_RE!(), r"$")).unwrap();
pub BACKUP_ID_REGEX = concat!(r"^", BACKUP_ID_RE!(), r"$");
static ref BACKUP_DATE_REGEX: Regex = Regex::new(
concat!(r"^", BACKUP_TIME_RE!() ,r"$")).unwrap();
BACKUP_DATE_REGEX = concat!(r"^", BACKUP_TIME_RE!() ,r"$");
static ref GROUP_PATH_REGEX: Regex = Regex::new(
concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$")).unwrap();
static ref SNAPSHOT_PATH_REGEX: Regex = Regex::new(
concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")$")).unwrap();
GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
SNAPSHOT_PATH_REGEX = concat!(
r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")$");
}
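// Editor's sketch (not part of the diff): the widened BACKUP_ID_RE! now also
// accepts dots, a leading underscore and single-character ids.
#[test]
fn backup_id_regex_sketch() {
    assert!(BACKUP_ID_REGEX.is_match("vm.100")); // dots are newly allowed
    assert!(BACKUP_ID_REGEX.is_match("_tmp"));   // leading underscore is newly allowed
    assert!(BACKUP_ID_REGEX.is_match("a"));      // single-character ids are newly allowed
    assert!(!BACKUP_ID_REGEX.is_match("a/b"));   // path separators are still rejected
}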
/// BackupGroup is a directory containing a list of BackupDir

View File

@ -78,7 +78,7 @@ pub struct DirEntry {
#[derive(Clone, Debug, PartialEq)]
pub enum DirEntryAttribute {
Directory { start: u64 },
File { size: u64, mtime: u64 },
File { size: u64, mtime: i64 },
Symlink,
Hardlink,
BlockDevice,
@ -89,7 +89,7 @@ pub enum DirEntryAttribute {
impl DirEntry {
fn new(etype: CatalogEntryType, name: Vec<u8>, start: u64, size: u64, mtime:u64) -> Self {
fn new(etype: CatalogEntryType, name: Vec<u8>, start: u64, size: u64, mtime: i64) -> Self {
match etype {
CatalogEntryType::Directory => {
DirEntry { name, attr: DirEntryAttribute::Directory { start } }
@ -184,7 +184,7 @@ impl DirInfo {
catalog_encode_u64(writer, name.len() as u64)?;
writer.write_all(name)?;
catalog_encode_u64(writer, *size)?;
catalog_encode_u64(writer, *mtime)?;
catalog_encode_i64(writer, *mtime)?;
}
DirEntry { name, attr: DirEntryAttribute::Symlink } => {
writer.write_all(&[CatalogEntryType::Symlink as u8])?;
@ -234,7 +234,7 @@ impl DirInfo {
Ok((self.name, data))
}
fn parse<C: FnMut(CatalogEntryType, &[u8], u64, u64, u64) -> Result<bool, Error>>(
fn parse<C: FnMut(CatalogEntryType, &[u8], u64, u64, i64) -> Result<bool, Error>>(
data: &[u8],
mut callback: C,
) -> Result<(), Error> {
@ -265,7 +265,7 @@ impl DirInfo {
}
CatalogEntryType::File => {
let size = catalog_decode_u64(&mut cursor)?;
let mtime = catalog_decode_u64(&mut cursor)?;
let mtime = catalog_decode_i64(&mut cursor)?;
callback(etype, name, 0, size, mtime)?
}
_ => {
@ -362,7 +362,7 @@ impl <W: Write> BackupCatalogWriter for CatalogWriter<W> {
Ok(())
}
fn add_file(&mut self, name: &CStr, size: u64, mtime: u64) -> Result<(), Error> {
fn add_file(&mut self, name: &CStr, size: u64, mtime: i64) -> Result<(), Error> {
let dir = self.dirstack.last_mut().ok_or_else(|| format_err!("outside root"))?;
let name = name.to_bytes().to_vec();
dir.entries.push(DirEntry { name, attr: DirEntryAttribute::File { size, mtime } });
@ -587,14 +587,77 @@ impl <R: Read + Seek> CatalogReader<R> {
}
}
/// Serialize i64 as short, variable length byte sequence
///
/// Stores 7 bits per byte, Bit 8 indicates the end of the sequence (when not set).
/// If the value is negative, we end with a zero byte (0x00).
pub fn catalog_encode_i64<W: Write>(writer: &mut W, v: i64) -> Result<(), Error> {
let mut enc = Vec::new();
let mut d = if v < 0 {
(-1 * (v + 1)) as u64 + 1 // also handles i64::MIN
} else {
v as u64
};
loop {
if d < 128 {
if v < 0 {
enc.push(128 | d as u8);
enc.push(0u8);
} else {
enc.push(d as u8);
}
break;
}
enc.push((128 | (d & 127)) as u8);
d = d >> 7;
}
writer.write_all(&enc)?;
Ok(())
}
/// Deserialize i64 from variable length byte sequence
///
/// We currently read at most 11 bytes, which gives a maximum of 70 bits + sign.
/// This method is compatible with catalog_encode_u64 iff the encoded value is
/// < 2^63 (values >= 2^63 cannot be represented in an i64)
pub fn catalog_decode_i64<R: Read>(reader: &mut R) -> Result<i64, Error> {
let mut v: u64 = 0;
let mut buf = [0u8];
for i in 0..11 { // only allow 11 bytes (70 bits + sign marker)
if buf.is_empty() {
bail!("decode_i64 failed - unexpected EOB");
}
reader.read_exact(&mut buf)?;
let t = buf[0];
if t == 0 {
if v == 0 {
return Ok(0);
}
return Ok(((v - 1) as i64 * -1) - 1); // also handles i64::MIN
} else if t < 128 {
v |= (t as u64) << (i*7);
return Ok(v as i64);
} else {
v |= ((t & 127) as u64) << (i*7);
}
}
bail!("decode_i64 failed - missing end marker");
}
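// Editor's sketch (not part of the diff): concrete byte layouts produced by the
// encoder above -- low 7 bits first, bit 8 set on continuation bytes, and a
// trailing zero byte marking negative values.
#[test]
fn catalog_i64_byte_layout_sketch() {
    let mut buf = Vec::new();
    catalog_encode_i64(&mut buf, 200).unwrap();
    assert_eq!(buf, [0xC8, 0x01]); // 200 = 0x48 | (0x01 << 7), first byte has bit 8 set
    buf.clear();
    catalog_encode_i64(&mut buf, -1).unwrap();
    assert_eq!(buf, [0x81, 0x00]); // negative: sequence ends with the zero byte
    assert_eq!(catalog_decode_i64(&mut &buf[..]).unwrap(), -1);
}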
/// Serialize u64 as short, variable length byte sequence
///
/// Stores 7 bits per byte, Bit 8 indicates the end of the sequence (when not set).
/// We limit values to a maximum of 2^63.
pub fn catalog_encode_u64<W: Write>(writer: &mut W, v: u64) -> Result<(), Error> {
let mut enc = Vec::new();
if (v & (1<<63)) != 0 { bail!("catalog_encode_u64 failed - value >= 2^63"); }
let mut d = v;
loop {
if d < 128 {
@ -611,13 +674,14 @@ pub fn catalog_encode_u64<W: Write>(writer: &mut W, v: u64) -> Result<(), Error>
/// Deserialize u64 from variable length byte sequence
///
/// We currently read maximal 9 bytes, which give a maximum of 63 bits.
/// We currently read at most 10 bytes, which gives a maximum of 70 bits,
/// but we currently only encode up to 64 bits
pub fn catalog_decode_u64<R: Read>(reader: &mut R) -> Result<u64, Error> {
let mut v: u64 = 0;
let mut buf = [0u8];
for i in 0..9 { // only allow 9 bytes (63 bits)
for i in 0..10 { // only allow 10 bytes (70 bits)
if buf.is_empty() {
bail!("decode_u64 failed - unexpected EOB");
}
@ -652,9 +716,58 @@ fn test_catalog_u64_encoder() {
assert!(decoded == value);
}
test_encode_decode(u64::MIN);
test_encode_decode(126);
test_encode_decode((1<<12)-1);
test_encode_decode((1<<20)-1);
test_encode_decode((1<<50)-1);
test_encode_decode((1<<63)-1);
test_encode_decode(u64::MAX);
}
#[test]
fn test_catalog_i64_encoder() {
fn test_encode_decode(value: i64) {
let mut data = Vec::new();
catalog_encode_i64(&mut data, value).unwrap();
let slice = &mut &data[..];
let decoded = catalog_decode_i64(slice).unwrap();
assert!(decoded == value);
}
test_encode_decode(0);
test_encode_decode(-0);
test_encode_decode(126);
test_encode_decode(-126);
test_encode_decode((1<<12)-1);
test_encode_decode(-(1<<12)-1);
test_encode_decode((1<<20)-1);
test_encode_decode(-(1<<20)-1);
test_encode_decode(i64::MIN);
test_encode_decode(i64::MAX);
}
#[test]
fn test_catalog_i64_compatibility() {
fn test_encode_decode(value: u64) {
let mut data = Vec::new();
catalog_encode_u64(&mut data, value).unwrap();
let slice = &mut &data[..];
let decoded = catalog_decode_i64(slice).unwrap() as u64;
assert!(decoded == value);
}
test_encode_decode(u64::MIN);
test_encode_decode(126);
test_encode_decode((1<<12)-1);
test_encode_decode((1<<20)-1);
test_encode_decode((1<<50)-1);
test_encode_decode(u64::MAX);
}

View File

@ -23,7 +23,7 @@ use crate::task::TaskState;
use crate::tools;
use crate::tools::format::HumanByte;
use crate::tools::fs::{lock_dir_noblock, DirLockGuard};
use crate::api2::types::{GarbageCollectionStatus, Userid};
use crate::api2::types::{Authid, GarbageCollectionStatus};
use crate::server::UPID;
lazy_static! {
@ -276,8 +276,8 @@ impl DataStore {
/// Returns the backup owner.
///
/// The backup owner is the user who first created the backup group.
pub fn get_owner(&self, backup_group: &BackupGroup) -> Result<Userid, Error> {
/// The backup owner is the entity who first created the backup group.
pub fn get_owner(&self, backup_group: &BackupGroup) -> Result<Authid, Error> {
let mut full_path = self.base_path();
full_path.push(backup_group.group_path());
full_path.push("owner");
@ -289,7 +289,7 @@ impl DataStore {
pub fn set_owner(
&self,
backup_group: &BackupGroup,
userid: &Userid,
auth_id: &Authid,
force: bool,
) -> Result<(), Error> {
let mut path = self.base_path();
@ -309,7 +309,7 @@ impl DataStore {
let mut file = open_options.open(&path)
.map_err(|err| format_err!("unable to create owner file {:?} - {}", path, err))?;
writeln!(file, "{}", userid)
writeln!(file, "{}", auth_id)
.map_err(|err| format_err!("unable to write owner file {:?} - {}", path, err))?;
Ok(())
@ -324,8 +324,8 @@ impl DataStore {
pub fn create_locked_backup_group(
&self,
backup_group: &BackupGroup,
userid: &Userid,
) -> Result<(Userid, DirLockGuard), Error> {
auth_id: &Authid,
) -> Result<(Authid, DirLockGuard), Error> {
// create intermediate path first:
let base_path = self.base_path();
@ -339,7 +339,7 @@ impl DataStore {
match std::fs::create_dir(&full_path) {
Ok(_) => {
let guard = lock_dir_noblock(&full_path, "backup group", "another backup is already running")?;
self.set_owner(backup_group, userid, false)?;
self.set_owner(backup_group, auth_id, false)?;
let owner = self.get_owner(backup_group)?; // just to be sure
Ok((owner, guard))
}
@ -458,38 +458,35 @@ impl DataStore {
) -> Result<(), Error> {
let image_list = self.list_images()?;
let image_count = image_list.len();
let mut done = 0;
let mut last_percentage: usize = 0;
for path in image_list {
for img in image_list {
worker.check_abort()?;
tools::fail_on_shutdown()?;
let full_path = self.chunk_store.relative_path(&path);
match std::fs::File::open(&full_path) {
let path = self.chunk_store.relative_path(&img);
match std::fs::File::open(&path) {
Ok(file) => {
if let Ok(archive_type) = archive_type(&path) {
if let Ok(archive_type) = archive_type(&img) {
if archive_type == ArchiveType::FixedIndex {
let index = FixedIndexReader::new(file)?;
self.index_mark_used_chunks(index, &path, status, worker)?;
let index = FixedIndexReader::new(file).map_err(|e| {
format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
})?;
self.index_mark_used_chunks(index, &img, status, worker)?;
} else if archive_type == ArchiveType::DynamicIndex {
let index = DynamicIndexReader::new(file)?;
self.index_mark_used_chunks(index, &path, status, worker)?;
let index = DynamicIndexReader::new(file).map_err(|e| {
format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
})?;
self.index_mark_used_chunks(index, &img, status, worker)?;
}
}
}
Err(err) => {
if err.kind() == std::io::ErrorKind::NotFound {
// simply ignore vanished files
} else {
return Err(err.into());
}
}
Err(err) if err.kind() == io::ErrorKind::NotFound => (), // ignore vanished files
Err(err) => bail!("can't open index {} - {}", path.to_string_lossy(), err),
}
done += 1;
@ -559,7 +556,11 @@ impl DataStore {
);
}
if gc_status.removed_bad > 0 {
crate::task_log!(worker, "Removed bad files: {}", gc_status.removed_bad);
crate::task_log!(worker, "Removed bad chunks: {}", gc_status.removed_bad);
}
if gc_status.still_bad > 0 {
crate::task_log!(worker, "Leftover bad chunks: {}", gc_status.still_bad);
}
crate::task_log!(
@ -580,6 +581,14 @@ impl DataStore {
crate::task_log!(worker, "On-Disk chunks: {}", gc_status.disk_chunks);
let deduplication_factor = if gc_status.disk_bytes > 0 {
(gc_status.index_data_bytes as f64)/(gc_status.disk_bytes as f64)
} else {
1.0
};
crate::task_log!(worker, "Deduplication factor: {:.2}", deduplication_factor);
if gc_status.disk_chunks > 0 {
let avg_chunk = gc_status.disk_bytes/(gc_status.disk_chunks as u64);
crate::task_log!(worker, "Average chunk size: {}", HumanByte::from(avg_chunk));

View File

@ -95,6 +95,18 @@ impl DynamicIndexReader {
let header_size = std::mem::size_of::<DynamicIndexHeader>();
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
let size = stat.st_size as usize;
if size < header_size {
bail!("index too small ({})", stat.st_size);
}
let header: Box<DynamicIndexHeader> = unsafe { file.read_host_value_boxed()? };
if header.magic != super::DYNAMIC_SIZED_CHUNK_INDEX_1_0 {
@ -103,13 +115,7 @@ impl DynamicIndexReader {
let ctime = proxmox::tools::time::epoch_i64();
let rawfd = file.as_raw_fd();
let stat = nix::sys::stat::fstat(rawfd)?;
let size = stat.st_size as usize;
let index_size = size - header_size;
let index_size = stat.st_size as usize - header_size;
let index_count = index_size / 40;
if index_count * 40 != index_size {
bail!("got unexpected file size");

View File

@ -68,6 +68,19 @@ impl FixedIndexReader {
file.seek(SeekFrom::Start(0))?;
let header_size = std::mem::size_of::<FixedIndexHeader>();
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
let size = stat.st_size as usize;
if size < header_size {
bail!("index too small ({})", stat.st_size);
}
let header: Box<FixedIndexHeader> = unsafe { file.read_host_value_boxed()? };
if header.magic != super::FIXED_SIZED_CHUNK_INDEX_1_0 {
@ -81,12 +94,6 @@ impl FixedIndexReader {
let index_length = ((size + chunk_size - 1) / chunk_size) as usize;
let index_size = index_length * 32;
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
let expected_index_size = (stat.st_size as usize) - header_size;
if index_size != expected_index_size {

View File

@ -14,6 +14,7 @@ use crate::{
BackupGroup,
BackupDir,
BackupInfo,
BackupManifest,
IndexFile,
CryptMode,
FileInfo,
@ -284,6 +285,7 @@ pub fn verify_backup_dir(
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<bool, Error> {
let snap_lock = lock_dir_noblock_shared(
&datastore.snapshot_path(&backup_dir),
@ -297,6 +299,7 @@ pub fn verify_backup_dir(
corrupt_chunks,
worker,
upid,
filter,
snap_lock
),
Err(err) => {
@ -320,6 +323,7 @@ pub fn verify_backup_dir_with_lock(
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
_snap_lock: Dir,
) -> Result<bool, Error> {
let manifest = match datastore.load_manifest(&backup_dir) {
@ -336,6 +340,18 @@ pub fn verify_backup_dir_with_lock(
}
};
if let Some(filter) = filter {
if filter(&manifest) == false {
task_log!(
worker,
"SKIPPED: verify {}:{} (recently verified)",
datastore.name(),
backup_dir,
);
return Ok(true);
}
}
task_log!(worker, "verify {}:{}", datastore.name(), backup_dir);
let mut error_count = 0;
@ -412,7 +428,7 @@ pub fn verify_backup_group(
progress: Option<(usize, usize)>, // (done, snapshot_count)
worker: Arc<dyn TaskState + Send + Sync>,
upid: &UPID,
filter: &dyn Fn(&BackupInfo) -> bool,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<(usize, Vec<String>), Error> {
let mut errors = Vec::new();
@ -439,16 +455,6 @@ pub fn verify_backup_group(
for info in list {
count += 1;
if filter(&info) == false {
task_log!(
worker,
"SKIPPED: verify {}:{} (recently verified)",
datastore.name(),
info.backup_dir,
);
continue;
}
if !verify_backup_dir(
datastore.clone(),
&info.backup_dir,
@ -456,6 +462,7 @@ pub fn verify_backup_group(
corrupt_chunks.clone(),
worker.clone(),
upid.clone(),
filter,
)? {
errors.push(info.backup_dir.to_string());
}
@ -475,7 +482,7 @@ pub fn verify_backup_group(
Ok((count, errors))
}
/// Verify all backups inside a datastore
/// Verify all (owned) backups inside a datastore
///
/// Errors are logged to the worker log.
///
@ -486,14 +493,41 @@ pub fn verify_all_backups(
datastore: Arc<DataStore>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: &UPID,
filter: &dyn Fn(&BackupInfo) -> bool,
owner: Option<Authid>,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
if let Some(owner) = &owner {
task_log!(
worker,
"verify datastore {} - limiting to backups owned by {}",
datastore.name(),
owner
);
}
let filter_by_owner = |group: &BackupGroup| {
if let Some(owner) = &owner {
match datastore.get_owner(group) {
Ok(ref group_owner) => {
group_owner == owner
|| (group_owner.is_token()
&& !owner.is_token()
&& group_owner.user() == owner.user())
},
Err(_) => false,
}
} else {
true
}
};
let mut list = match BackupGroup::list_groups(&datastore.base_path()) {
Ok(list) => list
.into_iter()
.filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
.filter(filter_by_owner)
.collect::<Vec<BackupGroup>>(),
Err(err) => {
task_log!(

View File

@ -52,7 +52,9 @@ async fn run() -> Result<(), Error> {
let mut config = server::ApiConfig::new(
buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PRIVILEGED)?;
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN)?;
let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN, &mut commando_sock)?;
let rest_server = RestServer::new(config);
@ -76,10 +78,12 @@ async fn run() -> Result<(), Error> {
},
);
server::write_pid(buildcfg::PROXMOX_BACKUP_API_PID_FN)?;
daemon::systemd_notify(daemon::SystemdNotify::Ready)?;
let init_result: Result<(), Error> = try_block!({
server::create_task_control_socket()?;
server::register_task_control_commands(&mut commando_sock)?;
commando_sock.spawn()?;
server::server_state_init()?;
Ok(())
});

View File

@ -36,7 +36,7 @@ use proxmox_backup::api2::types::*;
use proxmox_backup::api2::version;
use proxmox_backup::client::*;
use proxmox_backup::pxar::catalog::*;
use proxmox_backup::config::user::complete_user_name;
use proxmox_backup::config::user::complete_userid;
use proxmox_backup::backup::{
archive_type,
decrypt_key,
@ -193,7 +193,7 @@ pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<
result
}
fn connect(server: &str, port: u16, userid: &Userid) -> Result<HttpClient, Error> {
fn connect(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
@ -212,7 +212,7 @@ fn connect(server: &str, port: u16, userid: &Userid) -> Result<HttpClient, Error
.fingerprint_cache(true)
.ticket_cache(true);
HttpClient::new(server, port, userid, options)
HttpClient::new(server, port, auth_id, options)
}
async fn view_task_result(
@ -366,7 +366,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
@ -425,7 +425,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
description: "Backup group.",
},
"new-owner": {
type: Userid,
type: Authid,
},
}
}
@ -435,7 +435,7 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro
let repo = extract_repository_from_value(&param)?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
param.as_object_mut().unwrap().remove("repository");
@ -478,7 +478,7 @@ async fn list_snapshots(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
@ -543,7 +543,7 @@ async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
@ -573,7 +573,7 @@ async fn api_login(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
client.login().await?;
record_repository(&repo);
@ -630,7 +630,7 @@ async fn api_version(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param);
if let Ok(repo) = repo {
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
match client.get("api2/json/version", None).await {
Ok(mut result) => version_info["server"] = result["data"].take(),
@ -680,7 +680,7 @@ async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
@ -724,7 +724,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/gc", repo.store());
@ -1036,7 +1036,7 @@ async fn create_backup(
let backup_time = backup_time_opt.unwrap_or_else(|| epoch_i64());
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
record_repository(&repo);
println!("Starting backup: {}/{}/{}", backup_type, backup_id, BackupDir::backup_time_to_string(backup_time)?);
@ -1339,7 +1339,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
let archive_name = tools::required_string_param(&param, "archive-name")?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
record_repository(&repo);
@ -1512,7 +1512,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
@ -1583,7 +1583,7 @@ fn prune<'a>(
async fn prune_async(mut param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/prune", repo.store());
@ -1657,7 +1657,10 @@ async fn prune_async(mut param: Value) -> Result<Value, Error> {
optional: true,
},
}
}
},
returns: {
type: StorageStatus,
},
)]
/// Get repository status.
async fn status(param: Value) -> Result<Value, Error> {
@ -1666,7 +1669,7 @@ async fn status(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/status", repo.store());
@ -1690,7 +1693,7 @@ async fn status(param: Value) -> Result<Value, Error> {
.column(ColumnConfig::new("used").renderer(render_total_percentage))
.column(ColumnConfig::new("avail").renderer(render_total_percentage));
let schema = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_STATUS;
let schema = &API_RETURN_SCHEMA_STATUS;
format_and_print_result_full(&mut data, schema, &output_format, &options);
@ -1711,7 +1714,7 @@ async fn try_get(repo: &BackupRepository, url: &str) -> Value {
.fingerprint_cache(true)
.ticket_cache(true);
let client = match HttpClient::new(repo.host(), repo.port(), repo.user(), options) {
let client = match HttpClient::new(repo.host(), repo.port(), repo.auth_id(), options) {
Ok(v) => v,
_ => return Value::Null,
};
@ -2010,7 +2013,7 @@ fn main() {
let change_owner_cmd_def = CliCommand::new(&API_METHOD_CHANGE_BACKUP_OWNER)
.arg_param(&["group", "new-owner"])
.completion_cb("group", complete_backup_group)
.completion_cb("new-owner", complete_user_name)
.completion_cb("new-owner", complete_userid)
.completion_cb("repository", complete_repository);
let cmd_def = CliCommandMap::new()

View File

@ -1,4 +1,5 @@
use std::collections::HashMap;
use std::io::{self, Write};
use anyhow::{format_err, Error};
use serde_json::{json, Value};
@ -62,10 +63,10 @@ fn connect() -> Result<HttpClient, Error> {
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
options = options.password(Some(ticket));
HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
} else {
options = options.ticket_cache(true).interactive(true);
HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
};
Ok(client)
@ -354,6 +355,14 @@ async fn verify(
Ok(Value::Null)
}
#[api()]
/// System report
async fn report() -> Result<Value, Error> {
let report = proxmox_backup::server::generate_report();
io::stdout().write_all(report.as_bytes())?;
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
@ -368,6 +377,7 @@ fn main() {
.insert("remote", remote_commands())
.insert("garbage-collection", garbage_collection_commands())
.insert("cert", cert_mgmt_cli())
.insert("subscription", subscription_commands())
.insert("sync-job", sync_job_commands())
.insert("task", task_mgmt_cli())
.insert(
@ -383,12 +393,15 @@ fn main() {
CliCommand::new(&API_METHOD_VERIFY)
.arg_param(&["store"])
.completion_cb("store", config::datastore::complete_datastore_name)
)
.insert("report",
CliCommand::new(&API_METHOD_REPORT)
);
let mut rpcenv = CliEnvironment::new();
rpcenv.set_user(Some(String::from("root@pam")));
rpcenv.set_auth_id(Some(String::from("root@pam")));
proxmox_backup::tools::runtime::main(run_async_cli_command(cmd_def, rpcenv));
}

View File

@ -1,4 +1,4 @@
use std::sync::{Arc};
use std::sync::Arc;
use std::path::{Path, PathBuf};
use std::os::unix::io::AsRawFd;
@ -13,7 +13,6 @@ use proxmox::api::RpcEnvironmentType;
use proxmox_backup::{
backup::DataStore,
server::{
UPID,
WorkerTask,
ApiConfig,
rest::*,
@ -30,7 +29,7 @@ use proxmox_backup::{
};
use proxmox_backup::api2::types::Userid;
use proxmox_backup::api2::types::Authid;
use proxmox_backup::configdir;
use proxmox_backup::buildcfg;
use proxmox_backup::server;
@ -41,6 +40,7 @@ use proxmox_backup::tools::{
DiskManage,
zfs_pool_stats,
},
logrotate::LogRotate,
socket::{
set_tcp_keepalive,
PROXMOX_BACKUP_TCP_KEEPALIVE_TIME,
@ -49,6 +49,7 @@ use proxmox_backup::tools::{
use proxmox_backup::api2::pull::do_sync_job;
use proxmox_backup::server::do_verification_job;
use proxmox_backup::server::do_prune_job;
fn main() -> Result<(), Error> {
proxmox_backup::tools::setup_safe_path_env();
@ -73,6 +74,10 @@ async fn run() -> Result<(), Error> {
bail!("unable to inititialize syslog - {}", err);
}
// Note: To debug early connection error use
// PROXMOX_DEBUG=1 ./target/release/proxmox-backup-proxy
let debug = std::env::var("PROXMOX_DEBUG").is_ok();
let _ = public_auth_key(); // load with lazy_static
let _ = csrf_secret(); // load with lazy_static
@ -93,7 +98,9 @@ async fn run() -> Result<(), Error> {
config.register_template("index", &indexpath)?;
config.register_template("console", "/usr/share/pve-xtermjs/index.html.hbs")?;
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN)?;
let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN, &mut commando_sock)?;
let rest_server = RestServer::new(config);
@ -113,22 +120,9 @@ async fn run() -> Result<(), Error> {
let server = daemon::create_daemon(
([0,0,0,0,0,0,0,0], 8007).into(),
|listener, ready| {
let connections = proxmox_backup::tools::async_io::StaticIncoming::from(listener)
.map_err(Error::from)
.try_filter_map(move |(sock, _addr)| {
let acceptor = Arc::clone(&acceptor);
async move {
sock.set_nodelay(true).unwrap();
let _ = set_tcp_keepalive(sock.as_raw_fd(), PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
Ok(tokio_openssl::accept(&acceptor, sock)
.await
.ok() // handshake errors aren't be fatal, so return None to filter
)
}
});
let connections = proxmox_backup::tools::async_io::HyperAccept(connections);
let connections = accept_connections(listener, acceptor, debug);
let connections = hyper::server::accept::from_stream(connections);
Ok(ready
.and_then(|_| hyper::Server::builder(connections)
@ -142,10 +136,12 @@ async fn run() -> Result<(), Error> {
},
);
server::write_pid(buildcfg::PROXMOX_BACKUP_PROXY_PID_FN)?;
daemon::systemd_notify(daemon::SystemdNotify::Ready)?;
let init_result: Result<(), Error> = try_block!({
server::create_task_control_socket()?;
server::register_task_control_commands(&mut commando_sock)?;
commando_sock.spawn()?;
server::server_state_init()?;
Ok(())
});
@ -165,6 +161,72 @@ async fn run() -> Result<(), Error> {
Ok(())
}
fn accept_connections(
mut listener: tokio::net::TcpListener,
acceptor: Arc<openssl::ssl::SslAcceptor>,
debug: bool,
) -> tokio::sync::mpsc::Receiver<Result<tokio_openssl::SslStream<tokio::net::TcpStream>, Error>> {
const MAX_PENDING_ACCEPTS: usize = 1024;
let (sender, receiver) = tokio::sync::mpsc::channel(MAX_PENDING_ACCEPTS);
let accept_counter = Arc::new(());
tokio::spawn(async move {
loop {
match listener.accept().await {
Err(err) => {
eprintln!("error accepting tcp connection: {}", err);
}
Ok((sock, _addr)) => {
sock.set_nodelay(true).unwrap();
let _ = set_tcp_keepalive(sock.as_raw_fd(), PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
let acceptor = Arc::clone(&acceptor);
let mut sender = sender.clone();
if Arc::strong_count(&accept_counter) > MAX_PENDING_ACCEPTS {
eprintln!("connection rejected - to many open connections");
continue;
}
let accept_counter = accept_counter.clone();
tokio::spawn(async move {
let accept_future = tokio::time::timeout(
Duration::new(10, 0), tokio_openssl::accept(&acceptor, sock));
let result = accept_future.await;
match result {
Ok(Ok(connection)) => {
if let Err(_) = sender.send(Ok(connection)).await {
if debug {
eprintln!("detect closed connection channel");
}
}
}
Ok(Err(err)) => {
if debug {
eprintln!("https handshake failed - {}", err);
}
}
Err(_) => {
if debug {
eprintln!("https handshake timeout");
}
}
}
drop(accept_counter); // decrease reference count
});
}
}
}
});
receiver
}
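// Editor's note (sketch, not part of the diff): the MAX_PENDING_ACCEPTS guard
// above uses Arc reference counting as an implicit counter -- each spawned
// handshake task holds a clone, and dropping it decrements the count again.
#[test]
fn arc_as_counter_sketch() {
    use std::sync::Arc;
    let counter = Arc::new(());
    let in_flight: Vec<Arc<()>> = (0..3).map(|_| Arc::clone(&counter)).collect();
    assert_eq!(Arc::strong_count(&counter), 4); // original plus three clones
    drop(in_flight);                            // simulates finished handshakes
    assert_eq!(Arc::strong_count(&counter), 1);
}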
fn start_stat_generator() {
let abort_future = server::shutdown_future();
let future = Box::pin(run_stat_generator());
@ -247,8 +309,6 @@ async fn schedule_datastore_garbage_collection() {
},
};
let email = server::lookup_user_email(Userid::root_userid());
let config = match datastore::config() {
Err(err) => {
eprintln!("unable to read datastore config - {}", err);
@ -291,23 +351,12 @@ async fn schedule_datastore_garbage_collection() {
let worker_type = "garbage_collection";
let stat = datastore.last_gc_status();
let last = if let Some(upid_str) = stat.upid {
match upid_str.parse::<UPID>() {
Ok(upid) => upid.starttime,
Err(err) => {
eprintln!("unable to parse upid '{}' - {}", upid_str, err);
continue;
}
}
} else {
match jobstate::last_run_time(worker_type, &store) {
let last = match jobstate::last_run_time(worker_type, &store) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue;
}
}
};
let next = match compute_next_event(&event, last, false) {
@ -323,44 +372,15 @@ async fn schedule_datastore_garbage_collection() {
if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
let job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let store2 = store.clone();
let email2 = email.clone();
let auth_id = Authid::backup_auth_id();
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(store.clone()),
Userid::backup_userid().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting garbage collection on store {}", store));
worker.log(format!("task triggered by schedule '{}'", event_str));
let result = datastore.garbage_collection(&*worker, worker.upid());
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
if let Some(email2) = email2 {
let gc_status = datastore.last_gc_status();
if let Err(err) = crate::server::send_gc_status(&email2, datastore.name(), &gc_status, &result) {
eprintln!("send gc notification failed: {}", err);
}
}
result
}
) {
eprintln!("unable to start garbage collection on store {} - {}", store2, err);
if let Err(err) = crate::server::do_garbage_collection_job(job, datastore, auth_id, Some(event_str), false) {
eprintln!("unable to start garbage collection job on datastore {} - {}", store, err);
}
}
}
@ -370,8 +390,6 @@ async fn schedule_datastore_prune() {
use proxmox_backup::{
backup::{
PruneOptions,
BackupGroup,
compute_prune_info,
},
config::datastore::{
self,
@ -388,13 +406,6 @@ async fn schedule_datastore_prune() {
};
for (store, (_, store_config)) in config.sections {
let datastore = match DataStore::lookup_datastore(&store) {
Ok(datastore) => datastore,
Err(err) => {
eprintln!("lookup_datastore '{}' failed - {}", store, err);
continue;
}
};
let store_config: DataStoreConfig = match serde_json::from_value(store_config) {
Ok(c) => c,
@ -422,95 +433,18 @@ async fn schedule_datastore_prune() {
continue;
}
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "prune";
let last = match jobstate::last_run_time(worker_type, &store) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
if check_schedule(worker_type, &event_str, &store) {
let job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let store2 = store.clone();
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(store.clone()),
Userid::backup_userid().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
let result = try_block!({
worker.log(format!("Starting datastore prune on store \"{}\"", store));
worker.log(format!("task triggered by schedule '{}'", event_str));
worker.log(format!("retention options: {}", prune_options.cli_options_string()));
let base_path = datastore.base_path();
let groups = BackupGroup::list_groups(&base_path)?;
for group in groups {
let list = group.list_backups(&base_path)?;
let mut prune_info = compute_prune_info(list, &prune_options)?;
prune_info.reverse(); // delete older snapshots first
worker.log(format!("Starting prune on store \"{}\" group \"{}/{}\"",
store, group.backup_type(), group.backup_id()));
for (info, keep) in prune_info {
worker.log(format!(
"{} {}/{}/{}",
if keep { "keep" } else { "remove" },
group.backup_type(), group.backup_id(),
info.backup_dir.backup_time_string()));
if !keep {
datastore.remove_backup_dir(&info.backup_dir, true)?;
}
}
}
Ok(())
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
result
}
) {
eprintln!("unable to start datastore prune on store {} - {}", store2, err);
let auth_id = Authid::backup_auth_id().clone();
if let Err(err) = do_prune_job(job, prune_options, store.clone(), &auth_id, Some(event_str)) {
eprintln!("unable to start datastore prune job {} - {}", &store, err);
}
};
}
}
@ -543,47 +477,18 @@ async fn schedule_datastore_sync_jobs() {
None => continue,
};
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "syncjob";
let last = match jobstate::last_run_time(worker_type, &job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, job_id, err);
continue;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
if check_schedule(worker_type, &event_str, &job_id) {
let job = match Job::new(worker_type, &job_id) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let userid = Userid::backup_userid().clone();
if let Err(err) = do_sync_job(job, job_config, &userid, Some(event_str)) {
let auth_id = Authid::backup_auth_id().clone();
if let Err(err) = do_sync_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore sync job {} - {}", &job_id, err);
}
};
}
}
@ -613,79 +518,30 @@ async fn schedule_datastore_verify_jobs() {
Some(ref event_str) => event_str.clone(),
None => continue,
};
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "verificationjob";
let last = match jobstate::last_run_time(worker_type, &job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, job_id, err);
continue;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
let job = match Job::new(worker_type, &job_id) {
let auth_id = Authid::backup_auth_id().clone();
if check_schedule(worker_type, &event_str, &job_id) {
let job = match Job::new(&worker_type, &job_id) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let userid = Userid::backup_userid().clone();
if let Err(err) = do_verification_job(job, job_config, &userid, Some(event_str)) {
if let Err(err) = do_verification_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore verification job {} - {}", &job_id, err);
}
};
}
}
async fn schedule_task_log_rotate() {
let worker_type = "logrotate";
let job_id = "task_archive";
let last = match jobstate::last_run_time(worker_type, job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of task log archive rotation: {}", err);
return;
}
};
let job_id = "access-log_and_task-archive";
// schedule daily at 00:00 like normal logrotate
let schedule = "00:00";
let event = match parse_calendar_event(schedule) {
Ok(event) => event,
Err(err) => {
// should not happen?
eprintln!("unable to parse schedule '{}' - {}", schedule, err);
return;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => return,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", schedule, err);
return;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now {
if !check_schedule(worker_type, schedule, job_id) {
// if we never ran the rotation, schedule instantly
match jobstate::JobState::load(worker_type, job_id) {
Ok(state) => match state {
@ -703,17 +559,16 @@ async fn schedule_task_log_rotate() {
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(job_id.to_string()),
Userid::backup_userid().clone(),
None,
Authid::backup_auth_id().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting task log rotation"));
let result = try_block!({
// rotate task log archive
let max_size = 500000; // a normal entry has about 100b, so ~ 5000 entries/file
let max_files = 20; // times twenty files gives at least 100000 task entries
let max_size = 512 * 1024 - 1; // an entry has ~ 100b, so > 5000 entries/file
let max_files = 20; // times twenty files gives > 100000 task entries
let has_rotated = rotate_task_log_archive(max_size, true, Some(max_files))?;
if has_rotated {
worker.log(format!("task log archive was rotated"));
@ -721,6 +576,28 @@ async fn schedule_task_log_rotate() {
worker.log(format!("task log archive was not rotated"));
}
let max_size = 32 * 1024 * 1024 - 1;
let max_files = 14;
let mut logrotate = LogRotate::new(buildcfg::API_ACCESS_LOG_FN, true)
.ok_or_else(|| format_err!("could not get API access log file names"))?;
if logrotate.rotate(max_size, None, Some(max_files))? {
println!("rotated access log, telling daemons to re-open log file");
proxmox_backup::tools::runtime::block_on(command_reopen_logfiles())?;
worker.log(format!("API access log was rotated"));
} else {
worker.log(format!("API access log was not rotated"));
}
let mut logrotate = LogRotate::new(buildcfg::API_AUTH_LOG_FN, true)
.ok_or_else(|| format_err!("could not get API auth log file names"))?;
if logrotate.rotate(max_size, None, Some(max_files))? {
worker.log(format!("API access log was rotated"));
} else {
worker.log(format!("API access log was not rotated"));
}
Ok(())
});
@ -738,6 +615,28 @@ async fn schedule_task_log_rotate() {
}
async fn command_reopen_logfiles() -> Result<(), Error> {
// We only care about the most recent daemon instance for each (proxy & api); older
// ones should not accept new requests anyway, they just finish their current ones and exit.
let sock = server::our_ctrl_sock();
let f1 = server::send_command(sock, serde_json::json!({
"command": "api-access-log-reopen",
}));
let pid = server::read_pid(buildcfg::PROXMOX_BACKUP_API_PID_FN)?;
let sock = server::ctrl_sock_from_pid(pid);
let f2 = server::send_command(sock, serde_json::json!({
"command": "api-access-log-reopen",
}));
match futures::join!(f1, f2) {
(Err(e1), Err(e2)) => Err(format_err!("reopen commands failed, proxy: {}; api: {}", e1, e2)),
(Err(e1), Ok(_)) => Err(format_err!("reopen commands failed, proxy: {}", e1)),
(Ok(_), Err(e2)) => Err(format_err!("reopen commands failed, api: {}", e2)),
_ => Ok(()),
}
}
async fn run_stat_generator() {
let mut count = 0;
@ -850,6 +749,36 @@ async fn generate_host_stats(save: bool) {
});
}
fn check_schedule(worker_type: &str, event_str: &str, id: &str) -> bool {
let event = match parse_calendar_event(event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
return false;
}
};
let last = match jobstate::last_run_time(worker_type, &id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, id, err);
return false;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => return false,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
return false;
}
};
let now = proxmox::tools::time::epoch_i64();
next <= now
}
fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, rrd_prefix: &str, save: bool) {
match proxmox_backup::tools::disks::disk_usage(path) {

View File

@ -0,0 +1,73 @@
use anyhow::Error;
use serde_json::{json, Value};
use proxmox::api::{cli::*, RpcEnvironment, ApiHandler};
use proxmox_backup::api2;
use proxmox_backup::tools::subscription;
async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {
let upid: proxmox_backup::server::UPID = upid_str.parse()?;
let sleep_duration = core::time::Duration::new(0, 100_000_000);
loop {
if !proxmox_backup::server::worker_is_active_local(&upid) {
break;
}
tokio::time::delay_for(sleep_duration).await;
}
Ok(())
}
/// Daily update
async fn do_update(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let param = json!({});
let method = &api2::node::subscription::API_METHOD_CHECK_SUBSCRIPTION;
let _res = match method.handler {
ApiHandler::Sync(handler) => (handler)(param, method, rpcenv)?,
_ => unreachable!(),
};
let notify = match subscription::read_subscription() {
Ok(Some(subscription)) => subscription.status == subscription::SubscriptionStatus::ACTIVE,
Ok(None) => false,
Err(err) => {
eprintln!("Error reading subscription - {}", err);
false
},
};
let param = json!({
"notify": notify,
});
let method = &api2::node::apt::API_METHOD_APT_UPDATE_DATABASE;
let upid = match method.handler {
ApiHandler::Sync(handler) => (handler)(param, method, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(upid.as_str().unwrap()).await?;
// TODO: certificate checks/renewal/... ?
// TODO: cleanup tasks like in PVE?
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
let mut rpcenv = CliEnvironment::new();
rpcenv.set_auth_id(Some(String::from("root@pam")));
match proxmox_backup::tools::runtime::main(do_update(&mut rpcenv)) {
Err(err) => {
eprintln!("error during update: {}", err);
std::process::exit(1);
},
_ => (),
}
}

View File

@ -225,7 +225,7 @@ async fn test_upload_speed(
let backup_time = proxmox::tools::time::epoch_i64();
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
record_repository(&repo);
if verbose { eprintln!("Connecting to backup server"); }

View File

@ -79,7 +79,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
}
};
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = BackupReader::start(
client,
@ -153,7 +153,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
/// Shell to interactively inspect and restore snapshots.
async fn catalog_shell(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = tools::required_string_param(&param, "snapshot")?;
let archive_name = tools::required_string_param(&param, "archive-name")?;

View File

@ -163,7 +163,7 @@ fn mount(
async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let target = param["target"].as_str();

View File

@ -48,7 +48,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false);
@ -57,7 +57,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
"running": running,
"start": 0,
"limit": limit,
"userfilter": repo.user(),
"userfilter": repo.auth_id(),
"store": repo.store(),
});
@ -96,7 +96,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid = tools::required_string_param(&param, "upid")?;
let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
display_task_log(client, upid, true).await?;
@ -122,7 +122,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid_str = tools::required_string_param(&param, "upid")?;
let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let _ = client.delete(&path, None).await?;

View File

@ -60,7 +60,7 @@ pub fn acl_commands() -> CommandLineInterface {
"update",
CliCommand::new(&api2::access::acl::API_METHOD_UPDATE_ACL)
.arg_param(&["path", "role"])
.completion_cb("userid", config::user::complete_user_name)
.completion_cb("auth-id", config::user::complete_authid)
.completion_cb("path", config::datastore::complete_acl_path)
);

View File

@ -14,5 +14,7 @@ mod sync;
pub use sync::*;
mod user;
pub use user::*;
mod subscription;
pub use subscription::*;
mod disk;
pub use disk::*;

View File

@ -0,0 +1,55 @@
use anyhow::Error;
use serde_json::Value;
use proxmox::api::{api, cli::*, RpcEnvironment, ApiHandler};
use proxmox_backup::api2;
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Read subscription info.
fn get(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let info = &api2::node::subscription::API_METHOD_GET_SUBSCRIPTION;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options();
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(Value::Null)
}
pub fn subscription_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
.insert("get", CliCommand::new(&API_METHOD_GET))
.insert("set",
CliCommand::new(&api2::node::subscription::API_METHOD_SET_SUBSCRIPTION)
.fixed_param("node", "localhost".into())
.arg_param(&["key"])
)
.insert("update",
CliCommand::new(&api2::node::subscription::API_METHOD_CHECK_SUBSCRIPTION)
.fixed_param("node", "localhost".into())
)
.insert("remove",
CliCommand::new(&api2::node::subscription::API_METHOD_DELETE_SUBSCRIPTION)
.fixed_param("node", "localhost".into())
)
;
cmd_def.into()
}

View File

@ -1,11 +1,14 @@
use anyhow::Error;
use serde_json::Value;
use std::collections::HashMap;
use proxmox::api::{api, cli::*, RpcEnvironment, ApiHandler};
use proxmox_backup::config;
use proxmox_backup::tools;
use proxmox_backup::api2;
use proxmox_backup::api2::types::{ACL_PATH_SCHEMA, Authid, Userid};
#[api(
input: {
@ -48,6 +51,106 @@ fn list_users(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Er
Ok(Value::Null)
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
userid: {
type: Userid,
}
}
}
)]
/// List tokens associated with user.
fn list_tokens(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let info = &api2::access::user::API_METHOD_LIST_TOKENS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("tokenid"))
.column(
ColumnConfig::new("enable")
.renderer(tools::format::render_bool_with_default_true)
)
.column(
ColumnConfig::new("expire")
.renderer(tools::format::render_epoch)
)
.column(ColumnConfig::new("comment"));
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
"auth-id": {
type: Authid,
},
path: {
schema: ACL_PATH_SCHEMA,
optional: true,
},
}
}
)]
/// List permissions of user/token.
fn list_permissions(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let info = &api2::access::API_METHOD_LIST_PERMISSIONS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
if output_format == "text" {
println!("Privileges with (*) have the propagate flag set\n");
let data:HashMap<String, HashMap<String, bool>> = serde_json::from_value(data)?;
let mut paths:Vec<String> = data.keys().cloned().collect();
paths.sort_unstable();
for path in paths {
println!("Path: {}", path);
let priv_map = data.get(&path).unwrap();
let mut privs:Vec<String> = priv_map.keys().cloned().collect();
if privs.is_empty() {
println!("- NoAccess");
} else {
privs.sort_unstable();
for privilege in privs {
if *priv_map.get(&privilege).unwrap() {
println!("- {} (*)", privilege);
} else {
println!("- {}", privilege);
}
}
}
}
} else {
format_and_print_result(&mut data, &output_format);
}
Ok(Value::Null)
}
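The text renderer above expects a two-level map of path -> privilege -> propagate flag; a sketch of that shape (paths and privilege names are illustrative only):

use serde_json::json;

// illustrative shape only - real paths/privileges come from the ACL config
let data = json!({
    "/datastore/store1": {
        "Datastore.Backup": true,  // propagate set -> printed with "(*)"
        "Datastore.Prune": false,
    },
});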
pub fn user_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
@ -62,13 +165,39 @@ pub fn user_commands() -> CommandLineInterface {
"update",
CliCommand::new(&api2::access::user::API_METHOD_UPDATE_USER)
.arg_param(&["userid"])
.completion_cb("userid", config::user::complete_user_name)
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"remove",
CliCommand::new(&api2::access::user::API_METHOD_DELETE_USER)
.arg_param(&["userid"])
.completion_cb("userid", config::user::complete_user_name)
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"list-tokens",
CliCommand::new(&API_METHOD_LIST_TOKENS)
.arg_param(&["userid"])
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"generate-token",
CliCommand::new(&api2::access::user::API_METHOD_GENERATE_TOKEN)
.arg_param(&["userid", "tokenname"])
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"delete-token",
CliCommand::new(&api2::access::user::API_METHOD_DELETE_TOKEN)
.arg_param(&["userid", "tokenname"])
.completion_cb("userid", config::user::complete_userid)
.completion_cb("tokenname", config::user::complete_token_name)
)
.insert(
"permissions",
CliCommand::new(&API_METHOD_LIST_PERMISSIONS)
.arg_param(&["auth-id"])
.completion_cb("auth-id", config::user::complete_authid)
.completion_cb("path", config::datastore::complete_acl_path)
);
cmd_def.into()

View File

@ -4,7 +4,31 @@
pub const CONFIGDIR: &str = "/etc/proxmox-backup";
pub const JS_DIR: &str = "/usr/share/javascript/proxmox-backup";
pub const API_ACCESS_LOG_FN: &str = "/var/log/proxmox-backup/api/access.log";
#[macro_export]
macro_rules! PROXMOX_BACKUP_RUN_DIR_M { () => ("/run/proxmox-backup") }
#[macro_export]
macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
/// namespaced directory for in-memory (tmpfs) run state
pub const PROXMOX_BACKUP_RUN_DIR: &str = PROXMOX_BACKUP_RUN_DIR_M!();
/// namespaced directory for persistent logging
pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
/// logfile for all API requests handled by the proxy and privileged API daemons. Note that not all
/// failed logins can be logged here with full information, use the auth log for that.
pub const API_ACCESS_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/access.log");
/// logfile for any failed authentication, via ticket or via token, and new successful ticket
/// creations. This file can be useful for fail2ban.
pub const API_AUTH_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/auth.log");
/// the PID filename for the unprivileged proxy daemon
pub const PROXMOX_BACKUP_PROXY_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/proxy.pid");
/// the PID filename for the privileged api daemon
pub const PROXMOX_BACKUP_API_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/api.pid");
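The `macro_rules!` indirection above is needed because `concat!` only accepts literals; a macro expanding to a string literal lets derived paths be built at compile time. A minimal sketch of the same pattern with hypothetical names:

// `concat!` rejects a plain `const`, but accepts a macro that expands to a literal
macro_rules! EXAMPLE_RUN_DIR_M { () => ("/run/example") }
pub const EXAMPLE_RUN_DIR: &str = EXAMPLE_RUN_DIR_M!();
pub const EXAMPLE_PID_FN: &str = concat!(EXAMPLE_RUN_DIR_M!(), "/daemon.pid");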
/// Prepend configuration directory to a file name
///

View File

@ -16,7 +16,7 @@ pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_RE
#[derive(Debug)]
pub struct BackupRepository {
/// The user name used for Authentication
user: Option<Userid>,
auth_id: Option<Authid>,
/// The host name or IP address
host: Option<String>,
/// The port
@ -27,20 +27,29 @@ pub struct BackupRepository {
impl BackupRepository {
pub fn new(user: Option<Userid>, host: Option<String>, port: Option<u16>, store: String) -> Self {
pub fn new(auth_id: Option<Authid>, host: Option<String>, port: Option<u16>, store: String) -> Self {
let host = match host {
Some(host) if (IP_V6_REGEX.regex_obj)().is_match(&host) => {
Some(format!("[{}]", host))
},
other => other,
};
Self { user, host, port, store }
Self { auth_id, host, port, store }
}
pub fn auth_id(&self) -> &Authid {
if let Some(ref auth_id) = self.auth_id {
return auth_id;
}
&Authid::root_auth_id()
}
pub fn user(&self) -> &Userid {
if let Some(ref user) = self.user {
return &user;
if let Some(auth_id) = &self.auth_id {
return auth_id.user();
}
Userid::root_userid()
}
@ -65,8 +74,8 @@ impl BackupRepository {
impl fmt::Display for BackupRepository {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match (&self.user, &self.host, self.port) {
(Some(user), _, _) => write!(f, "{}@{}:{}:{}", user, self.host(), self.port(), self.store),
match (&self.auth_id, &self.host, self.port) {
(Some(auth_id), _, _) => write!(f, "{}@{}:{}:{}", auth_id, self.host(), self.port(), self.store),
(None, Some(host), None) => write!(f, "{}:{}", host, self.store),
(None, _, Some(port)) => write!(f, "{}:{}:{}", self.host(), port, self.store),
(None, None, None) => write!(f, "{}", self.store),
@ -88,7 +97,7 @@ impl std::str::FromStr for BackupRepository {
.ok_or_else(|| format_err!("unable to parse repository url '{}'", url))?;
Ok(Self {
user: cap.get(1).map(|m| Userid::try_from(m.as_str().to_owned())).transpose()?,
auth_id: cap.get(1).map(|m| Authid::try_from(m.as_str().to_owned())).transpose()?,
host: cap.get(2).map(|m| m.as_str().to_owned()),
port: cap.get(3).map(|m| m.as_str().parse::<u16>()).transpose()?,
store: cap[4].to_owned(),
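Given the capture groups above, a few repository strings the parser should accept (a sketch; hosts and names are fabricated):

// "store1"                                     -> store only, local defaults
// "backup.example.com:store1"                  -> host + store
// "user@pbs@backup.example.com:8007:store1"    -> auth-id + host + port + store
// "user@pbs!mytoken@backup.example.com:store1" -> API token as auth-id
let repo: BackupRepository = "user@pbs@backup.example.com:8007:store1".parse()?;
assert_eq!(repo.port(), 8007);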

View File

@ -21,7 +21,7 @@ use proxmox::{
};
use super::pipe_to_stream::PipeToSendStream;
use crate::api2::types::Userid;
use crate::api2::types::{Authid, Userid};
use crate::tools::{
self,
BroadcastFuture,
@ -31,7 +31,7 @@ use crate::tools::{
#[derive(Clone)]
pub struct AuthInfo {
pub userid: Userid,
pub auth_id: Authid,
pub ticket: String,
pub token: String,
}
@ -102,7 +102,7 @@ pub struct HttpClient {
server: String,
port: u16,
fingerprint: Arc<Mutex<Option<String>>>,
first_auth: BroadcastFuture<()>,
first_auth: Option<BroadcastFuture<()>>,
auth: Arc<RwLock<AuthInfo>>,
ticket_abort: futures::future::AbortHandle,
_options: HttpClientOptions,
@ -251,7 +251,7 @@ impl HttpClient {
pub fn new(
server: &str,
port: u16,
userid: &Userid,
auth_id: &Authid,
mut options: HttpClientOptions,
) -> Result<Self, Error> {
@ -311,6 +311,11 @@ impl HttpClient {
let password = if let Some(password) = password {
password
} else {
let userid = if auth_id.is_token() {
bail!("API token secret must be provided!");
} else {
auth_id.user()
};
let mut ticket_info = None;
if use_ticket_cache {
ticket_info = load_ticket_info(options.prefix.as_ref().unwrap(), server, userid);
@ -323,7 +328,7 @@ impl HttpClient {
};
let auth = Arc::new(RwLock::new(AuthInfo {
userid: userid.clone(),
auth_id: auth_id.clone(),
ticket: password.clone(),
token: "".to_string(),
}));
@ -336,14 +341,14 @@ impl HttpClient {
let renewal_future = async move {
loop {
tokio::time::delay_for(Duration::new(60*15, 0)).await; // 15 minutes
let (userid, ticket) = {
let (auth_id, ticket) = {
let authinfo = auth2.read().unwrap().clone();
(authinfo.userid, authinfo.ticket)
(authinfo.auth_id, authinfo.ticket)
};
match Self::credentials(client2.clone(), server2.clone(), port, userid, ticket).await {
match Self::credentials(client2.clone(), server2.clone(), port, auth_id.user().clone(), ticket).await {
Ok(auth) => {
if use_ticket_cache && prefix2.is_some() {
let _ = store_ticket_info(prefix2.as_ref().unwrap(), &server2, &auth.userid.to_string(), &auth.ticket, &auth.token);
let _ = store_ticket_info(prefix2.as_ref().unwrap(), &server2, &auth.auth_id.to_string(), &auth.ticket, &auth.token);
}
*auth2.write().unwrap() = auth;
},
@ -361,7 +366,7 @@ impl HttpClient {
client.clone(),
server.to_owned(),
port,
userid.to_owned(),
auth_id.user().clone(),
password.to_owned(),
).map_ok({
let server = server.to_string();
@ -370,13 +375,20 @@ impl HttpClient {
move |auth| {
if use_ticket_cache && prefix.is_some() {
let _ = store_ticket_info(prefix.as_ref().unwrap(), &server, &auth.userid.to_string(), &auth.ticket, &auth.token);
let _ = store_ticket_info(prefix.as_ref().unwrap(), &server, &auth.auth_id.to_string(), &auth.ticket, &auth.token);
}
*authinfo.write().unwrap() = auth;
tokio::spawn(renewal_future);
}
});
let first_auth = if auth_id.is_token() {
// TODO check access here?
None
} else {
Some(BroadcastFuture::new(Box::new(login_future)))
};
Ok(Self {
client,
server: String::from(server),
@ -384,7 +396,7 @@ impl HttpClient {
fingerprint: verified_fingerprint,
auth,
ticket_abort,
first_auth: BroadcastFuture::new(Box::new(login_future)),
first_auth,
_options: options,
})
}
@ -393,8 +405,14 @@ impl HttpClient {
///
/// Login is done on demand, so this is only required if you need
/// access to authentication data in 'AuthInfo'.
///
/// Note: tickets are periodically renewed, so one can use this
/// to query the changed ticket.
pub async fn login(&self) -> Result<AuthInfo, Error> {
self.first_auth.listen().await?;
if let Some(future) = &self.first_auth {
future.listen().await?;
}
let authinfo = self.auth.read().unwrap();
Ok(authinfo.clone())
}
@ -477,10 +495,14 @@ impl HttpClient {
let client = self.client.clone();
let auth = self.login().await?;
if auth.auth_id.is_token() {
let enc_api_token = format!("PBSAPIToken {}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Authorization", HeaderValue::from_str(&enc_api_token).unwrap());
} else {
let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Cookie", HeaderValue::from_str(&enc_ticket).unwrap());
req.headers_mut().insert("CSRFPreventionToken", HeaderValue::from_str(&auth.token).unwrap());
}
Self::api_request(client, req).await
}
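On the wire the two branches look roughly like this (an assumed sketch of the header shapes; values fabricated):

// token authentication - a single Authorization header:
//   Authorization: PBSAPIToken user@pbs!mytoken:<percent-encoded secret>
// ticket authentication - cookie plus CSRF prevention token:
//   Cookie: PBSAuthCookie=<percent-encoded ticket>
//   CSRFPreventionToken: <CSRF token>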
@ -579,11 +601,18 @@ impl HttpClient {
protocol_name: String,
) -> Result<(H2Client, futures::future::AbortHandle), Error> {
let auth = self.login().await?;
let client = self.client.clone();
let auth = self.login().await?;
if auth.auth_id.is_token() {
let enc_api_token = format!("PBSAPIToken {}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Authorization", HeaderValue::from_str(&enc_api_token).unwrap());
} else {
let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Cookie", HeaderValue::from_str(&enc_ticket).unwrap());
req.headers_mut().insert("CSRFPreventionToken", HeaderValue::from_str(&auth.token).unwrap());
}
req.headers_mut().insert("UPGRADE", HeaderValue::from_str(&protocol_name).unwrap());
let resp = client.request(req).await?;
@ -636,7 +665,7 @@ impl HttpClient {
let req = Self::request_builder(&server, port, "POST", "/api2/json/access/ticket", Some(data))?;
let cred = Self::api_request(client, req).await?;
let auth = AuthInfo {
userid: cred["data"]["username"].as_str().unwrap().parse()?,
auth_id: cred["data"]["username"].as_str().unwrap().parse()?,
ticket: cred["data"]["ticket"].as_str().unwrap().to_owned(),
token: cred["data"]["CSRFPreventionToken"].as_str().unwrap().to_owned(),
};

View File

@ -103,7 +103,7 @@ async fn pull_index_chunks<I: IndexFile>(
let bytes = bytes.load(Ordering::SeqCst);
worker.log(format!("downloaded {} bytes ({} MiB/s)", bytes, (bytes as f64)/(1024.0*1024.0*elapsed)));
worker.log(format!("downloaded {} bytes ({:.2} MiB/s)", bytes, (bytes as f64)/(1024.0*1024.0*elapsed)));
Ok(())
}
@ -410,7 +410,8 @@ pub async fn pull_group(
list.sort_unstable_by(|a, b| a.backup_time.cmp(&b.backup_time));
let auth_info = client.login().await?;
client.login().await?; // make sure auth is complete
let fingerprint = client.fingerprint();
let last_sync = tgt_store.last_successful_backup(group)?;
@ -447,11 +448,14 @@ pub async fn pull_group(
if last_sync_time > backup_time { continue; }
}
// get updated auth_info (new tickets)
let auth_info = client.login().await?;
let options = HttpClientOptions::new()
.password(Some(auth_info.ticket.clone()))
.fingerprint(fingerprint.clone());
let new_client = HttpClient::new(src_repo.host(), src_repo.port(), src_repo.user(), options)?;
let new_client = HttpClient::new(src_repo.host(), src_repo.port(), src_repo.auth_id(), options)?;
let reader = BackupReader::start(
new_client,
@ -491,7 +495,7 @@ pub async fn pull_store(
src_repo: &BackupRepository,
tgt_store: Arc<DataStore>,
delete: bool,
userid: Userid,
auth_id: Authid,
) -> Result<(), Error> {
// explicit create shared lock to prevent GC on newly created chunks
@ -524,11 +528,11 @@ pub async fn pull_store(
for (groups_done, item) in list.into_iter().enumerate() {
let group = BackupGroup::new(&item.backup_type, &item.backup_id);
let (owner, _lock_guard) = tgt_store.create_locked_backup_group(&group, &userid)?;
let (owner, _lock_guard) = tgt_store.create_locked_backup_group(&group, &auth_id)?;
// permission check
if userid != owner { // only the owner is allowed to create additional snapshots
if auth_id != owner { // only the owner is allowed to create additional snapshots
worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})",
item.backup_type, item.backup_id, userid, owner));
item.backup_type, item.backup_id, auth_id, owner));
errors = true; // do not stop here, instead continue
} else if let Err(err) = pull_group(

View File

@ -21,6 +21,7 @@ pub mod datastore;
pub mod network;
pub mod remote;
pub mod sync;
pub mod token_shadow;
pub mod user;
pub mod verify;

View File

@ -1,5 +1,5 @@
use std::io::Write;
use std::collections::{HashMap, HashSet, BTreeMap, BTreeSet};
use std::collections::{HashMap, BTreeMap, BTreeSet};
use std::path::{PathBuf, Path};
use std::sync::{Arc, RwLock};
use std::str::FromStr;
@ -15,7 +15,7 @@ use proxmox::tools::{fs::replace_file, fs::CreateOptions};
use proxmox::constnamedbitmap;
use proxmox::api::{api, schema::*};
use crate::api2::types::Userid;
use crate::api2::types::{Authid,Userid};
// define Privilege bitfield
@ -26,14 +26,23 @@ constnamedbitmap! {
PRIV_SYS_MODIFY("Sys.Modify");
PRIV_SYS_POWER_MANAGEMENT("Sys.PowerManagement");
/// Datastore.Audit allows knowing about a datastore,
/// including reading the configuration entry and listing its contents
PRIV_DATASTORE_AUDIT("Datastore.Audit");
/// Datastore.Allocate allows creating or deleting datastores
PRIV_DATASTORE_ALLOCATE("Datastore.Allocate");
/// Datastore.Modify allows modifying a datastore and its contents
PRIV_DATASTORE_MODIFY("Datastore.Modify");
/// Datastore.Read allows reading arbitrary backup contents
PRIV_DATASTORE_READ("Datastore.Read");
/// Allows verifying a datastore
PRIV_DATASTORE_VERIFY("Datastore.Verify");
/// Datastore.Backup also requires backup ownership
/// Datastore.Backup allows Datastore.Read|Verify and creating new snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_BACKUP("Datastore.Backup");
/// Datastore.Prune also requires backup ownership
/// Datastore.Prune allows deleting snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_PRUNE("Datastore.Prune");
PRIV_PERMISSIONS_MODIFY("Permissions.Modify");
@ -41,7 +50,6 @@ constnamedbitmap! {
PRIV_REMOTE_AUDIT("Remote.Audit");
PRIV_REMOTE_MODIFY("Remote.Modify");
PRIV_REMOTE_READ("Remote.Read");
PRIV_REMOTE_PRUNE("Remote.Prune");
PRIV_SYS_CONSOLE("Sys.Console");
}
@ -64,12 +72,14 @@ pub const ROLE_DATASTORE_ADMIN: u64 =
PRIV_DATASTORE_AUDIT |
PRIV_DATASTORE_MODIFY |
PRIV_DATASTORE_READ |
PRIV_DATASTORE_VERIFY |
PRIV_DATASTORE_BACKUP |
PRIV_DATASTORE_PRUNE;
/// Datastore.Reader can read datastore content an do restore
/// Datastore.Reader can read/verify datastore content and do restore
pub const ROLE_DATASTORE_READER: u64 =
PRIV_DATASTORE_AUDIT |
PRIV_DATASTORE_VERIFY |
PRIV_DATASTORE_READ;
/// Datastore.Backup can do backup and restore, but no prune.
@ -93,14 +103,12 @@ PRIV_REMOTE_AUDIT;
pub const ROLE_REMOTE_ADMIN: u64 =
PRIV_REMOTE_AUDIT |
PRIV_REMOTE_MODIFY |
PRIV_REMOTE_READ |
PRIV_REMOTE_PRUNE;
PRIV_REMOTE_READ;
/// Remote.SyncOperator can do read and prune on the remote.
pub const ROLE_REMOTE_SYNC_OPERATOR: u64 =
PRIV_REMOTE_AUDIT |
PRIV_REMOTE_READ |
PRIV_REMOTE_PRUNE;
PRIV_REMOTE_READ;
pub const ROLE_NAME_NO_ACCESS: &str ="NoAccess";
@ -231,7 +239,7 @@ pub struct AclTree {
}
pub struct AclTreeNode {
pub users: HashMap<Userid, HashMap<String, bool>>,
pub users: HashMap<Authid, HashMap<String, bool>>,
pub groups: HashMap<String, HashMap<String, bool>>,
pub children: BTreeMap<String, AclTreeNode>,
}
@ -246,43 +254,43 @@ impl AclTreeNode {
}
}
pub fn extract_roles(&self, user: &Userid, all: bool) -> HashSet<String> {
let user_roles = self.extract_user_roles(user, all);
if !user_roles.is_empty() {
pub fn extract_roles(&self, auth_id: &Authid, all: bool) -> HashMap<String, bool> {
let user_roles = self.extract_user_roles(auth_id, all);
if !user_roles.is_empty() || auth_id.is_token() {
// user privs always override group privs
return user_roles
};
self.extract_group_roles(user, all)
self.extract_group_roles(auth_id.user(), all)
}
pub fn extract_user_roles(&self, user: &Userid, all: bool) -> HashSet<String> {
pub fn extract_user_roles(&self, auth_id: &Authid, all: bool) -> HashMap<String, bool> {
let mut set = HashSet::new();
let mut map = HashMap::new();
let roles = match self.users.get(user) {
let roles = match self.users.get(auth_id) {
Some(m) => m,
None => return set,
None => return map,
};
for (role, propagate) in roles {
if *propagate || all {
if role == ROLE_NAME_NO_ACCESS {
// return a set with a single role 'NoAccess'
let mut set = HashSet::new();
set.insert(role.to_string());
return set;
// return a map with a single role 'NoAccess'
let mut map = HashMap::new();
map.insert(role.to_string(), false);
return map;
}
set.insert(role.to_string());
map.insert(role.to_string(), *propagate);
}
}
set
map
}
pub fn extract_group_roles(&self, _user: &Userid, all: bool) -> HashSet<String> {
pub fn extract_group_roles(&self, _user: &Userid, all: bool) -> HashMap<String, bool> {
let mut set = HashSet::new();
let mut map = HashMap::new();
for (_group, roles) in &self.groups {
let is_member = false; // fixme: check if user is member of the group
@ -291,17 +299,17 @@ impl AclTreeNode {
for (role, propagate) in roles {
if *propagate || all {
if role == ROLE_NAME_NO_ACCESS {
// return a set with a single role 'NoAccess'
let mut set = HashSet::new();
set.insert(role.to_string());
return set;
// return a map with a single role 'NoAccess'
let mut map = HashMap::new();
map.insert(role.to_string(), false);
return map;
}
set.insert(role.to_string());
map.insert(role.to_string(), *propagate);
}
}
}
set
map
}
pub fn delete_group_role(&mut self, group: &str, role: &str) {
@ -312,8 +320,8 @@ impl AclTreeNode {
roles.remove(role);
}
pub fn delete_user_role(&mut self, userid: &Userid, role: &str) {
let roles = match self.users.get_mut(userid) {
pub fn delete_user_role(&mut self, auth_id: &Authid, role: &str) {
let roles = match self.users.get_mut(auth_id) {
Some(r) => r,
None => return,
};
@ -331,8 +339,8 @@ impl AclTreeNode {
}
}
pub fn insert_user_role(&mut self, user: Userid, role: String, propagate: bool) {
let map = self.users.entry(user).or_insert_with(|| HashMap::new());
pub fn insert_user_role(&mut self, auth_id: Authid, role: String, propagate: bool) {
let map = self.users.entry(auth_id).or_insert_with(|| HashMap::new());
if role == ROLE_NAME_NO_ACCESS {
map.clear();
map.insert(role, propagate);
@ -346,7 +354,9 @@ impl AclTreeNode {
impl AclTree {
pub fn new() -> Self {
Self { root: AclTreeNode::new() }
Self {
root: AclTreeNode::new(),
}
}
pub fn find_node(&mut self, path: &str) -> Option<&mut AclTreeNode> {
@ -383,13 +393,13 @@ impl AclTree {
node.delete_group_role(group, role);
}
pub fn delete_user_role(&mut self, path: &str, userid: &Userid, role: &str) {
pub fn delete_user_role(&mut self, path: &str, auth_id: &Authid, role: &str) {
let path = split_acl_path(path);
let node = match self.get_node(&path) {
Some(n) => n,
None => return,
};
node.delete_user_role(userid, role);
node.delete_user_role(auth_id, role);
}
pub fn insert_group_role(&mut self, path: &str, group: &str, role: &str, propagate: bool) {
@ -398,10 +408,10 @@ impl AclTree {
node.insert_group_role(group.to_string(), role.to_string(), propagate);
}
pub fn insert_user_role(&mut self, path: &str, user: &Userid, role: &str, propagate: bool) {
pub fn insert_user_role(&mut self, path: &str, auth_id: &Authid, role: &str, propagate: bool) {
let path = split_acl_path(path);
let node = self.get_or_insert_node(&path);
node.insert_user_role(user.to_owned(), role.to_string(), propagate);
node.insert_user_role(auth_id.to_owned(), role.to_string(), propagate);
}
fn write_node_config(
@ -413,18 +423,18 @@ impl AclTree {
let mut role_ug_map0 = HashMap::new();
let mut role_ug_map1 = HashMap::new();
for (user, roles) in &node.users {
for (auth_id, roles) in &node.users {
// no need to save, because root is always 'Administrator'
if user == "root@pam" { continue; }
if !auth_id.is_token() && auth_id.user() == "root@pam" { continue; }
for (role, propagate) in roles {
let role = role.as_str();
let user = user.to_string();
let auth_id = auth_id.to_string();
if *propagate {
role_ug_map1.entry(role).or_insert_with(|| BTreeSet::new())
.insert(user);
.insert(auth_id);
} else {
role_ug_map0.entry(role).or_insert_with(|| BTreeSet::new())
.insert(user);
.insert(auth_id);
}
}
}
@ -512,7 +522,8 @@ impl AclTree {
bail!("expected '0' or '1' for propagate flag.");
};
let path = split_acl_path(items[2]);
let path_str = items[2];
let path = split_acl_path(path_str);
let node = self.get_or_insert_node(&path);
let uglist: Vec<&str> = items[3].split(',').map(|v| v.trim()).collect();
@ -576,25 +587,26 @@ impl AclTree {
Ok(tree)
}
pub fn roles(&self, userid: &Userid, path: &[&str]) -> HashSet<String> {
pub fn roles(&self, auth_id: &Authid, path: &[&str]) -> HashMap<String, bool> {
let mut node = &self.root;
let mut role_set = node.extract_roles(userid, path.is_empty());
let mut role_map = node.extract_roles(auth_id, path.is_empty());
for (pos, comp) in path.iter().enumerate() {
let last_comp = (pos + 1) == path.len();
node = match node.children.get(*comp) {
Some(n) => n,
None => return role_set, // path not found
None => return role_map, // path not found
};
let new_set = node.extract_roles(userid, last_comp);
if !new_set.is_empty() {
// overwrite previous settings
role_set = new_set;
let new_map = node.extract_roles(auth_id, last_comp);
if !new_map.is_empty() {
// overwrite previous mappings
role_map = new_map;
}
}
role_set
role_map
}
}
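A short sketch of how the path walk resolves roles, using the insert API from this file (user and paths hypothetical):

let mut tree = AclTree::new();
let auth_id: Authid = "user1@pbs".parse()?;
tree.insert_user_role("/storage", &auth_id, "Admin", true);            // propagates
tree.insert_user_role("/storage/store1", &auth_id, "NoAccess", false); // leaf only
// deeper, non-empty role maps override shallower ones along the path:
let map = tree.roles(&auth_id, &["storage", "store1"]);
assert!(map.contains_key("NoAccess"));
let map = tree.roles(&auth_id, &["storage", "store2"]);
assert!(map.contains_key("Admin")); // inherited via the propagate flag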
@ -675,22 +687,22 @@ mod test {
use anyhow::{Error};
use super::AclTree;
use crate::api2::types::Userid;
use crate::api2::types::Authid;
fn check_roles(
tree: &AclTree,
user: &Userid,
auth_id: &Authid,
path: &str,
expected_roles: &str,
) {
let path_vec = super::split_acl_path(path);
let mut roles = tree.roles(user, &path_vec)
.iter().map(|v| v.clone()).collect::<Vec<String>>();
let mut roles = tree.roles(auth_id, &path_vec)
.iter().map(|(v, _)| v.clone()).collect::<Vec<String>>();
roles.sort();
let roles = roles.join(",");
assert_eq!(roles, expected_roles, "\nat check_roles for '{}' on '{}'", user, path);
assert_eq!(roles, expected_roles, "\nat check_roles for '{}' on '{}'", auth_id, path);
}
#[test]
@ -721,13 +733,13 @@ acl:1:/storage:user1@pbs:Admin
acl:1:/storage/store1:user1@pbs:DatastoreBackup
acl:1:/storage/store2:user2@pbs:DatastoreBackup
"###)?;
let user1: Userid = "user1@pbs".parse()?;
let user1: Authid = "user1@pbs".parse()?;
check_roles(&tree, &user1, "/", "");
check_roles(&tree, &user1, "/storage", "Admin");
check_roles(&tree, &user1, "/storage/store1", "DatastoreBackup");
check_roles(&tree, &user1, "/storage/store2", "Admin");
let user2: Userid = "user2@pbs".parse()?;
let user2: Authid = "user2@pbs".parse()?;
check_roles(&tree, &user2, "/", "");
check_roles(&tree, &user2, "/storage", "");
check_roles(&tree, &user2, "/storage/store1", "");
@ -744,7 +756,7 @@ acl:1:/:user1@pbs:Admin
acl:1:/storage:user1@pbs:NoAccess
acl:1:/storage/store1:user1@pbs:DatastoreBackup
"###)?;
let user1: Userid = "user1@pbs".parse()?;
let user1: Authid = "user1@pbs".parse()?;
check_roles(&tree, &user1, "/", "Admin");
check_roles(&tree, &user1, "/storage", "NoAccess");
check_roles(&tree, &user1, "/storage/store1", "DatastoreBackup");
@ -770,7 +782,7 @@ acl:1:/storage/store1:user1@pbs:DatastoreBackup
let mut tree = AclTree::new();
let user1: Userid = "user1@pbs".parse()?;
let user1: Authid = "user1@pbs".parse()?;
tree.insert_user_role("/", &user1, "Admin", true);
tree.insert_user_role("/", &user1, "Audit", true);
@ -794,7 +806,7 @@ acl:1:/storage/store1:user1@pbs:DatastoreBackup
let mut tree = AclTree::new();
let user1: Userid = "user1@pbs".parse()?;
let user1: Authid = "user1@pbs".parse()?;
tree.insert_user_role("/storage", &user1, "NoAccess", true);

View File

@ -9,10 +9,10 @@ use lazy_static::lazy_static;
use proxmox::api::UserInformation;
use super::acl::{AclTree, ROLE_NAMES, ROLE_ADMIN};
use super::user::User;
use crate::api2::types::Userid;
use super::user::{ApiToken, User};
use crate::api2::types::{Authid, Userid};
/// Cache User/Group/Acl configuration data for fast permission tests
/// Cache User/Group/Token/Acl configuration data for fast permission tests
pub struct CachedUserInfo {
user_cfg: Arc<SectionConfigData>,
acl_tree: Arc<AclTree>,
@ -57,8 +57,18 @@ impl CachedUserInfo {
Ok(config)
}
/// Test if a user account is enabled and not expired
pub fn is_active_user(&self, userid: &Userid) -> bool {
#[cfg(test)]
pub(crate) fn test_new(user_cfg: SectionConfigData, acl_tree: AclTree) -> Self {
Self {
user_cfg: Arc::new(user_cfg),
acl_tree: Arc::new(acl_tree),
}
}
/// Test if an authentication ID is enabled and not expired
pub fn is_active_auth_id(&self, auth_id: &Authid) -> bool {
let userid = auth_id.user();
if let Ok(info) = self.user_cfg.lookup::<User>("user", userid.as_str()) {
if !info.enable.unwrap_or(true) {
return false;
@ -68,24 +78,41 @@ impl CachedUserInfo {
return false;
}
}
} else {
return false;
}
if auth_id.is_token() {
if let Ok(info) = self.user_cfg.lookup::<ApiToken>("token", &auth_id.to_string()) {
if !info.enable.unwrap_or(true) {
return false;
}
if let Some(expire) = info.expire {
if expire > 0 && expire <= now() {
return false;
}
}
return true;
} else {
return false;
}
}
return true;
}
pub fn check_privs(
&self,
userid: &Userid,
auth_id: &Authid,
path: &[&str],
required_privs: u64,
partial: bool,
) -> Result<(), Error> {
let user_privs = self.lookup_privs(&userid, path);
let privs = self.lookup_privs(&auth_id, path);
let allowed = if partial {
(user_privs & required_privs) != 0
(privs & required_privs) != 0
} else {
(user_privs & required_privs) == required_privs
(privs & required_privs) == required_privs
};
if !allowed {
// printing the path doesn't leak any information as long as we
@ -95,29 +122,48 @@ impl CachedUserInfo {
Ok(())
}
pub fn is_superuser(&self, userid: &Userid) -> bool {
userid == "root@pam"
pub fn is_superuser(&self, auth_id: &Authid) -> bool {
!auth_id.is_token() && auth_id.user() == "root@pam"
}
pub fn is_group_member(&self, _userid: &Userid, _group: &str) -> bool {
false
}
pub fn lookup_privs(&self, userid: &Userid, path: &[&str]) -> u64 {
if self.is_superuser(userid) {
return ROLE_ADMIN;
pub fn lookup_privs(&self, auth_id: &Authid, path: &[&str]) -> u64 {
let (privs, _) = self.lookup_privs_details(auth_id, path);
privs
}
let roles = self.acl_tree.roles(userid, path);
pub fn lookup_privs_details(&self, auth_id: &Authid, path: &[&str]) -> (u64, u64) {
if self.is_superuser(auth_id) {
return (ROLE_ADMIN, ROLE_ADMIN);
}
let roles = self.acl_tree.roles(auth_id, path);
let mut privs: u64 = 0;
for role in roles {
let mut propagated_privs: u64 = 0;
for (role, propagate) in roles {
if let Some((role_privs, _)) = ROLE_NAMES.get(role.as_str()) {
if propagate {
propagated_privs |= role_privs;
}
privs |= role_privs;
}
}
privs
if auth_id.is_token() {
// limit privs to that of owning user
let user_auth_id = Authid::from(auth_id.user().clone());
privs &= self.lookup_privs(&user_auth_id, path);
let (owner_privs, owner_propagated_privs) = self.lookup_privs_details(&user_auth_id, path);
privs &= owner_privs;
propagated_privs &= owner_propagated_privs;
}
(privs, propagated_privs)
}
}
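The token clamp above is a plain bitwise intersection; a worked example with made-up privilege bits:

// made-up bit values - a token can never exceed its owning user:
let token_privs: u64 = 0b0111; // from the token's own ACL entries
let owner_privs: u64 = 0b0101; // owner's effective privileges on the path
assert_eq!(token_privs & owner_privs, 0b0101);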
impl UserInformation for CachedUserInfo {
@ -129,9 +175,9 @@ impl UserInformation for CachedUserInfo {
false
}
fn lookup_privs(&self, userid: &str, path: &[&str]) -> u64 {
match userid.parse::<Userid>() {
Ok(userid) => Self::lookup_privs(self, &userid, path),
fn lookup_privs(&self, auth_id: &str, path: &[&str]) -> u64 {
match auth_id.parse::<Authid>() {
Ok(auth_id) => Self::lookup_privs(self, &auth_id, path),
Err(_) => 0,
}
}

View File

@ -32,6 +32,14 @@ pub const DIR_NAME_SCHEMA: Schema = StringSchema::new("Directory name").schema()
path: {
schema: DIR_NAME_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
type: Notify,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
@ -101,6 +109,12 @@ pub struct DataStoreConfig {
/// If enabled, all backups will be verified right after completion.
#[serde(skip_serializing_if="Option::is_none")]
pub verify_new: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if="Option::is_none")]
pub notify_user: Option<Userid>,
/// Send notification only for job errors
#[serde(skip_serializing_if="Option::is_none")]
pub notify: Option<Notify>,
}
fn init() -> SectionConfig {

View File

@ -57,38 +57,57 @@ lazy_static! {
}
pub fn parse_cidr(cidr: &str) -> Result<(String, u8, bool), Error> {
let (address, mask, is_v6) = parse_address_or_cidr(cidr)?;
if let Some(mask) = mask {
return Ok((address, mask, is_v6));
} else {
bail!("missing netmask in '{}'", cidr);
}
}
pub fn check_netmask(mask: u8, is_v6: bool) -> Result<(), Error> {
if is_v6 {
if !(mask >= 1 && mask <= 128) {
bail!("IPv6 mask '{}' is out of range (1..128).", mask);
}
} else {
if !(mask > 0 && mask <= 32) {
bail!("IPv4 mask '{}' is out of range (1..32).", mask);
}
}
Ok(())
}
// parse IP address with optional CIDR mask
pub fn parse_address_or_cidr(cidr: &str) -> Result<(String, Option<u8>, bool), Error> {
lazy_static! {
pub static ref CIDR_V4_REGEX: Regex = Regex::new(
concat!(r"^(", IPV4RE!(), r")(?:/(\d{1,2}))$")
concat!(r"^(", IPV4RE!(), r")(?:/(\d{1,2}))?$")
).unwrap();
pub static ref CIDR_V6_REGEX: Regex = Regex::new(
concat!(r"^(", IPV6RE!(), r")(?:/(\d{1,3}))$")
concat!(r"^(", IPV6RE!(), r")(?:/(\d{1,3}))?$")
).unwrap();
}
if let Some(caps) = CIDR_V4_REGEX.captures(&cidr) {
let address = &caps[1];
let mask = &caps[2];
let mask = u8::from_str_radix(mask, 10)
.map(|mask| {
if !(mask > 0 && mask <= 32) {
bail!("IPv4 mask '{}' is out of range (1..32).", mask);
if let Some(mask) = caps.get(2) {
let mask = u8::from_str_radix(mask.as_str(), 10)?;
check_netmask(mask, false)?;
return Ok((address.to_string(), Some(mask), false));
} else {
return Ok((address.to_string(), None, false));
}
Ok(mask)
})?;
return Ok((address.to_string(), mask.unwrap(), false));
} else if let Some(caps) = CIDR_V6_REGEX.captures(&cidr) {
let address = &caps[1];
let mask = &caps[2];
let mask = u8::from_str_radix(mask, 10)
.map(|mask| {
if !(mask >= 1 && mask <= 128) {
bail!("IPv6 mask '{}' is out of range (1..128).", mask);
if let Some(mask) = caps.get(2) {
let mask = u8::from_str_radix(mask.as_str(), 10)?;
check_netmask(mask, true)?;
return Ok((address.to_string(), Some(mask), true));
} else {
return Ok((address.to_string(), None, true));
}
Ok(mask)
})?;
return Ok((address.to_string(), mask.unwrap(), true));
} else {
bail!("invalid address/mask '{}'", cidr);
}
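A few inputs and results for the relaxed parser above (a sketch; `parse_address_or_cidr` yields `(address, optional mask, is_v6)`):

assert_eq!(parse_address_or_cidr("192.0.2.10/24")?, ("192.0.2.10".to_string(), Some(24), false));
assert_eq!(parse_address_or_cidr("192.0.2.10")?, ("192.0.2.10".to_string(), None, false));
assert_eq!(parse_address_or_cidr("fe80::1/64")?, ("fe80::1".to_string(), Some(64), true));
// parse_cidr() still requires an explicit mask:
assert!(parse_cidr("192.0.2.10").is_err());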

View File

@ -86,20 +86,35 @@ impl <R: BufRead> NetworkParser<R> {
Ok(())
}
fn parse_iface_address(&mut self, interface: &mut Interface) -> Result<(), Error> {
self.eat(Token::Address)?;
let cidr = self.next_text()?;
fn parse_netmask(&mut self) -> Result<u8, Error> {
self.eat(Token::Netmask)?;
let netmask = self.next_text()?;
let (_address, _mask, ipv6) = parse_cidr(&cidr)?;
if ipv6 {
interface.set_cidr_v6(cidr)?;
let mask = if let Some(mask) = IPV4_MASK_HASH_LOCALNET.get(netmask.as_str()) {
*mask
} else {
interface.set_cidr_v4(cidr)?;
match u8::from_str_radix(netmask.as_str(), 10) {
Ok(mask) => mask,
Err(err) => {
bail!("unable to parse netmask '{}' - {}", netmask, err);
}
}
};
self.eat(Token::Newline)?;
Ok(())
Ok(mask)
}
fn parse_iface_address(&mut self) -> Result<(String, Option<u8>, bool), Error> {
self.eat(Token::Address)?;
let cidr = self.next_text()?;
let (_address, mask, ipv6) = parse_address_or_cidr(&cidr)?;
self.eat(Token::Newline)?;
Ok((cidr, mask, ipv6))
}
fn parse_iface_gateway(&mut self, interface: &mut Interface) -> Result<(), Error> {
@ -191,6 +206,9 @@ impl <R: BufRead> NetworkParser<R> {
address_family_v6: bool,
) -> Result<(), Error> {
let mut netmask = None;
let mut address_list = Vec::new();
loop {
match self.peek()? {
Token::Attribute => { self.eat(Token::Attribute)?; },
@ -214,8 +232,15 @@ impl <R: BufRead> NetworkParser<R> {
}
match self.peek()? {
Token::Address => self.parse_iface_address(interface)?,
Token::Address => {
let (cidr, mask, is_v6) = self.parse_iface_address()?;
address_list.push((cidr, mask, is_v6));
}
Token::Gateway => self.parse_iface_gateway(interface)?,
Token::Netmask => {
// Note: netmask is deprecated, but we try to do our best
netmask = Some(self.parse_netmask()?);
}
Token::MTU => {
let mtu = self.parse_iface_mtu()?;
interface.mtu = Some(mtu);
@ -255,8 +280,6 @@ impl <R: BufRead> NetworkParser<R> {
interface.bond_xmit_hash_policy = Some(policy);
self.eat(Token::Newline)?;
}
Token::Netmask => bail!("netmask is deprecated and no longer supported"),
_ => { // parse addon attributes
let option = self.parse_to_eol()?;
if !option.is_empty() {
@ -270,6 +293,38 @@ impl <R: BufRead> NetworkParser<R> {
}
}
if let Some(netmask) = netmask {
if address_list.len() > 1 {
bail!("unable to apply netmask to multiple addresses (please use cidr notation)");
} else if address_list.len() == 1 {
let (mut cidr, mask, is_v6) = address_list.pop().unwrap();
if mask.is_some() {
// address already has a mask - ignore netmask
} else {
check_netmask(netmask, is_v6)?;
cidr.push_str(&format!("/{}", netmask));
}
if is_v6 {
interface.set_cidr_v6(cidr)?;
} else {
interface.set_cidr_v4(cidr)?;
}
} else {
// no address - simply ignore useless netmask
}
} else {
for (cidr, mask, is_v6) in address_list {
if mask.is_none() {
bail!("missing netmask in '{}'", cidr);
}
if is_v6 {
interface.set_cidr_v6(cidr)?;
} else {
interface.set_cidr_v4(cidr)?;
}
}
}
Ok(())
}
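For reference, the deprecated form this now tolerates next to its CIDR equivalent (an illustrative /etc/network/interfaces fragment):

# legacy style - the netmask applies to the single address above it
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0

# modern style the parser normalizes this to
iface eth0 inet static
    address 192.0.2.10/24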

View File

@ -45,7 +45,7 @@ pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth t
type: u16,
},
userid: {
type: Userid,
type: Authid,
},
password: {
schema: REMOTE_PASSWORD_SCHEMA,
@ -65,7 +65,7 @@ pub struct Remote {
pub host: String,
#[serde(skip_serializing_if="Option::is_none")]
pub port: Option<u16>,
pub userid: Userid,
pub userid: Authid,
#[serde(skip_serializing_if="String::is_empty")]
#[serde(with = "proxmox::tools::serde::string_as_base64")]
pub password: String,

View File

@ -21,7 +21,6 @@ lazy_static! {
static ref CONFIG: SectionConfig = init();
}
#[api(
properties: {
id: {
@ -30,6 +29,10 @@ lazy_static! {
store: {
schema: DATASTORE_SCHEMA,
},
"owner": {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
@ -51,11 +54,13 @@ lazy_static! {
}
)]
#[serde(rename_all="kebab-case")]
#[derive(Serialize,Deserialize)]
#[derive(Serialize,Deserialize,Clone)]
/// Sync Job
pub struct SyncJobConfig {
pub id: String,
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Authid>,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]
@ -66,6 +71,21 @@ pub struct SyncJobConfig {
pub schedule: Option<String>,
}
impl From<&SyncJobStatus> for SyncJobConfig {
fn from(job_status: &SyncJobStatus) -> Self {
Self {
id: job_status.id.clone(),
store: job_status.store.clone(),
owner: job_status.owner.clone(),
remote: job_status.remote.clone(),
remote_store: job_status.remote_store.clone(),
remove_vanished: job_status.remove_vanished.clone(),
comment: job_status.comment.clone(),
schedule: job_status.schedule.clone(),
}
}
}
// FIXME: generate duplicate schemas/structs from one listing?
#[api(
properties: {
@ -75,6 +95,10 @@ pub struct SyncJobConfig {
store: {
schema: DATASTORE_SCHEMA,
},
owner: {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
@ -121,6 +145,8 @@ pub struct SyncJobConfig {
pub struct SyncJobStatus {
pub id: String,
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Authid>,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]

View File

@ -0,0 +1,91 @@
use std::collections::HashMap;
use std::time::Duration;
use anyhow::{bail, format_err, Error};
use serde::{Serialize, Deserialize};
use serde_json::{from_value, Value};
use proxmox::tools::fs::{open_file_locked, CreateOptions};
use crate::api2::types::Authid;
use crate::auth;
const LOCK_FILE: &str = configdir!("/token.shadow.lock");
const CONF_FILE: &str = configdir!("/token.shadow");
const LOCK_TIMEOUT: Duration = Duration::from_secs(5);
#[serde(rename_all="kebab-case")]
#[derive(Serialize, Deserialize)]
/// ApiToken id / secret pair
pub struct ApiTokenSecret {
pub tokenid: Authid,
pub secret: String,
}
fn read_file() -> Result<HashMap<Authid, String>, Error> {
let json = proxmox::tools::fs::file_get_json(CONF_FILE, Some(Value::Null))?;
if json == Value::Null {
Ok(HashMap::new())
} else {
// swallow serde error which might contain sensitive data
from_value(json).map_err(|_err| format_err!("unable to parse '{}'", CONF_FILE))
}
}
fn write_file(data: HashMap<Authid, String>) -> Result<(), Error> {
let backup_user = crate::backup::backup_user()?;
let options = CreateOptions::new()
.perm(nix::sys::stat::Mode::from_bits_truncate(0o0640))
.owner(backup_user.uid)
.group(backup_user.gid);
let json = serde_json::to_vec(&data)?;
proxmox::tools::fs::replace_file(CONF_FILE, &json, options)
}
/// Verifies that an entry for given tokenid / API token secret exists
pub fn verify_secret(tokenid: &Authid, secret: &str) -> Result<(), Error> {
if !tokenid.is_token() {
bail!("not an API token ID");
}
let data = read_file()?;
match data.get(tokenid) {
Some(hashed_secret) => {
auth::verify_crypt_pw(secret, &hashed_secret)
},
None => bail!("invalid API token"),
}
}
/// Adds a new entry for the given tokenid / API token secret. The secret is stored as salted hash.
pub fn set_secret(tokenid: &Authid, secret: &str) -> Result<(), Error> {
if !tokenid.is_token() {
bail!("not an API token ID");
}
let _guard = open_file_locked(LOCK_FILE, LOCK_TIMEOUT, true)?;
let mut data = read_file()?;
let hashed_secret = auth::encrypt_pw(secret)?;
data.insert(tokenid.clone(), hashed_secret);
write_file(data)?;
Ok(())
}
/// Deletes the entry for the given tokenid.
pub fn delete_secret(tokenid: &Authid) -> Result<(), Error> {
if !tokenid.is_token() {
bail!("not an API token ID");
}
let _guard = open_file_locked(LOCK_FILE, LOCK_TIMEOUT, true)?;
let mut data = read_file()?;
data.remove(tokenid);
write_file(data)?;
Ok(())
}
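A minimal usage sketch, assuming the module API above (ids and secret fabricated):

let tokenid: Authid = "user@pbs!mytoken".parse()?;
token_shadow::set_secret(&tokenid, "s3cr3t")?;    // stores only a salted hash
token_shadow::verify_secret(&tokenid, "s3cr3t")?; // Ok(()) on a match
token_shadow::delete_secret(&tokenid)?;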

View File

@ -52,6 +52,36 @@ pub const EMAIL_SCHEMA: Schema = StringSchema::new("E-Mail Address.")
.max_length(64)
.schema();
#[api(
properties: {
tokenid: {
schema: PROXMOX_TOKEN_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize)]
/// ApiToken properties.
pub struct ApiToken {
pub tokenid: Authid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
}
#[api(
properties: {
@ -103,15 +133,21 @@ pub struct User {
}
fn init() -> SectionConfig {
let obj_schema = match User::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
let mut config = SectionConfig::new(&Authid::API_SCHEMA);
let user_schema = match User::API_SCHEMA {
Schema::Object(ref user_schema) => user_schema,
_ => unreachable!(),
};
let user_plugin = SectionConfigPlugin::new("user".to_string(), Some("userid".to_string()), user_schema);
config.register_plugin(user_plugin);
let plugin = SectionConfigPlugin::new("user".to_string(), Some("userid".to_string()), obj_schema);
let mut config = SectionConfig::new(&Userid::API_SCHEMA);
config.register_plugin(plugin);
let token_schema = match ApiToken::API_SCHEMA {
Schema::Object(ref token_schema) => token_schema,
_ => unreachable!(),
};
let token_plugin = SectionConfigPlugin::new("token".to_string(), Some("tokenid".to_string()), token_schema);
config.register_plugin(token_plugin);
config
}
@ -205,10 +241,66 @@ pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
Ok(())
}
#[cfg(test)]
pub(crate) fn test_cfg_from_str(raw: &str) -> Result<(SectionConfigData, [u8;32]), Error> {
let cfg = init();
let parsed = cfg.parse("test_user_cfg", raw)?;
Ok((parsed, [0;32]))
}
// shell completion helper
pub fn complete_user_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
pub fn complete_userid(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Ok((data, _digest)) => {
data.sections.iter()
.filter_map(|(id, (section_type, _))| {
if section_type == "user" {
Some(id.to_string())
} else {
None
}
}).collect()
},
Err(_) => return vec![],
}
}
// shell completion helper
pub fn complete_authid(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => vec![],
}
}
// shell completion helper
pub fn complete_token_name(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
let data = match config() {
Ok((data, _digest)) => data,
Err(_) => return Vec::new(),
};
match param.get("userid") {
Some(userid) => {
let user = data.lookup::<User>("user", userid);
let tokens = data.convert_to_typed_array("token");
match (user, tokens) {
(Ok(_), Ok(tokens)) => {
tokens
.into_iter()
.filter_map(|token: ApiToken| {
let tokenid = token.tokenid;
if tokenid.is_token() && tokenid.user() == userid {
Some(tokenid.tokenname().unwrap().as_str().to_string())
} else {
None
}
}).collect()
},
_ => vec![],
}
},
None => vec![],
}
}

View File

@ -9,7 +9,7 @@ use std::ffi::CStr;
pub trait BackupCatalogWriter {
fn start_directory(&mut self, name: &CStr) -> Result<(), Error>;
fn end_directory(&mut self) -> Result<(), Error>;
fn add_file(&mut self, name: &CStr, size: u64, mtime: u64) -> Result<(), Error>;
fn add_file(&mut self, name: &CStr, size: u64, mtime: i64) -> Result<(), Error>;
fn add_symlink(&mut self, name: &CStr) -> Result<(), Error>;
fn add_hardlink(&mut self, name: &CStr) -> Result<(), Error>;
fn add_block_device(&mut self, name: &CStr) -> Result<(), Error>;

View File

@ -535,7 +535,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
let file_size = stat.st_size as u64;
if let Some(ref mut catalog) = self.catalog {
catalog.add_file(c_file_name, file_size, stat.st_mtime as u64)?;
catalog.add_file(c_file_name, file_size, stat.st_mtime)?;
}
let offset: LinkOffset =

View File

@ -4,6 +4,47 @@
//! services. We want async IO, so this is built on top of
//! tokio/hyper.
use anyhow::{format_err, Error};
use lazy_static::lazy_static;
use nix::unistd::Pid;
use proxmox::sys::linux::procfs::PidStat;
use crate::buildcfg;
lazy_static! {
static ref PID: i32 = unsafe { libc::getpid() };
static ref PSTART: u64 = PidStat::read_from_pid(Pid::from_raw(*PID)).unwrap().starttime;
}
pub fn pid() -> i32 {
*PID
}
pub fn pstart() -> u64 {
*PSTART
}
pub fn write_pid(pid_fn: &str) -> Result<(), Error> {
let pid_str = format!("{}\n", *PID);
let opts = proxmox::tools::fs::CreateOptions::new();
proxmox::tools::fs::replace_file(pid_fn, pid_str.as_bytes(), opts)
}
pub fn read_pid(pid_fn: &str) -> Result<i32, Error> {
let pid = proxmox::tools::fs::file_get_contents(pid_fn)?;
let pid = std::str::from_utf8(&pid)?.trim();
pid.parse().map_err(|err| format_err!("could not parse pid - {}", err))
}
pub fn ctrl_sock_from_pid(pid: i32) -> String {
format!("\0{}/control-{}.sock", buildcfg::PROXMOX_BACKUP_RUN_DIR, pid)
}
pub fn our_ctrl_sock() -> String {
ctrl_sock_from_pid(*PID)
}
mod environment;
pub use environment::*;
@ -35,5 +76,14 @@ pub mod jobstate;
mod verify_job;
pub use verify_job::*;
mod prune_job;
pub use prune_job::*;
mod gc_job;
pub use gc_job::*;
mod email_notifications;
pub use email_notifications::*;
mod report;
pub use report::*;

View File

@ -1,17 +1,17 @@
use anyhow::{bail, format_err, Error};
use futures::*;
use tokio::net::UnixListener;
use std::path::PathBuf;
use serde_json::Value;
use std::sync::Arc;
use std::collections::HashMap;
use std::os::unix::io::AsRawFd;
use std::path::PathBuf;
use std::sync::Arc;
use futures::*;
use tokio::net::UnixListener;
use serde_json::Value;
use nix::sys::socket;
/// Listens on a Unix Socket to handle simple command asynchronously
pub fn create_control_socket<P, F>(path: P, func: F) -> Result<impl Future<Output = ()>, Error>
fn create_control_socket<P, F>(path: P, func: F) -> Result<impl Future<Output = ()>, Error>
where
P: Into<PathBuf>,
F: Fn(Value) -> Result<Value, Error> + Send + Sync + 'static,
@ -140,3 +140,76 @@ pub async fn send_command<P>(
}
}).await
}
/// A callback for a specific commando socket.
pub type CommandoSocketFn = Box<(dyn Fn(Option<&Value>) -> Result<Value, Error> + Send + Sync + 'static)>;
/// Tooling to get a single control command socket where one can register multiple commands
/// dynamically.
/// You need to call `spawn()` to make the socket active.
pub struct CommandoSocket {
socket: PathBuf,
commands: HashMap<String, CommandoSocketFn>,
}
impl CommandoSocket {
pub fn new<P>(path: P) -> Self
where P: Into<PathBuf>,
{
CommandoSocket {
socket: path.into(),
commands: HashMap::new(),
}
}
/// Spawn the socket and consume self, meaning you cannot register commands anymore after
/// calling this.
pub fn spawn(self) -> Result<(), Error> {
let control_future = create_control_socket(self.socket.to_owned(), move |param| {
let param = param
.as_object()
.ok_or_else(|| format_err!("unable to parse parameters (expected json object)"))?;
let command = match param.get("command") {
Some(Value::String(command)) => command.as_str(),
None => bail!("no command"),
_ => bail!("unable to parse command"),
};
match self.commands.get(command) {
None => bail!("got unknown command '{}'", command),
Some(handler) => {
let args = param.get("args"); //.unwrap_or(&Value::Null);
(handler)(args)
},
}
})?;
tokio::spawn(control_future);
Ok(())
}
/// Register a new command with a callback.
pub fn register_command<F>(
&mut self,
command: String,
handler: F,
) -> Result<(), Error>
where
F: Fn(Option<&Value>) -> Result<Value, Error> + Send + Sync + 'static,
{
if self.commands.contains_key(&command) {
bail!("command '{}' already exists!", command);
}
self.commands.insert(command, Box::new(handler));
Ok(())
}
}
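A minimal usage sketch of the new socket, assuming a tokio runtime is already running (the command name is hypothetical):

let mut commando_sock = CommandoSocket::new(our_ctrl_sock());
commando_sock.register_command("ping".to_string(), |_args| {
    // handlers receive the optional "args" value and return a JSON result
    Ok(serde_json::json!("pong"))
})?;
commando_sock.spawn()?; // consumes self - no further registration afterwards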

View File

@ -2,7 +2,7 @@ use std::collections::HashMap;
use std::path::PathBuf;
use std::time::SystemTime;
use std::fs::metadata;
use std::sync::{Mutex, RwLock};
use std::sync::{Arc, Mutex, RwLock};
use anyhow::{bail, Error, format_err};
use hyper::Method;
@ -21,7 +21,7 @@ pub struct ApiConfig {
env_type: RpcEnvironmentType,
templates: RwLock<Handlebars<'static>>,
template_files: RwLock<HashMap<String, (SystemTime, PathBuf)>>,
request_log: Option<Mutex<FileLogger>>,
request_log: Option<Arc<Mutex<FileLogger>>>,
}
impl ApiConfig {
@ -124,7 +124,11 @@ impl ApiConfig {
}
}
pub fn enable_file_log<P>(&mut self, path: P) -> Result<(), Error>
pub fn enable_file_log<P>(
&mut self,
path: P,
commando_sock: &mut super::CommandoSocket,
) -> Result<(), Error>
where
P: Into<PathBuf>
{
@ -142,11 +146,19 @@ impl ApiConfig {
owned_by_backup: true,
..Default::default()
};
self.request_log = Some(Mutex::new(FileLogger::new(&path, logger_options)?));
let request_log = Arc::new(Mutex::new(FileLogger::new(&path, logger_options)?));
self.request_log = Some(Arc::clone(&request_log));
commando_sock.register_command("api-access-log-reopen".into(), move |_args| {
println!("re-opening log file");
request_log.lock().unwrap().reopen()?;
Ok(serde_json::Value::Null)
})?;
Ok(())
}
pub fn get_file_log(&self) -> Option<&Mutex<FileLogger>> {
pub fn get_file_log(&self) -> Option<&Arc<Mutex<FileLogger>>> {
self.request_log.as_ref()
}
}

View File

@ -6,10 +6,14 @@ use handlebars::{Handlebars, Helper, Context, RenderError, RenderContext, Output
use proxmox::tools::email::sendmail;
use crate::{
config::datastore::DataStoreConfig,
config::verify::VerificationJobConfig,
config::sync::SyncJobConfig,
api2::types::{
Userid,
APTUpdateInfo,
GarbageCollectionStatus,
Userid,
Notify,
},
tools::format::HumanByte,
};
@ -22,16 +26,24 @@ Index file count: {{status.index-file-count}}
Removed garbage: {{human-bytes status.removed-bytes}}
Removed chunks: {{status.removed-chunks}}
Remove bad files: {{status.removed-bad}}
Removed bad chunks: {{status.removed-bad}}
Leftover bad chunks: {{status.still-bad}}
Pending removals: {{human-bytes status.pending-bytes}} (in {{status.pending-chunks}} chunks)
Original Data usage: {{human-bytes status.index-data-bytes}}
On Disk usage: {{human-bytes status.disk-bytes}} ({{relative-percentage status.disk-bytes status.index-data-bytes}})
On Disk chunks: {{status.disk-chunks}}
On-Disk usage: {{human-bytes status.disk-bytes}} ({{relative-percentage status.disk-bytes status.index-data-bytes}})
On-Disk chunks: {{status.disk-chunks}}
Deduplication Factor: {{deduplication-factor}}
Garbage collection successful.
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{datastore}}>
"###;
@ -41,6 +53,11 @@ Datastore: {{datastore}}
Garbage collection failed: {{error}}
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
"###;
const VERIFY_OK_TEMPLATE: &str = r###"
@ -50,6 +67,11 @@ Datastore: {{job.store}}
Verification successful.
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>
"###;
const VERIFY_ERR_TEMPLATE: &str = r###"
@ -60,11 +82,61 @@ Datastore: {{job.store}}
Verification failed on these snapshots:
{{#each errors}}
{{this}}
{{this~}}
{{/each}}
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
"###;
const SYNC_OK_TEMPLATE: &str = r###"
Job ID: {{job.id}}
Datastore: {{job.store}}
Remote: {{job.remote}}
Remote Store: {{job.remote-store}}
Synchronization successful.
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>
"###;
const SYNC_ERR_TEMPLATE: &str = r###"
Job ID: {{job.id}}
Datastore: {{job.store}}
Remote: {{job.remote}}
Remote Store: {{job.remote-store}}
Synchronization failed: {{error}}
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
"###;
const PACKAGE_UPDATES_TEMPLATE: &str = r###"
Proxmox Backup Server has the following updates available:
{{#each updates }}
{{Package}}: {{OldVersion}} -> {{Version~}}
{{/each }}
To upgrade visit the web interface:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:updates>
"###;
lazy_static::lazy_static!{
static ref HANDLEBARS: Handlebars<'static> = {
@ -81,6 +153,11 @@ lazy_static::lazy_static!{
hb.register_template_string("verify_ok_template", VERIFY_OK_TEMPLATE).unwrap();
hb.register_template_string("verify_err_template", VERIFY_ERR_TEMPLATE).unwrap();
hb.register_template_string("sync_ok_template", SYNC_OK_TEMPLATE).unwrap();
hb.register_template_string("sync_err_template", SYNC_ERR_TEMPLATE).unwrap();
hb.register_template_string("package_update_template", PACKAGE_UPDATES_TEMPLATE).unwrap();
hb
};
}
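For illustration, a sketch of rendering one of the templates above with invented values (field names follow the templates; this code is not part of the diff):

let data = serde_json::json!({
    "job": {
        "id": "sync-store2",
        "store": "store2",
        "remote": "pbs-remote",
        "remote-store": "store1"
    },
    "fqdn": "backup.example.com",
    "port": 8007
});
let text = HANDLEBARS.render("sync_ok_template", &data)?;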
@ -93,7 +170,7 @@ fn send_job_status_mail(
// Note: OX has serious problems displaying text mails,
// so we include html as well
let html = format!("<html><body><pre>\n{}\n<pre>", text);
let html = format!("<html><body><pre>\n{}\n</pre></body></html>", handlebars::html_escape(text));
let nodename = proxmox::tools::nodename();
@ -113,24 +190,38 @@ fn send_job_status_mail(
pub fn send_gc_status(
email: &str,
notify: Notify,
datastore: &str,
status: &GarbageCollectionStatus,
result: &Result<(), Error>,
) -> Result<(), Error> {
if notify == Notify::Never || (result.is_ok() && notify == Notify::Error) {
return Ok(());
}
let (fqdn, port) = get_server_url();
let mut data = json!({
"datastore": datastore,
"fqdn": fqdn,
"port": port,
});
let text = match result {
Ok(()) => {
let data = json!({
"status": status,
"datastore": datastore,
});
let deduplication_factor = if status.disk_bytes > 0 {
(status.index_data_bytes as f64)/(status.disk_bytes as f64)
} else {
1.0
};
data["status"] = json!(status);
data["deduplication-factor"] = format!("{:.2}", deduplication_factor).into();
HANDLEBARS.render("gc_ok_template", &data)?
}
Err(err) => {
let data = json!({
"error": err.to_string(),
"datastore": datastore,
});
data["error"] = err.to_string().into();
HANDLEBARS.render("gc_err_template", &data)?
}
};
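A worked example of the deduplication factor computed above, with illustrative numbers: 100 GiB of logical index data stored in 40 GiB of chunks gives a factor of 2.50.

let index_data_bytes = 100u64 * 1024 * 1024 * 1024; // logical data referenced by indexes
let disk_bytes = 40u64 * 1024 * 1024 * 1024;        // deduplicated chunks on disk
let factor = index_data_bytes as f64 / disk_bytes as f64;
assert_eq!(format!("{:.2}", factor), "2.50");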
@ -153,22 +244,32 @@ pub fn send_gc_status(
pub fn send_verify_status(
email: &str,
notify: Notify,
job: VerificationJobConfig,
result: &Result<Vec<String>, Error>,
) -> Result<(), Error> {
if notify == Notify::Never || (result.is_ok() && notify == Notify::Error) {
return Ok(());
}
let (fqdn, port) = get_server_url();
let mut data = json!({
"job": job,
"fqdn": fqdn,
"port": port,
});
let text = match result {
Ok(errors) if errors.is_empty() => {
let data = json!({ "job": job });
HANDLEBARS.render("verify_ok_template", &data)?
}
Ok(errors) => {
let data = json!({ "job": job, "errors": errors });
data["errors"] = json!(errors);
HANDLEBARS.render("verify_err_template", &data)?
}
Err(_) => {
// aboreted job - do not send any email
// aborted job - do not send any email
return Ok(());
}
};
@ -189,10 +290,96 @@ pub fn send_verify_status(
Ok(())
}
pub fn send_sync_status(
email: &str,
notify: Notify,
job: &SyncJobConfig,
result: &Result<(), Error>,
) -> Result<(), Error> {
if notify == Notify::Never || (result.is_ok() && notify == Notify::Error) {
return Ok(());
}
let (fqdn, port) = get_server_url();
let mut data = json!({
"job": job,
"fqdn": fqdn,
"port": port,
});
let text = match result {
Ok(()) => {
HANDLEBARS.render("sync_ok_template", &data)?
}
Err(err) => {
data["error"] = err.to_string().into();
HANDLEBARS.render("sync_err_template", &data)?
}
};
let subject = match result {
Ok(()) => format!(
"Sync remote '{}' datastore '{}' successful",
job.remote,
job.remote_store,
),
Err(_) => format!(
"Sync remote '{}' datastore '{}' failed",
job.remote,
job.remote_store,
),
};
send_job_status_mail(email, &subject, &text)?;
Ok(())
}
fn get_server_url() -> (String, usize) {
// user will surely request that they can change this
let nodename = proxmox::tools::nodename();
let mut fqdn = nodename.to_owned();
if let Ok(resolv_conf) = crate::api2::node::dns::read_etc_resolv_conf() {
if let Some(search) = resolv_conf["search"].as_str() {
fqdn.push('.');
fqdn.push_str(search);
}
}
let port = 8007;
(fqdn, port)
}
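// Sketch (not part of this diff): the pair returned above fills the
// {{fqdn}}/{{port}} template fields, e.g. ("pbs1.example.com", 8007)
// when /etc/resolv.conf contains "search example.com".
#[cfg(test)]
#[test]
fn get_server_url_sketch() {
    let (fqdn, port) = get_server_url();
    println!("https://{}:{}/", fqdn, port);
}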
pub fn send_updates_available(
updates: &Vec<&APTUpdateInfo>,
) -> Result<(), Error> {
// update mails always go to the email configured for root@pam
if let Some(email) = lookup_user_email(Userid::root_userid()) {
let nodename = proxmox::tools::nodename();
let subject = format!("New software packages available ({})", nodename);
let (fqdn, port) = get_server_url();
let text = HANDLEBARS.render("package_update_template", &json!({
"fqdn": fqdn,
"port": port,
"updates": updates,
}))?;
send_job_status_mail(&email, &subject, &text)?;
}
Ok(())
}
/// Lookup users email address
///
/// For "backup@pam", this returns the address from "root@pam".
pub fn lookup_user_email(userid: &Userid) -> Option<String> {
fn lookup_user_email(userid: &Userid) -> Option<String> {
use crate::config::user::{self, User};
@ -209,6 +396,36 @@ pub fn lookup_user_email(userid: &Userid) -> Option<String> {
None
}
/// Lookup Datastore notify settings
pub fn lookup_datastore_notify_settings(
store: &str,
) -> (Option<String>, Notify) {
let mut notify = Notify::Always;
let mut email = None;
let (config, _digest) = match crate::config::datastore::config() {
Ok(result) => result,
Err(_) => return (email, notify),
};
let config: DataStoreConfig = match config.lookup("datastore", store) {
Ok(result) => result,
Err(_) => return (email, notify),
};
email = match config.notify_user {
Some(ref userid) => lookup_user_email(userid),
None => lookup_user_email(Userid::backup_userid()),
};
if let Some(value) = config.notify {
notify = value;
}
(email, notify)
}
// Handlebars helper functions
fn handlebars_humam_bytes_helper(


@ -6,7 +6,7 @@ use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
pub struct RestEnvironment {
env_type: RpcEnvironmentType,
result_attributes: Value,
user: Option<String>,
auth_id: Option<String>,
client_ip: Option<std::net::SocketAddr>,
}
@ -14,7 +14,7 @@ impl RestEnvironment {
pub fn new(env_type: RpcEnvironmentType) -> Self {
Self {
result_attributes: json!({}),
user: None,
auth_id: None,
client_ip: None,
env_type,
}
@ -35,12 +35,12 @@ impl RpcEnvironment for RestEnvironment {
self.env_type
}
fn set_user(&mut self, user: Option<String>) {
self.user = user;
fn set_auth_id(&mut self, auth_id: Option<String>) {
self.auth_id = auth_id;
}
fn get_user(&self) -> Option<String> {
self.user.clone()
fn get_auth_id(&self) -> Option<String> {
self.auth_id.clone()
}
fn set_client_ip(&mut self, client_ip: Option<std::net::SocketAddr>) {

src/server/gc_job.rs (new file, 63 lines)

@ -0,0 +1,63 @@
use std::sync::Arc;
use anyhow::Error;
use crate::{
server::WorkerTask,
api2::types::*,
server::jobstate::Job,
backup::DataStore,
};
/// Runs a garbage collection job.
pub fn do_garbage_collection_job(
mut job: Job,
datastore: Arc<DataStore>,
auth_id: &Authid,
schedule: Option<String>,
to_stdout: bool,
) -> Result<String, Error> {
let store = datastore.name().to_string();
let (email, notify) = crate::server::lookup_datastore_notify_settings(&store);
let worker_type = job.jobtype().to_string();
let upid_str = WorkerTask::new_thread(
&worker_type,
Some(store.clone()),
auth_id.clone(),
to_stdout,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting garbage collection on store {}", store));
if let Some(event_str) = schedule {
worker.log(format!("task triggered by schedule '{}'", event_str));
}
let result = datastore.garbage_collection(&*worker, worker.upid());
let status = worker.create_state(&result);
match job.finish(status) {
Err(err) => eprintln!(
"could not finish job state for {}: {}",
job.jobtype().to_string(),
err
),
Ok(_) => (),
}
if let Some(email) = email {
let gc_status = datastore.last_gc_status();
if let Err(err) = crate::server::send_gc_status(&email, notify, &store, &gc_status, &result) {
eprintln!("send gc notification failed: {}", err);
}
}
result
}
)?;
Ok(upid_str)
}
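A hypothetical caller for the job above; Job::new, DataStore::lookup_datastore and Authid::root_auth_id are assumed from other parts of this comparison:

let job = Job::new("garbage_collection", "store2")?;
let datastore = DataStore::lookup_datastore("store2")?;
let upid_str = do_garbage_collection_job(
    job,
    datastore,
    Authid::root_auth_id(),
    Some("daily".to_string()),
    false,
)?;
println!("started GC task {}", upid_str);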

src/server/prune_job.rs (new file, 91 lines)

@ -0,0 +1,91 @@
use anyhow::Error;
use proxmox::try_block;
use crate::{
api2::types::*,
backup::{compute_prune_info, BackupGroup, DataStore, PruneOptions},
server::jobstate::Job,
server::WorkerTask,
task_log,
};
pub fn do_prune_job(
mut job: Job,
prune_options: PruneOptions,
store: String,
auth_id: &Authid,
schedule: Option<String>,
) -> Result<String, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let worker_type = job.jobtype().to_string();
let upid_str = WorkerTask::new_thread(
&worker_type,
Some(job.jobname().to_string()),
auth_id.clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
let result = try_block!({
task_log!(worker, "Starting datastore prune on store \"{}\"", store);
if let Some(event_str) = schedule {
task_log!(worker, "task triggered by schedule '{}'", event_str);
}
task_log!(
worker,
"retention options: {}",
prune_options.cli_options_string()
);
let base_path = datastore.base_path();
let groups = BackupGroup::list_groups(&base_path)?;
for group in groups {
let list = group.list_backups(&base_path)?;
let mut prune_info = compute_prune_info(list, &prune_options)?;
prune_info.reverse(); // delete older snapshots first
task_log!(
worker,
"Starting prune on store \"{}\" group \"{}/{}\"",
store,
group.backup_type(),
group.backup_id()
);
for (info, keep) in prune_info {
task_log!(
worker,
"{} {}/{}/{}",
if keep { "keep" } else { "remove" },
group.backup_type(),
group.backup_id(),
info.backup_dir.backup_time_string()
);
if !keep {
datastore.remove_backup_dir(&info.backup_dir, true)?;
}
}
}
Ok(())
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!(
"could not finish job state for {}: {}",
job.jobtype().to_string(),
err
);
}
result
},
)?;
Ok(upid_str)
}

src/server/report.rs (new file, 91 lines)

@ -0,0 +1,91 @@
use std::path::Path;
use std::process::Command;
use crate::config::datastore;
fn files() -> Vec<&'static str> {
vec![
"/etc/hostname",
"/etc/hosts",
"/etc/network/interfaces",
"/etc/proxmox-backup/datastore.cfg",
"/etc/proxmox-backup/user.cfg",
"/etc/proxmox-backup/acl.cfg",
"/etc/proxmox-backup/remote.cfg",
"/etc/proxmox-backup/sync.cfg",
"/etc/proxmox-backup/verification.cfg",
]
}
fn commands() -> Vec<(&'static str, Vec<&'static str>)> {
vec![
// ("<command>", vec![<arg [, arg]>])
("df", vec!["-h"]),
("lsblk", vec!["--ascii"]),
("zpool", vec!["status"]),
("zfs", vec!["list"]),
("proxmox-backup-manager", vec!["subscription", "get"]),
]
}
// (<description>, <function to call>)
fn function_calls() -> Vec<(&'static str, fn() -> String)> {
vec![
("Datastores", || {
let config = match datastore::config() {
Ok((config, _digest)) => config,
_ => return String::from("could not read datastore config"),
};
let mut list = Vec::new();
for (store, _) in &config.sections {
list.push(store.as_str());
}
list.join(", ")
})
]
}
pub fn generate_report() -> String {
use proxmox::tools::fs::file_read_optional_string;
let file_contents = files()
.iter()
.map(|file_name| {
let content = match file_read_optional_string(Path::new(file_name)) {
Ok(Some(content)) => content,
Ok(None) => String::from("# file does not exist"),
Err(err) => err.to_string(),
};
format!("# cat '{}'\n{}", file_name, content)
})
.collect::<Vec<String>>()
.join("\n\n");
let command_outputs = commands()
.iter()
.map(|(command, args)| {
let output = Command::new(command)
.env("PROXMOX_OUTPUT_NO_BORDER", "1")
.args(args)
.output();
let output = match output {
Ok(output) => String::from_utf8_lossy(&output.stdout).to_string(),
Err(err) => err.to_string(),
};
format!("# `{} {}`\n{}", command, args.join(" "), output)
})
.collect::<Vec<String>>()
.join("\n\n");
let function_outputs = function_calls()
.iter()
.map(|(desc, function)| format!("# {}\n{}", desc, function()))
.collect::<Vec<String>>()
.join("\n\n");
format!(
"= FILES =\n\n{}\n= COMMANDS =\n\n{}\n= FUNCTIONS =\n\n{}\n",
file_contents, command_outputs, function_outputs
)
}
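Usage is a one-liner; the report is plain text intended for pasting into support requests:

let report = generate_report();
println!("{}", report);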


@ -17,6 +17,7 @@ use lazy_static::lazy_static;
use serde_json::{json, Value};
use tokio::fs::File;
use tokio::time::Instant;
use percent_encoding::percent_decode_str;
use url::form_urlencoded;
use regex::Regex;
@ -42,7 +43,7 @@ use super::formatter::*;
use super::ApiConfig;
use crate::auth_helpers::*;
use crate::api2::types::Userid;
use crate::api2::types::{Authid, Userid};
use crate::tools;
use crate::tools::FileLogger;
use crate::tools::ticket::Ticket;
@ -111,7 +112,7 @@ pub struct ApiService {
}
fn log_response(
logfile: Option<&Mutex<FileLogger>>,
logfile: Option<&Arc<Mutex<FileLogger>>>,
peer: &std::net::SocketAddr,
method: hyper::Method,
path_query: &str,
@ -138,9 +139,9 @@ fn log_response(
log::error!("{} {}: {} {}: [client {}] {}", method.as_str(), path, status.as_str(), reason, peer, message);
}
if let Some(logfile) = logfile {
let user = match resp.extensions().get::<Userid>() {
Some(userid) => userid.as_str(),
None => "-",
let auth_id = match resp.extensions().get::<Authid>() {
Some(auth_id) => auth_id.to_string(),
None => "-".to_string(),
};
let now = proxmox::tools::time::epoch_i64();
// time format which apache/nginx use (by default), copied from pve-http-server
@ -153,7 +154,7 @@ fn log_response(
.log(format!(
"{} - {} [{}] \"{} {}\" {} {} {}",
peer.ip(),
user,
auth_id,
datetime,
method.as_str(),
path,
@ -163,6 +164,15 @@ fn log_response(
));
}
}
pub fn auth_logger() -> Result<FileLogger, Error> {
let logger_options = tools::FileLogOptions {
append: true,
prefix_time: true,
owned_by_backup: true,
..Default::default()
};
FileLogger::new(crate::buildcfg::API_AUTH_LOG_FN, logger_options)
}
fn get_proxied_peer(headers: &HeaderMap) -> Option<std::net::SocketAddr> {
lazy_static! {
@ -441,7 +451,7 @@ fn get_index(
.unwrap();
if let Some(userid) = userid {
resp.extensions_mut().insert(userid);
resp.extensions_mut().insert(Authid::from((userid, None)));
}
resp
@ -531,50 +541,99 @@ async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body
}
}
fn extract_auth_data(headers: &http::HeaderMap) -> (Option<String>, Option<String>, Option<String>) {
let mut ticket = None;
let mut language = None;
fn extract_lang_header(headers: &http::HeaderMap) -> Option<String> {
if let Some(raw_cookie) = headers.get("COOKIE") {
if let Ok(cookie) = raw_cookie.to_str() {
ticket = tools::extract_cookie(cookie, "PBSAuthCookie");
language = tools::extract_cookie(cookie, "PBSLangCookie");
return tools::extract_cookie(cookie, "PBSLangCookie");
}
}
None
}
struct UserAuthData{
ticket: String,
csrf_token: Option<String>,
}
enum AuthData {
User(UserAuthData),
ApiToken(String),
}
fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
if let Some(raw_cookie) = headers.get(header::COOKIE) {
if let Ok(cookie) = raw_cookie.to_str() {
if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
Some(Ok(v)) => Some(v.to_owned()),
_ => None,
};
return Some(AuthData::User(UserAuthData {
ticket,
csrf_token,
}));
}
}
}
(ticket, csrf_token, language)
match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
Some(Ok(v)) => {
if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
} else {
None
}
},
_ => None,
}
}
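// Sketch (not in this diff): exercising the API token branch above. The
// header value is illustrative; a real secret would be percent-encoded.
#[cfg(test)]
#[test]
fn extract_api_token_sketch() {
    let mut headers = http::HeaderMap::new();
    headers.insert(
        http::header::AUTHORIZATION,
        "PBSAPIToken root@pam!mytoken:uuid-secret".parse().unwrap(),
    );
    assert!(matches!(extract_auth_data(&headers), Some(AuthData::ApiToken(_))));
}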
fn check_auth(
method: &hyper::Method,
ticket: &Option<String>,
csrf_token: &Option<String>,
auth_data: &AuthData,
user_info: &CachedUserInfo,
) -> Result<Userid, Error> {
) -> Result<Authid, Error> {
match auth_data {
AuthData::User(user_auth_data) => {
let ticket = user_auth_data.ticket.clone();
let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
let ticket = ticket.as_ref().map(String::as_str);
let userid: Userid = Ticket::parse(&ticket.ok_or_else(|| format_err!("missing ticket"))?)?
let userid: Userid = Ticket::parse(&ticket)?
.verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?;
if !user_info.is_active_user(&userid) {
let auth_id = Authid::from(userid.clone());
if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}
if method != hyper::Method::GET {
if let Some(csrf_token) = csrf_token {
if let Some(csrf_token) = &user_auth_data.csrf_token {
verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
} else {
bail!("missing CSRF prevention token");
}
}
Ok(userid)
Ok(auth_id)
},
AuthData::ApiToken(api_token) => {
let mut parts = api_token.splitn(2, ':');
let tokenid = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokenid: Authid = tokenid.parse()?;
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)
.decode_utf8()
.map_err(|_| format_err!("failed to decode API token header"))?;
crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
Ok(tokenid)
}
}
}
async fn handle_request(
@ -630,10 +689,17 @@ async fn handle_request(
}
if auth_required {
let (ticket, csrf_token, _) = extract_auth_data(&parts.headers);
match check_auth(&method, &ticket, &csrf_token, &user_info) {
Ok(userid) => rpcenv.set_user(Some(userid.to_string())),
let auth_result = match extract_auth_data(&parts.headers) {
Some(auth_data) => check_auth(&method, &auth_data, &user_info),
None => Err(format_err!("no authentication credentials provided.")),
};
match auth_result {
Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())),
Err(err) => {
let peer = peer.ip();
auth_logger()?
.log(format!("authentication failure; rhost={} msg={}", peer, err));
// always delay unauthorized calls by 3 seconds (from start of request)
let err = http_err!(UNAUTHORIZED, "authentication failed - {}", err);
tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await;
@ -648,8 +714,8 @@ async fn handle_request(
return Ok((formatter.format_error)(err));
}
Some(api_method) => {
let user = rpcenv.get_user();
if !check_api_permission(api_method.access.permission, user.as_deref(), &uri_param, user_info.as_ref()) {
let auth_id = rpcenv.get_auth_id();
if !check_api_permission(api_method.access.permission, auth_id.as_deref(), &uri_param, user_info.as_ref()) {
let err = http_err!(FORBIDDEN, "permission check failed");
tokio::time::delay_until(Instant::from_std(access_forbidden_time)).await;
return Ok((formatter.format_error)(err));
@ -666,9 +732,9 @@ async fn handle_request(
Err(err) => (formatter.format_error)(err),
};
if let Some(user) = user {
let userid: Userid = user.parse()?;
response.extensions_mut().insert(userid);
if let Some(auth_id) = auth_id {
let auth_id: Authid = auth_id.parse()?;
response.extensions_mut().insert(auth_id);
}
return Ok(response);
@ -684,13 +750,14 @@ async fn handle_request(
}
if comp_len == 0 {
let (ticket, csrf_token, language) = extract_auth_data(&parts.headers);
if ticket != None {
match check_auth(&method, &ticket, &csrf_token, &user_info) {
Ok(userid) => {
let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), &userid);
return Ok(get_index(Some(userid), Some(new_csrf_token), language, &api, parts));
}
let language = extract_lang_header(&parts.headers);
if let Some(auth_data) = extract_auth_data(&parts.headers) {
match check_auth(&method, &auth_data, &user_info) {
Ok(auth_id) if !auth_id.is_token() => {
let userid = auth_id.user();
let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), userid);
return Ok(get_index(Some(userid.clone()), Some(new_csrf_token), language, &api, parts));
},
_ => {
tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await;
return Ok(get_index(None, None, language, &api, parts));


@ -6,7 +6,7 @@ use proxmox::api::schema::{ApiStringFormat, Schema, StringSchema};
use proxmox::const_regex;
use proxmox::sys::linux::procfs;
use crate::api2::types::Userid;
use crate::api2::types::Authid;
/// Unique Process/Task Identifier
///
@ -34,8 +34,8 @@ pub struct UPID {
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
/// The user who started the task
pub userid: Userid,
/// The authenticated entity who started the task
pub auth_id: Authid,
/// The node name.
pub node: String,
}
@ -47,7 +47,7 @@ const_regex! {
pub PROXMOX_UPID_REGEX = concat!(
r"^UPID:(?P<node>[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?):(?P<pid>[0-9A-Fa-f]{8}):",
r"(?P<pstart>[0-9A-Fa-f]{8,9}):(?P<task_id>[0-9A-Fa-f]{8,16}):(?P<starttime>[0-9A-Fa-f]{8}):",
r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<userid>[^:\s]+):$"
r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<authid>[^:\s]+):$"
);
}
@ -65,7 +65,7 @@ impl UPID {
pub fn new(
worker_type: &str,
worker_id: Option<String>,
userid: Userid,
auth_id: Authid,
) -> Result<Self, Error> {
let pid = unsafe { libc::getpid() };
@ -87,7 +87,7 @@ impl UPID {
task_id,
worker_type: worker_type.to_owned(),
worker_id,
userid,
auth_id,
node: proxmox::tools::nodename().to_owned(),
})
}
@ -122,7 +122,7 @@ impl std::str::FromStr for UPID {
task_id: usize::from_str_radix(&cap["task_id"], 16).unwrap(),
worker_type: cap["wtype"].to_string(),
worker_id,
userid: cap["userid"].parse()?,
auth_id: cap["authid"].parse()?,
node: cap["node"].to_string(),
})
} else {
@ -146,6 +146,6 @@ impl std::fmt::Display for UPID {
// more than 8 characters for pstart
write!(f, "UPID:{}:{:08X}:{:08X}:{:08X}:{:08X}:{}:{}:{}:",
self.node, self.pid, self.pstart, self.task_id, self.starttime, self.worker_type, wid, self.userid)
self.node, self.pid, self.pstart, self.task_id, self.starttime, self.worker_type, wid, self.auth_id)
}
}
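For reference, a round-trip sketch of the serialization format above (all values illustrative):

let upid: UPID = "UPID:node1:00001234:000004D2:00000001:5FA340AE:garbage_collection:store2:root@pam:"
    .parse()?;
assert_eq!(upid.worker_type, "garbage_collection");
assert_eq!(upid.auth_id.to_string(), "root@pam");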


@ -7,7 +7,7 @@ use crate::{
config::verify::VerificationJobConfig,
backup::{
DataStore,
BackupInfo,
BackupManifest,
verify_all_backups,
},
task_log,
@ -17,25 +17,19 @@ use crate::{
pub fn do_verification_job(
mut job: Job,
verification_job: VerificationJobConfig,
userid: &Userid,
auth_id: &Authid,
schedule: Option<String>,
) -> Result<String, Error> {
let datastore = DataStore::lookup_datastore(&verification_job.store)?;
let datastore2 = datastore.clone();
let outdated_after = verification_job.outdated_after.clone();
let ignore_verified_snapshots = verification_job.ignore_verified.unwrap_or(true);
let filter = move |backup_info: &BackupInfo| {
let filter = move |manifest: &BackupManifest| {
if !ignore_verified_snapshots {
return true;
}
let manifest = match datastore2.load_manifest(&backup_info.backup_dir) {
Ok((manifest, _)) => manifest,
Err(_) => return true, // include, so task picks this up as error
};
let raw_verify_state = manifest.unprotected["verify_state"].clone();
match serde_json::from_value::<SnapshotVerifyState>(raw_verify_state) {
@ -54,14 +48,14 @@ pub fn do_verification_job(
}
};
let email = crate::server::lookup_user_email(userid);
let (email, notify) = crate::server::lookup_datastore_notify_settings(&verification_job.store);
let job_id = job.jobname().to_string();
let worker_type = job.jobtype().to_string();
let upid_str = WorkerTask::new_thread(
&worker_type,
Some(job.jobname().to_string()),
userid.clone(),
auth_id.clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
@ -71,7 +65,7 @@ pub fn do_verification_job(
task_log!(worker,"task triggered by schedule '{}'", event_str);
}
let result = verify_all_backups(datastore, worker.clone(), worker.upid(), &filter);
let result = verify_all_backups(datastore, worker.clone(), worker.upid(), None, Some(&filter));
let job_result = match result {
Ok(ref errors) if errors.is_empty() => Ok(()),
Ok(_) => Err(format_err!("verification failed - please check the log for details")),
@ -90,7 +84,7 @@ pub fn do_verification_job(
}
if let Some(email) = email {
if let Err(err) = crate::server::send_verify_status(&email, verification_job, &result) {
if let Err(err) = crate::server::send_verify_status(&email, notify, verification_job, &result) {
eprintln!("send verify notification failed: {}", err);
}
}


@ -8,7 +8,6 @@ use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error};
use futures::*;
use lazy_static::lazy_static;
use nix::unistd::Pid;
use serde_json::{json, Value};
use serde::{Serialize, Deserialize};
use tokio::sync::oneshot;
@ -19,36 +18,33 @@ use proxmox::tools::fs::{create_path, open_file_locked, replace_file, CreateOpti
use super::UPID;
use crate::buildcfg;
use crate::server;
use crate::tools::logrotate::{LogRotate, LogRotateFiles};
use crate::tools::{FileLogger, FileLogOptions};
use crate::api2::types::Userid;
use crate::api2::types::{Authid, TaskStateType};
macro_rules! PROXMOX_BACKUP_VAR_RUN_DIR_M { () => ("/run/proxmox-backup") }
macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
macro_rules! PROXMOX_BACKUP_TASK_DIR_M { () => (concat!( PROXMOX_BACKUP_LOG_DIR_M!(), "/tasks")) }
pub const PROXMOX_BACKUP_VAR_RUN_DIR: &str = PROXMOX_BACKUP_VAR_RUN_DIR_M!();
pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
pub const PROXMOX_BACKUP_TASK_DIR: &str = PROXMOX_BACKUP_TASK_DIR_M!();
pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/.active.lock");
pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/active");
pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/index");
pub const PROXMOX_BACKUP_ARCHIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/archive");
const MAX_INDEX_TASKS: usize = 1000;
macro_rules! taskdir {
($subdir:expr) => (concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/tasks", $subdir))
}
pub const PROXMOX_BACKUP_TASK_DIR: &str = taskdir!("/");
pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = taskdir!("/.active.lock");
pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = taskdir!("/active");
pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = taskdir!("/index");
pub const PROXMOX_BACKUP_ARCHIVE_TASK_FN: &str = taskdir!("/archive");
lazy_static! {
static ref WORKER_TASK_LIST: Mutex<HashMap<usize, Arc<WorkerTask>>> = Mutex::new(HashMap::new());
}
static ref MY_PID: i32 = unsafe { libc::getpid() };
static ref MY_PID_PSTART: u64 = procfs::PidStat::read_from_pid(Pid::from_raw(*MY_PID))
.unwrap()
.starttime;
/// checks if the task UPID refers to a worker from this process
fn is_local_worker(upid: &UPID) -> bool {
upid.pid == server::pid() && upid.pstart == server::pstart()
}
/// Test if the task is still running
pub async fn worker_is_active(upid: &UPID) -> Result<bool, Error> {
if (upid.pid == *MY_PID) && (upid.pstart == *MY_PID_PSTART) {
if is_local_worker(upid) {
return Ok(WORKER_TASK_LIST.lock().unwrap().contains_key(&upid.task_id));
}
@ -56,15 +52,14 @@ pub async fn worker_is_active(upid: &UPID) -> Result<bool, Error> {
return Ok(false);
}
let socketname = format!(
"\0{}/proxmox-task-control-{}.sock", PROXMOX_BACKUP_VAR_RUN_DIR, upid.pid);
let sock = server::ctrl_sock_from_pid(upid.pid);
let cmd = json!({
"command": "status",
"command": "worker-task-status",
"args": {
"upid": upid.to_string(),
},
});
let status = super::send_command(socketname, cmd).await?;
let status = super::send_command(sock, cmd).await?;
if let Some(active) = status.as_bool() {
Ok(active)
@ -75,69 +70,48 @@ pub async fn worker_is_active(upid: &UPID) -> Result<bool, Error> {
/// Test if the task is still running (fast but inaccurate implementation)
///
/// If the task is spanned from a different process, we simply return if
/// If the task is spawned from a different process, we simply return if
/// that process is still running. This information is good enough to detect
/// stale tasks...
pub fn worker_is_active_local(upid: &UPID) -> bool {
if (upid.pid == *MY_PID) && (upid.pstart == *MY_PID_PSTART) {
if is_local_worker(upid) {
WORKER_TASK_LIST.lock().unwrap().contains_key(&upid.task_id)
} else {
procfs::check_process_running_pstart(upid.pid, upid.pstart).is_some()
}
}
pub fn create_task_control_socket() -> Result<(), Error> {
let socketname = format!(
"\0{}/proxmox-task-control-{}.sock", PROXMOX_BACKUP_VAR_RUN_DIR, *MY_PID);
let control_future = super::create_control_socket(socketname, |param| {
let param = param
.as_object()
.ok_or_else(|| format_err!("unable to parse parameters (expected json object)"))?;
if param.keys().count() != 2 { bail!("wrong number of parameters"); }
let command = param["command"]
.as_str()
.ok_or_else(|| format_err!("unable to parse parameters (missing command)"))?;
// we have only two commands for now
if !(command == "abort-task" || command == "status") {
bail!("got unknown command '{}'", command);
}
let upid_str = param["upid"]
.as_str()
.ok_or_else(|| format_err!("unable to parse parameters (missing upid)"))?;
let upid = upid_str.parse::<UPID>()?;
if !(upid.pid == *MY_PID && upid.pstart == *MY_PID_PSTART) {
pub fn register_task_control_commands(
commando_sock: &mut super::CommandoSocket,
) -> Result<(), Error> {
fn get_upid(args: Option<&Value>) -> Result<UPID, Error> {
let args = if let Some(args) = args { args } else { bail!("missing args") };
let upid = match args.get("upid") {
Some(Value::String(upid)) => upid.parse::<UPID>()?,
None => bail!("no upid in args"),
_ => bail!("unable to parse upid"),
};
if !is_local_worker(&upid) {
bail!("upid does not belong to this process");
}
Ok(upid)
}
let hash = WORKER_TASK_LIST.lock().unwrap();
commando_sock.register_command("worker-task-abort".into(), move |args| {
let upid = get_upid(args)?;
match command {
"abort-task" => {
if let Some(ref worker) = hash.get(&upid.task_id) {
if let Some(ref worker) = WORKER_TASK_LIST.lock().unwrap().get(&upid.task_id) {
worker.request_abort();
} else {
// assume task is already stopped
}
Ok(Value::Null)
}
"status" => {
let active = hash.contains_key(&upid.task_id);
Ok(active.into())
}
_ => {
bail!("got unknown command '{}'", command);
}
}
})?;
commando_sock.register_command("worker-task-status".into(), move |args| {
let upid = get_upid(args)?;
tokio::spawn(control_future);
let active = WORKER_TASK_LIST.lock().unwrap().contains_key(&upid.task_id);
Ok(active.into())
})?;
Ok(())
}
@ -152,17 +126,14 @@ pub fn abort_worker_async(upid: UPID) {
pub async fn abort_worker(upid: UPID) -> Result<(), Error> {
let target_pid = upid.pid;
let socketname = format!(
"\0{}/proxmox-task-control-{}.sock", PROXMOX_BACKUP_VAR_RUN_DIR, target_pid);
let sock = server::ctrl_sock_from_pid(upid.pid);
let cmd = json!({
"command": "abort-task",
"command": "worker-task-abort",
"args": {
"upid": upid.to_string(),
},
});
super::send_command(socketname, cmd).map_ok(|_| ()).await
super::send_command(sock, cmd).map_ok(|_| ()).await
}
fn parse_worker_status_line(line: &str) -> Result<(String, UPID, Option<TaskState>), Error> {
@ -191,9 +162,9 @@ pub fn create_task_log_dirs() -> Result<(), Error> {
.owner(backup_user.uid)
.group(backup_user.gid);
create_path(PROXMOX_BACKUP_LOG_DIR, None, Some(opts.clone()))?;
create_path(buildcfg::PROXMOX_BACKUP_LOG_DIR, None, Some(opts.clone()))?;
create_path(PROXMOX_BACKUP_TASK_DIR, None, Some(opts.clone()))?;
create_path(PROXMOX_BACKUP_VAR_RUN_DIR, None, Some(opts))?;
create_path(buildcfg::PROXMOX_BACKUP_RUN_DIR, None, Some(opts))?;
Ok(())
}).map_err(|err: Error| format_err!("unable to create task log dir - {}", err))?;
@ -275,6 +246,15 @@ impl TaskState {
}
}
pub fn tasktype(&self) -> TaskStateType {
match self {
TaskState::OK { .. } => TaskStateType::OK,
TaskState::Unknown { .. } => TaskStateType::Unknown,
TaskState::Error { .. } => TaskStateType::Error,
TaskState::Warning { .. } => TaskStateType::Warning,
}
}
fn result_text(&self) -> String {
match self {
TaskState::Error { message, .. } => format!("TASK ERROR: {}", message),
@ -363,7 +343,10 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
let lock = lock_task_list_files(true)?;
// TODO remove with 1.x
let mut finish_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN)?;
let had_index_file = !finish_list.is_empty();
let mut active_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?
.into_iter()
.filter_map(|info| {
@ -374,7 +357,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
}
if !worker_is_active_local(&info.upid) {
println!("Detected stopped UPID {}", &info.upid_str);
println!("Detected stopped task '{}'", &info.upid_str);
let now = proxmox::tools::time::epoch_i64();
let status = upid_read_status(&info.upid)
.unwrap_or_else(|_| TaskState::Unknown { endtime: now });
@ -412,33 +395,10 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
}
});
let start = if finish_list.len() > MAX_INDEX_TASKS {
finish_list.len() - MAX_INDEX_TASKS
} else {
0
};
let end = (start+MAX_INDEX_TASKS).min(finish_list.len());
let index_raw = if end > start {
render_task_list(&finish_list[start..end])
} else {
"".to_string()
};
replace_file(
PROXMOX_BACKUP_INDEX_TASK_FN,
index_raw.as_bytes(),
CreateOptions::new()
.owner(backup_user.uid)
.group(backup_user.gid),
)?;
if !finish_list.is_empty() && start > 0 {
if !finish_list.is_empty() {
match std::fs::OpenOptions::new().append(true).create(true).open(PROXMOX_BACKUP_ARCHIVE_TASK_FN) {
Ok(mut writer) => {
for info in &finish_list[0..start] {
for info in &finish_list {
writer.write_all(render_task_line(&info).as_bytes())?;
}
},
@ -448,6 +408,12 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
nix::unistd::chown(PROXMOX_BACKUP_ARCHIVE_TASK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
}
// TODO Remove with 1.x
// for compatibility, if we had an INDEX file, we do not need it anymore
if had_index_file {
let _ = nix::unistd::unlink(PROXMOX_BACKUP_INDEX_TASK_FN);
}
drop(lock);
Ok(())
@ -511,16 +477,9 @@ where
read_task_file(file)
}
enum TaskFile {
Active,
Index,
Archive,
End,
}
pub struct TaskListInfoIterator {
list: VecDeque<TaskListInfo>,
file: TaskFile,
end: bool,
archive: Option<LogRotateFiles>,
lock: Option<File>,
}
@ -535,7 +494,10 @@ impl TaskListInfoIterator {
.iter()
.any(|info| info.state.is_some() || !worker_is_active_local(&info.upid));
if needs_update {
// TODO remove with 1.x
let index_exists = std::path::Path::new(PROXMOX_BACKUP_INDEX_TASK_FN).is_file();
if needs_update || index_exists {
drop(lock);
update_active_workers(None)?;
let lock = lock_task_list_files(false)?;
@ -554,12 +516,11 @@ impl TaskListInfoIterator {
Some(logrotate.files())
};
let file = if active_only { TaskFile::End } else { TaskFile::Active };
let lock = if active_only { None } else { Some(read_lock) };
Ok(Self {
list: active_list.into(),
file,
end: active_only,
archive,
lock,
})
@ -573,17 +534,9 @@ impl Iterator for TaskListInfoIterator {
loop {
if let Some(element) = self.list.pop_back() {
return Some(Ok(element));
} else if self.end {
return None;
} else {
match self.file {
TaskFile::Active => {
let index = match read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN) {
Ok(index) => index,
Err(err) => return Some(Err(err)),
};
self.list.append(&mut index.into());
self.file = TaskFile::Index;
},
TaskFile::Index | TaskFile::Archive => {
if let Some(mut archive) = self.archive.take() {
if let Some(file) = archive.next() {
let list = match read_task_file(file) {
@ -592,16 +545,12 @@ impl Iterator for TaskListInfoIterator {
};
self.list.append(&mut list.into());
self.archive = Some(archive);
self.file = TaskFile::Archive;
continue;
}
}
self.file = TaskFile::End;
self.end = true;
self.lock.take();
return None;
}
TaskFile::End => return None,
}
}
}
}
@ -635,24 +584,15 @@ struct WorkerTaskData {
pub abort_listeners: Vec<oneshot::Sender<()>>,
}
impl Drop for WorkerTask {
fn drop(&mut self) {
println!("unregister worker");
}
}
impl WorkerTask {
pub fn new(worker_type: &str, worker_id: Option<String>, userid: Userid, to_stdout: bool) -> Result<Arc<Self>, Error> {
println!("register worker");
let upid = UPID::new(worker_type, worker_id, userid)?;
pub fn new(worker_type: &str, worker_id: Option<String>, auth_id: Authid, to_stdout: bool) -> Result<Arc<Self>, Error> {
let upid = UPID::new(worker_type, worker_id, auth_id)?;
let task_id = upid.task_id;
let mut path = std::path::PathBuf::from(PROXMOX_BACKUP_TASK_DIR);
path.push(format!("{:02X}", upid.pstart % 256));
path.push(format!("{:02X}", upid.pstart & 255));
let backup_user = crate::backup::backup_user()?;
@ -660,8 +600,6 @@ impl WorkerTask {
path.push(upid.to_string());
println!("FILE: {:?}", path);
let logger_options = FileLogOptions {
to_stdout: to_stdout,
exclusive: true,
@ -699,14 +637,14 @@ impl WorkerTask {
pub fn spawn<F, T>(
worker_type: &str,
worker_id: Option<String>,
userid: Userid,
auth_id: Authid,
to_stdout: bool,
f: F,
) -> Result<String, Error>
where F: Send + 'static + FnOnce(Arc<WorkerTask>) -> T,
T: Send + 'static + Future<Output = Result<(), Error>>,
{
let worker = WorkerTask::new(worker_type, worker_id, userid, to_stdout)?;
let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
let upid_str = worker.upid.to_string();
let f = f(worker.clone());
tokio::spawn(async move {
@ -721,15 +659,13 @@ impl WorkerTask {
pub fn new_thread<F>(
worker_type: &str,
worker_id: Option<String>,
userid: Userid,
auth_id: Authid,
to_stdout: bool,
f: F,
) -> Result<String, Error>
where F: Send + UnwindSafe + 'static + FnOnce(Arc<WorkerTask>) -> Result<(), Error>
{
println!("register worker thread");
let worker = WorkerTask::new(worker_type, worker_id, userid, to_stdout)?;
let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
let upid_str = worker.upid.to_string();
let _child = std::thread::Builder::new().name(upid_str.clone()).spawn(move || {


@ -19,6 +19,7 @@ use proxmox::tools::vec;
pub use proxmox::tools::fd::Fd;
pub mod acl;
pub mod apt;
pub mod async_io;
pub mod borrow;
pub mod cert;
@ -462,7 +463,7 @@ pub fn run_command(
let output = command.output()
.map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
let output = crate::tools::command_output_as_string(output, exit_code_check)
let output = command_output_as_string(output, exit_code_check)
.map_err(|err| format_err!("command {:?} failed - {}", command, err))?;
Ok(output)

src/tools/apt.rs (new file, 361 lines)

@ -0,0 +1,361 @@
use std::collections::HashSet;
use std::collections::HashMap;
use anyhow::{Error, bail, format_err};
use apt_pkg_native::Cache;
use proxmox::const_regex;
use proxmox::tools::fs::{file_read_optional_string, replace_file, CreateOptions};
use crate::api2::types::APTUpdateInfo;
const APT_PKG_STATE_FN: &str = "/var/lib/proxmox-backup/pkg-state.json";
#[derive(Debug, serde::Serialize, serde::Deserialize)]
/// Some information we cache about the package (update) state, like what pending update version
/// we already notified a user about
pub struct PkgState {
/// simple map from package name to most recently notified (emailed) version
pub notified: Option<HashMap<String, String>>,
/// A list of pending updates
pub package_status: Vec<APTUpdateInfo>,
}
pub fn write_pkg_cache(state: &PkgState) -> Result<(), Error> {
let serialized_state = serde_json::to_string(state)?;
replace_file(APT_PKG_STATE_FN, &serialized_state.as_bytes(), CreateOptions::new())
.map_err(|err| format_err!("Error writing package cache - {}", err))?;
Ok(())
}
pub fn read_pkg_state() -> Result<Option<PkgState>, Error> {
let serialized_state = match file_read_optional_string(&APT_PKG_STATE_FN) {
Ok(Some(raw)) => raw,
Ok(None) => return Ok(None),
Err(err) => bail!("could not read cached package state file - {}", err),
};
serde_json::from_str(&serialized_state)
.map(|s| Some(s))
.map_err(|err| format_err!("could not parse cached package status - {}", err))
}
pub fn pkg_cache_expired() -> Result<bool, Error> {
if let Ok(pbs_cache) = std::fs::metadata(APT_PKG_STATE_FN) {
let apt_pkgcache = std::fs::metadata("/var/cache/apt/pkgcache.bin")?;
let dpkg_status = std::fs::metadata("/var/lib/dpkg/status")?;
let mtime = pbs_cache.modified()?;
if apt_pkgcache.modified()? <= mtime && dpkg_status.modified()? <= mtime {
return Ok(false);
}
}
Ok(true)
}
pub fn update_cache() -> Result<PkgState, Error> {
// update our cache
let all_upgradeable = list_installed_apt_packages(|data| {
data.candidate_version == data.active_version &&
data.installed_version != Some(data.candidate_version)
}, None);
let cache = match read_pkg_state() {
Ok(Some(mut cache)) => {
cache.package_status = all_upgradeable;
cache
},
_ => PkgState {
notified: None,
package_status: all_upgradeable,
},
};
write_pkg_cache(&cache)?;
Ok(cache)
}
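// Sketch (not part of this diff): tying the cache helpers above together.
#[allow(dead_code)]
fn refresh_pkg_state_sketch() -> Result<PkgState, Error> {
    let pkg_state = if pkg_cache_expired()? {
        update_cache()?
    } else {
        match read_pkg_state()? {
            Some(state) => state,
            None => update_cache()?, // no cache yet: build one
        }
    };
    println!("{} updates pending", pkg_state.package_status.len());
    Ok(pkg_state)
}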
const_regex! {
VERSION_EPOCH_REGEX = r"^\d+:";
FILENAME_EXTRACT_REGEX = r"^.*/.*?_(.*)_Packages$";
}
// FIXME: once the 'changelog' API call switches over to 'apt-get changelog' only,
// consider removing this function entirely, as its value is never used anywhere
// then (widget-toolkit doesn't use the value either)
fn get_changelog_url(
package: &str,
filename: &str,
version: &str,
origin: &str,
component: &str,
) -> Result<String, Error> {
if origin == "" {
bail!("no origin available for package {}", package);
}
if origin == "Debian" {
let mut command = std::process::Command::new("apt-get");
command.arg("changelog");
command.arg("--print-uris");
command.arg(package);
let output = crate::tools::run_command(command, None)?; // format: 'http://foo/bar' package.changelog
let output = match output.splitn(2, ' ').next() {
Some(output) => {
if output.len() < 2 {
bail!("invalid output (URI part too short) from 'apt-get changelog --print-uris': {}", output)
}
output[1..output.len()-1].to_owned()
},
None => bail!("invalid output from 'apt-get changelog --print-uris': {}", output)
};
return Ok(output);
} else if origin == "Proxmox" {
// FIXME: Use above call to 'apt changelog <pkg> --print-uris' as well.
// Currently not possible as our packages do not have a URI set in their Release file.
let version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(version, "");
let base = match (FILENAME_EXTRACT_REGEX.regex_obj)().captures(filename) {
Some(captures) => {
let base_capture = captures.get(1);
match base_capture {
Some(base_underscore) => base_underscore.as_str().replace("_", "/"),
None => bail!("incompatible filename, cannot find regex group")
}
},
None => bail!("incompatible filename, doesn't match regex")
};
return Ok(format!("http://download.proxmox.com/{}/{}_{}.changelog",
base, package, version));
}
bail!("unknown origin ({}) or component ({})", origin, component)
}
pub struct FilterData<'a> {
// this is version info returned by APT
pub installed_version: Option<&'a str>,
pub candidate_version: &'a str,
// this is the version info the filter is supposed to check
pub active_version: &'a str,
}
enum PackagePreSelect {
OnlyInstalled,
OnlyNew,
All,
}
pub fn list_installed_apt_packages<F: Fn(FilterData) -> bool>(
filter: F,
only_versions_for: Option<&str>,
) -> Vec<APTUpdateInfo> {
let mut ret = Vec::new();
let mut depends = HashSet::new();
// note: this is not an 'apt update', it just re-reads the cache from disk
let mut cache = Cache::get_singleton();
cache.reload();
let mut cache_iter = match only_versions_for {
Some(name) => cache.find_by_name(name),
None => cache.iter()
};
loop {
match cache_iter.next() {
Some(view) => {
let di = if only_versions_for.is_some() {
query_detailed_info(
PackagePreSelect::All,
&filter,
view,
None
)
} else {
query_detailed_info(
PackagePreSelect::OnlyInstalled,
&filter,
view,
Some(&mut depends)
)
};
if let Some(info) = di {
ret.push(info);
}
if only_versions_for.is_some() {
break;
}
},
None => {
drop(cache_iter);
// also loop through missing dependencies, as they would be installed
for pkg in depends.iter() {
let mut iter = cache.find_by_name(&pkg);
let view = match iter.next() {
Some(view) => view,
None => continue // package not found, ignore
};
let di = query_detailed_info(
PackagePreSelect::OnlyNew,
&filter,
view,
None
);
if let Some(info) = di {
ret.push(info);
}
}
break;
}
}
}
return ret;
}
fn query_detailed_info<'a, F, V>(
pre_select: PackagePreSelect,
filter: F,
view: V,
depends: Option<&mut HashSet<String>>,
) -> Option<APTUpdateInfo>
where
F: Fn(FilterData) -> bool,
V: std::ops::Deref<Target = apt_pkg_native::sane::PkgView<'a>>
{
let current_version = view.current_version();
let candidate_version = view.candidate_version();
let (current_version, candidate_version) = match pre_select {
PackagePreSelect::OnlyInstalled => match (current_version, candidate_version) {
(Some(cur), Some(can)) => (Some(cur), can), // package installed and there is an update
(Some(cur), None) => (Some(cur.clone()), cur), // package installed and up-to-date
(None, Some(_)) => return None, // package could be installed
(None, None) => return None, // broken
},
PackagePreSelect::OnlyNew => match (current_version, candidate_version) {
(Some(_), Some(_)) => return None,
(Some(_), None) => return None,
(None, Some(can)) => (None, can),
(None, None) => return None,
},
PackagePreSelect::All => match (current_version, candidate_version) {
(Some(cur), Some(can)) => (Some(cur), can),
(Some(cur), None) => (Some(cur.clone()), cur),
(None, Some(can)) => (None, can),
(None, None) => return None,
},
};
// get additional information via nested APT 'iterators'
let mut view_iter = view.versions();
while let Some(ver) = view_iter.next() {
let package = view.name();
let version = ver.version();
let mut origin_res = "unknown".to_owned();
let mut section_res = "unknown".to_owned();
let mut priority_res = "unknown".to_owned();
let mut change_log_url = "".to_owned();
let mut short_desc = package.clone();
let mut long_desc = "".to_owned();
let fd = FilterData {
installed_version: current_version.as_deref(),
candidate_version: &candidate_version,
active_version: &version,
};
if filter(fd) {
if let Some(section) = ver.section() {
section_res = section;
}
if let Some(prio) = ver.priority_type() {
priority_res = prio;
}
// assume every package has only one origin file (not
// origin, but origin *file*, for some reason those seem to
// be different concepts in APT)
let mut origin_iter = ver.origin_iter();
let origin = origin_iter.next();
if let Some(origin) = origin {
if let Some(sd) = origin.short_desc() {
short_desc = sd;
}
if let Some(ld) = origin.long_desc() {
long_desc = ld;
}
// the package files appear in priority order, meaning
// the one for the candidate version is first - this is fine
// however, as the source package should be the same for all
// versions anyway
let mut pkg_iter = origin.file();
let pkg_file = pkg_iter.next();
if let Some(pkg_file) = pkg_file {
if let Some(origin_name) = pkg_file.origin() {
origin_res = origin_name;
}
let filename = pkg_file.file_name();
let component = pkg_file.component();
// build changelog URL from gathered information
// ignore errors, use empty changelog instead
let url = get_changelog_url(&package, &filename,
&version, &origin_res, &component);
if let Ok(url) = url {
change_log_url = url;
}
}
}
if let Some(depends) = depends {
let mut dep_iter = ver.dep_iter();
loop {
let dep = match dep_iter.next() {
Some(dep) if dep.dep_type() != "Depends" => continue,
Some(dep) => dep,
None => break
};
let dep_pkg = dep.target_pkg();
let name = dep_pkg.name();
depends.insert(name);
}
}
return Some(APTUpdateInfo {
package,
title: short_desc,
arch: view.arch(),
description: long_desc,
change_log_url,
origin: origin_res,
version: candidate_version.clone(),
old_version: match current_version {
Some(vers) => vers,
None => "".to_owned()
},
priority: priority_res,
section: section_res,
});
}
}
return None;
}


@ -47,6 +47,7 @@ pub struct FileLogOptions {
#[derive(Debug)]
pub struct FileLogger {
file: std::fs::File,
file_name: std::path::PathBuf,
options: FileLogOptions,
}
@ -63,6 +64,23 @@ impl FileLogger {
file_name: P,
options: FileLogOptions,
) -> Result<Self, Error> {
let file = Self::open(&file_name, &options)?;
let file_name: std::path::PathBuf = file_name.as_ref().to_path_buf();
Ok(Self { file, file_name, options })
}
pub fn reopen(&mut self) -> Result<&Self, Error> {
let file = Self::open(&self.file_name, &self.options)?;
self.file = file;
Ok(self)
}
fn open<P: AsRef<std::path::Path>>(
file_name: P,
options: &FileLogOptions,
) -> Result<std::fs::File, Error> {
let file = std::fs::OpenOptions::new()
.read(options.read)
.write(true)
@ -76,7 +94,7 @@ impl FileLogger {
nix::unistd::chown(file_name.as_ref(), Some(backup_user.uid), Some(backup_user.gid))?;
}
Ok(Self { file, options })
Ok(file)
}
pub fn log<S: AsRef<str>>(&mut self, msg: S) {
@ -88,15 +106,21 @@ impl FileLogger {
stdout.write_all(b"\n").unwrap();
}
let now = proxmox::tools::time::epoch_i64();
let rfc3339 = proxmox::tools::time::epoch_to_rfc3339(now).unwrap();
let line = if self.options.prefix_time {
let now = proxmox::tools::time::epoch_i64();
let rfc3339 = match proxmox::tools::time::epoch_to_rfc3339(now) {
Ok(rfc3339) => rfc3339,
Err(_) => "1970-01-01T00:00:00Z".into(), // for safety, should really not happen!
};
format!("{}: {}\n", rfc3339, msg)
} else {
format!("{}\n", msg)
};
self.file.write_all(line.as_bytes()).unwrap();
if let Err(err) = self.file.write_all(line.as_bytes()) {
// avoid panicking, log methods should not do that
// FIXME: or, return result???
eprintln!("error writing to log file - {}", err);
}
}
}
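A sketch of the rotate-and-reopen flow this enables (path and options are illustrative):

let options = FileLogOptions { append: true, prefix_time: true, ..Default::default() };
let mut logger = FileLogger::new("/var/log/proxmox-backup/api/access.log", options)?;
logger.log("request handled");
// an external logrotate renames access.log away, then:
logger.reopen()?; // opens a fresh file at the same path and switches to it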


@ -92,25 +92,25 @@ impl LogRotate {
if filenames.is_empty() {
return Ok(()); // no file means nothing to rotate
}
let count = filenames.len() + 1;
let mut next_filename = self.base_path.clone().canonicalize()?.into_os_string();
next_filename.push(format!(".{}", filenames.len()));
if self.compress && count > 2 {
next_filename.push(".zst");
}
filenames.push(PathBuf::from(next_filename));
let count = filenames.len();
for i in (0..count-1).rev() {
if self.compress
&& filenames[i+0].extension().unwrap_or(std::ffi::OsStr::new("")) != "zst"
&& filenames[i+1].extension().unwrap_or(std::ffi::OsStr::new("")) == "zst"
{
Self::compress(&filenames[i], &filenames[i+1], &options)?;
} else {
rename(&filenames[i], &filenames[i+1])?;
}
if self.compress {
for i in 2..count {
if filenames[i].extension().unwrap_or(std::ffi::OsStr::new("")) != "zst" {
let mut target = filenames[i].clone().into_os_string();
target.push(".zst");
Self::compress(&filenames[i], &target.into(), &options)?;
}
}
}
if let Some(max_files) = max_files {


@ -42,8 +42,10 @@ fn worker_task_abort() -> Result<(), Error> {
let mut rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async move {
let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
let init_result: Result<(), Error> = try_block!({
server::create_task_control_socket()?;
server::register_task_control_commands(&mut commando_sock)?;
server::server_state_init()?;
Ok(())
});
@ -57,7 +59,7 @@ fn worker_task_abort() -> Result<(), Error> {
let res = server::WorkerTask::new_thread(
"garbage_collection",
None,
proxmox_backup::api2::types::Userid::root_userid().clone(),
proxmox_backup::api2::types::Authid::root_auth_id().clone(),
true,
move |worker| {
println!("WORKER {}", worker);

www/AccessControlPanel.js (new file, 34 lines)

@ -0,0 +1,34 @@
Ext.define('PBS.AccessControlPanel', {
extend: 'Ext.tab.Panel',
alias: 'widget.pbsAccessControlPanel',
mixins: ['Proxmox.Mixin.CBind'],
title: gettext('Access Control'),
border: false,
defaults: {
border: false,
},
items: [
{
xtype: 'pbsUserView',
title: gettext('User Management'),
itemId: 'users',
iconCls: 'fa fa-user',
},
{
xtype: 'pbsTokenView',
title: gettext('API Token'),
itemId: 'apitokens',
iconCls: 'fa fa-user-o',
},
{
xtype: 'pbsACLView',
title: gettext('Permissions'),
itemId: 'permissions',
iconCls: 'fa fa-unlock',
},
],
});


@ -63,7 +63,9 @@ Ext.define('PBS.Dashboard', {
updateSubscription: function(store, records, success) {
if (!success) { return; }
let me = this;
let subStatus = records[0].data.status === 'Active' ? 2 : 0; // 2 = all good, 1 = different leves, 0 = none
let status = records[0].data.status || 'unknown';
// 2 = all good, 1 = different levels, 0 = none
let subStatus = status.toLowerCase() === 'active' ? 2 : 0;
me.lookup('subscription').setSubStatus(subStatus);
},
@ -230,8 +232,9 @@ Ext.define('PBS.Dashboard', {
model: 'proxmox-tasks',
proxy: {
type: 'proxmox',
url: '/api2/json/status/tasks',
url: '/api2/json/nodes/localhost/tasks',
extraParams: {
limit: 0,
since: '{sinceEpoch}',
},
},

Some files were not shown because too many files have changed in this diff.