Compare commits


118 Commits

Author SHA1 Message Date
9d79cec4d5 bump version to 0.9.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:13:04 +01:00
4935681cf4 ui: sync jobs: add tooltip for remove vanished
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:07:07 +01:00
669fa672d9 ui: sync jobs: reorder fields
group local ones together on the left side, and source + schedule
on the right side.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:05:48 +01:00
a797583535 ui: sync jobs: fix originalValue of owner and improve label
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:04:42 +01:00
54ed1b2a71 ui: sync jobs: only set default schedule when creating new jobs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 19:04:06 +01:00
8e12e86f0b ui: add shell panel under administration
some users prefer an inline console
we still have the pop-out console in 'Administration'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-04 18:16:49 +01:00
fe7bdc9d29 proxy: also rotate auth.log file
no need to trigger a re-open here, we always re-open that file.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
546b6a23df proxy: logrotate: do not serialize sending async log-reopen commands
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
4fdf13f95f api: factor out auth logger and use for all API authentication failures
we have information here not available in the access log, especially
if the /api2/extjs formatter is used, which encapsulates errors in a
200 response.

So keep the auth log for now, but extend its use from create-ticket
calls to all authentication failures for API calls; this ensures one
can also fail2ban tokens.

Do that logging in a central place, which makes it simple but means
that we do not have the user ID information available to include in
the log.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:26:34 +01:00
385681c9ab worker task: fix passing upid to send command
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:16:55 +01:00
be99df2767 log rotate: only add .zst to new file after second rotation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 17:16:55 +01:00
30200b5c4a ui: fix task description for log rotate
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-04 14:20:44 +01:00
f47c1d3a2f proxy: use new datastore notify settings 2020-11-04 11:54:29 +01:00
6e545d0058 config: allow to configure who receives job notify emails 2020-11-04 11:54:29 +01:00
84006f98b2 ui: SyncJobEdit: fix sending 'delete' values on SyncJob creation
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-04 11:39:52 +01:00
42ca9e918a sync: improve log format 2020-11-04 09:10:56 +01:00
ea93bea7bf proxy: log if there are too many open connections 2020-11-04 08:49:35 +01:00
0081903f7c fix bug #2870: use updated tickets 2020-11-04 08:20:36 +01:00
c53797f627 ui: set default deduplication factor to 1.0 2020-11-04 07:12:55 +01:00
e1d367df47 proxy: use env PROXMOX_DEBUG to enable/disable debug output
We only print early connection errors when this env var is set.
2020-11-04 06:55:57 +01:00
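For context, a minimal sketch of the env-var gate described above (the variable name comes from the commit; everything else is illustrative):

.. code-block:: rust

    fn main() {
        // only emit early connection errors when PROXMOX_DEBUG is set
        let debug = std::env::var("PROXMOX_DEBUG").is_ok();

        let err = "simulated early connection error";
        if debug {
            eprintln!("error accepting connection: {}", err);
        }
    }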
71f413cd27 cleanup: use Arc to count open connections 2020-11-04 06:35:44 +01:00
48aa2b93b7 fix #3106: correctly queue incoming connections 2020-11-04 06:24:42 +01:00
641862ddad bump version to 0.9.5-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:41:26 +01:00
2f08ee1fe3 report: add more commands/files to check
add all of our configuration files in /etc/proxmox-backup/; further,
call some ZFS tools to get that status.

Also, use the subscription command from manager, as we often require
more info than the status. Also, adapt formatting a bit.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:33:16 +01:00
93f077c5cf report: avoid lazy_static for command/files/.. definitions
those are not in a hot code path, and it is not really much work to
build them on the go.

It may not matter much, but it is unnecessary. Rust will probably
inline most of it anyway.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:27:16 +01:00
941342f70e manager: report: call method directly, avoid HTTPS request
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 17:23:43 +01:00
9a556c8a30 manager: add report cli command
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-11-03 15:16:42 +01:00
46dce62be6 report: add webui button for system report
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-11-03 15:16:42 +01:00
b0ef9631e6 report: add api endpoint and function to generate report
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-11-03 15:16:42 +01:00
fb0d9833af ui: task filter: add button icons
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 14:49:04 +01:00
bfe4b7d782 ui: task filter: reorder to avoid wasting vertical space
Includes some eslint fixes and label changes as well; it was too much
work to split those out into their own commit.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-03 14:48:20 +01:00
185dab7678 ui: add panel/Tasks and use it for the node tasks
this is a panel that is heavily inspired by the widget-toolkit's
node/Tasks panel, but is adapted to use the extended api calls of
pbs (e.g. since/until filter)

has a 'filter' panel (like PMG's log tracker gui), but it is collapsible

if we extend the api calls of the other projects, we can merge this
again into the widget-toolkit one and use that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
c1fa057cce api2/node/tasks: add optional until filter
so that users can select specific time ranges with 'since' and 'until'
(e.g. a single day)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
f66565203a api2/status: remove list_task api call
we do not need it anymore, we can do everything with nodes/NODE/tasks
instead

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
a2a7dd1535 api2/node/tasks: add optional since/typefilter/statusfilter
and change all users of the /status/tasks api call to this

with this change we can now delete the /status/tasks api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
e7dd169fdf api2/node/tasks: change limit behaviour when it is 0
instead of returning 0 elements (which does not really make sense anyway),
change it so that there is no limit anymore (besides usize::MAX)

this is technically a breaking change for the api, but I guess
no one is using limit=0 for anything sensible anyway

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
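The described change boils down to a tiny mapping; a sketch under assumed names (not the actual handler code):

.. code-block:: rust

    fn effective_limit(limit: u64) -> usize {
        // limit=0 used to return 0 elements; now it means "no limit"
        if limit == 0 { usize::MAX } else { limit as usize }
    }

    fn main() {
        let tasks: Vec<u32> = (0..5).collect();
        let shown: Vec<_> = tasks.iter().take(effective_limit(0)).collect();
        assert_eq!(shown.len(), 5); // limit=0 no longer hides everything
    }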
fa31f4c54c server/worker_task: add tasktype to return the api type of a taskstate
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 11:35:21 +01:00
038ee59960 cleanup: use const_regex, use BACKUP_ID_REGEX for api too 2020-11-03 06:36:50 +01:00
e1c1533790 fix #3039: use the same ID regex for info and api
in the api we use PROXMOX_SAFE_ID_REGEX for backup ids, but here
(where we use it to list them) we use a local regex

since the first is a superset of the one used here, simply extend
the local one

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-03 06:25:06 +01:00
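For illustration, a sketch of validating an ID against one shared pattern; the exact PROXMOX_SAFE_ID_REGEX definition is an assumption here:

.. code-block:: rust

    use regex::Regex; // regex = "1"

    fn main() {
        // assumed to match PROXMOX_SAFE_ID_REGEX: a leading alnum/underscore,
        // then alnum, '.', '_' or '-'
        let safe_id = Regex::new(r"^[A-Za-z0-9_][A-Za-z0-9._\-]*$").unwrap();
        assert!(safe_id.is_match("vm-100.disk_0"));
        assert!(!safe_id.is_match("../escape"));
    }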
9de7c71a81 docs: extend managing remotes
with information about required privileges and limitations

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 21:13:24 +01:00
aa64e06540 sync: add access check tests
should cover all the current scenarios. remote server-side checks can't
be meaningfully unit-tested, but they are simple enough that they should
hopefully never break.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 21:13:24 +01:00
18077ac633 user.cfg/user info: add test constructors
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 21:13:24 +01:00
a71a009313 proxy: drop now unused UPID import
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 21:08:38 +01:00
b6ba5acd29 proxmox-backup-proxy: use only jobstate for garbage_collection schedule
in case the garbage_collection errors out, we never set the in-memory
state, so if it failed, the last 'good' start time was considered
for the schedule

this could lead to the job running every minute instead of on the
correct schedule

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
4fdf5ddf5b api2/admin/datastore: start the garbage_collection task with our helper
instead of manually, this has the advantage that we now set
the jobstate correctly and can return with an error if it is
currently running (instead of failing in the task)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
c724f65805 server/gc_job: add 'to_stdout'
we will use this for the manual api call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
79c9bf55b9 backup/{dynamic, fixed}_index: improve error message for small index files
index files that were smaller than their respective header size
would fail with

"failed to fill whole buffer"

instead, now check explicitly for the size and fail with
"index too small (size)"

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 21:08:38 +01:00
788d82d9b7 gc: mark_used_chunks: reduce implementation noise
try to reduce some unnecessary lines, and make match arms more precise so
one can see faster what's actually happening.

Also, avoid
> return Err(format_err!(...))
stuff, just use bail!()

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 21:08:38 +01:00
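The two forms are equivalent in anyhow, the error crate this codebase uses; a small sketch:

.. code-block:: rust

    use anyhow::{bail, Error};

    fn check_chunk(len: usize) -> Result<(), Error> {
        if len == 0 {
            // noisy form: return Err(format_err!("chunk is empty"));
            bail!("chunk is empty"); // expands to the same thing
        }
        Ok(())
    }

    fn main() {
        assert!(check_chunk(0).is_err());
        assert!(check_chunk(42).is_ok());
    }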
2f0b92352d garbage collect: improve index error messages
so that in case of a broken index file, the user knows which it is

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-11-02 20:08:50 +01:00
b7f2be5137 log rotate task: make task archive limits be binary based
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
72aa1834dc log rotate task: adapt internal jobstate ID, set worker one to None for now
as we have only one logrotate task currently..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
fe4cc5b1a1 server: implement access log rotation with re-open via command socket
re-use the future we already have for task log rotation to trigger
it.

Move the FileLogger in ApiConfig into an Arc, so that we can actually
update it and have the REST server use the new one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:53:30 +01:00
04b053d87e server: write main daemon PID to run directory
so that we can easily get the main PID of the most recently launched
daemon. Will be used to get the control socket of that one for access
log rotation in a future patch

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:50:24 +01:00
b469011fd1 command socket: make create_control_socket private
this is internal for now; use the commando socket struct
implementation, and ideally not a new one but the existing ones
created in the proxy and api daemons.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:50:24 +01:00
a68768cf31 server: use generalized commando socket for worker tasks commands
Allows extending the use of that socket in the future, e.g., for log
rotate re-open signaling.

To reflect this we use a more general name, and change the commands
to a clearer namespace.

Both are actually somewhat of a breaking change, but the single real
world issue it should be able to cause is that one won't be able to
stop tasks from older daemons, which still use the older abstract
socket name format.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:48:04 +01:00
f3df613cb7 server: add CommandoSocket where multiple users can register commands
This is a preparatory step to replace the task control socket with it
and provide a "reopen log file" command for the rest server.

Kept it simple by disallowing the registration of new commands after the
socket gets spawned; this avoids the need for locking.

If we really need that, we can always wrap it in an Arc<RwLock<..>> or
something like that, or even nicer, register at compile time.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
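A rough sketch of the register-before-spawn idea described above (type and method names assumed, not the real API):

.. code-block:: rust

    use std::collections::HashMap;

    struct CommandoSocket {
        commands: HashMap<String, fn() -> String>,
    }

    impl CommandoSocket {
        fn new() -> Self {
            Self { commands: HashMap::new() }
        }

        // registration only works before spawn() consumes the socket,
        // so the command map never needs a lock afterwards
        fn register_command(&mut self, name: &str, handler: fn() -> String) {
            self.commands.insert(name.to_string(), handler);
        }

        fn spawn(self) {
            // `self` is moved here; further register_command() calls won't compile
        }
    }

    fn main() {
        let mut socket = CommandoSocket::new();
        socket.register_command("worker-task-status", || "running".to_string());
        socket.spawn();
    }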
056ee78567 config: network: use error message when parsing netmask failed
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
3cd529ea51 tools: file logger: avoid some possible unwraps in log method
writing to a file can explode quite easily.
time formatting to rfc3339 should be more robust, but it has a few
conditions where it could fail, so catch that too (and only really
do it if required).

The writes to stdout are left as is; that normally is redirected to the
journal, which is in memory, and thus breaks later than most stuff,
and at that point we probably do not care anymore anyway.

It could make sense to actually return a result here..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 19:32:22 +01:00
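A minimal sketch of the defensive write pattern described above (the real FileLogger lives in the tools module; names here are illustrative):

.. code-block:: rust

    use std::io::Write;

    fn log_line(file: &mut dyn Write, line: &str) {
        // never unwrap() on the write: a full disk must not panic the daemon
        if let Err(err) = writeln!(file, "{}", line) {
            eprintln!("could not write log line: {}", err);
        }
    }

    fn main() {
        let mut buf: Vec<u8> = Vec::new();
        log_line(&mut buf, "successful auth for user 'john@pbs'");
        assert!(!buf.is_empty());
    }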
3aade17125 tools: log rotate: compressing rotated files
We always renamed the last one to a file without a compression
extension, even if it was .zst previously. So always add the correct
ending to the new last one, if compress was true.

Further, we cannot detect whether compression is required if we
already rotated (renamed) it to the file with .zst included.

So check on rotation itself if it would be a "no .zst" -> ".zst"
transition, and call compress there.

it really should be OK now *knocking on wood*

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 18:35:13 +01:00
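A simplified sketch of that transition check (helper name assumed; the real logic lives in the log rotate helper):

.. code-block:: rust

    // compress exactly when rotation crosses the "no .zst" -> ".zst" boundary
    fn needs_compression(old_name: &str, new_name: &str) -> bool {
        !old_name.ends_with(".zst") && new_name.ends_with(".zst")
    }

    fn main() {
        assert!(needs_compression("auth.log.1", "auth.log.2.zst"));
        assert!(!needs_compression("auth.log.2.zst", "auth.log.3.zst"));
    }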
1dc2fe20dd tools: log rotate: fix file ending for compressed files
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 18:35:13 +01:00
645a47ff6e config: support netmask when parsing interfaces file 2020-11-02 14:32:35 +01:00
b1456a8ea7 ui: fix verificationjob task description 2020-11-02 10:15:52 +01:00
a9fcbec9dc file logger: allow reopening file
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 10:03:10 +01:00
346a488e35 pull out /run and /var/log directory constants to buildcfg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 10:03:10 +01:00
3066f56481 notify: add link to server GUI 2020-11-02 09:12:14 +01:00
07ca4e3609 gc: remove extra empty lines in email notification template 2020-11-02 09:12:14 +01:00
dcd75edb72 ui: fix dashboard subscription
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-02 08:08:44 +01:00
59af9ca98e sync: allow sync for non-superusers
by requiring
- Datastore.Backup permission for target datastore
- Remote.Read permission for source remote/datastore
- Datastore.Prune if vanished snapshots should be removed
- Datastore.Modify if another user should own the freshly synced
snapshots

reading a sync job entry only requires knowing about both the source
remote and the target datastore.

note that this does not affect the Authid used to authenticate with the
remote, which of course also needs permissions to access the source
datastore.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 07:10:12 +01:00
f1694b062d fix #2864: add owner option to sync
instead of hard-coding 'backup@pam'. this allows a bit more flexibility
(e.g., syncing to a datastore that can directly be used as restore
source) without overly complicating things.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-11-02 07:08:05 +01:00
fa7aceeb15 manager: subscription commands s/delete/remove/
no idea why I added it as "delete", for all other such operations we
use the "remove" sub-command...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-11-01 13:19:30 +01:00
0e16f57e37 apt: sort packages for update notification mail
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 22:58:52 +01:00
bc00289bce add daily update and maintenance task
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 22:51:26 +01:00
86d602457a api: apt: implement support to send notification email on new updates
again, base idea copied off PVE, but we save the information about
which pending version we already sent a mail for in a separate
object, to keep the api return type APTUpdateInfo clean.

This also makes a few things a bit easier, as we can update the
package status without saving/restoring the notify information.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 22:51:26 +01:00
33508b1237 api: implement apt pkg cache
based on the idea of PVE

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:42:49 +01:00
b282557563 api: apt: factor out and improve calling apt update
apt changes some of its state/cache even if it errors out, most of
the time, so we actually want to print both stderr and stdout.

Further, only warn if its exit code is non-zero; for the same
rationale, it may still report available updates even if it errors
(e.g., because a future pbs-enterprise repo is additionally configured
but not accessible).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:59 +01:00
e6513bd5de api/tools: split out apt helpers from api to own module
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
5911f74096 api types: derive Debug for APTUpdateInfo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
0bb74e54b1 worker task: drop debug prints
they are not useful anymore, rather noisy

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
f254a27071 tools: do not unnecessarily prefix module path
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
d0abba3397 trivial: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 21:31:36 +01:00
54adea366c ui: ACL view: do not save grid state
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 11:36:48 +01:00
ba2e4b15da ui: improve ACL view layout
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 11:33:31 +01:00
0ccdd1b6a4 ui: bump sync/verify grid stateid
so that people get the improved view by default

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 10:58:57 +01:00
fb66c85363 ui: improve sync job view layout
Avoid overuse of flex, that is as bad as having everything at fixed widths.

In spirit similar to the previous commit for the verify panel, see
that for some rationale.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 10:56:51 +01:00
aae4c30ceb ui: improve verify job view layout, show job-id
Avoid overuse of flex, that is as bad as having everything at fixed widths.

* Set date-time fields to 150 px as they are fixed-width text.
* Duration is maximally 3 units, so it can be made fixed too.
* Schedule is flex with lower and upper limits, this is useful as
  it's a field which can be both quite short (daily) and long
  (mon..fri *-10..12-1..7 02:00/30:30)
* Status and comment are flex, this way we always get a filled grid

Move status after the last-verify date and duration fields; this
increases information density at the left of the grid - reducing the
need for eye movement - and groups the "information about the last
job" together more nicely.

Show the job-id by default, even if auto-generated when adding jobs
over the gui, as it can help to find the respective job faster when
getting a mail with an error.

Reported-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 10:56:51 +01:00
0656344ae4 ui: administration: set icons for tabs
orient on PVE; the ones for Updates and ServerStatus should be
self-explanatory.

Services is in PVE named "System", but reusing that cogs icon makes
similar sense here too, and seems in line with the search results of a
"service icons" query.

Syslog is the same as our general log icon, but as we also use this
normally for worker task logs and that is present here too, I
changed the worker task log icon to the alternative list, which
resembles a task view window - so IMO even better than before.

Sync that change also into the always present tasks button at the top
right.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-31 09:11:11 +01:00
1143f6ca93 cleanup: fix wording in GC status emails 2020-10-31 07:56:42 +01:00
90e94aa280 docs: client: avoid that repo gets detected as email address
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 17:08:08 +01:00
c0af05e143 docs: fixup bad RST table format
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 17:05:49 +01:00
4aef06f1b6 docs: add token example to client, and reformat a bit
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 17:01:22 +01:00
034cf70b72 docs: add API tokens to documentation
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:46:19 +01:00
8b600f9965 api: replace auth_id with auth-id
in parameters, and fix up the completion for the ACL update parameter.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:46:19 +01:00
e4e280183e privs: add some more comments explaining privileges
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:42:30 +01:00
2fc45a97a9 privs: remove PRIV_REMOVE_PRUNE
it's not used anywhere, and not needed either until the day we might
implement push syncs.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:42:26 +01:00
b7ce2e575f verify jobs: add permissions
equivalent to verifying a whole datastore, except for reading job
(entries), which is accessible to regular Datastore.Audit/Backup users
as well.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
09f6a24078 verify: introduce & use new Datastore.Verify privilege
for verifying a whole datastore. Datastore.Backup now allows verifying
only backups owned by the triggering user.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
b728a69e7d privs: use Datastore.Modify|Backup to set backup notes
Datastore.Backup is limited to owned groups, as usual.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
1401f4be5f privs: allow reading notes with Datastore.Audit
they are returned when reading the manifest, which just requires
Datastore.Audit as well. Datastore.Read is for reading backup contents,
not metadata.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 16:36:52 +01:00
fdb4416bae ui: permission path selector: cbind typeAhead to editable
ExtJS throws an exception if 'typeAhead' is true but 'editable' is
false.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 16:31:53 +01:00
abe1edfc95 update d/control
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-30 16:11:50 +01:00
e4a864bd21 impl From<Authid> for Userid
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-30 15:19:07 +01:00
7a7368ee08 bump proxmox dependency to 0.7.0 for totp updates
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-30 15:19:07 +01:00
e707fd2b3b ui: Utils: add product specific task descriptions
and sort them alphabetically

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-30 14:05:17 +01:00
625a56b75e server/rest: accept also = as token separator
Like we do in Proxmox VE

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:34:26 +01:00
6d8a1ac9e4 server/rest: use constants for HTTP headers
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:33:36 +01:00
362739054e api tokens: add authorization method
and properly decode the secret (which is a no-op with the current scheme).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 13:15:14 +01:00
2762481cc8 proxmox-backup-manager: add subscription commands
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:03:58 +01:00
652506e6b8 api: define subscription module and methods as public
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 13:03:58 +01:00
926d253126 api: define subscription key schema and use it
nicer to have the correct regex checked in parameter verification
already

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 12:57:14 +01:00
1cd951c93e proxy: fix warnings
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-30 12:49:43 +01:00
3b707fbb8f proxy: split out code to run garbage collection job 2020-10-30 11:01:45 +01:00
b15751bf55 check_schedule cleanup: use &str instead of String
This way we can avoid many clone() calls.
2020-10-30 09:49:50 +01:00
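The cleanup boils down to borrowing instead of owning; a sketch with an assumed signature:

.. code-block:: rust

    // before: fn check_schedule(worker_type: String, event_str: String, id: String)
    // callers had to clone() their strings for every call
    fn check_schedule(worker_type: &str, event_str: &str, id: &str) {
        println!("{} {} {}", worker_type, event_str, id);
    }

    fn main() {
        let id = String::from("gc-store1");
        check_schedule("garbage_collection", "daily", &id); // borrow, no clone
        check_schedule("garbage_collection", "daily", &id); // `id` still usable
    }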
82c05b41fa proxy: extract commonly used logic for scheduling into new function
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-30 09:49:50 +01:00
b8d9079835 proxy: move prune logic into new file
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
2020-10-30 09:49:50 +01:00
f8a682a873 ui: user menu: allow changing language while logged in
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-30 09:46:04 +01:00
b03a19b6e8 bump version to 0.9.4-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 20:25:37 +01:00
603a6bd183 d/postinst: followup: grep and sed use different regex escaping ..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 20:25:37 +01:00
83b039af35 d/postinst: make more resilient
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 19:58:41 +01:00
92 changed files with 3221 additions and 1121 deletions

View File

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "0.9.4"
version = "0.9.6"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"
@ -38,7 +38,7 @@ pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pathpatterns = "0.1.2"
proxmox = { version = "0.6.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox = { version = "0.7.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0"

View File

@ -19,7 +19,8 @@ USR_SBIN := \
SERVICE_BIN := \
proxmox-backup-api \
proxmox-backup-banner \
proxmox-backup-proxy \
proxmox-daily-update
ifeq ($(BUILD_MODE), release)
CARGO_BUILD_ARGS += --release

debian/changelog
View File

@ -1,3 +1,74 @@
rust-proxmox-backup (0.9.6-1) unstable; urgency=medium
* fix #3106: improve queueing new incoming connections
* fix #2870: sync: ensure an updated ticket is used, if available
* proxy: log if there are too many open connections
* ui: SyncJobEdit: fix sending 'delete' values on SyncJob creation
* datastore config: allow to configure who receives job notify emails
* ui: fix task description for log rotate
* proxy: also rotate auth.log file
* ui: add shell panel under administration
* ui: sync jobs: only set default schedule when creating new jobs and some
other small fixes
-- Proxmox Support Team <support@proxmox.com> Wed, 04 Nov 2020 19:12:57 +0100
rust-proxmox-backup (0.9.5-1) unstable; urgency=medium
* ui: user menu: allow one to change the language while staying logged in
* proxmox-backup-manager: add subscription commands
* server/rest: also accept = as token separator
* privs: allow reading snapshot notes with Datastore.Audit
* privs: enforce Datastore.Modify|Backup to set backup notes
* verify: introduce and use new Datastore.Verify privilege
* docs: add API tokens to documentation
* ui: various smaller layout and icon improvements
* api: implement apt pkg cache for caching pending updates
* api: apt: implement support to send notification email on new updates
* add daily update and maintenance task
* fix #2864: add owner option to sync
* sync: allow sync for non-superusers under special conditions
* config: support deprecated netmask when parsing interfaces file
* server: implement access log rotation with re-open via command socket
* garbage collect: improve index error messages
* fix #3039: use the same ID regex for info and api
* ui: administration: allow extensive filtering of worker tasks
* report: add api endpoint and function to generate report
-- Proxmox Support Team <support@proxmox.com> Tue, 03 Nov 2020 17:41:17 +0100
rust-proxmox-backup (0.9.4-2) unstable; urgency=medium
* make postinst (update) script more resilient
-- Proxmox Support Team <support@proxmox.com> Thu, 29 Oct 2020 20:09:30 +0100
rust-proxmox-backup (0.9.4-1) unstable; urgency=medium
* implement API-token

debian/control
View File

@ -34,10 +34,10 @@ Build-Depends: debhelper (>= 11),
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-utils-0.1+default-dev,
librust-proxmox-0.7+api-macro-dev,
librust-proxmox-0.7+default-dev,
librust-proxmox-0.7+sortable-macro-dev,
librust-proxmox-0.7+websocket-dev,
librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.6+default-dev (>= 0.6.1-~~),
librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),

debian/postinst
View File

@ -15,12 +15,24 @@ case "$1" in
fi
deb-systemd-invoke $_dh_action proxmox-backup.service proxmox-backup-proxy.service >/dev/null || true
# FIXME: Remove with 1.1
if test -n "$2"; then
if dpkg --compare-versions "$2" 'lt' '0.9.4-1'; then
if grep -s -q -P -e '^\s+verify-schedule ' /etc/proxmox-backup/datastore.cfg; then
echo "NOTE: drop all verify schedules from datastore config."
echo "You can now add more flexible verify jobs"
flock -w 30 /etc/proxmox-backup/.datastore.lck \
sed -i '/^\s\+verify-schedule /d' /etc/proxmox-backup/datastore.cfg || true
fi
fi
if dpkg --compare-versions "$2" 'le' '0.9.5-1'; then
chown --quiet backup:backup /var/log/proxmox-backup/api/auth.log || true
fi
fi
# FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
fi
;;

debian/prerm
View File

@ -6,5 +6,6 @@ set -e
# modeled after dh_systemd_start output
if [ -d /run/systemd/system ] && [ "$1" = remove ]; then
deb-systemd-invoke stop 'proxmox-backup-banner.service' 'proxmox-backup-proxy.service' \
'proxmox-backup.service' 'proxmox-backup-daily-update.timer' >/dev/null || true
fi

View File

@ -1,10 +1,13 @@
etc/proxmox-backup-proxy.service /lib/systemd/system/
etc/proxmox-backup.service /lib/systemd/system/
etc/proxmox-backup-banner.service /lib/systemd/system/
etc/proxmox-backup-daily-update.service /lib/systemd/system/
etc/proxmox-backup-daily-update.timer /lib/systemd/system/
etc/pbstest-beta.list /etc/apt/sources.list.d/
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-banner
usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
usr/sbin/proxmox-backup-manager
usr/share/javascript/proxmox-backup/index.hbs
usr/share/javascript/proxmox-backup/css/ext6-pbs.css

debian/rules
View File

@ -38,6 +38,7 @@ override_dh_auto_install:
LIBDIR=/usr/lib/$(DEB_HOST_MULTIARCH)
override_dh_installsystemd:
dh_installsystemd -pproxmox-backup-server proxmox-backup-daily-update.timer
# note: we start/try-reload-restart services manually in postinst
dh_installsystemd --no-start --no-restart-after-upgrade

View File

@ -12,31 +12,31 @@ on the backup server.
[[username@]server[:port]:]datastore
The default value for ``username`` is ``root@pam``. If no server is specified,
the default is the local host (``localhost``).
You can specify a port if your backup server is only reachable on a different
port (e.g. with NAT and port forwarding).
Note that if the server is an IPv6 address, you have to write it with square
brackets (for example, `[fe80::01]`).
You can pass the repository with the ``--repository`` command line option, or
by setting the ``PBS_REPOSITORY`` environment variable.
Here are some examples of valid repositories and their real values:
================================ ================== ================== ===========
Example                          User               Host:Port          Datastore
================================ ================== ================== ===========
mydatastore                      ``root@pam``       localhost:8007     mydatastore
myhostname:mydatastore           ``root@pam``       myhostname:8007    mydatastore
user@pbs@myhostname:mydatastore  ``user@pbs``       myhostname:8007    mydatastore
user\@pbs!token@host:store       ``user@pbs!token`` myhostname:8007    mydatastore
192.168.55.55:1234:mydatastore   ``root@pam``       192.168.55.55:1234 mydatastore
[ff80::51]:mydatastore           ``root@pam``       [ff80::51]:8007    mydatastore
[ff80::51]:1234:mydatastore      ``root@pam``       [ff80::51]:1234    mydatastore
================================ ================== ================== ===========
Environment Variables
---------------------
@ -45,16 +45,16 @@ Environment Variables
The default backup repository.
``PBS_PASSWORD``
When set, this value is used for the password required for the backup server.
You can also set this to an API token secret.
``PBS_ENCRYPTION_PASSWORD``
When set, this value is used to access the secret encryption key (if
protected by password).
``PBS_FINGERPRINT`` When set, this value is used to verify the server
certificate (only used if the system CA certificates cannot validate the
certificate).
Output Format

View File

@ -79,4 +79,17 @@ either start it manually on the GUI or provide it with a schedule (see
└────────────┴───────┴────────┴──────────────┴───────────┴─────────┘
# proxmox-backup-manager sync-job remove pbs2-local
For setting up sync jobs, the configuring user needs the following permissions:
#. ``Remote.Read`` on the ``/remote/{remote}/{remote-store}`` path
#. at least ``Datastore.Backup`` on the local target datastore (``/datastore/{store}``)
If the ``remove-vanished`` option is set, ``Datastore.Prune`` is required on
the local datastore as well. If the ``owner`` option is not set (defaulting to
``backup@pam``) or set to something other than the configuring user,
``Datastore.Modify`` is required as well.
.. note:: A sync job can only sync backup groups that the configured remote's
user/API token can read. If a remote is configured with a user/API token that
only has ``Datastore.Backup`` privileges, only the limited set of accessible
snapshots owned by that user/API token can be synced.

View File

@ -70,7 +70,7 @@ The resulting user list looks like this:
│ root@pam │ 1 │ │ │ │ │ Superuser │
└──────────┴────────┴────────┴───────────┴──────────┴──────────────────┴──────────────────┘
Newly created users do not have any permissions. Please read the Access Control
section to learn how to set access permissions.
If you want to disable a user account, you can do that by setting ``--enable`` to ``0``
@ -85,15 +85,69 @@ Or completely remove the user with:
# proxmox-backup-manager user remove john@pbs
.. _user_tokens:
API Tokens
----------
Any authenticated user can generate API tokens which can in turn be used to
configure various clients, instead of directly providing the username and
password.
API tokens serve two purposes:
#. Easy revocation in case a client gets compromised
#. Limit permissions for each client/token within the user's permissions
An API token consists of two parts: an identifier consisting of the user name,
the realm and a tokenname (``user@realm!tokenname``), and a secret value. Both
need to be provided to the client in place of the user ID (``user@realm``) and
the user password, respectively.
The API token is passed from the client to the server by setting the
``Authorization`` HTTP header with method ``PBSAPIToken`` to the value
``TOKENID:TOKENSECRET``.
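For illustration, a sketch of assembling that header in a client (the token values are the ones from the example below; per another commit in this range, `=` is also accepted as separator):

.. code-block:: rust

    fn main() {
        let token_id = "john@pbs!client1";
        let secret = "d63e505a-e3ec-449a-9bc7-1da610d4ccde";

        // header value as described above; '=' works as separator too
        let authorization = format!("PBSAPIToken {}:{}", token_id, secret);
        println!("Authorization: {}", authorization);
    }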
Generating new tokens can be done using ``proxmox-backup-manager`` or the GUI:
.. code-block:: console
# proxmox-backup-manager user generate-token john@pbs client1
Result: {
"tokenid": "john@pbs!client1",
"value": "d63e505a-e3ec-449a-9bc7-1da610d4ccde"
}
.. note:: The displayed secret value needs to be saved, since it cannot be
displayed again after generating the API token.
The ``user list-tokens`` sub-command can be used to display tokens and their
metadata:
.. code-block:: console
# proxmox-backup-manager user list-tokens john@pbs
┌──────────────────┬────────┬────────┬─────────┐
│ tokenid │ enable │ expire │ comment │
╞══════════════════╪════════╪════════╪═════════╡
│ john@pbs!client1 │ 1 │ │ │
└──────────────────┴────────┴────────┴─────────┘
Similarly, the ``user delete-token`` subcommand can be used to delete a token
again.
Newly generated API tokens don't have any permissions. Please read the next
section to learn how to set access permissions.
.. _user_acl:
Access Control
--------------
By default new users and API tokens do not have any permission. Instead you
need to specify what is allowed and what is not. You can do this by assigning
roles to users/tokens on specific objects like datastores or remotes. The
following roles exist:
**NoAccess**
@ -148,20 +202,21 @@ The data represented in each field is as follows:
#. The object on which the permission is set. This can be a specific object
(single datastore, remote, etc.) or a top-level object which, with
propagation enabled, also represents all children of the object.
#. The user(s)/token(s) for which the permission is set
#. The role being set
You can manage permissions via **Configuration -> Access Control ->
Permissions** in the web interface. Likewise, you can use the ``acl``
subcommand to manage and monitor user permissions from the command line. For
example, the command below will add the user ``john@pbs`` as a
**DatastoreAdmin** for the datastore ``store1``, located at
``/backup/disk1/store1``:
.. code-block:: console
# proxmox-backup-manager acl update /datastore/store1 DatastoreAdmin --auth-id john@pbs
You can list the ACLs of each user/token using the following command:
.. code-block:: console
@ -172,7 +227,7 @@ You can monitor the roles of each user using the following command:
│ john@pbs │ /datastore/disk1 │ 1 │ DatastoreAdmin │
└──────────┴──────────────────┴───────────┴────────────────┘
A single user/token can be assigned multiple permission sets for different datastores.
.. Note::
Naming convention is important here. For datastores on the host,
@ -183,4 +238,39 @@ A single user can be assigned multiple permission sets for different datastores.
remote (see `Remote` below) and ``{storename}`` is the name of the datastore on
the remote.
API Token permissions
~~~~~~~~~~~~~~~~~~~~~
API token permissions are calculated based on ACLs containing their ID,
independent of those of their corresponding user. The resulting permission set
on a given path is then intersected with that of the corresponding user, as
sketched after the list below.
In practice this means:
#. API tokens require their own ACL entries
#. API tokens can never do more than their corresponding user
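In bitmask terms that intersection is literally a bitwise AND; a sketch with assumed privilege values:

.. code-block:: rust

    const PRIV_DATASTORE_BACKUP: u64 = 1 << 0; // assumed bit assignments
    const PRIV_DATASTORE_PRUNE: u64 = 1 << 1;

    // a token's effective privileges can never exceed its owning user's
    fn effective_privs(user_privs: u64, token_privs: u64) -> u64 {
        user_privs & token_privs
    }

    fn main() {
        let user = PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_PRUNE;
        let token = PRIV_DATASTORE_BACKUP;
        assert_eq!(effective_privs(user, token), PRIV_DATASTORE_BACKUP);
    }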
Effective permissions
~~~~~~~~~~~~~~~~~~~~~
To calculate and display the effective permission set of a user or API token,
you can use the ``proxmox-backup-manager user permissions`` command:
.. code-block:: console
# proxmox-backup-manager user permissions john@pbs --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1
- Datastore.Audit (*)
- Datastore.Backup (*)
- Datastore.Modify (*)
- Datastore.Prune (*)
- Datastore.Read (*)
# proxmox-backup-manager acl update /datastore/store1 DatastoreBackup --auth-id 'john@pbs!client1'
# proxmox-backup-manager user permissions 'john@pbs!client1' --path /datastore/store1
Privileges with (*) have the propagate flag set
Path: /datastore/store1
- Datastore.Backup (*)

View File

@ -1,9 +1,11 @@
include ../defines.mk
UNITS := \
proxmox-backup-daily-update.timer

DYNAMIC_UNITS := \
proxmox-backup-banner.service \
proxmox-backup-daily-update.service \
proxmox-backup.service \
proxmox-backup-proxy.service

View File

@ -0,0 +1,8 @@
[Unit]
Description=Daily Proxmox Backup Server update and maintenance activities
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=%LIBEXECDIR%/proxmox-backup/proxmox-daily-update

View File

@ -0,0 +1,10 @@
[Unit]
Description=Daily Proxmox Backup Server update and maintenance activities
[Timer]
OnCalendar=*-*-* 1:00
RandomizedDelaySec=5h
Persistent=true
[Install]
WantedBy=timers.target

View File

@ -9,6 +9,7 @@ After=proxmox-backup.service
Type=notify
ExecStart=%LIBEXECDIR%/proxmox-backup/proxmox-backup-proxy
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/proxmox-backup/proxy.pid
Restart=on-failure
User=%PROXY_USER%
Group=%PROXY_USER%

View File

@ -7,6 +7,7 @@ After=network.target
Type=notify
ExecStart=%LIBEXECDIR%/proxmox-backup/proxmox-backup-api
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/proxmox-backup/api.pid
Restart=on-failure
[Install]

View File

@ -12,7 +12,6 @@ use proxmox::{http_err, list_subdirs_api_method};
use crate::tools::ticket::{self, Empty, Ticket};
use crate::auth_helpers::*;
use crate::api2::types::*;
use crate::tools::{FileLogOptions, FileLogger};
use crate::config::acl as acl_config;
use crate::config::acl::{PRIVILEGES, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
@ -144,20 +143,13 @@ fn create_ticket(
port: Option<u16>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let logger_options = FileLogOptions {
append: true,
prefix_time: true,
..Default::default()
};
let mut auth_log = FileLogger::new("/var/log/proxmox-backup/api/auth.log", logger_options)?;
match authenticate_user(&username, &password, path, privs, port) {
Ok(true) => {
let ticket = Ticket::new("PBS", &username)?.sign(private_auth_key(), None)?;
let token = assemble_csrf_prevention_token(csrf_secret(), &username);
auth_log.log(format!("successful auth for user '{}'", username));
crate::server::rest::auth_logger()?.log(format!("successful auth for user '{}'", username));
Ok(json!({
"username": username,
@ -180,7 +172,7 @@ fn create_ticket(
username,
err.to_string()
);
auth_log.log(&msg);
crate::server::rest::auth_logger()?.log(&msg);
log::error!("{}", msg);
Err(http_err!(UNAUTHORIZED, "permission check failed."))
@ -244,7 +236,7 @@ fn change_password(
#[api(
input: {
properties: {
auth_id: {
"auth-id": {
type: Authid,
optional: true,
},

View File

@ -169,7 +169,7 @@ pub fn read_acl(
optional: true,
schema: ACL_PROPAGATE_SCHEMA,
},
auth_id: {
"auth-id": {
optional: true,
type: Authid,
},

View File

@ -29,7 +29,7 @@ use crate::backup::*;
use crate::config::datastore;
use crate::config::cached_user_info::CachedUserInfo;
use crate::server::WorkerTask;
use crate::server::{jobstate::Job, WorkerTask};
use crate::tools::{
self,
zip::{ZipEncoder, ZipEntry},
@ -42,6 +42,7 @@ use crate::config::acl::{
PRIV_DATASTORE_READ,
PRIV_DATASTORE_PRUNE,
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_VERIFY,
};
fn check_priv_or_backup_owner(
@ -537,7 +538,7 @@ pub fn status(
schema: UPID_SCHEMA,
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP, true), // fixme
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_VERIFY | PRIV_DATASTORE_BACKUP, true),
},
)]
/// Verify backups.
@ -553,6 +554,7 @@ pub fn verify(
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let worker_id;
let mut backup_dir = None;
@ -563,12 +565,18 @@ pub fn verify(
(Some(backup_type), Some(backup_id), Some(backup_time)) => {
worker_id = format!("{}:{}/{}/{:08X}", store, backup_type, backup_id, backup_time);
let dir = BackupDir::new(backup_type, backup_id, backup_time)?;
check_priv_or_backup_owner(&datastore, dir.group(), &auth_id, PRIV_DATASTORE_VERIFY)?;
backup_dir = Some(dir);
worker_type = "verify_snapshot";
}
(Some(backup_type), Some(backup_id), None) => {
worker_id = format!("{}:{}/{}", store, backup_type, backup_id);
let group = BackupGroup::new(backup_type, backup_id);
check_priv_or_backup_owner(&datastore, &group, &auth_id, PRIV_DATASTORE_VERIFY)?;
backup_group = Some(group);
worker_type = "verify_group";
}
@ -578,13 +586,12 @@ pub fn verify(
_ => bail!("parameters do not specify a backup group or snapshot"),
}
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let upid_str = WorkerTask::new_thread(
worker_type,
Some(worker_id.clone()),
auth_id,
auth_id.clone(),
to_stdout,
move |worker| {
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
@ -617,7 +624,16 @@ pub fn verify(
)?;
failed_dirs
} else {
verify_all_backups(datastore, worker.clone(), worker.upid(), None)?
let privs = CachedUserInfo::new()?
.lookup_privs(&auth_id, &["datastore", &store]);
let owner = if privs & PRIV_DATASTORE_VERIFY == 0 {
Some(auth_id)
} else {
None
};
verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)?
};
if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:");
@ -840,20 +856,13 @@ fn start_garbage_collection(
let datastore = DataStore::lookup_datastore(&store)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
println!("Starting garbage collection on store {}", store);
let job = Job::new("garbage_collection", &store)
.map_err(|_| format_err!("garbage collection already running"))?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
let upid_str = WorkerTask::new_thread(
"garbage_collection",
Some(store.clone()),
auth_id.clone(),
to_stdout,
move |worker| {
worker.log(format!("starting garbage collection on store {}", store));
datastore.garbage_collection(&*worker, worker.upid())
},
)?;
let upid_str = crate::server::do_garbage_collection_job(job, datastore, &auth_id, None, to_stdout)
.map_err(|err| format_err!("unable to start garbage collection job on datastore {} - {}", store, err))?;
Ok(json!(upid_str))
}
@ -1562,7 +1571,7 @@ fn get_rrd_stats(
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP, true),
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP, true),
},
)]
/// Get "notes" for a specific backup
@ -1578,7 +1587,7 @@ fn get_notes(
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_AUDIT)?;
let (manifest, _) = datastore.load_manifest(&backup_dir)?;
@ -1610,7 +1619,9 @@ fn get_notes(
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_MODIFY, true),
permission: &Permission::Privilege(&["datastore", "{store}"],
PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_BACKUP,
true),
},
)]
/// Set "notes" for a specific backup
@ -1627,7 +1638,7 @@ fn set_notes(
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_MODIFY)?;
datastore.update_manifest(&backup_dir,|manifest| {
manifest.unprotected["notes"] = notes.into();

View File

@ -1,12 +1,15 @@
use anyhow::{format_err, Error};
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment};
use proxmox::api::router::SubdirMap;
use proxmox::{list_subdirs_api_method, sortable};
use crate::api2::types::*;
use crate::api2::pull::do_sync_job;
use crate::api2::config::sync::{check_sync_job_modify_access, check_sync_job_read_access};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
use crate::server::UPID;
use crate::server::jobstate::{Job, JobState};
@ -27,6 +30,10 @@ use crate::tools::systemd::time::{
type: Array,
items: { type: sync::SyncJobStatus },
},
access: {
description: "Limited to sync jobs where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,
},
)]
/// List all sync jobs
pub fn list_sync_jobs(
@ -35,6 +42,9 @@ pub fn list_sync_jobs(
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SyncJobStatus>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = sync::config()?;
let mut list: Vec<SyncJobStatus> = config
@ -46,6 +56,10 @@ pub fn list_sync_jobs(
} else {
true
}
})
.filter(|job: &SyncJobStatus| {
let as_config: SyncJobConfig = job.clone().into();
check_sync_job_read_access(&user_info, &auth_id, &as_config)
}).collect();
for job in &mut list {
@ -89,7 +103,11 @@ pub fn list_sync_jobs(
schema: JOB_ID_SCHEMA,
}
}
}
},
access: {
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
permission: &Permission::Anybody,
},
)]
/// Runs the sync jobs manually.
fn run_sync_job(
@ -97,11 +115,15 @@ fn run_sync_job(
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, _digest) = sync::config()?;
let sync_job: SyncJobConfig = config.lookup("sync", &id)?;
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
if !check_sync_job_modify_access(&user_info, &auth_id, &sync_job) {
bail!("permission check failed");
}
let job = Job::new("syncjob", &id)?;

View File

@ -68,6 +68,14 @@ pub fn list_datastores(
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
type: Notify,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
@ -187,6 +195,10 @@ pub enum DeletableProperty {
keep_monthly,
/// Delete the keep-yearly property
keep_yearly,
/// Delete the notify-user property
notify_user,
/// Delete the notify property
notify,
}
#[api(
@ -200,6 +212,14 @@ pub enum DeletableProperty {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
type: Notify,
},
"gc-schedule": {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
@ -262,6 +282,8 @@ pub fn update_datastore(
keep_weekly: Option<u64>,
keep_monthly: Option<u64>,
keep_yearly: Option<u64>,
notify: Option<Notify>,
notify_user: Option<Userid>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
) -> Result<(), Error> {
@ -290,6 +312,8 @@ pub fn update_datastore(
DeletableProperty::keep_weekly => { data.keep_weekly = None; },
DeletableProperty::keep_monthly => { data.keep_monthly = None; },
DeletableProperty::keep_yearly => { data.keep_yearly = None; },
DeletableProperty::notify => { data.notify = None; },
DeletableProperty::notify_user => { data.notify_user = None; },
}
}
}
@ -322,6 +346,9 @@ pub fn update_datastore(
if keep_monthly.is_some() { data.keep_monthly = keep_monthly; }
if keep_yearly.is_some() { data.keep_yearly = keep_yearly; }
if notify.is_some() { data.notify = notify; }
if notify_user.is_some() { data.notify_user = notify_user; }
config.set_data(&name, "datastore", &data)?;
datastore::save_config(&config)?;

View File

@ -6,6 +6,7 @@ use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::remote;
use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
@ -22,7 +23,8 @@ use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
},
},
access: {
permission: &Permission::Privilege(&["remote"], PRIV_REMOTE_AUDIT, false),
description: "List configured remotes filtered by Remote.Audit privileges",
permission: &Permission::Anybody,
},
)]
/// List all remotes
@ -31,16 +33,25 @@ pub fn list_remotes(
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<remote::Remote>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = remote::config()?;
let mut list: Vec<remote::Remote> = config.convert_to_typed_array("remote")?;
// don't return password in api
for remote in &mut list {
remote.password = "".to_string();
}
let list = list
.into_iter()
.filter(|remote| {
let privs = user_info.lookup_privs(&auth_id, &["remote", &remote.name]);
privs & PRIV_REMOTE_AUDIT != 0
})
.collect();
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}

View File

@ -2,13 +2,73 @@ use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment};
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl::{
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_MODIFY,
PRIV_DATASTORE_PRUNE,
PRIV_REMOTE_AUDIT,
PRIV_REMOTE_READ,
};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::sync::{self, SyncJobConfig};
// fixme: add access permissions
pub fn check_sync_job_read_access(
user_info: &CachedUserInfo,
auth_id: &Authid,
job: &SyncJobConfig,
) -> bool {
let datastore_privs = user_info.lookup_privs(&auth_id, &["datastore", &job.store]);
if datastore_privs & PRIV_DATASTORE_AUDIT == 0 {
return false;
}
let remote_privs = user_info.lookup_privs(&auth_id, &["remote", &job.remote]);
remote_privs & PRIV_REMOTE_AUDIT != 0
}
// user can run the corresponding pull job
pub fn check_sync_job_modify_access(
user_info: &CachedUserInfo,
auth_id: &Authid,
job: &SyncJobConfig,
) -> bool {
let datastore_privs = user_info.lookup_privs(&auth_id, &["datastore", &job.store]);
if datastore_privs & PRIV_DATASTORE_BACKUP == 0 {
return false;
}
if let Some(true) = job.remove_vanished {
if datastore_privs & PRIV_DATASTORE_PRUNE == 0 {
return false;
}
}
let correct_owner = match job.owner {
Some(ref owner) => {
owner == auth_id
|| (owner.is_token()
&& !auth_id.is_token()
&& owner.user() == auth_id.user())
},
// default sync owner
None => auth_id == Authid::backup_auth_id(),
};
// same permission as changing ownership after syncing
if !correct_owner && datastore_privs & PRIV_DATASTORE_MODIFY == 0 {
return false;
}
let remote_privs = user_info.lookup_privs(&auth_id, &["remote", &job.remote, &job.remote_store]);
remote_privs & PRIV_REMOTE_READ != 0
}
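The owner check above is deliberately asymmetric: a user may modify a job owned by one of their own API tokens, but a token may not touch a job owned by its user. A standalone sketch of just that rule, assuming a simplified stand-in for the crate's Authid type:

#[derive(PartialEq)]
struct SimpleAuthid {
    user: String,
    token: Option<String>,
}

impl SimpleAuthid {
    fn is_token(&self) -> bool { self.token.is_some() }
    fn user(&self) -> &str { &self.user }
}

fn owner_matches(owner: &SimpleAuthid, auth_id: &SimpleAuthid) -> bool {
    owner == auth_id
        || (owner.is_token() && !auth_id.is_token() && owner.user() == auth_id.user())
}

fn main() {
    let user = SimpleAuthid { user: "write@pbs".into(), token: None };
    let token = SimpleAuthid { user: "write@pbs".into(), token: Some("t1".into()) };
    assert!(owner_matches(&token, &user));  // a user may manage their token's job
    assert!(!owner_matches(&user, &token)); // the token may not manage the user's job
}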
#[api(
input: {
@ -19,12 +79,18 @@ use crate::config::sync::{self, SyncJobConfig};
type: Array,
items: { type: sync::SyncJobConfig },
},
access: {
description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,
},
)]
/// List all sync jobs
pub fn list_sync_jobs(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SyncJobConfig>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = sync::config()?;
@ -32,7 +98,11 @@ pub fn list_sync_jobs(
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
let list = list
.into_iter()
.filter(|sync_job| check_sync_job_read_access(&user_info, &auth_id, &sync_job))
.collect();
Ok(list)
}
#[api(
@ -45,6 +115,10 @@ pub fn list_sync_jobs(
store: {
schema: DATASTORE_SCHEMA,
},
owner: {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
@ -65,13 +139,25 @@ pub fn list_sync_jobs(
},
},
},
access: {
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
permission: &Permission::Anybody,
},
)]
/// Create a new sync job.
pub fn create_sync_job(param: Value) -> Result<(), Error> {
pub fn create_sync_job(
param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let sync_job: sync::SyncJobConfig = serde_json::from_value(param.clone())?;
if !check_sync_job_modify_access(&user_info, &auth_id, &sync_job) {
bail!("permission check failed");
}
let (mut config, _digest) = sync::config()?;
@ -100,15 +186,26 @@ pub fn create_sync_job(param: Value) -> Result<(), Error> {
description: "The sync job configuration.",
type: sync::SyncJobConfig,
},
access: {
description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
permission: &Permission::Anybody,
},
)]
/// Read a sync job configuration.
pub fn read_sync_job(
id: String,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<SyncJobConfig, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let (config, digest) = sync::config()?;
let sync_job = config.lookup("sync", &id)?;
if !check_sync_job_read_access(&user_info, &auth_id, &sync_job) {
bail!("permission check failed");
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(sync_job)
@ -120,6 +217,8 @@ pub fn read_sync_job(
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
/// Delete the owner property.
owner,
/// Delete the comment property.
comment,
/// Delete the job schedule.
@ -139,6 +238,10 @@ pub enum DeletableProperty {
schema: DATASTORE_SCHEMA,
optional: true,
},
owner: {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
optional: true,
@ -173,11 +276,16 @@ pub enum DeletableProperty {
},
},
},
access: {
permission: &Permission::Anybody,
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
},
)]
/// Update sync job config.
pub fn update_sync_job(
id: String,
store: Option<String>,
owner: Option<Authid>,
remote: Option<String>,
remote_store: Option<String>,
remove_vanished: Option<bool>,
@ -185,7 +293,10 @@ pub fn update_sync_job(
schedule: Option<String>,
delete: Option<Vec<DeletableProperty>>,
digest: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -202,6 +313,7 @@ pub fn update_sync_job(
if let Some(delete) = delete {
for delete_prop in delete {
match delete_prop {
DeletableProperty::owner => { data.owner = None; },
DeletableProperty::comment => { data.comment = None; },
DeletableProperty::schedule => { data.schedule = None; },
DeletableProperty::remove_vanished => { data.remove_vanished = None; },
@ -221,11 +333,15 @@ pub fn update_sync_job(
if let Some(store) = store { data.store = store; }
if let Some(remote) = remote { data.remote = remote; }
if let Some(remote_store) = remote_store { data.remote_store = remote_store; }
if let Some(owner) = owner { data.owner = Some(owner); }
if schedule.is_some() { data.schedule = schedule; }
if remove_vanished.is_some() { data.remove_vanished = remove_vanished; }
if !check_sync_job_modify_access(&user_info, &auth_id, &data) {
bail!("permission check failed");
}
config.set_data(&id, "sync", &data)?;
sync::save_config(&config)?;
@ -246,9 +362,19 @@ pub fn update_sync_job(
},
},
},
access: {
permission: &Permission::Anybody,
description: "User needs Datastore.Backup on target datastore, and Remote.Read on source remote. Additionally, remove_vanished requires Datastore.Prune, and any owner other than the user themselves requires Datastore.Modify",
},
)]
/// Remove a sync job configuration
pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error> {
pub fn delete_sync_job(
id: String,
digest: Option<String>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let _lock = open_file_locked(sync::SYNC_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
@ -259,10 +385,15 @@ pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error>
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config.sections.get(&id) {
Some(_) => { config.sections.remove(&id); },
None => bail!("job '{}' does not exist.", id),
}
match config.lookup("sync", &id) {
Ok(job) => {
if !check_sync_job_modify_access(&user_info, &auth_id, &job) {
bail!("permission check failed");
}
config.sections.remove(&id);
},
Err(_) => { bail!("job '{}' does not exist.", id) },
};
sync::save_config(&config)?;
@ -280,3 +411,116 @@ pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_SYNC_JOBS)
.post(&API_METHOD_CREATE_SYNC_JOB)
.match_all("id", &ITEM_ROUTER);
#[test]
fn sync_job_access_test() -> Result<(), Error> {
let (user_cfg, _) = crate::config::user::test_cfg_from_str(r###"
user: noperm@pbs
user: read@pbs
user: write@pbs
"###).expect("test user.cfg is not parsable");
let acl_tree = crate::config::acl::AclTree::from_raw(r###"
acl:1:/datastore/localstore1:read@pbs,write@pbs:DatastoreAudit
acl:1:/datastore/localstore1:write@pbs:DatastoreBackup
acl:1:/datastore/localstore2:write@pbs:DatastorePowerUser
acl:1:/datastore/localstore3:write@pbs:DatastoreAdmin
acl:1:/remote/remote1:read@pbs,write@pbs:RemoteAudit
acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
"###).expect("test acl.cfg is not parsable");
let user_info = CachedUserInfo::test_new(user_cfg, acl_tree);
let root_auth_id = Authid::root_auth_id();
let no_perm_auth_id: Authid = "noperm@pbs".parse()?;
let read_auth_id: Authid = "read@pbs".parse()?;
let write_auth_id: Authid = "write@pbs".parse()?;
let mut job = SyncJobConfig {
id: "regular".to_string(),
remote: "remote0".to_string(),
remote_store: "remotestore1".to_string(),
store: "localstore0".to_string(),
owner: Some(write_auth_id.clone()),
comment: None,
remove_vanished: None,
schedule: None,
};
// should work without ACLs
assert_eq!(check_sync_job_read_access(&user_info, &root_auth_id, &job), true);
assert_eq!(check_sync_job_modify_access(&user_info, &root_auth_id, &job), true);
// user without permissions must fail
assert_eq!(check_sync_job_read_access(&user_info, &no_perm_auth_id, &job), false);
assert_eq!(check_sync_job_modify_access(&user_info, &no_perm_auth_id, &job), false);
// reading without proper read permissions on either remote or local must fail
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), false);
// reading without proper read permissions on local end must fail
job.remote = "remote1".to_string();
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), false);
// reading without proper read permissions on remote end must fail
job.remote = "remote0".to_string();
job.store = "localstore1".to_string();
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), false);
// writing without proper write permissions on either end must fail
job.store = "localstore0".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// writing without proper write permissions on local end must fail
job.remote = "remote1".to_string();
// writing without proper write permissions on remote end must fail
job.remote = "remote0".to_string();
job.store = "localstore1".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// reset remote to one where users have access
job.remote = "remote1".to_string();
// user with read permission can only read, but not modify/run
assert_eq!(check_sync_job_read_access(&user_info, &read_auth_id, &job), true);
job.owner = Some(read_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &read_auth_id, &job), false);
job.owner = None;
assert_eq!(check_sync_job_modify_access(&user_info, &read_auth_id, &job), false);
job.owner = Some(write_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &read_auth_id, &job), false);
// user with simple write permission can modify/run
assert_eq!(check_sync_job_read_access(&user_info, &write_auth_id, &job), true);
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
// but can't modify/run with deletion
job.remove_vanished = Some(true);
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// unless they have Datastore.Prune as well
job.store = "localstore2".to_string();
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
// changing owner is not possible
job.owner = Some(read_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// also not to the default 'backup@pam'
job.owner = None;
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), false);
// unless they have Datastore.Modify as well
job.store = "localstore3".to_string();
job.owner = Some(read_auth_id.clone());
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
job.owner = None;
assert_eq!(check_sync_job_modify_access(&user_info, &write_auth_id, &job), true);
Ok(())
}

View File

@ -2,10 +2,17 @@ use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use proxmox::api::{api, Router, RpcEnvironment};
use proxmox::api::{api, Permission, Router, RpcEnvironment};
use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl::{
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_VERIFY,
};
use crate::config::verify::{self, VerificationJobConfig};
#[api(
@ -17,6 +24,12 @@ use crate::config::verify::{self, VerificationJobConfig};
type: Array,
items: { type: verify::VerificationJobConfig },
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_VERIFY,
true),
},
)]
/// List all verification jobs
pub fn list_verification_jobs(
@ -61,7 +74,13 @@ pub fn list_verification_jobs(
schema: VERIFICATION_SCHEDULE_SCHEMA,
},
}
}
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Create a new verification job.
pub fn create_verification_job(param: Value) -> Result<(), Error> {
@ -97,6 +116,12 @@ pub fn create_verification_job(param: Value) -> Result<(), Error> {
description: "The verification job configuration.",
type: verify::VerificationJobConfig,
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP | PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Read a verification job configuration.
pub fn read_verification_job(
@ -167,6 +192,12 @@ pub enum DeletableProperty {
},
},
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Update verification job config.
pub fn update_verification_job(
@ -238,6 +269,12 @@ pub fn update_verification_job(
},
},
},
access: {
permission: &Permission::Privilege(
&["datastore", "{store}"],
PRIV_DATASTORE_VERIFY,
true),
},
)]
/// Remove a verification job configuration
pub fn delete_verification_job(id: String, digest: Option<String>) -> Result<(), Error> {

View File

@ -24,20 +24,21 @@ use crate::server::WorkerTask;
use crate::tools;
use crate::tools::ticket::{self, Empty, Ticket};
pub mod apt;
pub mod disks;
pub mod dns;
pub mod network;
pub mod tasks;
pub mod subscription;
pub(crate) mod rrd;
mod apt;
mod journal;
mod services;
mod status;
mod subscription;
mod syslog;
mod time;
mod report;
pub const SHELL_CMD_SCHEMA: Schema = StringSchema::new("The command to run.")
.format(&ApiStringFormat::Enum(&[
@ -310,6 +311,7 @@ pub const SUBDIRS: SubdirMap = &[
("dns", &dns::ROUTER),
("journal", &journal::ROUTER),
("network", &network::ROUTER),
("report", &report::ROUTER),
("rrd", &rrd::ROUTER),
("services", &services::ROUTER),
("status", &status::ROUTER),

View File

@ -1,302 +1,16 @@
use std::collections::HashSet;
use apt_pkg_native::Cache;
use anyhow::{Error, bail, format_err};
use serde_json::{json, Value};
use proxmox::{list_subdirs_api_method, const_regex};
use proxmox::list_subdirs_api_method;
use proxmox::api::{api, RpcEnvironment, RpcEnvironmentType, Permission};
use proxmox::api::router::{Router, SubdirMap};
use crate::server::WorkerTask;
use crate::tools::http;
use crate::tools::{apt, http};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::api2::types::{Authid, APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA};
const_regex! {
VERSION_EPOCH_REGEX = r"^\d+:";
FILENAME_EXTRACT_REGEX = r"^.*/.*?_(.*)_Packages$";
}
// FIXME: once the 'changelog' API call switches over to 'apt-get changelog' only,
// consider removing this function entirely, as its value is never used anywhere
// then (widget-toolkit doesn't use the value either)
fn get_changelog_url(
package: &str,
filename: &str,
version: &str,
origin: &str,
component: &str,
) -> Result<String, Error> {
if origin == "" {
bail!("no origin available for package {}", package);
}
if origin == "Debian" {
let mut command = std::process::Command::new("apt-get");
command.arg("changelog");
command.arg("--print-uris");
command.arg(package);
let output = crate::tools::run_command(command, None)?; // format: 'http://foo/bar' package.changelog
let output = match output.splitn(2, ' ').next() {
Some(output) => {
if output.len() < 2 {
bail!("invalid output (URI part too short) from 'apt-get changelog --print-uris': {}", output)
}
output[1..output.len()-1].to_owned()
},
None => bail!("invalid output from 'apt-get changelog --print-uris': {}", output)
};
return Ok(output);
} else if origin == "Proxmox" {
// FIXME: Use above call to 'apt changelog <pkg> --print-uris' as well.
// Currently not possible as our packages do not have a URI set in their Release file.
let version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(version, "");
let base = match (FILENAME_EXTRACT_REGEX.regex_obj)().captures(filename) {
Some(captures) => {
let base_capture = captures.get(1);
match base_capture {
Some(base_underscore) => base_underscore.as_str().replace("_", "/"),
None => bail!("incompatible filename, cannot find regex group")
}
},
None => bail!("incompatible filename, doesn't match regex")
};
return Ok(format!("http://download.proxmox.com/{}/{}_{}.changelog",
base, package, version));
}
bail!("unknown origin ({}) or component ({})", origin, component)
}
struct FilterData<'a> {
// this is version info returned by APT
installed_version: Option<&'a str>,
candidate_version: &'a str,
// this is the version info the filter is supposed to check
active_version: &'a str,
}
enum PackagePreSelect {
OnlyInstalled,
OnlyNew,
All,
}
fn list_installed_apt_packages<F: Fn(FilterData) -> bool>(
filter: F,
only_versions_for: Option<&str>,
) -> Vec<APTUpdateInfo> {
let mut ret = Vec::new();
let mut depends = HashSet::new();
// note: this is not an 'apt update', it just re-reads the cache from disk
let mut cache = Cache::get_singleton();
cache.reload();
let mut cache_iter = match only_versions_for {
Some(name) => cache.find_by_name(name),
None => cache.iter()
};
loop {
match cache_iter.next() {
Some(view) => {
let di = if only_versions_for.is_some() {
query_detailed_info(
PackagePreSelect::All,
&filter,
view,
None
)
} else {
query_detailed_info(
PackagePreSelect::OnlyInstalled,
&filter,
view,
Some(&mut depends)
)
};
if let Some(info) = di {
ret.push(info);
}
if only_versions_for.is_some() {
break;
}
},
None => {
drop(cache_iter);
// also loop through missing dependencies, as they would be installed
for pkg in depends.iter() {
let mut iter = cache.find_by_name(&pkg);
let view = match iter.next() {
Some(view) => view,
None => continue // package not found, ignore
};
let di = query_detailed_info(
PackagePreSelect::OnlyNew,
&filter,
view,
None
);
if let Some(info) = di {
ret.push(info);
}
}
break;
}
}
}
return ret;
}
fn query_detailed_info<'a, F, V>(
pre_select: PackagePreSelect,
filter: F,
view: V,
depends: Option<&mut HashSet<String>>,
) -> Option<APTUpdateInfo>
where
F: Fn(FilterData) -> bool,
V: std::ops::Deref<Target = apt_pkg_native::sane::PkgView<'a>>
{
let current_version = view.current_version();
let candidate_version = view.candidate_version();
let (current_version, candidate_version) = match pre_select {
PackagePreSelect::OnlyInstalled => match (current_version, candidate_version) {
(Some(cur), Some(can)) => (Some(cur), can), // package installed and there is an update
(Some(cur), None) => (Some(cur.clone()), cur), // package installed and up-to-date
(None, Some(_)) => return None, // package could be installed
(None, None) => return None, // broken
},
PackagePreSelect::OnlyNew => match (current_version, candidate_version) {
(Some(_), Some(_)) => return None,
(Some(_), None) => return None,
(None, Some(can)) => (None, can),
(None, None) => return None,
},
PackagePreSelect::All => match (current_version, candidate_version) {
(Some(cur), Some(can)) => (Some(cur), can),
(Some(cur), None) => (Some(cur.clone()), cur),
(None, Some(can)) => (None, can),
(None, None) => return None,
},
};
// get additional information via nested APT 'iterators'
let mut view_iter = view.versions();
while let Some(ver) = view_iter.next() {
let package = view.name();
let version = ver.version();
let mut origin_res = "unknown".to_owned();
let mut section_res = "unknown".to_owned();
let mut priority_res = "unknown".to_owned();
let mut change_log_url = "".to_owned();
let mut short_desc = package.clone();
let mut long_desc = "".to_owned();
let fd = FilterData {
installed_version: current_version.as_deref(),
candidate_version: &candidate_version,
active_version: &version,
};
if filter(fd) {
if let Some(section) = ver.section() {
section_res = section;
}
if let Some(prio) = ver.priority_type() {
priority_res = prio;
}
// assume every package has only one origin file (not
// origin, but origin *file*, for some reason those seem to
// be different concepts in APT)
let mut origin_iter = ver.origin_iter();
let origin = origin_iter.next();
if let Some(origin) = origin {
if let Some(sd) = origin.short_desc() {
short_desc = sd;
}
if let Some(ld) = origin.long_desc() {
long_desc = ld;
}
// the package files appear in priority order, meaning
// the one for the candidate version is first - this is fine
// however, as the source package should be the same for all
// versions anyway
let mut pkg_iter = origin.file();
let pkg_file = pkg_iter.next();
if let Some(pkg_file) = pkg_file {
if let Some(origin_name) = pkg_file.origin() {
origin_res = origin_name;
}
let filename = pkg_file.file_name();
let component = pkg_file.component();
// build changelog URL from gathered information
// ignore errors, use empty changelog instead
let url = get_changelog_url(&package, &filename,
&version, &origin_res, &component);
if let Ok(url) = url {
change_log_url = url;
}
}
}
if let Some(depends) = depends {
let mut dep_iter = ver.dep_iter();
loop {
let dep = match dep_iter.next() {
Some(dep) if dep.dep_type() != "Depends" => continue,
Some(dep) => dep,
None => break
};
let dep_pkg = dep.target_pkg();
let name = dep_pkg.name();
depends.insert(name);
}
}
return Some(APTUpdateInfo {
package,
title: short_desc,
arch: view.arch(),
description: long_desc,
change_log_url,
origin: origin_res,
version: candidate_version.clone(),
old_version: match current_version {
Some(vers) => vers,
None => "".to_owned()
},
priority: priority_res,
section: section_res,
});
}
}
return None;
}
#[api(
input: {
properties: {
@ -308,19 +22,60 @@ where
returns: {
description: "A list of packages with available updates.",
type: Array,
items: { type: APTUpdateInfo },
items: {
type: APTUpdateInfo
},
},
protected: true,
access: {
permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
},
)]
/// List available APT updates
fn apt_update_available(_param: Value) -> Result<Value, Error> {
let all_upgradeable = list_installed_apt_packages(|data| {
data.candidate_version == data.active_version &&
data.installed_version != Some(data.candidate_version)
}, None);
Ok(json!(all_upgradeable))
match apt::pkg_cache_expired() {
Ok(false) => {
if let Ok(Some(cache)) = apt::read_pkg_state() {
return Ok(json!(cache.package_status));
}
},
_ => (),
}
let cache = apt::update_cache()?;
return Ok(json!(cache.package_status));
}
fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
if !quiet { worker.log("starting apt-get update") }
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE
let mut command = std::process::Command::new("apt-get");
command.arg("update");
// apt "errors" quite easily, and run_command is a bit rigid, so handle this inline for now.
let output = command.output()
.map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
if !quiet {
worker.log(String::from_utf8(output.stdout)?);
}
// TODO: improve run_command to allow outputting both stderr and stdout
if !output.status.success() {
if output.status.code().is_some() {
let msg = String::from_utf8(output.stderr)
.map(|m| if m.is_empty() { String::from("no error message") } else { m })
.unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
worker.warn(msg);
} else {
bail!("terminated by signal");
}
}
Ok(())
}
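The status handling above separates a normal non-zero exit (an exit code is present, so stderr is logged as a warning) from termination by signal (no exit code, treated as a hard error). A minimal sketch of that distinction with plain std::process, assuming a Unix system where `false` exits non-zero:

use std::process::Command;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // `false` exits non-zero without producing output
    let output = Command::new("false").output()?;
    if !output.status.success() {
        match output.status.code() {
            Some(code) => eprintln!("exited with code {}", code), // ran and failed
            None => eprintln!("terminated by signal"),            // killed, no exit code
        }
    }
    Ok(())
}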
#[api(
@ -330,6 +85,13 @@ fn apt_update_available(_param: Value) -> Result<Value, Error> {
node: {
schema: NODE_SCHEMA,
},
notify: {
type: bool,
description: r#"Send notification mail about new package updates availanle to the
email address configured for 'root@pam')."#,
optional: true,
default: false,
},
quiet: {
description: "Only produces output suitable for logging, omitting progress indicators.",
type: bool,
@ -347,26 +109,46 @@ fn apt_update_available(_param: Value) -> Result<Value, Error> {
)]
/// Update the APT database
pub fn apt_update_database(
notify: Option<bool>,
quiet: Option<bool>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
// FIXME: change to non-option in signature and drop below once we have proxmox-api-macro 0.2.3
let quiet = quiet.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_QUIET);
let notify = notify.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_NOTIFY);
let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |worker| {
if !quiet { worker.log("starting apt-get update") }
do_apt_update(&worker, quiet)?;
// TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE
let mut cache = apt::update_cache()?;
let mut command = std::process::Command::new("apt-get");
command.arg("update");
if notify {
let mut notified = match cache.notified {
Some(notified) => notified,
None => std::collections::HashMap::new(),
};
let mut to_notify: Vec<&APTUpdateInfo> = Vec::new();
let output = crate::tools::run_command(command, None)?;
if !quiet { worker.log(output) }
// TODO: add mail notify for new updates like PVE
for pkg in &cache.package_status {
match notified.insert(pkg.package.to_owned(), pkg.version.to_owned()) {
Some(notified_version) => {
if notified_version != pkg.version {
to_notify.push(pkg);
}
},
None => to_notify.push(pkg),
}
}
if !to_notify.is_empty() {
to_notify.sort_unstable_by_key(|k| &k.package);
crate::server::send_updates_available(&to_notify)?;
}
cache.notified = Some(notified);
apt::write_pkg_cache(&cache)?;
}
Ok(())
})?;
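The notification bookkeeping above keeps a package-to-last-notified-version map, so mail is only sent the first time a given candidate version appears. A self-contained sketch of that dedup step with simplified types (the real code persists the map in the APT state cache):

use std::collections::HashMap;

// returns the packages needing a fresh notification, updating the map as it goes
fn to_notify<'a>(
    notified: &mut HashMap<String, String>,
    updates: &'a [(String, String)], // (package, candidate version)
) -> Vec<&'a str> {
    let mut out = Vec::new();
    for (pkg, version) in updates {
        match notified.insert(pkg.clone(), version.clone()) {
            Some(old) if old == *version => {} // this version was already announced
            _ => out.push(pkg.as_str()),       // new package or new version
        }
    }
    out
}

fn main() {
    let mut notified = HashMap::new();
    let updates = vec![("pkg-a".to_string(), "1.1".to_string())];
    assert_eq!(to_notify(&mut notified, &updates), vec!["pkg-a"]);
    assert!(to_notify(&mut notified, &updates).is_empty()); // second run stays quiet
}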
@ -406,7 +188,7 @@ fn apt_get_changelog(
let name = crate::tools::required_string_param(&param, "name")?.to_owned();
let version = param["version"].as_str();
let pkg_info = list_installed_apt_packages(|data| {
let pkg_info = apt::list_installed_apt_packages(|data| {
match version {
Some(version) => version == data.active_version,
None => data.active_version == data.candidate_version

src/api2/node/report.rs (new file, 35 lines)
View File

@ -0,0 +1,35 @@
use anyhow::Error;
use proxmox::api::{api, ApiMethod, Permission, Router, RpcEnvironment};
use serde_json::{json, Value};
use crate::api2::types::*;
use crate::config::acl::PRIV_SYS_AUDIT;
use crate::server::generate_report;
#[api(
input: {
properties: {
node: {
schema: NODE_SCHEMA,
},
},
},
returns: {
type: String,
description: "Returns report of the node"
},
access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false),
},
)]
/// Generate a report
fn get_report(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
Ok(json!(generate_report()))
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_GET_REPORT);

View File

@ -7,7 +7,7 @@ use crate::tools;
use crate::tools::subscription::{self, SubscriptionStatus, SubscriptionInfo};
use crate::config::acl::{PRIV_SYS_AUDIT,PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
use crate::api2::types::{NODE_SCHEMA, Authid};
use crate::api2::types::{NODE_SCHEMA, SUBSCRIPTION_KEY_SCHEMA, Authid};
#[api(
input: {
@ -29,7 +29,7 @@ use crate::api2::types::{NODE_SCHEMA, Authid};
},
)]
/// Check and update subscription status.
fn check_subscription(
pub fn check_subscription(
force: bool,
) -> Result<(), Error> {
// FIXME: drop once proxmox-api-macro is bumped to >> 5.0.0-1
@ -82,7 +82,7 @@ fn check_subscription(
},
)]
/// Read subscription info.
fn get_subscription(
pub fn get_subscription(
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<SubscriptionInfo, Error> {
@ -124,9 +124,7 @@ fn get_subscription(
schema: NODE_SCHEMA,
},
key: {
description: "Proxmox Backup Server subscription key",
type: String,
max_length: 32,
schema: SUBSCRIPTION_KEY_SCHEMA,
},
},
},
@ -136,7 +134,7 @@ fn get_subscription(
},
)]
/// Set a subscription key and check it.
fn set_subscription(
pub fn set_subscription(
key: String,
) -> Result<(), Error> {
@ -164,7 +162,7 @@ fn set_subscription(
},
)]
/// Delete subscription info.
fn delete_subscription() -> Result<(), Error> {
pub fn delete_subscription() -> Result<(), Error> {
subscription::delete_subscription()
.map_err(|err| format_err!("Deleting subscription failed: {}", err))?;

View File

@ -271,7 +271,7 @@ fn stop_task(
},
limit: {
type: u64,
description: "Only list this amount of tasks.",
description: "Only list this amount of tasks. (0 means no limit)",
default: 50,
optional: true,
},
@ -296,6 +296,29 @@ fn stop_task(
type: String,
description: "Only list tasks from this user.",
},
since: {
type: i64,
description: "Only list tasks since this UNIX epoch.",
optional: true,
},
until: {
type: i64,
description: "Only list tasks until this UNIX epoch.",
optional: true,
},
typefilter: {
optional: true,
type: String,
description: "Only list tasks whose type contains this.",
},
statusfilter: {
optional: true,
type: Array,
description: "Only list tasks which have any one of the listed status.",
items: {
type: TaskStateType,
},
},
},
},
returns: {
@ -315,6 +338,10 @@ pub fn list_tasks(
errors: bool,
running: bool,
userfilter: Option<String>,
since: Option<i64>,
until: Option<i64>,
typefilter: Option<String>,
statusfilter: Option<Vec<TaskStateType>>,
param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
@ -328,9 +355,23 @@ pub fn list_tasks(
let store = param["store"].as_str();
let list = TaskListInfoIterator::new(running)?;
let limit = if limit > 0 { limit as usize } else { usize::MAX };
let result: Vec<TaskListItem> = list
.take_while(|info| !info.is_err())
.skip_while(|info| {
match (info, until) {
(Ok(info), Some(until)) => info.upid.starttime > until,
(Ok(_), None) => false,
(Err(_), _) => false,
}
})
.take_while(|info| {
match (info, since) {
(Ok(info), Some(since)) => info.upid.starttime > since,
(Ok(_), None) => true,
(Err(_), _) => false,
}
})
.filter_map(|info| {
let info = match info {
Ok(info) => info,
@ -364,19 +405,31 @@ pub fn list_tasks(
}
}
match info.state {
Some(_) if running => return None,
Some(crate::server::TaskState::OK { .. }) if errors => return None,
if let Some(typefilter) = &typefilter {
if !info.upid.worker_type.contains(typefilter) {
return None;
}
}
match (&info.state, &statusfilter) {
(Some(_), _) if running => return None,
(Some(crate::server::TaskState::OK { .. }), _) if errors => return None,
(Some(state), Some(filters)) => {
if !filters.contains(&state.tasktype()) {
return None;
}
},
(None, Some(_)) => return None,
_ => {},
}
Some(info.into())
}).skip(start as usize)
.take(limit as usize)
.take(limit)
.collect();
let mut count = result.len() + start as usize;
if result.len() > 0 && result.len() >= limit as usize { // we have a 'virtual' entry as long as we have any new
if result.len() > 0 && result.len() >= limit { // we have a 'virtual' entry as long as we have any new
count += 1;
}
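One subtlety in the since/until handling earlier in this function: the task archive iterates newest-first, so entries newer than `until` are skipped up front and iteration stops once start times drop to `since` or below. A sketch of that windowing over bare timestamps:

fn window(starts: &[i64], since: Option<i64>, until: Option<i64>) -> Vec<i64> {
    starts
        .iter()
        .copied()
        .skip_while(|&t| until.map_or(false, |u| t > u)) // drop entries newer than `until`
        .take_while(|&t| since.map_or(true, |s| t > s))  // stop at `since` or older
        .collect()
}

fn main() {
    let starts = [50, 40, 30, 20, 10]; // newest first, like the task archive
    assert_eq!(window(&starts, Some(15), Some(45)), vec![40, 30, 20]);
}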

View File

@ -75,7 +75,7 @@ pub fn do_sync_job(
let job_id = job.jobname().to_string();
let worker_type = job.jobtype().to_string();
let email = crate::server::lookup_user_email(auth_id.user());
let (email, notify) = crate::server::lookup_datastore_notify_settings(&sync_job.store);
let upid_str = WorkerTask::spawn(
&worker_type,
@ -92,6 +92,7 @@ pub fn do_sync_job(
let worker_future = async move {
let delete = sync_job.remove_vanished.unwrap_or(true);
let sync_owner = sync_job.owner.unwrap_or(Authid::backup_auth_id().clone());
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
worker.log(format!("Starting datastore sync job '{}'", job_id));
@ -101,9 +102,7 @@ pub fn do_sync_job(
worker.log(format!("Sync datastore '{}' from '{}/{}'",
sync_job.store, sync_job.remote, sync_job.remote_store));
let backup_auth_id = Authid::backup_auth_id();
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, backup_auth_id.clone()).await?;
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, sync_owner).await?;
worker.log(format!("sync job '{}' end", &job_id));
@ -127,7 +126,7 @@ pub fn do_sync_job(
}
if let Some(email) = email {
if let Err(err) = crate::server::send_sync_status(&email, &sync_job2, &result) {
if let Err(err) = crate::server::send_sync_status(&email, notify, &sync_job2, &result) {
eprintln!("send sync notification failed: {}", err);
}
}

View File

@ -17,17 +17,13 @@ use crate::api2::types::{
RRDMode,
RRDTimeFrameResolution,
Authid,
TaskListItem,
TaskStateType,
};
use crate::server;
use crate::backup::{DataStore};
use crate::config::datastore;
use crate::tools::statistics::{linear_regression};
use crate::config::cached_user_info::CachedUserInfo;
use crate::config::acl::{
PRIV_SYS_AUDIT,
PRIV_DATASTORE_AUDIT,
PRIV_DATASTORE_BACKUP,
};
@ -179,103 +175,8 @@ fn datastore_status(
Ok(list.into())
}
#[api(
input: {
properties: {
since: {
type: i64,
description: "Only list tasks since this UNIX epoch.",
optional: true,
},
typefilter: {
optional: true,
type: String,
description: "Only list tasks, whose type contains this string.",
},
statusfilter: {
optional: true,
type: Array,
description: "Only list tasks which have any one of the listed status.",
items: {
type: TaskStateType,
},
},
},
},
returns: {
description: "A list of tasks.",
type: Array,
items: { type: TaskListItem },
},
access: {
description: "Users can only see there own tasks, unless the have Sys.Audit on /system/tasks.",
permission: &Permission::Anybody,
},
)]
/// List tasks.
pub fn list_tasks(
since: Option<i64>,
typefilter: Option<String>,
statusfilter: Option<Vec<TaskStateType>>,
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&auth_id, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
let since = since.unwrap_or_else(|| 0);
let list: Vec<TaskListItem> = server::TaskListInfoIterator::new(false)?
.take_while(|info| {
match info {
Ok(info) => info.upid.starttime > since,
Err(_) => false
}
})
.filter_map(|info| {
match info {
Ok(info) => {
if list_all || info.upid.auth_id == auth_id {
if let Some(filter) = &typefilter {
if !info.upid.worker_type.contains(filter) {
return None;
}
}
if let Some(filters) = &statusfilter {
if let Some(state) = &info.state {
let statetype = match state {
server::TaskState::OK { .. } => TaskStateType::OK,
server::TaskState::Unknown { .. } => TaskStateType::Unknown,
server::TaskState::Error { .. } => TaskStateType::Error,
server::TaskState::Warning { .. } => TaskStateType::Warning,
};
if !filters.contains(&statetype) {
return None;
}
}
}
Some(Ok(TaskListItem::from(info)))
} else {
None
}
}
Err(err) => Some(Err(err))
}
})
.collect::<Result<Vec<TaskListItem>, Error>>()?;
Ok(list.into())
}
const SUBDIRS: SubdirMap = &[
("datastore-usage", &Router::new().get(&API_METHOD_DATASTORE_STATUS)),
("tasks", &Router::new().get(&API_METHOD_LIST_TASKS)),
];
pub const ROUTER: Router = Router::new()

View File

@ -5,7 +5,7 @@ use proxmox::api::{api, schema::*};
use proxmox::const_regex;
use proxmox::{IPRE, IPRE_BRACKET, IPV4RE, IPV6RE, IPV4OCTET, IPV6H16, IPV6LS32};
use crate::backup::CryptMode;
use crate::backup::{CryptMode, BACKUP_ID_REGEX};
use crate::server::UPID;
#[macro_use]
@ -73,6 +73,8 @@ const_regex!{
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
pub SUBSCRIPTION_KEY_REGEX = concat!(r"^pbs(?:[cbsp])-[0-9a-f]{10}$");
pub BLOCKDEVICE_NAME_REGEX = r"^(?:(?:h|s|x?v)d[a-z]+|nvme\d+n\d+)$";
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
@ -99,6 +101,9 @@ pub const CERT_FINGERPRINT_SHA256_FORMAT: ApiStringFormat =
pub const PROXMOX_SAFE_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&PROXMOX_SAFE_ID_REGEX);
pub const BACKUP_ID_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BACKUP_ID_REGEX);
pub const SINGLE_LINE_COMMENT_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SINGLE_LINE_COMMENT_REGEX);
@ -129,6 +134,9 @@ pub const CIDR_V6_FORMAT: ApiStringFormat =
pub const CIDR_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&CIDR_REGEX);
pub const SUBSCRIPTION_KEY_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&SUBSCRIPTION_KEY_REGEX);
pub const BLOCKDEVICE_NAME_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&BLOCKDEVICE_NAME_REGEX);
@ -271,7 +279,7 @@ pub const BACKUP_TYPE_SCHEMA: Schema =
pub const BACKUP_ID_SCHEMA: Schema =
StringSchema::new("Backup ID.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.format(&BACKUP_ID_FORMAT)
.schema();
pub const BACKUP_TIME_SCHEMA: Schema =
@ -348,6 +356,12 @@ pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP addr
.format(&DNS_NAME_OR_IP_FORMAT)
.schema();
pub const SUBSCRIPTION_KEY_SCHEMA: Schema = StringSchema::new("Proxmox Backup Server subscription key.")
.format(&SUBSCRIPTION_KEY_FORMAT)
.min_length(15)
.max_length(16)
.schema();
pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name (/sys/block/<name>).")
.format(&BLOCKDEVICE_NAME_FORMAT)
.min_length(3)
@ -1115,7 +1129,7 @@ pub enum RRDTimeFrameResolution {
}
#[api()]
#[derive(Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
/// Describes a package for which an update is available.
pub struct APTUpdateInfo {
@ -1140,3 +1154,16 @@ pub struct APTUpdateInfo {
/// URL under which the package's changelog can be retrieved
pub change_log_url: String,
}
#[api()]
#[derive(Debug, Copy, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// When do we send notifications
pub enum Notify {
/// Never send notification
Never,
/// Send notifications for failed and successful jobs
Always,
/// Send notifications for failed jobs only
Error,
}
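With #[serde(rename_all = "lowercase")], the Notify variants travel as plain lowercase strings in the API. A quick check of that mapping, assuming serde and serde_json as dependencies and mirroring the enum locally:

use serde::{Deserialize, Serialize};

#[derive(Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
enum Notify { Never, Always, Error }

fn main() -> Result<(), serde_json::Error> {
    assert_eq!(serde_json::to_string(&Notify::Error)?, "\"error\"");
    assert_eq!(serde_json::from_str::<Notify>("\"always\"")?, Notify::Always);
    Ok(())
}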

View File

@ -474,6 +474,12 @@ impl PartialEq for Userid {
}
}
impl From<Authid> for Userid {
fn from(authid: Authid) -> Self {
authid.user
}
}
impl From<(Username, Realm)> for Userid {
fn from(parts: (Username, Realm)) -> Self {
Self::from((parts.0.as_ref(), parts.1.as_ref()))

View File

@ -1,37 +1,31 @@
use crate::tools;
use anyhow::{bail, format_err, Error};
use regex::Regex;
use std::os::unix::io::RawFd;
use std::path::{PathBuf, Path};
use lazy_static::lazy_static;
use proxmox::const_regex;
use super::manifest::MANIFEST_BLOB_NAME;
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9][A-Za-z0-9_-]+") }
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
macro_rules! BACKUP_TIME_RE { () => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z") }
lazy_static!{
static ref BACKUP_FILE_REGEX: Regex = Regex::new(
r"^.*\.([fd]idx|blob)$").unwrap();
const_regex!{
BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
static ref BACKUP_TYPE_REGEX: Regex = Regex::new(
concat!(r"^(", BACKUP_TYPE_RE!(), r")$")).unwrap();
BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
static ref BACKUP_ID_REGEX: Regex = Regex::new(
concat!(r"^", BACKUP_ID_RE!(), r"$")).unwrap();
pub BACKUP_ID_REGEX = concat!(r"^", BACKUP_ID_RE!(), r"$");
static ref BACKUP_DATE_REGEX: Regex = Regex::new(
concat!(r"^", BACKUP_TIME_RE!() ,r"$")).unwrap();
BACKUP_DATE_REGEX = concat!(r"^", BACKUP_TIME_RE!() ,r"$");
static ref GROUP_PATH_REGEX: Regex = Regex::new(
concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$")).unwrap();
static ref SNAPSHOT_PATH_REGEX: Regex = Regex::new(
concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")$")).unwrap();
GROUP_PATH_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), r")$");
SNAPSHOT_PATH_REGEX = concat!(
r"^(", BACKUP_TYPE_RE!(), ")/(", BACKUP_ID_RE!(), ")/(", BACKUP_TIME_RE!(), r")$");
}
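The widened BACKUP_ID_RE is the user-visible change in this hunk: backup IDs may now contain dots, start with an underscore, and be a single character. A quick sanity check of the new pattern, assuming the regex crate:

use regex::Regex;

fn main() {
    let re = Regex::new(r"^[A-Za-z0-9_][A-Za-z0-9._\-]*$").unwrap();
    assert!(re.is_match("host.example_1")); // dots are now allowed
    assert!(re.is_match("_tmp"));           // leading underscore is now allowed
    assert!(re.is_match("a"));              // a single character is enough
    assert!(!re.is_match("-x"));            // still no leading dash
}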
/// BackupGroup is a directory containing a list of BackupDir

View File

@ -458,38 +458,35 @@ impl DataStore {
) -> Result<(), Error> {
let image_list = self.list_images()?;
let image_count = image_list.len();
let mut done = 0;
let mut last_percentage: usize = 0;
for path in image_list {
for img in image_list {
worker.check_abort()?;
tools::fail_on_shutdown()?;
let full_path = self.chunk_store.relative_path(&path);
match std::fs::File::open(&full_path) {
let path = self.chunk_store.relative_path(&img);
match std::fs::File::open(&path) {
Ok(file) => {
if let Ok(archive_type) = archive_type(&path) {
if let Ok(archive_type) = archive_type(&img) {
if archive_type == ArchiveType::FixedIndex {
let index = FixedIndexReader::new(file)?;
self.index_mark_used_chunks(index, &path, status, worker)?;
let index = FixedIndexReader::new(file).map_err(|e| {
format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
})?;
self.index_mark_used_chunks(index, &img, status, worker)?;
} else if archive_type == ArchiveType::DynamicIndex {
let index = DynamicIndexReader::new(file)?;
self.index_mark_used_chunks(index, &path, status, worker)?;
let index = DynamicIndexReader::new(file).map_err(|e| {
format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
})?;
self.index_mark_used_chunks(index, &img, status, worker)?;
}
}
}
Err(err) => {
if err.kind() == std::io::ErrorKind::NotFound {
// simply ignore vanished files
} else {
return Err(err.into());
}
}
Err(err) if err.kind() == io::ErrorKind::NotFound => (), // ignore vanished files
Err(err) => bail!("can't open index {} - {}", path.to_string_lossy(), err),
}
done += 1;

View File

@ -95,6 +95,18 @@ impl DynamicIndexReader {
let header_size = std::mem::size_of::<DynamicIndexHeader>();
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
let size = stat.st_size as usize;
if size < header_size {
bail!("index too small ({})", stat.st_size);
}
let header: Box<DynamicIndexHeader> = unsafe { file.read_host_value_boxed()? };
if header.magic != super::DYNAMIC_SIZED_CHUNK_INDEX_1_0 {
@ -103,13 +115,7 @@ impl DynamicIndexReader {
let ctime = proxmox::tools::time::epoch_i64();
let rawfd = file.as_raw_fd();
let stat = nix::sys::stat::fstat(rawfd)?;
let size = stat.st_size as usize;
let index_size = size - header_size;
let index_size = stat.st_size as usize - header_size;
let index_count = index_size / 40;
if index_count * 40 != index_size {
bail!("got unexpected file size");

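The reader now fstats the file up front and refuses to parse a header from anything smaller than the header struct, so a truncated index yields a clear error instead of a short-read failure; the same guard is added to FixedIndexReader below. A hedged sketch of the idea in plain std (the real code uses nix::sys::stat::fstat and a typed header):

use std::fs::File;
use std::io::Read;

const HEADER_SIZE: usize = 8; // hypothetical header size, standing in for the real struct

fn read_magic(path: &str) -> std::io::Result<[u8; HEADER_SIZE]> {
    let file_size = std::fs::metadata(path)?.len() as usize;
    if file_size < HEADER_SIZE {
        // fail early with a meaningful message instead of a short-read error
        return Err(std::io::Error::new(
            std::io::ErrorKind::UnexpectedEof,
            format!("index too small ({})", file_size),
        ));
    }
    let mut magic = [0u8; HEADER_SIZE];
    File::open(path)?.read_exact(&mut magic)?;
    Ok(magic)
}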
View File

@ -68,6 +68,19 @@ impl FixedIndexReader {
file.seek(SeekFrom::Start(0))?;
let header_size = std::mem::size_of::<FixedIndexHeader>();
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
let size = stat.st_size as usize;
if size < header_size {
bail!("index too small ({})", stat.st_size);
}
let header: Box<FixedIndexHeader> = unsafe { file.read_host_value_boxed()? };
if header.magic != super::FIXED_SIZED_CHUNK_INDEX_1_0 {
@ -81,12 +94,6 @@ impl FixedIndexReader {
let index_length = ((size + chunk_size - 1) / chunk_size) as usize;
let index_size = index_length * 32;
let rawfd = file.as_raw_fd();
let stat = match nix::sys::stat::fstat(rawfd) {
Ok(stat) => stat,
Err(err) => bail!("fstat failed - {}", err),
};
let expected_index_size = (stat.st_size as usize) - header_size;
if index_size != expected_index_size {

View File

@ -482,7 +482,7 @@ pub fn verify_backup_group(
Ok((count, errors))
}
/// Verify all backups inside a datastore
/// Verify all (owned) backups inside a datastore
///
/// Errors are logged to the worker log.
///
@ -493,14 +493,41 @@ pub fn verify_all_backups(
datastore: Arc<DataStore>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: &UPID,
owner: Option<Authid>,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
if let Some(owner) = &owner {
task_log!(
worker,
"verify datastore {} - limiting to backups owned by {}",
datastore.name(),
owner
);
}
let filter_by_owner = |group: &BackupGroup| {
if let Some(owner) = &owner {
match datastore.get_owner(group) {
Ok(ref group_owner) => {
group_owner == owner
|| (group_owner.is_token()
&& !owner.is_token()
&& group_owner.user() == owner.user())
},
Err(_) => false,
}
} else {
true
}
};
let mut list = match BackupGroup::list_groups(&datastore.base_path()) {
Ok(list) => list
.into_iter()
.filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
.filter(filter_by_owner)
.collect::<Vec<BackupGroup>>(),
Err(err) => {
task_log!(

View File

@ -52,7 +52,9 @@ async fn run() -> Result<(), Error> {
let mut config = server::ApiConfig::new(
buildcfg::JS_DIR, &proxmox_backup::api2::ROUTER, RpcEnvironmentType::PRIVILEGED)?;
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN)?;
let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN, &mut commando_sock)?;
let rest_server = RestServer::new(config);
@ -76,10 +78,12 @@ async fn run() -> Result<(), Error> {
},
);
server::write_pid(buildcfg::PROXMOX_BACKUP_API_PID_FN)?;
daemon::systemd_notify(daemon::SystemdNotify::Ready)?;
let init_result: Result<(), Error> = try_block!({
server::create_task_control_socket()?;
server::register_task_control_commands(&mut commando_sock)?;
commando_sock.spawn()?;
server::server_state_init()?;
Ok(())
});

View File

@ -1,4 +1,5 @@
use std::collections::HashMap;
use std::io::{self, Write};
use anyhow::{format_err, Error};
use serde_json::{json, Value};
@ -354,6 +355,14 @@ async fn verify(
Ok(Value::Null)
}
#[api()]
/// System report
async fn report() -> Result<Value, Error> {
let report = proxmox_backup::server::generate_report();
io::stdout().write_all(report.as_bytes())?;
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
@ -368,6 +377,7 @@ fn main() {
.insert("remote", remote_commands())
.insert("garbage-collection", garbage_collection_commands())
.insert("cert", cert_mgmt_cli())
.insert("subscription", subscription_commands())
.insert("sync-job", sync_job_commands())
.insert("task", task_mgmt_cli())
.insert(
@ -383,6 +393,9 @@ fn main() {
CliCommand::new(&API_METHOD_VERIFY)
.arg_param(&["store"])
.completion_cb("store", config::datastore::complete_datastore_name)
)
.insert("report",
CliCommand::new(&API_METHOD_REPORT)
);

View File

@ -1,4 +1,4 @@
use std::sync::{Arc};
use std::sync::Arc;
use std::path::{Path, PathBuf};
use std::os::unix::io::AsRawFd;
@ -13,7 +13,6 @@ use proxmox::api::RpcEnvironmentType;
use proxmox_backup::{
backup::DataStore,
server::{
UPID,
WorkerTask,
ApiConfig,
rest::*,
@ -30,7 +29,7 @@ use proxmox_backup::{
};
use proxmox_backup::api2::types::{Authid, Userid};
use proxmox_backup::api2::types::Authid;
use proxmox_backup::configdir;
use proxmox_backup::buildcfg;
use proxmox_backup::server;
@ -41,6 +40,7 @@ use proxmox_backup::tools::{
DiskManage,
zfs_pool_stats,
},
logrotate::LogRotate,
socket::{
set_tcp_keepalive,
PROXMOX_BACKUP_TCP_KEEPALIVE_TIME,
@ -49,6 +49,7 @@ use proxmox_backup::tools::{
use proxmox_backup::api2::pull::do_sync_job;
use proxmox_backup::server::do_verification_job;
use proxmox_backup::server::do_prune_job;
fn main() -> Result<(), Error> {
proxmox_backup::tools::setup_safe_path_env();
@ -73,6 +74,10 @@ async fn run() -> Result<(), Error> {
bail!("unable to inititialize syslog - {}", err);
}
// Note: To debug early connection error use
// PROXMOX_DEBUG=1 ./target/release/proxmox-backup-proxy
let debug = std::env::var("PROXMOX_DEBUG").is_ok();
let _ = public_auth_key(); // load with lazy_static
let _ = csrf_secret(); // load with lazy_static
@ -93,7 +98,9 @@ async fn run() -> Result<(), Error> {
config.register_template("index", &indexpath)?;
config.register_template("console", "/usr/share/pve-xtermjs/index.html.hbs")?;
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN)?;
let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN, &mut commando_sock)?;
let rest_server = RestServer::new(config);
@ -113,25 +120,12 @@ async fn run() -> Result<(), Error> {
let server = daemon::create_daemon(
([0,0,0,0,0,0,0,0], 8007).into(),
|listener, ready| {
let connections = proxmox_backup::tools::async_io::StaticIncoming::from(listener)
.map_err(Error::from)
.try_filter_map(move |(sock, _addr)| {
let acceptor = Arc::clone(&acceptor);
async move {
sock.set_nodelay(true).unwrap();
let _ = set_tcp_keepalive(sock.as_raw_fd(), PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
Ok(tokio_openssl::accept(&acceptor, sock)
.await
.ok() // handshake errors aren't fatal, so return None to filter
)
}
});
let connections = proxmox_backup::tools::async_io::HyperAccept(connections);
let connections = accept_connections(listener, acceptor, debug);
let connections = hyper::server::accept::from_stream(connections);
Ok(ready
.and_then(|_| hyper::Server::builder(connections)
.and_then(|_| hyper::Server::builder(connections)
.serve(rest_server)
.with_graceful_shutdown(server::shutdown_future())
.map_err(Error::from)
@ -142,10 +136,12 @@ async fn run() -> Result<(), Error> {
},
);
server::write_pid(buildcfg::PROXMOX_BACKUP_PROXY_PID_FN)?;
daemon::systemd_notify(daemon::SystemdNotify::Ready)?;
let init_result: Result<(), Error> = try_block!({
server::create_task_control_socket()?;
server::register_task_control_commands(&mut commando_sock)?;
commando_sock.spawn()?;
server::server_state_init()?;
Ok(())
});
@ -165,6 +161,72 @@ async fn run() -> Result<(), Error> {
Ok(())
}
fn accept_connections(
mut listener: tokio::net::TcpListener,
acceptor: Arc<openssl::ssl::SslAcceptor>,
debug: bool,
) -> tokio::sync::mpsc::Receiver<Result<tokio_openssl::SslStream<tokio::net::TcpStream>, Error>> {
const MAX_PENDING_ACCEPTS: usize = 1024;
let (sender, receiver) = tokio::sync::mpsc::channel(MAX_PENDING_ACCEPTS);
let accept_counter = Arc::new(());
tokio::spawn(async move {
loop {
match listener.accept().await {
Err(err) => {
eprintln!("error accepting tcp connection: {}", err);
}
Ok((sock, _addr)) => {
sock.set_nodelay(true).unwrap();
let _ = set_tcp_keepalive(sock.as_raw_fd(), PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
let acceptor = Arc::clone(&acceptor);
let mut sender = sender.clone();
if Arc::strong_count(&accept_counter) > MAX_PENDING_ACCEPTS {
eprintln!("connection rejected - to many open connections");
continue;
}
let accept_counter = accept_counter.clone();
tokio::spawn(async move {
let accept_future = tokio::time::timeout(
Duration::new(10, 0), tokio_openssl::accept(&acceptor, sock));
let result = accept_future.await;
match result {
Ok(Ok(connection)) => {
if let Err(_) = sender.send(Ok(connection)).await {
if debug {
eprintln!("detect closed connection channel");
}
}
}
Ok(Err(err)) => {
if debug {
eprintln!("https handshake failed - {}", err);
}
}
Err(_) => {
if debug {
eprintln!("https handshake timeout");
}
}
}
drop(accept_counter); // decrease reference count
});
}
}
}
});
receiver
}
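accept_connections tracks in-flight handshakes with a bare Arc: every spawned task clones it, and Arc::strong_count serves as a cheap concurrent counter with no separate atomic to maintain. A small sketch of that trick, with threads standing in for the async tasks:

use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn main() {
    let accept_counter = Arc::new(());
    let mut handles = Vec::new();
    for _ in 0..3 {
        let counter = Arc::clone(&accept_counter); // one clone per "connection"
        handles.push(thread::spawn(move || {
            thread::sleep(Duration::from_millis(50));
            drop(counter); // decreases the count when the connection ends
        }));
    }
    // strong_count = 1 (our handle) + number of clones still alive
    println!("in flight: {}", Arc::strong_count(&accept_counter) - 1);
    for h in handles { h.join().unwrap(); }
    assert_eq!(Arc::strong_count(&accept_counter), 1);
}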
fn start_stat_generator() {
let abort_future = server::shutdown_future();
let future = Box::pin(run_stat_generator());
@ -247,8 +309,6 @@ async fn schedule_datastore_garbage_collection() {
},
};
let email = server::lookup_user_email(Userid::root_userid());
let config = match datastore::config() {
Err(err) => {
eprintln!("unable to read datastore config - {}", err);
@ -291,22 +351,11 @@ async fn schedule_datastore_garbage_collection() {
let worker_type = "garbage_collection";
let stat = datastore.last_gc_status();
let last = if let Some(upid_str) = stat.upid {
match upid_str.parse::<UPID>() {
Ok(upid) => upid.starttime,
Err(err) => {
eprintln!("unable to parse upid '{}' - {}", upid_str, err);
continue;
}
}
} else {
match jobstate::last_run_time(worker_type, &store) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue;
}
let last = match jobstate::last_run_time(worker_type, &store) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue;
}
};
@ -323,44 +372,15 @@ async fn schedule_datastore_garbage_collection() {
if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
let job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let store2 = store.clone();
let email2 = email.clone();
let auth_id = Authid::backup_auth_id();
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(store.clone()),
Authid::backup_auth_id().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting garbage collection on store {}", store));
worker.log(format!("task triggered by schedule '{}'", event_str));
let result = datastore.garbage_collection(&*worker, worker.upid());
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
if let Some(email2) = email2 {
let gc_status = datastore.last_gc_status();
if let Err(err) = crate::server::send_gc_status(&email2, datastore.name(), &gc_status, &result) {
eprintln!("send gc notification failed: {}", err);
}
}
result
}
) {
eprintln!("unable to start garbage collection on store {} - {}", store2, err);
if let Err(err) = crate::server::do_garbage_collection_job(job, datastore, auth_id, Some(event_str), false) {
eprintln!("unable to start garbage collection job on datastore {} - {}", store, err);
}
}
}
@ -370,8 +390,6 @@ async fn schedule_datastore_prune() {
use proxmox_backup::{
backup::{
PruneOptions,
BackupGroup,
compute_prune_info,
},
config::datastore::{
self,
@ -388,13 +406,6 @@ async fn schedule_datastore_prune() {
};
for (store, (_, store_config)) in config.sections {
let datastore = match DataStore::lookup_datastore(&store) {
Ok(datastore) => datastore,
Err(err) => {
eprintln!("lookup_datastore '{}' failed - {}", store, err);
continue;
}
};
let store_config: DataStoreConfig = match serde_json::from_value(store_config) {
Ok(c) => c,
@ -422,95 +433,18 @@ async fn schedule_datastore_prune() {
continue;
}
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "prune";
if check_schedule(worker_type, &event_str, &store) {
let job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let last = match jobstate::last_run_time(worker_type, &store) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, store, err);
continue;
let auth_id = Authid::backup_auth_id().clone();
if let Err(err) = do_prune_job(job, prune_options, store.clone(), &auth_id, Some(event_str)) {
eprintln!("unable to start datastore prune job {} - {}", &store, err);
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
let mut job = match Job::new(worker_type, &store) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let store2 = store.clone();
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(store.clone()),
Authid::backup_auth_id().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
let result = try_block!({
worker.log(format!("Starting datastore prune on store \"{}\"", store));
worker.log(format!("task triggered by schedule '{}'", event_str));
worker.log(format!("retention options: {}", prune_options.cli_options_string()));
let base_path = datastore.base_path();
let groups = BackupGroup::list_groups(&base_path)?;
for group in groups {
let list = group.list_backups(&base_path)?;
let mut prune_info = compute_prune_info(list, &prune_options)?;
prune_info.reverse(); // delete older snapshots first
worker.log(format!("Starting prune on store \"{}\" group \"{}/{}\"",
store, group.backup_type(), group.backup_id()));
for (info, keep) in prune_info {
worker.log(format!(
"{} {}/{}/{}",
if keep { "keep" } else { "remove" },
group.backup_type(), group.backup_id(),
info.backup_dir.backup_time_string()));
if !keep {
datastore.remove_backup_dir(&info.backup_dir, true)?;
}
}
}
Ok(())
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!("could not finish job state for {}: {}", worker_type, err);
}
result
}
) {
eprintln!("unable to start datastore prune on store {} - {}", store2, err);
}
}
}
@ -543,47 +477,18 @@ async fn schedule_datastore_sync_jobs() {
None => continue,
};
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "syncjob";
if check_schedule(worker_type, &event_str, &job_id) {
let job = match Job::new(worker_type, &job_id) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let last = match jobstate::last_run_time(worker_type, &job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, job_id, err);
continue;
let auth_id = Authid::backup_auth_id().clone();
if let Err(err) = do_sync_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore sync job {} - {}", &job_id, err);
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
let job = match Job::new(worker_type, &job_id) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let auth_id = Authid::backup_auth_id();
if let Err(err) = do_sync_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore sync job {} - {}", &job_id, err);
}
}
}
@ -613,79 +518,30 @@ async fn schedule_datastore_verify_jobs() {
Some(ref event_str) => event_str.clone(),
None => continue,
};
let event = match parse_calendar_event(&event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
continue;
}
};
let worker_type = "verificationjob";
let last = match jobstate::last_run_time(worker_type, &job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, job_id, err);
continue;
let auth_id = Authid::backup_auth_id().clone();
if check_schedule(worker_type, &event_str, &job_id) {
let job = match Job::new(&worker_type, &job_id) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
if let Err(err) = do_verification_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore verification job {} - {}", &job_id, err);
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => continue,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
continue;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now { continue; }
let job = match Job::new(worker_type, &job_id) {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
let auth_id = Authid::backup_auth_id();
if let Err(err) = do_verification_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore verification job {} - {}", &job_id, err);
}
}
}
async fn schedule_task_log_rotate() {
let worker_type = "logrotate";
let job_id = "task_archive";
let last = match jobstate::last_run_time(worker_type, job_id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of task log archive rotation: {}", err);
return;
}
};
let job_id = "access-log_and_task-archive";
// schedule daily at 00:00 like normal logrotate
let schedule = "00:00";
let event = match parse_calendar_event(schedule) {
Ok(event) => event,
Err(err) => {
// should not happen?
eprintln!("unable to parse schedule '{}' - {}", schedule, err);
return;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => return,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", schedule, err);
return;
}
};
let now = proxmox::tools::time::epoch_i64();
if next > now {
if !check_schedule(worker_type, schedule, job_id) {
// if we never ran the rotation, schedule instantly
match jobstate::JobState::load(worker_type, job_id) {
Ok(state) => match state {
@ -703,7 +559,7 @@ async fn schedule_task_log_rotate() {
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(job_id.to_string()),
None,
Authid::backup_auth_id().clone(),
false,
move |worker| {
@ -711,9 +567,8 @@ async fn schedule_task_log_rotate() {
worker.log(format!("starting task log rotation"));
let result = try_block!({
// rotate task log archive
let max_size = 500000; // a normal entry has about 100b, so ~ 5000 entries/file
let max_files = 20; // times twenty files gives at least 100000 task entries
let max_size = 512 * 1024 - 1; // an entry has ~ 100b, so > 5000 entries/file
let max_files = 20; // times twenty files gives > 100000 task entries
let has_rotated = rotate_task_log_archive(max_size, true, Some(max_files))?;
if has_rotated {
worker.log(format!("task log archive was rotated"));
@ -721,6 +576,28 @@ async fn schedule_task_log_rotate() {
worker.log(format!("task log archive was not rotated"));
}
let max_size = 32 * 1024 * 1024 - 1;
let max_files = 14;
let mut logrotate = LogRotate::new(buildcfg::API_ACCESS_LOG_FN, true)
.ok_or_else(|| format_err!("could not get API access log file names"))?;
if logrotate.rotate(max_size, None, Some(max_files))? {
println!("rotated access log, telling daemons to re-open log file");
proxmox_backup::tools::runtime::block_on(command_reopen_logfiles())?;
worker.log(format!("API access log was rotated"));
} else {
worker.log(format!("API access log was not rotated"));
}
let mut logrotate = LogRotate::new(buildcfg::API_AUTH_LOG_FN, true)
.ok_or_else(|| format_err!("could not get API auth log file names"))?;
if logrotate.rotate(max_size, None, Some(max_files))? {
worker.log(format!("API access log was rotated"));
} else {
worker.log(format!("API access log was not rotated"));
}
Ok(())
});
@ -738,6 +615,28 @@ async fn schedule_task_log_rotate() {
}
async fn command_reopen_logfiles() -> Result<(), Error> {
// only care about the most recent daemon instance for each, proxy & api, as other older ones
// should not respond to new requests anyway, but only finish their current one and then exit.
let sock = server::our_ctrl_sock();
let f1 = server::send_command(sock, serde_json::json!({
"command": "api-access-log-reopen",
}));
let pid = server::read_pid(buildcfg::PROXMOX_BACKUP_API_PID_FN)?;
let sock = server::ctrl_sock_from_pid(pid);
let f2 = server::send_command(sock, serde_json::json!({
"command": "api-access-log-reopen",
}));
match futures::join!(f1, f2) {
(Err(e1), Err(e2)) => Err(format_err!("reopen commands failed, proxy: {}; api: {}", e1, e2)),
(Err(e1), Ok(_)) => Err(format_err!("reopen commands failed, proxy: {}", e1)),
(Ok(_), Err(e2)) => Err(format_err!("reopen commands failed, api: {}", e2)),
_ => Ok(()),
}
}
async fn run_stat_generator() {
let mut count = 0;
@ -850,6 +749,36 @@ async fn generate_host_stats(save: bool) {
});
}
fn check_schedule(worker_type: &str, event_str: &str, id: &str) -> bool {
let event = match parse_calendar_event(event_str) {
Ok(event) => event,
Err(err) => {
eprintln!("unable to parse schedule '{}' - {}", event_str, err);
return false;
}
};
let last = match jobstate::last_run_time(worker_type, &id) {
Ok(time) => time,
Err(err) => {
eprintln!("could not get last run time of {} {}: {}", worker_type, id, err);
return false;
}
};
let next = match compute_next_event(&event, last, false) {
Ok(Some(next)) => next,
Ok(None) => return false,
Err(err) => {
eprintln!("compute_next_event for '{}' failed - {}", event_str, err);
return false;
}
};
let now = proxmox::tools::time::epoch_i64();
next <= now
}
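check_schedule() folds the parse-schedule / last-run / next-event sequence that each scheduler above previously open-coded into a single boolean test. A hedged caller sketch (job type, schedule string and id are hypothetical; crate items assumed in scope):

// hedged caller sketch: test the schedule first, take the job lock second
let worker_type = "prune";
let store = String::from("store1"); // hypothetical datastore name
if check_schedule(worker_type, "daily", &store) {
    match Job::new(worker_type, &store) {
        Ok(job) => { /* hand the job to do_prune_job() and log errors */ }
        Err(_) => { /* lock held elsewhere - skip this scheduler round */ }
    }
}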
fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, rrd_prefix: &str, save: bool) {
match proxmox_backup::tools::disks::disk_usage(path) {


@ -0,0 +1,73 @@
use anyhow::Error;
use serde_json::{json, Value};
use proxmox::api::{cli::*, RpcEnvironment, ApiHandler};
use proxmox_backup::api2;
use proxmox_backup::tools::subscription;
async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {
let upid: proxmox_backup::server::UPID = upid_str.parse()?;
let sleep_duration = core::time::Duration::new(0, 100_000_000);
loop {
if !proxmox_backup::server::worker_is_active_local(&upid) {
break;
}
tokio::time::delay_for(sleep_duration).await;
}
Ok(())
}
/// Daily update
async fn do_update(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let param = json!({});
let method = &api2::node::subscription::API_METHOD_CHECK_SUBSCRIPTION;
let _res = match method.handler {
ApiHandler::Sync(handler) => (handler)(param, method, rpcenv)?,
_ => unreachable!(),
};
let notify = match subscription::read_subscription() {
Ok(Some(subscription)) => subscription.status == subscription::SubscriptionStatus::ACTIVE,
Ok(None) => false,
Err(err) => {
eprintln!("Error reading subscription - {}", err);
false
},
};
let param = json!({
"notify": notify,
});
let method = &api2::node::apt::API_METHOD_APT_UPDATE_DATABASE;
let upid = match method.handler {
ApiHandler::Sync(handler) => (handler)(param, method, rpcenv)?,
_ => unreachable!(),
};
wait_for_local_worker(upid.as_str().unwrap()).await?;
// TODO: certificate checks/renewal/... ?
// TODO: cleanup tasks like in PVE?
Ok(Value::Null)
}
fn main() {
proxmox_backup::tools::setup_safe_path_env();
let mut rpcenv = CliEnvironment::new();
rpcenv.set_auth_id(Some(String::from("root@pam")));
match proxmox_backup::tools::runtime::main(do_update(&mut rpcenv)) {
Err(err) => {
eprintln!("error during update: {}", err);
std::process::exit(1);
},
_ => (),
}
}


@ -60,7 +60,7 @@ pub fn acl_commands() -> CommandLineInterface {
"update",
CliCommand::new(&api2::access::acl::API_METHOD_UPDATE_ACL)
.arg_param(&["path", "role"])
.completion_cb("userid", config::user::complete_userid)
.completion_cb("auth-id", config::user::complete_authid)
.completion_cb("path", config::datastore::complete_acl_path)
);


@ -14,5 +14,7 @@ mod sync;
pub use sync::*;
mod user;
pub use user::*;
mod subscription;
pub use subscription::*;
mod disk;
pub use disk::*;


@ -0,0 +1,55 @@
use anyhow::Error;
use serde_json::Value;
use proxmox::api::{api, cli::*, RpcEnvironment, ApiHandler};
use proxmox_backup::api2;
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Read subscription info.
fn get(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let info = &api2::node::subscription::API_METHOD_GET_SUBSCRIPTION;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options();
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(Value::Null)
}
pub fn subscription_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
.insert("get", CliCommand::new(&API_METHOD_GET))
.insert("set",
CliCommand::new(&api2::node::subscription::API_METHOD_SET_SUBSCRIPTION)
.fixed_param("node", "localhost".into())
.arg_param(&["key"])
)
.insert("update",
CliCommand::new(&api2::node::subscription::API_METHOD_CHECK_SUBSCRIPTION)
.fixed_param("node", "localhost".into())
)
.insert("remove",
CliCommand::new(&api2::node::subscription::API_METHOD_DELETE_SUBSCRIPTION)
.fixed_param("node", "localhost".into())
)
;
cmd_def.into()
}


@ -100,7 +100,7 @@ fn list_tokens(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, E
schema: OUTPUT_FORMAT,
optional: true,
},
auth_id: {
"auth-id": {
type: Authid,
},
path: {
@ -195,8 +195,8 @@ pub fn user_commands() -> CommandLineInterface {
.insert(
"permissions",
CliCommand::new(&&API_METHOD_LIST_PERMISSIONS)
.arg_param(&["auth_id"])
.completion_cb("auth_id", config::user::complete_authid)
.arg_param(&["auth-id"])
.completion_cb("auth-id", config::user::complete_authid)
.completion_cb("path", config::datastore::complete_acl_path)
);


@ -4,7 +4,31 @@
pub const CONFIGDIR: &str = "/etc/proxmox-backup";
pub const JS_DIR: &str = "/usr/share/javascript/proxmox-backup";
pub const API_ACCESS_LOG_FN: &str = "/var/log/proxmox-backup/api/access.log";
#[macro_export]
macro_rules! PROXMOX_BACKUP_RUN_DIR_M { () => ("/run/proxmox-backup") }
#[macro_export]
macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
/// namespaced directory for in-memory (tmpfs) run state
pub const PROXMOX_BACKUP_RUN_DIR: &str = PROXMOX_BACKUP_RUN_DIR_M!();
/// namespaced directory for persistent logging
pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
/// logfile for all API requests handled by the proxy and privileged API daemons. Note that not all
/// failed logins can be logged here with full information, use the auth log for that.
pub const API_ACCESS_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/access.log");
/// logfile for any failed authentication, via ticket or via token, and new successful ticket
/// creations. This file can be useful for fail2ban.
pub const API_AUTH_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/auth.log");
/// the PID filename for the unprivileged proxy daemon
pub const PROXMOX_BACKUP_PROXY_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/proxy.pid");
/// the PID filename for the privileged api daemon
pub const PROXMOX_BACKUP_API_PID_FN: &str = concat!(PROXMOX_BACKUP_RUN_DIR_M!(), "/api.pid");
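The `_M!` macro indirection is needed because `concat!` only accepts literals, so the shared base directory cannot be a `const &str`. A minimal hedged illustration (names hypothetical):

// concat!() takes string literals only, hence the macro form of the base path
macro_rules! BASE_DIR_M { () => ("/run/example") } // hypothetical base
const SOCK_FN: &str = concat!(BASE_DIR_M!(), "/control.sock");
// a plain `const BASE_DIR: &str = "/run/example";` would be rejected
// by concat!(), which cannot evaluate non-literal arguments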
/// Prepend configuration directory to a file name
///


@ -405,6 +405,9 @@ impl HttpClient {
///
/// Login is done on demand, so this is only required if you need
/// access to authentication data in 'AuthInfo'.
///
/// Note: tickets are periodically renewed, so one can use this
/// to query the changed ticket.
pub async fn login(&self) -> Result<AuthInfo, Error> {
if let Some(future) = &self.first_auth {
future.listen().await?;
@ -493,7 +496,7 @@ impl HttpClient {
let auth = self.login().await?;
if auth.auth_id.is_token() {
let enc_api_token = format!("{}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
let enc_api_token = format!("PBSAPIToken {}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Authorization", HeaderValue::from_str(&enc_api_token).unwrap());
} else {
let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
@ -602,7 +605,7 @@ impl HttpClient {
let auth = self.login().await?;
if auth.auth_id.is_token() {
let enc_api_token = format!("{}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
let enc_api_token = format!("PBSAPIToken {}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Authorization", HeaderValue::from_str(&enc_api_token).unwrap());
} else {
let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));


@ -103,7 +103,7 @@ async fn pull_index_chunks<I: IndexFile>(
let bytes = bytes.load(Ordering::SeqCst);
worker.log(format!("downloaded {} bytes ({} MiB/s)", bytes, (bytes as f64)/(1024.0*1024.0*elapsed)));
worker.log(format!("downloaded {} bytes ({:.2} MiB/s)", bytes, (bytes as f64)/(1024.0*1024.0*elapsed)));
Ok(())
}
@ -410,7 +410,8 @@ pub async fn pull_group(
list.sort_unstable_by(|a, b| a.backup_time.cmp(&b.backup_time));
let auth_info = client.login().await?;
client.login().await?; // make sure auth is complete
let fingerprint = client.fingerprint();
let last_sync = tgt_store.last_successful_backup(group)?;
@ -447,6 +448,9 @@ pub async fn pull_group(
if last_sync_time > backup_time { continue; }
}
// get updated auth_info (new tickets)
let auth_info = client.login().await?;
let options = HttpClientOptions::new()
.password(Some(auth_info.ticket.clone()))
.fingerprint(fingerprint.clone());


@ -26,14 +26,23 @@ constnamedbitmap! {
PRIV_SYS_MODIFY("Sys.Modify");
PRIV_SYS_POWER_MANAGEMENT("Sys.PowerManagement");
/// Datastore.Audit allows knowing about a datastore,
/// including reading the configuration entry and listing its contents
PRIV_DATASTORE_AUDIT("Datastore.Audit");
/// Datastore.Allocate allows creating or deleting datastores
PRIV_DATASTORE_ALLOCATE("Datastore.Allocate");
/// Datastore.Modify allows modifying a datastore and its contents
PRIV_DATASTORE_MODIFY("Datastore.Modify");
/// Datastore.Read allows reading arbitrary backup contents
PRIV_DATASTORE_READ("Datastore.Read");
/// Allows verifying a datastore
PRIV_DATASTORE_VERIFY("Datastore.Verify");
/// Datastore.Backup also requires backup ownership
/// Datastore.Backup allows Datastore.Read|Verify and creating new snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_BACKUP("Datastore.Backup");
/// Datastore.Prune also requires backup ownership
/// Datastore.Prune allows deleting snapshots,
/// but also requires backup ownership
PRIV_DATASTORE_PRUNE("Datastore.Prune");
PRIV_PERMISSIONS_MODIFY("Permissions.Modify");
@ -41,7 +50,6 @@ constnamedbitmap! {
PRIV_REMOTE_AUDIT("Remote.Audit");
PRIV_REMOTE_MODIFY("Remote.Modify");
PRIV_REMOTE_READ("Remote.Read");
PRIV_REMOTE_PRUNE("Remote.Prune");
PRIV_SYS_CONSOLE("Sys.Console");
}
@ -64,12 +72,14 @@ pub const ROLE_DATASTORE_ADMIN: u64 =
PRIV_DATASTORE_AUDIT |
PRIV_DATASTORE_MODIFY |
PRIV_DATASTORE_READ |
PRIV_DATASTORE_VERIFY |
PRIV_DATASTORE_BACKUP |
PRIV_DATASTORE_PRUNE;
/// Datastore.Reader can read datastore content an do restore
/// Datastore.Reader can read/verify datastore content and do restore
pub const ROLE_DATASTORE_READER: u64 =
PRIV_DATASTORE_AUDIT |
PRIV_DATASTORE_VERIFY |
PRIV_DATASTORE_READ;
/// Datastore.Backup can do backup and restore, but no prune.
@ -93,14 +103,12 @@ PRIV_REMOTE_AUDIT;
pub const ROLE_REMOTE_ADMIN: u64 =
PRIV_REMOTE_AUDIT |
PRIV_REMOTE_MODIFY |
PRIV_REMOTE_READ |
PRIV_REMOTE_PRUNE;
PRIV_REMOTE_READ;
/// Remote.SyncOperator can do read and prune on the remote.
pub const ROLE_REMOTE_SYNC_OPERATOR: u64 =
PRIV_REMOTE_AUDIT |
PRIV_REMOTE_READ |
PRIV_REMOTE_PRUNE;
PRIV_REMOTE_READ;
pub const ROLE_NAME_NO_ACCESS: &str = "NoAccess";
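Because roles are plain u64 privilege bitmasks, a permission test reduces to bit arithmetic; the helper below is an illustrative sketch, not part of the source:

// a role grants a privilege iff all of that privilege's bits are set
fn role_has_priv(role: u64, privilege: u64) -> bool {
    (role & privilege) == privilege
}
// with the additions above, for example:
// role_has_priv(ROLE_DATASTORE_READER, PRIV_DATASTORE_VERIFY) == true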


@ -57,6 +57,14 @@ impl CachedUserInfo {
Ok(config)
}
#[cfg(test)]
pub(crate) fn test_new(user_cfg: SectionConfigData, acl_tree: AclTree) -> Self {
Self {
user_cfg: Arc::new(user_cfg),
acl_tree: Arc::new(acl_tree),
}
}
/// Test if a authentication id is enabled and not expired
pub fn is_active_auth_id(&self, auth_id: &Authid) -> bool {
let userid = auth_id.user();


@ -32,6 +32,14 @@ pub const DIR_NAME_SCHEMA: Schema = StringSchema::new("Directory name").schema()
path: {
schema: DIR_NAME_SCHEMA,
},
"notify-user": {
optional: true,
type: Userid,
},
"notify": {
optional: true,
type: Notify,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
@ -101,6 +109,12 @@ pub struct DataStoreConfig {
/// If enabled, all backups will be verified right after completion.
#[serde(skip_serializing_if="Option::is_none")]
pub verify_new: Option<bool>,
/// Send job email notification to this user
#[serde(skip_serializing_if="Option::is_none")]
pub notify_user: Option<Userid>,
/// Send notification only for job errors
#[serde(skip_serializing_if="Option::is_none")]
pub notify: Option<Notify>,
}
fn init() -> SectionConfig {


@ -57,38 +57,57 @@ lazy_static! {
}
pub fn parse_cidr(cidr: &str) -> Result<(String, u8, bool), Error> {
let (address, mask, is_v6) = parse_address_or_cidr(cidr)?;
if let Some(mask) = mask {
return Ok((address, mask, is_v6));
} else {
bail!("missing netmask in '{}'", cidr);
}
}
pub fn check_netmask(mask: u8, is_v6: bool) -> Result<(), Error> {
if is_v6 {
if !(mask >= 1 && mask <= 128) {
bail!("IPv6 mask '{}' is out of range (1..128).", mask);
}
} else {
if !(mask > 0 && mask <= 32) {
bail!("IPv4 mask '{}' is out of range (1..32).", mask);
}
}
Ok(())
}
// parse an IP address with optional CIDR mask
pub fn parse_address_or_cidr(cidr: &str) -> Result<(String, Option<u8>, bool), Error> {
lazy_static! {
pub static ref CIDR_V4_REGEX: Regex = Regex::new(
concat!(r"^(", IPV4RE!(), r")(?:/(\d{1,2}))$")
concat!(r"^(", IPV4RE!(), r")(?:/(\d{1,2}))?$")
).unwrap();
pub static ref CIDR_V6_REGEX: Regex = Regex::new(
concat!(r"^(", IPV6RE!(), r")(?:/(\d{1,3}))$")
concat!(r"^(", IPV6RE!(), r")(?:/(\d{1,3}))?$")
).unwrap();
}
if let Some(caps) = CIDR_V4_REGEX.captures(&cidr) {
let address = &caps[1];
let mask = &caps[2];
let mask = u8::from_str_radix(mask, 10)
.map(|mask| {
if !(mask > 0 && mask <= 32) {
bail!("IPv4 mask '{}' is out of range (1..32).", mask);
}
Ok(mask)
})?;
return Ok((address.to_string(), mask.unwrap(), false));
if let Some(mask) = caps.get(2) {
let mask = u8::from_str_radix(mask.as_str(), 10)?;
check_netmask(mask, false)?;
return Ok((address.to_string(), Some(mask), false));
} else {
return Ok((address.to_string(), None, false));
}
} else if let Some(caps) = CIDR_V6_REGEX.captures(&cidr) {
let address = &caps[1];
let mask = &caps[2];
let mask = u8::from_str_radix(mask, 10)
.map(|mask| {
if !(mask >= 1 && mask <= 128) {
bail!("IPv6 mask '{}' is out of range (1..128).", mask);
}
Ok(mask)
})?;
return Ok((address.to_string(), mask.unwrap(), true));
if let Some(mask) = caps.get(2) {
let mask = u8::from_str_radix(mask.as_str(), 10)?;
check_netmask(mask, true)?;
return Ok((address.to_string(), Some(mask), true));
} else {
return Ok((address.to_string(), None, true));
}
} else {
bail!("invalid address/mask '{}'", cidr);
}
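A few hedged expectations for the relaxed parsers, assuming they behave exactly as written above (inside a function returning Result<(), Error>):

// the mask is now optional: a bare address parses with mask = None
assert_eq!(parse_address_or_cidr("192.168.0.1")?,
           ("192.168.0.1".to_string(), None, false));
assert_eq!(parse_address_or_cidr("192.168.0.0/24")?,
           ("192.168.0.0".to_string(), Some(24), false));
// parse_cidr() still insists on an explicit netmask
assert!(parse_cidr("192.168.0.1").is_err());
// check_netmask() range rules: IPv4 1..=32, IPv6 1..=128
assert!(check_netmask(24, false).is_ok());
assert!(check_netmask(0, false).is_err());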


@ -86,20 +86,35 @@ impl <R: BufRead> NetworkParser<R> {
Ok(())
}
fn parse_iface_address(&mut self, interface: &mut Interface) -> Result<(), Error> {
self.eat(Token::Address)?;
let cidr = self.next_text()?;
fn parse_netmask(&mut self) -> Result<u8, Error> {
self.eat(Token::Netmask)?;
let netmask = self.next_text()?;
let (_address, _mask, ipv6) = parse_cidr(&cidr)?;
if ipv6 {
interface.set_cidr_v6(cidr)?;
let mask = if let Some(mask) = IPV4_MASK_HASH_LOCALNET.get(netmask.as_str()) {
*mask
} else {
interface.set_cidr_v4(cidr)?;
}
match u8::from_str_radix(netmask.as_str(), 10) {
Ok(mask) => mask,
Err(err) => {
bail!("unable to parse netmask '{}' - {}", netmask, err);
}
}
};
self.eat(Token::Newline)?;
Ok(())
Ok(mask)
}
fn parse_iface_address(&mut self) -> Result<(String, Option<u8>, bool), Error> {
self.eat(Token::Address)?;
let cidr = self.next_text()?;
let (_address, mask, ipv6) = parse_address_or_cidr(&cidr)?;
self.eat(Token::Newline)?;
Ok((cidr, mask, ipv6))
}
fn parse_iface_gateway(&mut self, interface: &mut Interface) -> Result<(), Error> {
@ -191,6 +206,9 @@ impl <R: BufRead> NetworkParser<R> {
address_family_v6: bool,
) -> Result<(), Error> {
let mut netmask = None;
let mut address_list = Vec::new();
loop {
match self.peek()? {
Token::Attribute => { self.eat(Token::Attribute)?; },
@ -214,8 +232,15 @@ impl <R: BufRead> NetworkParser<R> {
}
match self.peek()? {
Token::Address => self.parse_iface_address(interface)?,
Token::Address => {
let (cidr, mask, is_v6) = self.parse_iface_address()?;
address_list.push((cidr, mask, is_v6));
}
Token::Gateway => self.parse_iface_gateway(interface)?,
Token::Netmask => {
// Note: netmask is deprecated, but we try to do our best
netmask = Some(self.parse_netmask()?);
}
Token::MTU => {
let mtu = self.parse_iface_mtu()?;
interface.mtu = Some(mtu);
@ -255,8 +280,6 @@ impl <R: BufRead> NetworkParser<R> {
interface.bond_xmit_hash_policy = Some(policy);
self.eat(Token::Newline)?;
}
Token::Netmask => bail!("netmask is deprecated and no longer supported"),
_ => { // parse addon attributes
let option = self.parse_to_eol()?;
if !option.is_empty() {
@ -270,6 +293,38 @@ impl <R: BufRead> NetworkParser<R> {
}
}
if let Some(netmask) = netmask {
if address_list.len() > 1 {
bail!("unable to apply netmask to multiple addresses (please use cidr notation)");
} else if address_list.len() == 1 {
let (mut cidr, mask, is_v6) = address_list.pop().unwrap();
if mask.is_some() {
// address already has a mask - ignore netmask
} else {
check_netmask(netmask, is_v6)?;
cidr.push_str(&format!("/{}", netmask));
}
if is_v6 {
interface.set_cidr_v6(cidr)?;
} else {
interface.set_cidr_v4(cidr)?;
}
} else {
// no address - simply ignore useless netmask
}
} else {
for (cidr, mask, is_v6) in address_list {
if mask.is_none() {
bail!("missing netmask in '{}'", cidr);
}
if is_v6 {
interface.set_cidr_v6(cidr)?;
} else {
interface.set_cidr_v4(cidr)?;
}
}
}
Ok(())
}
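With that fallback in place, a legacy ifupdown stanza like this hypothetical one is accepted again and folded into CIDR notation before being stored on the interface:

# legacy style: address plus separate netmask line (deprecated, but tolerated)
iface ens18 inet static
    address 192.168.1.10
    netmask 255.255.255.0
# parsed as if it read: address 192.168.1.10/24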


@ -21,7 +21,6 @@ lazy_static! {
static ref CONFIG: SectionConfig = init();
}
#[api(
properties: {
id: {
@ -30,6 +29,10 @@ lazy_static! {
store: {
schema: DATASTORE_SCHEMA,
},
"owner": {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
@ -56,6 +59,8 @@ lazy_static! {
pub struct SyncJobConfig {
pub id: String,
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Authid>,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]
@ -66,6 +71,21 @@ pub struct SyncJobConfig {
pub schedule: Option<String>,
}
impl From<&SyncJobStatus> for SyncJobConfig {
fn from(job_status: &SyncJobStatus) -> Self {
Self {
id: job_status.id.clone(),
store: job_status.store.clone(),
owner: job_status.owner.clone(),
remote: job_status.remote.clone(),
remote_store: job_status.remote_store.clone(),
remove_vanished: job_status.remove_vanished.clone(),
comment: job_status.comment.clone(),
schedule: job_status.schedule.clone(),
}
}
}
// FIXME: generate duplicate schemas/structs from one listing?
#[api(
properties: {
@ -75,6 +95,10 @@ pub struct SyncJobConfig {
store: {
schema: DATASTORE_SCHEMA,
},
owner: {
type: Authid,
optional: true,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
@ -121,6 +145,8 @@ pub struct SyncJobConfig {
pub struct SyncJobStatus {
pub id: String,
pub store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<Authid>,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]


@ -241,6 +241,14 @@ pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
Ok(())
}
#[cfg(test)]
pub(crate) fn test_cfg_from_str(raw: &str) -> Result<(SectionConfigData, [u8;32]), Error> {
let cfg = init();
let parsed = cfg.parse("test_user_cfg", raw)?;
Ok((parsed, [0;32]))
}
// shell completion helper
pub fn complete_userid(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {


@ -4,6 +4,47 @@
//! services. We want async IO, so this is built on top of
//! tokio/hyper.
use anyhow::{format_err, Error};
use lazy_static::lazy_static;
use nix::unistd::Pid;
use proxmox::sys::linux::procfs::PidStat;
use crate::buildcfg;
lazy_static! {
static ref PID: i32 = unsafe { libc::getpid() };
static ref PSTART: u64 = PidStat::read_from_pid(Pid::from_raw(*PID)).unwrap().starttime;
}
pub fn pid() -> i32 {
*PID
}
pub fn pstart() -> u64 {
*PSTART
}
pub fn write_pid(pid_fn: &str) -> Result<(), Error> {
let pid_str = format!("{}\n", *PID);
let opts = proxmox::tools::fs::CreateOptions::new();
proxmox::tools::fs::replace_file(pid_fn, pid_str.as_bytes(), opts)
}
pub fn read_pid(pid_fn: &str) -> Result<i32, Error> {
let pid = proxmox::tools::fs::file_get_contents(pid_fn)?;
let pid = std::str::from_utf8(&pid)?.trim();
pid.parse().map_err(|err| format_err!("could not parse pid - {}", err))
}
pub fn ctrl_sock_from_pid(pid: i32) -> String {
format!("\0{}/control-{}.sock", buildcfg::PROXMOX_BACKUP_RUN_DIR, pid)
}
pub fn our_ctrl_sock() -> String {
ctrl_sock_from_pid(*PID)
}
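The leading NUL byte places these control sockets in Linux's abstract Unix-socket namespace, so no filesystem node is created or needs cleanup. A hedged illustration:

// abstract namespace: the kernel owns the name, nothing appears on disk
let name = ctrl_sock_from_pid(1234); // hypothetical pid
// name == "\0/run/proxmox-backup/control-1234.sock"
assert!(name.starts_with('\0'));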
mod environment;
pub use environment::*;
@ -35,5 +76,14 @@ pub mod jobstate;
mod verify_job;
pub use verify_job::*;
mod prune_job;
pub use prune_job::*;
mod gc_job;
pub use gc_job::*;
mod email_notifications;
pub use email_notifications::*;
mod report;
pub use report::*;


@ -1,17 +1,17 @@
use anyhow::{bail, format_err, Error};
use futures::*;
use tokio::net::UnixListener;
use std::path::PathBuf;
use serde_json::Value;
use std::sync::Arc;
use std::collections::HashMap;
use std::os::unix::io::AsRawFd;
use std::path::PathBuf;
use std::sync::Arc;
use futures::*;
use tokio::net::UnixListener;
use serde_json::Value;
use nix::sys::socket;
/// Listens on a Unix Socket to handle simple command asynchronously
pub fn create_control_socket<P, F>(path: P, func: F) -> Result<impl Future<Output = ()>, Error>
fn create_control_socket<P, F>(path: P, func: F) -> Result<impl Future<Output = ()>, Error>
where
P: Into<PathBuf>,
F: Fn(Value) -> Result<Value, Error> + Send + Sync + 'static,
@ -140,3 +140,76 @@ pub async fn send_command<P>(
}
}).await
}
/// A callback for a specific commando socket.
pub type CommandoSocketFn = Box<(dyn Fn(Option<&Value>) -> Result<Value, Error> + Send + Sync + 'static)>;
/// Tooling to get a single control command socket where one can register multiple commands
/// dynamically.
/// You need to call `spawn()` to make the socket active.
pub struct CommandoSocket {
socket: PathBuf,
commands: HashMap<String, CommandoSocketFn>,
}
impl CommandoSocket {
pub fn new<P>(path: P) -> Self
where P: Into<PathBuf>,
{
CommandoSocket {
socket: path.into(),
commands: HashMap::new(),
}
}
/// Spawn the socket and consume self, meaning you cannot register commands anymore after
/// calling this.
pub fn spawn(self) -> Result<(), Error> {
let control_future = create_control_socket(self.socket.to_owned(), move |param| {
let param = param
.as_object()
.ok_or_else(|| format_err!("unable to parse parameters (expected json object)"))?;
let command = match param.get("command") {
Some(Value::String(command)) => command.as_str(),
None => bail!("no command"),
_ => bail!("unable to parse command"),
};
if !self.commands.contains_key(command) {
bail!("got unknown command '{}'", command);
}
match self.commands.get(command) {
None => bail!("got unknown command '{}'", command),
Some(handler) => {
let args = param.get("args"); //.unwrap_or(&Value::Null);
(handler)(args)
},
}
})?;
tokio::spawn(control_future);
Ok(())
}
/// Register a new command with a callback.
pub fn register_command<F>(
&mut self,
command: String,
handler: F,
) -> Result<(), Error>
where
F: Fn(Option<&Value>) -> Result<Value, Error> + Send + Sync + 'static,
{
if self.commands.contains_key(&command) {
bail!("command '{}' already exists!", command);
}
self.commands.insert(command, Box::new(handler));
Ok(())
}
}
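A hedged usage sketch for the new socket (command name and payload are illustrative; error handling elided):

// build, register, then activate - spawn() consumes the socket
let mut sock = CommandoSocket::new(our_ctrl_sock());
sock.register_command("ping".into(), |_args| {
    Ok(serde_json::json!("pong"))
})?;
sock.spawn()?; // after this, no further register_command() calls are possible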


@ -2,7 +2,7 @@ use std::collections::HashMap;
use std::path::PathBuf;
use std::time::SystemTime;
use std::fs::metadata;
use std::sync::{Mutex, RwLock};
use std::sync::{Arc, Mutex, RwLock};
use anyhow::{bail, Error, format_err};
use hyper::Method;
@ -21,7 +21,7 @@ pub struct ApiConfig {
env_type: RpcEnvironmentType,
templates: RwLock<Handlebars<'static>>,
template_files: RwLock<HashMap<String, (SystemTime, PathBuf)>>,
request_log: Option<Mutex<FileLogger>>,
request_log: Option<Arc<Mutex<FileLogger>>>,
}
impl ApiConfig {
@ -124,7 +124,11 @@ impl ApiConfig {
}
}
pub fn enable_file_log<P>(&mut self, path: P) -> Result<(), Error>
pub fn enable_file_log<P>(
&mut self,
path: P,
commando_sock: &mut super::CommandoSocket,
) -> Result<(), Error>
where
P: Into<PathBuf>
{
@ -142,11 +146,19 @@ impl ApiConfig {
owned_by_backup: true,
..Default::default()
};
self.request_log = Some(Mutex::new(FileLogger::new(&path, logger_options)?));
let request_log = Arc::new(Mutex::new(FileLogger::new(&path, logger_options)?));
self.request_log = Some(Arc::clone(&request_log));
commando_sock.register_command("api-access-log-reopen".into(), move |_args| {
println!("re-opening log file");
request_log.lock().unwrap().reopen()?;
Ok(serde_json::Value::Null)
})?;
Ok(())
}
pub fn get_file_log(&self) -> Option<&Mutex<FileLogger>> {
pub fn get_file_log(&self) -> Option<&Arc<Mutex<FileLogger>>> {
self.request_log.as_ref()
}
}
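The order matters here: commands are registered while the commando socket is still mutable, and `spawn()` happens only afterwards. A hedged wiring sketch (error handling elided):

// hedged wiring order for the daemons
let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
config.enable_file_log(buildcfg::API_ACCESS_LOG_FN, &mut commando_sock)?;
// ... register any further commands here ...
commando_sock.spawn()?; // socket goes live, registration is closed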


@ -6,11 +6,14 @@ use handlebars::{Handlebars, Helper, Context, RenderError, RenderContext, Output
use proxmox::tools::email::sendmail;
use crate::{
config::datastore::DataStoreConfig,
config::verify::VerificationJobConfig,
config::sync::SyncJobConfig,
api2::types::{
Userid,
APTUpdateInfo,
GarbageCollectionStatus,
Userid,
Notify,
},
tools::format::HumanByte,
};
@ -23,19 +26,24 @@ Index file count: {{status.index-file-count}}
Removed garbage: {{human-bytes status.removed-bytes}}
Removed chunks: {{status.removed-chunks}}
Remove bad files: {{status.removed-bad}}
Removed bad chunks: {{status.removed-bad}}
Bad files: {{status.still-bad}}
Leftover bad chunks: {{status.still-bad}}
Pending removals: {{human-bytes status.pending-bytes}} (in {{status.pending-chunks}} chunks)
Original Data usage: {{human-bytes status.index-data-bytes}}
On Disk usage: {{human-bytes status.disk-bytes}} ({{relative-percentage status.disk-bytes status.index-data-bytes}})
On Disk chunks: {{status.disk-chunks}}
On-Disk usage: {{human-bytes status.disk-bytes}} ({{relative-percentage status.disk-bytes status.index-data-bytes}})
On-Disk chunks: {{status.disk-chunks}}
Deduplication Factor: {{deduplication-factor}}
Garbage collection successful.
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{datastore}}>
"###;
@ -45,6 +53,11 @@ Datastore: {{datastore}}
Garbage collection failed: {{error}}
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
"###;
const VERIFY_OK_TEMPLATE: &str = r###"
@ -54,6 +67,11 @@ Datastore: {{job.store}}
Verification successful.
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>
"###;
const VERIFY_ERR_TEMPLATE: &str = r###"
@ -64,9 +82,14 @@ Datastore: {{job.store}}
Verification failed on these snapshots:
{{#each errors}}
{{this}}
{{this~}}
{{/each}}
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
"###;
const SYNC_OK_TEMPLATE: &str = r###"
@ -78,6 +101,11 @@ Remote Store: {{job.remote-store}}
Synchronization successful.
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>
"###;
const SYNC_ERR_TEMPLATE: &str = r###"
@ -89,8 +117,26 @@ Remote Store: {{job.remote-store}}
Synchronization failed: {{error}}
Please visit the web interface for further details:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>
"###;
const PACKAGE_UPDATES_TEMPLATE: &str = r###"
Proxmox Backup Server has the following updates available:
{{#each updates }}
{{Package}}: {{OldVersion}} -> {{Version~}}
{{/each }}
To upgrade visit the web interface:
<https://{{fqdn}}:{{port}}/#pbsServerAdministration:updates>
"###;
lazy_static::lazy_static!{
static ref HANDLEBARS: Handlebars<'static> = {
@ -110,6 +156,8 @@ lazy_static::lazy_static!{
hb.register_template_string("sync_ok_template", SYNC_OK_TEMPLATE).unwrap();
hb.register_template_string("sync_err_template", SYNC_ERR_TEMPLATE).unwrap();
hb.register_template_string("package_update_template", PACKAGE_UPDATES_TEMPLATE).unwrap();
hb
};
}
@ -142,11 +190,23 @@ fn send_job_status_mail(
pub fn send_gc_status(
email: &str,
notify: Notify,
datastore: &str,
status: &GarbageCollectionStatus,
result: &Result<(), Error>,
) -> Result<(), Error> {
if notify == Notify::Never || (result.is_ok() && notify == Notify::Error) {
return Ok(());
}
let (fqdn, port) = get_server_url();
let mut data = json!({
"datastore": datastore,
"fqdn": fqdn,
"port": port,
});
let text = match result {
Ok(()) => {
let deduplication_factor = if status.disk_bytes > 0 {
@ -155,19 +215,13 @@ pub fn send_gc_status(
1.0
};
let data = json!({
"status": status,
"datastore": datastore,
"deduplication-factor": format!("{:.2}", deduplication_factor),
});
data["status"] = json!(status);
data["deduplication-factor"] = format!("{:.2}", deduplication_factor).into();
HANDLEBARS.render("gc_ok_template", &data)?
}
Err(err) => {
let data = json!({
"error": err.to_string(),
"datastore": datastore,
});
data["error"] = err.to_string().into();
HANDLEBARS.render("gc_err_template", &data)?
}
};
@ -190,22 +244,32 @@ pub fn send_gc_status(
pub fn send_verify_status(
email: &str,
notify: Notify,
job: VerificationJobConfig,
result: &Result<Vec<String>, Error>,
) -> Result<(), Error> {
if notify == Notify::Never || (result.is_ok() && notify == Notify::Error) {
return Ok(());
}
let (fqdn, port) = get_server_url();
let mut data = json!({
"job": job,
"fqdn": fqdn,
"port": port,
});
let text = match result {
Ok(errors) if errors.is_empty() => {
let data = json!({ "job": job });
HANDLEBARS.render("verify_ok_template", &data)?
}
Ok(errors) => {
let data = json!({ "job": job, "errors": errors });
data["errors"] = json!(errors);
HANDLEBARS.render("verify_err_template", &data)?
}
Err(_) => {
// aboreted job - do not send any email
// aborted job - do not send any email
return Ok(());
}
};
@ -228,17 +292,28 @@ pub fn send_verify_status(
pub fn send_sync_status(
email: &str,
notify: Notify,
job: &SyncJobConfig,
result: &Result<(), Error>,
) -> Result<(), Error> {
if notify == Notify::Never || (result.is_ok() && notify == Notify::Error) {
return Ok(());
}
let (fqdn, port) = get_server_url();
let mut data = json!({
"job": job,
"fqdn": fqdn,
"port": port,
});
let text = match result {
Ok(()) => {
let data = json!({ "job": job });
HANDLEBARS.render("sync_ok_template", &data)?
}
Err(err) => {
let data = json!({ "job": job, "error": err.to_string() });
data["error"] = err.to_string().into();
HANDLEBARS.render("sync_err_template", &data)?
}
};
@ -261,10 +336,50 @@ pub fn send_sync_status(
Ok(())
}
fn get_server_url() -> (String, usize) {
// user will surely request that they can change this
let nodename = proxmox::tools::nodename();
let mut fqdn = nodename.to_owned();
if let Ok(resolv_conf) = crate::api2::node::dns::read_etc_resolv_conf() {
if let Some(search) = resolv_conf["search"].as_str() {
fqdn.push('.');
fqdn.push_str(search);
}
}
let port = 8007;
(fqdn, port)
}
pub fn send_updates_available(
updates: &Vec<&APTUpdateInfo>,
) -> Result<(), Error> {
// update mails always go to the email configured for root@pam
if let Some(email) = lookup_user_email(Userid::root_userid()) {
let nodename = proxmox::tools::nodename();
let subject = format!("New software packages available ({})", nodename);
let (fqdn, port) = get_server_url();
let text = HANDLEBARS.render("package_update_template", &json!({
"fqdn": fqdn,
"port": port,
"updates": updates,
}))?;
send_job_status_mail(&email, &subject, &text)?;
}
Ok(())
}
/// Look up a user's email address
///
/// For "backup@pam", this returns the address from "root@pam".
pub fn lookup_user_email(userid: &Userid) -> Option<String> {
fn lookup_user_email(userid: &Userid) -> Option<String> {
use crate::config::user::{self, User};
@ -281,6 +396,36 @@ pub fn lookup_user_email(userid: &Userid) -> Option<String> {
None
}
/// Lookup Datastore notify settings
pub fn lookup_datastore_notify_settings(
store: &str,
) -> (Option<String>, Notify) {
let mut notify = Notify::Always;
let mut email = None;
let (config, _digest) = match crate::config::datastore::config() {
Ok(result) => result,
Err(_) => return (email, notify),
};
let config: DataStoreConfig = match config.lookup("datastore", store) {
Ok(result) => result,
Err(_) => return (email, notify),
};
email = match config.notify_user {
Some(ref userid) => lookup_user_email(userid),
None => lookup_user_email(Userid::backup_userid()),
};
if let Some(value) = config.notify {
notify = value;
}
(email, notify)
}
// Handlebars helper functions
fn handlebars_humam_bytes_helper(

src/server/gc_job.rs (new file, 63 lines)

@ -0,0 +1,63 @@
use std::sync::Arc;
use anyhow::Error;
use crate::{
server::WorkerTask,
api2::types::*,
server::jobstate::Job,
backup::DataStore,
};
/// Runs a garbage collection job.
pub fn do_garbage_collection_job(
mut job: Job,
datastore: Arc<DataStore>,
auth_id: &Authid,
schedule: Option<String>,
to_stdout: bool,
) -> Result<String, Error> {
let store = datastore.name().to_string();
let (email, notify) = crate::server::lookup_datastore_notify_settings(&store);
let worker_type = job.jobtype().to_string();
let upid_str = WorkerTask::new_thread(
&worker_type,
Some(store.clone()),
auth_id.clone(),
to_stdout,
move |worker| {
job.start(&worker.upid().to_string())?;
worker.log(format!("starting garbage collection on store {}", store));
if let Some(event_str) = schedule {
worker.log(format!("task triggered by schedule '{}'", event_str));
}
let result = datastore.garbage_collection(&*worker, worker.upid());
let status = worker.create_state(&result);
match job.finish(status) {
Err(err) => eprintln!(
"could not finish job state for {}: {}",
job.jobtype().to_string(),
err
),
Ok(_) => (),
}
if let Some(email) = email {
let gc_status = datastore.last_gc_status();
if let Err(err) = crate::server::send_gc_status(&email, notify, &store, &gc_status, &result) {
eprintln!("send gc notification failed: {}", err);
}
}
result
}
)?;
Ok(upid_str)
}

src/server/prune_job.rs (new file, 91 lines)

@ -0,0 +1,91 @@
use anyhow::Error;
use proxmox::try_block;
use crate::{
api2::types::*,
backup::{compute_prune_info, BackupGroup, DataStore, PruneOptions},
server::jobstate::Job,
server::WorkerTask,
task_log,
};
pub fn do_prune_job(
mut job: Job,
prune_options: PruneOptions,
store: String,
auth_id: &Authid,
schedule: Option<String>,
) -> Result<String, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
let worker_type = job.jobtype().to_string();
let upid_str = WorkerTask::new_thread(
&worker_type,
Some(job.jobname().to_string()),
auth_id.clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
let result = try_block!({
task_log!(worker, "Starting datastore prune on store \"{}\"", store);
if let Some(event_str) = schedule {
task_log!(worker, "task triggered by schedule '{}'", event_str);
}
task_log!(
worker,
"retention options: {}",
prune_options.cli_options_string()
);
let base_path = datastore.base_path();
let groups = BackupGroup::list_groups(&base_path)?;
for group in groups {
let list = group.list_backups(&base_path)?;
let mut prune_info = compute_prune_info(list, &prune_options)?;
prune_info.reverse(); // delete older snapshots first
task_log!(
worker,
"Starting prune on store \"{}\" group \"{}/{}\"",
store,
group.backup_type(),
group.backup_id()
);
for (info, keep) in prune_info {
task_log!(
worker,
"{} {}/{}/{}",
if keep { "keep" } else { "remove" },
group.backup_type(),
group.backup_id(),
info.backup_dir.backup_time_string()
);
if !keep {
datastore.remove_backup_dir(&info.backup_dir, true)?;
}
}
}
Ok(())
});
let status = worker.create_state(&result);
if let Err(err) = job.finish(status) {
eprintln!(
"could not finish job state for {}: {}",
job.jobtype().to_string(),
err
);
}
result
},
)?;
Ok(upid_str)
}

src/server/report.rs (new file, 91 lines)

@ -0,0 +1,91 @@
use std::path::Path;
use std::process::Command;
use crate::config::datastore;
fn files() -> Vec<&'static str> {
vec![
"/etc/hostname",
"/etc/hosts",
"/etc/network/interfaces",
"/etc/proxmox-backup/datastore.cfg",
"/etc/proxmox-backup/user.cfg",
"/etc/proxmox-backup/acl.cfg",
"/etc/proxmox-backup/remote.cfg",
"/etc/proxmox-backup/sync.cfg",
"/etc/proxmox-backup/verification.cfg",
]
}
fn commands() -> Vec<(&'static str, Vec<&'static str>)> {
vec![
// ("<command>", vec![<arg [, arg]>])
("df", vec!["-h"]),
("lsblk", vec!["--ascii"]),
("zpool", vec!["status"]),
("zfs", vec!["list"]),
("proxmox-backup-manager", vec!["subscription", "get"]),
]
}
// (<description>, <function to call>)
fn function_calls() -> Vec<(&'static str, fn() -> String)> {
vec![
("Datastores", || {
let config = match datastore::config() {
Ok((config, _digest)) => config,
_ => return String::from("could not read datastore config"),
};
let mut list = Vec::new();
for (store, _) in &config.sections {
list.push(store.as_str());
}
list.join(", ")
})
]
}
pub fn generate_report() -> String {
use proxmox::tools::fs::file_read_optional_string;
let file_contents = files()
.iter()
.map(|file_name| {
let content = match file_read_optional_string(Path::new(file_name)) {
Ok(Some(content)) => content,
Ok(None) => String::from("# file does not exist"),
Err(err) => err.to_string(),
};
format!("# cat '{}'\n{}", file_name, content)
})
.collect::<Vec<String>>()
.join("\n\n");
let command_outputs = commands()
.iter()
.map(|(command, args)| {
let output = Command::new(command)
.env("PROXMOX_OUTPUT_NO_BORDER", "1")
.args(args)
.output();
let output = match output {
Ok(output) => String::from_utf8_lossy(&output.stdout).to_string(),
Err(err) => err.to_string(),
};
format!("# `{} {}`\n{}", command, args.join(" "), output)
})
.collect::<Vec<String>>()
.join("\n\n");
let function_outputs = function_calls()
.iter()
.map(|(desc, function)| format!("# {}\n{}", desc, function()))
.collect::<Vec<String>>()
.join("\n\n");
format!(
"= FILES =\n\n{}\n= COMMANDS =\n\n{}\n= FUNCTIONS =\n\n{}\n",
file_contents, command_outputs, function_outputs
)
}


@ -17,6 +17,7 @@ use lazy_static::lazy_static;
use serde_json::{json, Value};
use tokio::fs::File;
use tokio::time::Instant;
use percent_encoding::percent_decode_str;
use url::form_urlencoded;
use regex::Regex;
@ -111,7 +112,7 @@ pub struct ApiService {
}
fn log_response(
logfile: Option<&Mutex<FileLogger>>,
logfile: Option<&Arc<Mutex<FileLogger>>>,
peer: &std::net::SocketAddr,
method: hyper::Method,
path_query: &str,
@ -163,6 +164,15 @@ fn log_response(
));
}
}
pub fn auth_logger() -> Result<FileLogger, Error> {
let logger_options = tools::FileLogOptions {
append: true,
prefix_time: true,
owned_by_backup: true,
..Default::default()
};
FileLogger::new(crate::buildcfg::API_AUTH_LOG_FN, logger_options)
}
fn get_proxied_peer(headers: &HeaderMap) -> Option<std::net::SocketAddr> {
lazy_static! {
@ -552,7 +562,7 @@ enum AuthData {
}
fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
if let Some(raw_cookie) = headers.get("COOKIE") {
if let Some(raw_cookie) = headers.get(header::COOKIE) {
if let Ok(cookie) = raw_cookie.to_str() {
if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
@ -567,8 +577,14 @@ fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
}
}
match headers.get("AUTHORIZATION").map(|v| v.to_str()) {
Some(Ok(v)) => Some(AuthData::ApiToken(v.to_owned())),
match headers.get(header::AUTHORIZATION).map(|v| v.to_str()) {
Some(Ok(v)) => {
if v.starts_with("PBSAPIToken ") || v.starts_with("PBSAPIToken=") {
Some(AuthData::ApiToken(v["PBSAPIToken ".len()..].to_owned()))
} else {
None
}
},
_ => None,
}
}
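For reference, the two transports this now distinguishes (token string is hypothetical):

// cookie transport: Cookie: PBSAuthCookie=<ticket> (+ CSRFPreventionToken header)
// token transport:  Authorization: PBSAPIToken <tokenid>:<secret>
let header = "PBSAPIToken root@pam!mytoken:super-secret"; // hypothetical value
assert!(header.starts_with("PBSAPIToken "));
assert_eq!(&header["PBSAPIToken ".len()..], "root@pam!mytoken:super-secret");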
@ -609,6 +625,10 @@ fn check_auth(
let tokensecret = parts.next()
.ok_or_else(|| format_err!("failed to split API token header"))?;
let tokensecret = percent_decode_str(tokensecret)
.decode_utf8()
.map_err(|_| format_err!("failed to decode API token header"))?;
crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
Ok(tokenid)
@ -676,6 +696,10 @@ async fn handle_request(
match auth_result {
Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())),
Err(err) => {
let peer = peer.ip();
auth_logger()?
.log(format!("authentication failure; rhost={} msg={}", peer, err));
// always delay unauthorized calls by 3 seconds (from start of request)
let err = http_err!(UNAUTHORIZED, "authentication failed - {}", err);
tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await;


@ -48,7 +48,7 @@ pub fn do_verification_job(
}
};
let email = crate::server::lookup_user_email(auth_id.user());
let (email, notify) = crate::server::lookup_datastore_notify_settings(&verification_job.store);
let job_id = job.jobname().to_string();
let worker_type = job.jobtype().to_string();
@ -65,7 +65,7 @@ pub fn do_verification_job(
task_log!(worker,"task triggered by schedule '{}'", event_str);
}
let result = verify_all_backups(datastore, worker.clone(), worker.upid(), Some(&filter));
let result = verify_all_backups(datastore, worker.clone(), worker.upid(), None, Some(&filter));
let job_result = match result {
Ok(ref errors) if errors.is_empty() => Ok(()),
Ok(_) => Err(format_err!("verification failed - please check the log for details")),
@ -84,7 +84,7 @@ pub fn do_verification_job(
}
if let Some(email) = email {
if let Err(err) = crate::server::send_verify_status(&email, verification_job, &result) {
if let Err(err) = crate::server::send_verify_status(&email, notify, verification_job, &result) {
eprintln!("send verify notification failed: {}", err);
}
}


@ -8,7 +8,6 @@ use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error};
use futures::*;
use lazy_static::lazy_static;
use nix::unistd::Pid;
use serde_json::{json, Value};
use serde::{Serialize, Deserialize};
use tokio::sync::oneshot;
@ -19,34 +18,33 @@ use proxmox::tools::fs::{create_path, open_file_locked, replace_file, CreateOpti
use super::UPID;
use crate::buildcfg;
use crate::server;
use crate::tools::logrotate::{LogRotate, LogRotateFiles};
use crate::tools::{FileLogger, FileLogOptions};
use crate::api2::types::Authid;
use crate::api2::types::{Authid, TaskStateType};
macro_rules! PROXMOX_BACKUP_VAR_RUN_DIR_M { () => ("/run/proxmox-backup") }
macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
macro_rules! PROXMOX_BACKUP_TASK_DIR_M { () => (concat!( PROXMOX_BACKUP_LOG_DIR_M!(), "/tasks")) }
pub const PROXMOX_BACKUP_VAR_RUN_DIR: &str = PROXMOX_BACKUP_VAR_RUN_DIR_M!();
pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
pub const PROXMOX_BACKUP_TASK_DIR: &str = PROXMOX_BACKUP_TASK_DIR_M!();
pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/.active.lock");
pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/active");
pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/index");
pub const PROXMOX_BACKUP_ARCHIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/archive");
macro_rules! taskdir {
($subdir:expr) => (concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/tasks", $subdir))
}
pub const PROXMOX_BACKUP_TASK_DIR: &str = taskdir!("/");
pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = taskdir!("/.active.lock");
pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = taskdir!("/active");
pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = taskdir!("/index");
pub const PROXMOX_BACKUP_ARCHIVE_TASK_FN: &str = taskdir!("/archive");
lazy_static! {
static ref WORKER_TASK_LIST: Mutex<HashMap<usize, Arc<WorkerTask>>> = Mutex::new(HashMap::new());
}
static ref MY_PID: i32 = unsafe { libc::getpid() };
static ref MY_PID_PSTART: u64 = procfs::PidStat::read_from_pid(Pid::from_raw(*MY_PID))
.unwrap()
.starttime;
/// checks if the task UPID refers to a worker from this process
fn is_local_worker(upid: &UPID) -> bool {
upid.pid == server::pid() && upid.pstart == server::pstart()
}
/// Test if the task is still running
pub async fn worker_is_active(upid: &UPID) -> Result<bool, Error> {
if (upid.pid == *MY_PID) && (upid.pstart == *MY_PID_PSTART) {
if is_local_worker(upid) {
return Ok(WORKER_TASK_LIST.lock().unwrap().contains_key(&upid.task_id));
}
@ -54,15 +52,14 @@ pub async fn worker_is_active(upid: &UPID) -> Result<bool, Error> {
return Ok(false);
}
let socketname = format!(
"\0{}/proxmox-task-control-{}.sock", PROXMOX_BACKUP_VAR_RUN_DIR, upid.pid);
let sock = server::ctrl_sock_from_pid(upid.pid);
let cmd = json!({
"command": "status",
"upid": upid.to_string(),
"command": "worker-task-status",
"args": {
"upid": upid.to_string(),
},
});
let status = super::send_command(socketname, cmd).await?;
let status = super::send_command(sock, cmd).await?;
if let Some(active) = status.as_bool() {
Ok(active)
@ -73,69 +70,48 @@ pub async fn worker_is_active(upid: &UPID) -> Result<bool, Error> {
/// Test if the task is still running (fast but inaccurate implementation)
///
/// If the task is spanned from a different process, we simply return if
/// If the task is spawned from a different process, we simply return if
/// that process is still running. This information is good enough to detect
/// stale tasks...
pub fn worker_is_active_local(upid: &UPID) -> bool {
if (upid.pid == *MY_PID) && (upid.pstart == *MY_PID_PSTART) {
if is_local_worker(upid) {
WORKER_TASK_LIST.lock().unwrap().contains_key(&upid.task_id)
} else {
procfs::check_process_running_pstart(upid.pid, upid.pstart).is_some()
}
}
pub fn create_task_control_socket() -> Result<(), Error> {
let socketname = format!(
"\0{}/proxmox-task-control-{}.sock", PROXMOX_BACKUP_VAR_RUN_DIR, *MY_PID);
let control_future = super::create_control_socket(socketname, |param| {
let param = param
.as_object()
.ok_or_else(|| format_err!("unable to parse parameters (expected json object)"))?;
if param.keys().count() != 2 { bail!("wrong number of parameters"); }
let command = param["command"]
.as_str()
.ok_or_else(|| format_err!("unable to parse parameters (missing command)"))?;
// we have only two commands for now
if !(command == "abort-task" || command == "status") {
bail!("got unknown command '{}'", command);
}
let upid_str = param["upid"]
.as_str()
.ok_or_else(|| format_err!("unable to parse parameters (missing upid)"))?;
let upid = upid_str.parse::<UPID>()?;
if !(upid.pid == *MY_PID && upid.pstart == *MY_PID_PSTART) {
pub fn register_task_control_commands(
commando_sock: &mut super::CommandoSocket,
) -> Result<(), Error> {
fn get_upid(args: Option<&Value>) -> Result<UPID, Error> {
let args = if let Some(args) = args { args } else { bail!("missing args") };
let upid = match args.get("upid") {
Some(Value::String(upid)) => upid.parse::<UPID>()?,
None => bail!("no upid in args"),
_ => bail!("unable to parse upid"),
};
if !is_local_worker(&upid) {
bail!("upid does not belong to this process");
}
Ok(upid)
}
let hash = WORKER_TASK_LIST.lock().unwrap();
commando_sock.register_command("worker-task-abort".into(), move |args| {
let upid = get_upid(args)?;
match command {
"abort-task" => {
if let Some(ref worker) = hash.get(&upid.task_id) {
worker.request_abort();
} else {
// assume task is already stopped
}
Ok(Value::Null)
}
"status" => {
let active = hash.contains_key(&upid.task_id);
Ok(active.into())
}
_ => {
bail!("got unknown command '{}'", command);
}
if let Some(ref worker) = WORKER_TASK_LIST.lock().unwrap().get(&upid.task_id) {
worker.request_abort();
}
Ok(Value::Null)
})?;
commando_sock.register_command("worker-task-status".into(), move |args| {
let upid = get_upid(args)?;
tokio::spawn(control_future);
let active = WORKER_TASK_LIST.lock().unwrap().contains_key(&upid.task_id);
Ok(active.into())
})?;
Ok(())
}
@ -150,17 +126,14 @@ pub fn abort_worker_async(upid: UPID) {
pub async fn abort_worker(upid: UPID) -> Result<(), Error> {
let target_pid = upid.pid;
let socketname = format!(
"\0{}/proxmox-task-control-{}.sock", PROXMOX_BACKUP_VAR_RUN_DIR, target_pid);
let sock = server::ctrl_sock_from_pid(upid.pid);
let cmd = json!({
"command": "abort-task",
"upid": upid.to_string(),
"command": "worker-task-abort",
"args": {
"upid": upid.to_string(),
},
});
super::send_command(socketname, cmd).map_ok(|_| ()).await
super::send_command(sock, cmd).map_ok(|_| ()).await
}
fn parse_worker_status_line(line: &str) -> Result<(String, UPID, Option<TaskState>), Error> {
@ -189,9 +162,9 @@ pub fn create_task_log_dirs() -> Result<(), Error> {
.owner(backup_user.uid)
.group(backup_user.gid);
create_path(PROXMOX_BACKUP_LOG_DIR, None, Some(opts.clone()))?;
create_path(buildcfg::PROXMOX_BACKUP_LOG_DIR, None, Some(opts.clone()))?;
create_path(PROXMOX_BACKUP_TASK_DIR, None, Some(opts.clone()))?;
create_path(PROXMOX_BACKUP_VAR_RUN_DIR, None, Some(opts))?;
create_path(buildcfg::PROXMOX_BACKUP_RUN_DIR, None, Some(opts))?;
Ok(())
}).map_err(|err: Error| format_err!("unable to create task log dir - {}", err))?;
@ -273,6 +246,15 @@ impl TaskState {
}
}
pub fn tasktype(&self) -> TaskStateType {
match self {
TaskState::OK { .. } => TaskStateType::OK,
TaskState::Unknown { .. } => TaskStateType::Unknown,
TaskState::Error { .. } => TaskStateType::Error,
TaskState::Warning { .. } => TaskStateType::Warning,
}
}
fn result_text(&self) -> String {
match self {
TaskState::Error { message, .. } => format!("TASK ERROR: {}", message),
@ -602,18 +584,9 @@ struct WorkerTaskData {
pub abort_listeners: Vec<oneshot::Sender<()>>,
}
impl Drop for WorkerTask {
fn drop(&mut self) {
println!("unregister worker");
}
}
impl WorkerTask {
pub fn new(worker_type: &str, worker_id: Option<String>, auth_id: Authid, to_stdout: bool) -> Result<Arc<Self>, Error> {
println!("register worker");
let upid = UPID::new(worker_type, worker_id, auth_id)?;
let task_id = upid.task_id;
@ -692,8 +665,6 @@ impl WorkerTask {
) -> Result<String, Error>
where F: Send + UnwindSafe + 'static + FnOnce(Arc<WorkerTask>) -> Result<(), Error>
{
println!("register worker thread");
let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
let upid_str = worker.upid.to_string();


@ -19,6 +19,7 @@ use proxmox::tools::vec;
pub use proxmox::tools::fd::Fd;
pub mod acl;
pub mod apt;
pub mod async_io;
pub mod borrow;
pub mod cert;
@ -462,7 +463,7 @@ pub fn run_command(
let output = command.output()
.map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
let output = crate::tools::command_output_as_string(output, exit_code_check)
let output = command_output_as_string(output, exit_code_check)
.map_err(|err| format_err!("command {:?} failed - {}", command, err))?;
Ok(output)

src/tools/apt.rs Normal file
View File

@ -0,0 +1,361 @@
use std::collections::HashSet;
use std::collections::HashMap;
use anyhow::{Error, bail, format_err};
use apt_pkg_native::Cache;
use proxmox::const_regex;
use proxmox::tools::fs::{file_read_optional_string, replace_file, CreateOptions};
use crate::api2::types::APTUpdateInfo;
const APT_PKG_STATE_FN: &str = "/var/lib/proxmox-backup/pkg-state.json";
#[derive(Debug, serde::Serialize, serde::Deserialize)]
/// Some information we cache about the package (update) state, like what pending update version
/// we already notified a user about
pub struct PkgState {
/// simple map from package name to most recently notified (emailed) version
pub notified: Option<HashMap<String, String>>,
/// A list of pending updates
pub package_status: Vec<APTUpdateInfo>,
}
pub fn write_pkg_cache(state: &PkgState) -> Result<(), Error> {
let serialized_state = serde_json::to_string(state)?;
replace_file(APT_PKG_STATE_FN, &serialized_state.as_bytes(), CreateOptions::new())
.map_err(|err| format_err!("Error writing package cache - {}", err))?;
Ok(())
}
pub fn read_pkg_state() -> Result<Option<PkgState>, Error> {
let serialized_state = match file_read_optional_string(&APT_PKG_STATE_FN) {
Ok(Some(raw)) => raw,
Ok(None) => return Ok(None),
Err(err) => bail!("could not read cached package state file - {}", err),
};
serde_json::from_str(&serialized_state)
.map(|s| Some(s))
.map_err(|err| format_err!("could not parse cached package status - {}", err))
}
pub fn pkg_cache_expired() -> Result<bool, Error> {
if let Ok(pbs_cache) = std::fs::metadata(APT_PKG_STATE_FN) {
let apt_pkgcache = std::fs::metadata("/var/cache/apt/pkgcache.bin")?;
let dpkg_status = std::fs::metadata("/var/lib/dpkg/status")?;
let mtime = pbs_cache.modified()?;
if apt_pkgcache.modified()? <= mtime && dpkg_status.modified()? <= mtime {
return Ok(false);
}
}
Ok(true)
}
pub fn update_cache() -> Result<PkgState, Error> {
// update our cache
let all_upgradeable = list_installed_apt_packages(|data| {
data.candidate_version == data.active_version &&
data.installed_version != Some(data.candidate_version)
}, None);
let cache = match read_pkg_state() {
Ok(Some(mut cache)) => {
cache.package_status = all_upgradeable;
cache
},
_ => PkgState {
notified: None,
package_status: all_upgradeable,
},
};
write_pkg_cache(&cache)?;
Ok(cache)
}
const_regex! {
VERSION_EPOCH_REGEX = r"^\d+:";
FILENAME_EXTRACT_REGEX = r"^.*/.*?_(.*)_Packages$";
}
// FIXME: once the 'changelog' API call switches over to 'apt-get changelog' only,
// consider removing this function entirely, as its value is then never used anywhere
// (widget-toolkit doesn't use the value either)
fn get_changelog_url(
package: &str,
filename: &str,
version: &str,
origin: &str,
component: &str,
) -> Result<String, Error> {
if origin == "" {
bail!("no origin available for package {}", package);
}
if origin == "Debian" {
let mut command = std::process::Command::new("apt-get");
command.arg("changelog");
command.arg("--print-uris");
command.arg(package);
let output = crate::tools::run_command(command, None)?; // format: 'http://foo/bar' package.changelog
let output = match output.splitn(2, ' ').next() {
Some(output) => {
if output.len() < 2 {
bail!("invalid output (URI part too short) from 'apt-get changelog --print-uris': {}", output)
}
output[1..output.len()-1].to_owned()
},
None => bail!("invalid output from 'apt-get changelog --print-uris': {}", output)
};
return Ok(output);
} else if origin == "Proxmox" {
// FIXME: Use above call to 'apt changelog <pkg> --print-uris' as well.
// Currently not possible as our packages do not have a URI set in their Release file.
let version = (VERSION_EPOCH_REGEX.regex_obj)().replace_all(version, "");
let base = match (FILENAME_EXTRACT_REGEX.regex_obj)().captures(filename) {
Some(captures) => {
let base_capture = captures.get(1);
match base_capture {
Some(base_underscore) => base_underscore.as_str().replace("_", "/"),
None => bail!("incompatible filename, cannot find regex group")
}
},
None => bail!("incompatible filename, doesn't match regex")
};
return Ok(format!("http://download.proxmox.com/{}/{}_{}.changelog",
base, package, version));
}
bail!("unknown origin ({}) or component ({})", origin, component)
}
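A worked example of the 'Proxmox' branch, as a test-style sketch with a hypothetical APT list file name — FILENAME_EXTRACT_REGEX captures everything between the first '_' of the basename and the trailing '_Packages', and underscores become path separators:

#[test]
fn proxmox_changelog_url_sketch() {
    let url = get_changelog_url(
        "proxmox-backup-server",
        "/var/lib/apt/lists/download.proxmox.com_debian_pbs_dists_buster_pbstest_binary-amd64_Packages",
        "0.9.5-1",
        "Proxmox",
        "pbstest",
    ).unwrap();
    assert_eq!(
        url,
        "http://download.proxmox.com/debian/pbs/dists/buster/pbstest/binary-amd64/proxmox-backup-server_0.9.5-1.changelog"
    );
}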
pub struct FilterData<'a> {
// this is version info returned by APT
pub installed_version: Option<&'a str>,
pub candidate_version: &'a str,
// this is the version info the filter is supposed to check
pub active_version: &'a str,
}
enum PackagePreSelect {
OnlyInstalled,
OnlyNew,
All,
}
pub fn list_installed_apt_packages<F: Fn(FilterData) -> bool>(
filter: F,
only_versions_for: Option<&str>,
) -> Vec<APTUpdateInfo> {
let mut ret = Vec::new();
let mut depends = HashSet::new();
// note: this is not an 'apt update', it just re-reads the cache from disk
let mut cache = Cache::get_singleton();
cache.reload();
let mut cache_iter = match only_versions_for {
Some(name) => cache.find_by_name(name),
None => cache.iter()
};
loop {
match cache_iter.next() {
Some(view) => {
let di = if only_versions_for.is_some() {
query_detailed_info(
PackagePreSelect::All,
&filter,
view,
None
)
} else {
query_detailed_info(
PackagePreSelect::OnlyInstalled,
&filter,
view,
Some(&mut depends)
)
};
if let Some(info) = di {
ret.push(info);
}
if only_versions_for.is_some() {
break;
}
},
None => {
drop(cache_iter);
// also loop through missing dependencies, as they would be installed
for pkg in depends.iter() {
let mut iter = cache.find_by_name(&pkg);
let view = match iter.next() {
Some(view) => view,
None => continue // package not found, ignore
};
let di = query_detailed_info(
PackagePreSelect::OnlyNew,
&filter,
view,
None
);
if let Some(info) = di {
ret.push(info);
}
}
break;
}
}
}
return ret;
}
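For illustration, the pending-update filter used by update_cache() above, spelled out at a call site:

// keep every package whose candidate differs from the installed version;
// the active_version check restricts the filter to candidate rows
let updates = list_installed_apt_packages(|data| {
    data.candidate_version == data.active_version
        && data.installed_version != Some(data.candidate_version)
}, None);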
fn query_detailed_info<'a, F, V>(
pre_select: PackagePreSelect,
filter: F,
view: V,
depends: Option<&mut HashSet<String>>,
) -> Option<APTUpdateInfo>
where
F: Fn(FilterData) -> bool,
V: std::ops::Deref<Target = apt_pkg_native::sane::PkgView<'a>>
{
let current_version = view.current_version();
let candidate_version = view.candidate_version();
let (current_version, candidate_version) = match pre_select {
PackagePreSelect::OnlyInstalled => match (current_version, candidate_version) {
(Some(cur), Some(can)) => (Some(cur), can), // package installed and there is an update
(Some(cur), None) => (Some(cur.clone()), cur), // package installed and up-to-date
(None, Some(_)) => return None, // package could be installed
(None, None) => return None, // broken
},
PackagePreSelect::OnlyNew => match (current_version, candidate_version) {
(Some(_), Some(_)) => return None,
(Some(_), None) => return None,
(None, Some(can)) => (None, can),
(None, None) => return None,
},
PackagePreSelect::All => match (current_version, candidate_version) {
(Some(cur), Some(can)) => (Some(cur), can),
(Some(cur), None) => (Some(cur.clone()), cur),
(None, Some(can)) => (None, can),
(None, None) => return None,
},
};
// get additional information via nested APT 'iterators'
let mut view_iter = view.versions();
while let Some(ver) = view_iter.next() {
let package = view.name();
let version = ver.version();
let mut origin_res = "unknown".to_owned();
let mut section_res = "unknown".to_owned();
let mut priority_res = "unknown".to_owned();
let mut change_log_url = "".to_owned();
let mut short_desc = package.clone();
let mut long_desc = "".to_owned();
let fd = FilterData {
installed_version: current_version.as_deref(),
candidate_version: &candidate_version,
active_version: &version,
};
if filter(fd) {
if let Some(section) = ver.section() {
section_res = section;
}
if let Some(prio) = ver.priority_type() {
priority_res = prio;
}
// assume every package has only one origin file (not
// origin, but origin *file*, for some reason those seem to
// be different concepts in APT)
let mut origin_iter = ver.origin_iter();
let origin = origin_iter.next();
if let Some(origin) = origin {
if let Some(sd) = origin.short_desc() {
short_desc = sd;
}
if let Some(ld) = origin.long_desc() {
long_desc = ld;
}
// the package files appear in priority order, meaning
// the one for the candidate version is first - this is fine
// however, as the source package should be the same for all
// versions anyway
let mut pkg_iter = origin.file();
let pkg_file = pkg_iter.next();
if let Some(pkg_file) = pkg_file {
if let Some(origin_name) = pkg_file.origin() {
origin_res = origin_name;
}
let filename = pkg_file.file_name();
let component = pkg_file.component();
// build changelog URL from gathered information
// ignore errors, use empty changelog instead
let url = get_changelog_url(&package, &filename,
&version, &origin_res, &component);
if let Ok(url) = url {
change_log_url = url;
}
}
}
if let Some(depends) = depends {
let mut dep_iter = ver.dep_iter();
loop {
let dep = match dep_iter.next() {
Some(dep) if dep.dep_type() != "Depends" => continue,
Some(dep) => dep,
None => break
};
let dep_pkg = dep.target_pkg();
let name = dep_pkg.name();
depends.insert(name);
}
}
return Some(APTUpdateInfo {
package,
title: short_desc,
arch: view.arch(),
description: long_desc,
change_log_url,
origin: origin_res,
version: candidate_version.clone(),
old_version: match current_version {
Some(vers) => vers,
None => "".to_owned()
},
priority: priority_res,
section: section_res,
});
}
}
return None;
}

View File

@ -47,6 +47,7 @@ pub struct FileLogOptions {
#[derive(Debug)]
pub struct FileLogger {
file: std::fs::File,
file_name: std::path::PathBuf,
options: FileLogOptions,
}
@ -63,6 +64,23 @@ impl FileLogger {
file_name: P,
options: FileLogOptions,
) -> Result<Self, Error> {
let file = Self::open(&file_name, &options)?;
let file_name: std::path::PathBuf = file_name.as_ref().to_path_buf();
Ok(Self { file, file_name, options })
}
pub fn reopen(&mut self) -> Result<&Self, Error> {
let file = Self::open(&self.file_name, &self.options)?;
self.file = file;
Ok(self)
}
fn open<P: AsRef<std::path::Path>>(
file_name: P,
options: &FileLogOptions,
) -> Result<std::fs::File, Error> {
let file = std::fs::OpenOptions::new()
.read(options.read)
.write(true)
@ -76,7 +94,7 @@ impl FileLogger {
nix::unistd::chown(file_name.as_ref(), Some(backup_user.uid), Some(backup_user.gid))?;
}
Ok(Self { file, options })
Ok(file)
}
pub fn log<S: AsRef<str>>(&mut self, msg: S) {
@ -88,15 +106,21 @@ impl FileLogger {
stdout.write_all(b"\n").unwrap();
}
let now = proxmox::tools::time::epoch_i64();
let rfc3339 = proxmox::tools::time::epoch_to_rfc3339(now).unwrap();
let line = if self.options.prefix_time {
let now = proxmox::tools::time::epoch_i64();
let rfc3339 = match proxmox::tools::time::epoch_to_rfc3339(now) {
Ok(rfc3339) => rfc3339,
Err(_) => "1970-01-01T00:00:00Z".into(), // for safety, should really not happen!
};
format!("{}: {}\n", rfc3339, msg)
} else {
format!("{}\n", msg)
};
self.file.write_all(line.as_bytes()).unwrap();
if let Err(err) = self.file.write_all(line.as_bytes()) {
// avoid panicking, log methods should not do that
// FIXME: or, return result???
eprintln!("error writing to log file - {}", err);
}
}
}
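A minimal sketch of the rotate-then-reopen sequence this enables (path and options are placeholders):

let mut logger = FileLogger::new("/var/log/proxmox-backup/api/auth.log", options)?;
logger.log("authentication failure");
// after an external rotation renamed auth.log away:
logger.reopen()?; // re-opens a fresh file under the remembered name
logger.log("first line in the new file");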

View File

@ -92,24 +92,24 @@ impl LogRotate {
if filenames.is_empty() {
return Ok(()); // no file means nothing to rotate
}
        // previous implementation (removed): rename everything first, then compress
        // every file from index 2 on — note the mismatch between the ".zstd" suffix
        // it wrote and the "zst" extension it checked for
        let mut next_filename = self.base_path.clone().canonicalize()?.into_os_string();
        next_filename.push(format!(".{}", filenames.len()));

        filenames.push(PathBuf::from(next_filename));
        let count = filenames.len();

        for i in (0..count-1).rev() {
            rename(&filenames[i], &filenames[i+1])?;
        }

        if self.compress {
            for i in 2..count {
                if filenames[i].extension().unwrap_or(std::ffi::OsStr::new("")) != "zst" {
                    let mut target = filenames[i].clone().into_os_string();
                    target.push(".zstd");
                    Self::compress(&filenames[i], &target.into(), &options)?;
                }
            }
        }

        // new implementation: name the rotation target up front (with ".zst" only
        // from the second rotation on) and compress or rename in a single pass
        let count = filenames.len() + 1;

        let mut next_filename = self.base_path.clone().canonicalize()?.into_os_string();
        next_filename.push(format!(".{}", filenames.len()));
        if self.compress && count > 2 {
            next_filename.push(".zst");
        }

        filenames.push(PathBuf::from(next_filename));

        for i in (0..count-1).rev() {
            if self.compress
                && filenames[i+0].extension().unwrap_or(std::ffi::OsStr::new("")) != "zst"
                && filenames[i+1].extension().unwrap_or(std::ffi::OsStr::new("")) == "zst"
            {
                Self::compress(&filenames[i], &filenames[i+1], &options)?;
            } else {
                rename(&filenames[i], &filenames[i+1])?;
            }
        }
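With the new scheme, a rotated set looks like this (sketch; the base name is a placeholder). The freshly rotated file stays uncompressed for one round and only receives the .zst name, and compression, on its next rotation:

// task-archive          current log file
// task-archive.1        rotated once, still uncompressed
// task-archive.2.zst    compressed when it was rotated a second time
// task-archive.3.zst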

View File

@ -42,8 +42,10 @@ fn worker_task_abort() -> Result<(), Error> {
let mut rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async move {
let mut commando_sock = server::CommandoSocket::new(server::our_ctrl_sock());
let init_result: Result<(), Error> = try_block!({
server::create_task_control_socket()?;
server::register_task_control_commands(&mut commando_sock)?;
server::server_state_init()?;
Ok(())
});

View File

@ -63,7 +63,9 @@ Ext.define('PBS.Dashboard', {
updateSubscription: function(store, records, success) {
if (!success) { return; }
let me = this;
let subStatus = records[0].data.status === 'Active' ? 2 : 0; // 2 = all good, 1 = different levels, 0 = none
let status = records[0].data.status || 'unknown';
// 2 = all good, 1 = different levels, 0 = none
let subStatus = status.toLowerCase() === 'active' ? 2 : 0;
me.lookup('subscription').setSubStatus(subStatus);
},
@ -230,8 +232,9 @@ Ext.define('PBS.Dashboard', {
model: 'proxmox-tasks',
proxy: {
type: 'proxmox',
url: '/api2/json/status/tasks',
url: '/api2/json/nodes/localhost/tasks',
extraParams: {
limit: 0,
since: '{sinceEpoch}',
},
},

View File

@ -68,8 +68,10 @@ Ext.define('PBS.DataStoreInfo', {
let gcstatus = store.getById('gc-status').data.value;
let dedup = (gcstatus['index-data-bytes'] || 0)/
(gcstatus['disk-bytes'] || Infinity);
let dedup = 1.0;
if (gcstatus['disk-bytes'] > 0) {
dedup = (gcstatus['index-data-bytes'] || 0)/gcstatus['disk-bytes'];
}
let countstext = function(count) {
count = count || {};
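The same guarded ratio, sketched in Rust for clarity (the function name is ours):

fn deduplication_factor(index_data_bytes: u64, disk_bytes: u64) -> f64 {
    if disk_bytes > 0 {
        index_data_bytes as f64 / disk_bytes as f64
    } else {
        1.0 // empty datastore: report a neutral factor instead of dividing by zero
    }
}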

View File

@ -236,9 +236,19 @@ Ext.define('PBS.MainView', {
iconCls: 'fa fa-user',
menu: [
{
reference: 'logoutButton',
iconCls: 'fa fa-language',
text: gettext('Language'),
reference: 'languageButton',
handler: () => Ext.create('Proxmox.window.LanguageEditWindow', {
cookieName: 'PBSLangCookie',
autoShow: true,
}),
},
'-',
{
iconCls: 'fa fa-sign-out',
text: gettext('Logout'),
reference: 'logoutButton',
},
],
},

View File

@ -36,6 +36,8 @@ JSSRC= \
dashboard/LongestTasks.js \
dashboard/RunningTasks.js \
dashboard/TaskSummary.js \
panel/Tasks.js \
panel/XtermJsConsole.js \
Utils.js \
AccessControlPanel.js \
ZFSList.js \

View File

@ -55,6 +55,12 @@ Ext.define('PBS.store.NavigationStore', {
expanded: true,
leaf: false,
children: [
{
text: gettext('Shell'),
iconCls: 'fa fa-terminal',
path: 'pbsXtermJsConsole',
leaf: true,
},
{
text: gettext('Disks'),
iconCls: 'fa fa-hdd-o',

View File

@ -20,11 +20,13 @@ Ext.define('PBS.ServerAdministration', {
{
xtype: 'pbsServerStatus',
itemId: 'status',
iconCls: 'fa fa-area-chart',
},
{
xtype: 'proxmoxNodeServiceView',
title: gettext('Services'),
itemId: 'services',
iconCls: 'fa fa-cogs',
restartCommand: 'reload', // avoid disruptions
startOnlyServices: {
syslog: true,
@ -36,6 +38,7 @@ Ext.define('PBS.ServerAdministration', {
{
xtype: 'proxmoxNodeAPT',
title: gettext('Updates'),
iconCls: 'fa fa-refresh',
upgradeBtn: {
xtype: 'button',
reference: 'upgradeBtn',
@ -51,12 +54,14 @@ Ext.define('PBS.ServerAdministration', {
{
xtype: 'proxmoxJournalView',
itemId: 'logs',
iconCls: 'fa fa-list',
title: gettext('Syslog'),
url: "/api2/extjs/nodes/localhost/journal",
},
{
xtype: 'proxmoxNodeTasks',
xtype: 'pbsNodeTasks',
itemId: 'tasks',
iconCls: 'fa fa-list-alt',
title: gettext('Tasks'),
height: 'auto',
nodename: 'localhost',

View File

@ -28,6 +28,73 @@ Ext.define('PBS.Subscription', {
enableTextSelection: true,
},
showReport: function() {
var me = this;
var getReportFileName = function() {
var now = Ext.Date.format(new Date(), 'D-d-F-Y-G-i');
return `${me.nodename}-pbs-report-${now}.txt`;
};
var view = Ext.createWidget('component', {
itemId: 'system-report-view',
scrollable: true,
style: {
'background-color': 'white',
'white-space': 'pre',
'font-family': 'monospace',
padding: '5px',
},
});
var reportWindow = Ext.create('Ext.window.Window', {
title: gettext('System Report'),
width: 1024,
height: 600,
layout: 'fit',
modal: true,
buttons: [
'->',
{
text: gettext('Download'),
handler: function() {
var fileContent = Ext.String.htmlDecode(reportWindow.getComponent('system-report-view').html);
var fileName = getReportFileName();
// Internet Explorer
if (window.navigator.msSaveOrOpenBlob) {
navigator.msSaveOrOpenBlob(new Blob([fileContent]), fileName);
} else {
var element = document.createElement('a');
element.setAttribute('href', 'data:text/plain;charset=utf-8,' +
encodeURIComponent(fileContent));
element.setAttribute('download', fileName);
element.style.display = 'none';
document.body.appendChild(element);
element.click();
document.body.removeChild(element);
}
},
},
],
items: view,
});
Proxmox.Utils.API2Request({
url: '/api2/extjs/nodes/' + me.nodename + '/report',
method: 'GET',
waitMsgTarget: me,
failure: function(response) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
success: function(response) {
var report = Ext.htmlEncode(response.result.data);
reportWindow.show();
view.update(report);
},
});
},
initComponent: function() {
let me = this;
@ -105,7 +172,13 @@ Ext.define('PBS.Subscription', {
selModel: false,
callback: reload,
},
//'-',
'-',
{
text: gettext('System Report'),
handler: function() {
Proxmox.Utils.checked_command(function() { me.showReport(); });
},
},
],
rows: rows,
});

View File

@ -97,17 +97,20 @@ Ext.define('PBS.Utils', {
// do whatever you want here
Proxmox.Utils.override_task_descriptions({
backup: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Backup')),
dircreate: [gettext('Directory Storage'), gettext('Create')],
dirremove: [gettext('Directory'), gettext('Remove')],
garbage_collection: ['Datastore', gettext('Garbage collect')],
logrotate: [null, gettext('Log Rotation')],
prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read objects')),
sync: ['Datastore', gettext('Remote Sync')],
syncjob: [gettext('Sync Job'), gettext('Remote Sync')],
verify: ['Datastore', gettext('Verification')],
verify_group: ['Group', gettext('Verification')],
verify_snapshot: ['Snapshot', gettext('Verification')],
syncjob: [gettext('Sync Job'), gettext('Remote Sync')],
verifyjob: [gettext('Verify Job'), gettext('Scheduled Verification')],
prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
backup: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Backup')),
reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read objects')),
logrotate: [gettext('Log'), gettext('Rotation')],
verificationjob: [gettext('Verify Job'), gettext('Scheduled Verification')],
zfscreate: [gettext('ZFS Storage'), gettext('Create')],
});
},
});

View File

@ -7,7 +7,7 @@ Ext.define('PBS.TaskButton', {
badgeCls: '',
},
iconCls: 'fa fa-list',
iconCls: 'fa fa-list-alt',
userCls: 'pmx-has-badge',
text: gettext('Tasks'),

View File

@ -20,9 +20,6 @@ Ext.define('PBS.config.ACLView', {
extend: 'Ext.grid.GridPanel',
alias: 'widget.pbsACLView',
stateful: true,
stateId: 'grid-acls',
title: gettext('Permissions'),
aclPath: undefined,
@ -69,7 +66,7 @@ Ext.define('PBS.config.ACLView', {
'delete': 1,
path: rec.data.path,
role: rec.data.roleid,
auth_id: rec.data.ugid,
'auth-id': rec.data.ugid,
},
callback: function() {
me.reload();
@ -149,31 +146,30 @@ Ext.define('PBS.config.ACLView', {
columns: [
{
header: gettext('Path'),
width: 200,
width: 250,
sortable: true,
renderer: Ext.String.htmlEncode,
dataIndex: 'path',
},
{
header: gettext('User/Group/API Token'),
width: 100,
width: 200,
sortable: true,
renderer: Ext.String.htmlEncode,
dataIndex: 'ugid',
},
{
header: gettext('Role'),
width: 80,
width: 200,
sortable: true,
dataIndex: 'roleid',
},
{
header: gettext('Propagate'),
width: 150,
flex: 1, // last element flex looks better
sortable: true,
renderer: Proxmox.Utils.format_boolean,
dataIndex: 'propagate',
},
],
});

View File

@ -1,7 +1,7 @@
Ext.define('pbs-sync-jobs-status', {
extend: 'Ext.data.Model',
fields: [
'id', 'remote', 'remote-store', 'store', 'schedule',
'id', 'owner', 'remote', 'remote-store', 'store', 'schedule',
'next-run', 'last-run-upid', 'last-run-state', 'last-run-endtime',
{
name: 'duration',
@ -26,7 +26,7 @@ Ext.define('PBS.config.SyncJobView', {
alias: 'widget.pbsSyncJobView',
stateful: true,
stateId: 'grid-sync-jobs',
stateId: 'grid-sync-jobs-v1',
title: gettext('Sync Jobs'),
@ -151,6 +151,11 @@ Ext.define('PBS.config.SyncJobView', {
return Proxmox.Utils.render_timestamp(value);
},
render_optional_owner: function(value, metadata, record) {
if (!value) return '-';
return Ext.String.htmlEncode(value, metadata, record);
},
startStore: function() { this.getView().getStore().rstore.startUpdate(); },
stopStore: function() { this.getView().getStore().rstore.stopUpdate(); },
@ -224,68 +229,78 @@ Ext.define('PBS.config.SyncJobView', {
columns: [
{
header: gettext('Sync Job'),
header: gettext('Job ID'),
dataIndex: 'id',
renderer: Ext.String.htmlEncode,
flex: 2,
maxWidth: 220,
minWidth: 75,
flex: 1,
sortable: true,
hidden: true,
},
{
header: gettext('Remote'),
dataIndex: 'remote',
flex: 2,
width: 120,
sortable: true,
},
{
header: gettext('Remote Store'),
dataIndex: 'remote-store',
flex: 2,
width: 120,
sortable: true,
},
{
header: gettext('Local Store'),
dataIndex: 'store',
width: 120,
sortable: true,
},
{
header: gettext('Owner'),
dataIndex: 'owner',
renderer: 'render_optional_owner',
flex: 2,
sortable: true,
},
{
header: gettext('Schedule'),
dataIndex: 'schedule',
flex: 2,
maxWidth: 220,
minWidth: 80,
flex: 1,
sortable: true,
},
{
header: gettext('Status'),
dataIndex: 'last-run-state',
renderer: 'render_sync_status',
flex: 4,
},
{
header: gettext('Last Sync'),
dataIndex: 'last-run-endtime',
renderer: 'render_optional_timestamp',
flex: 3,
width: 150,
sortable: true,
},
{
text: gettext('Duration'),
dataIndex: 'duration',
renderer: Proxmox.Utils.render_duration,
flex: 2,
width: 80,
},
{
header: gettext('Status'),
dataIndex: 'last-run-state',
renderer: 'render_sync_status',
flex: 3,
},
{
header: gettext('Next Run'),
dataIndex: 'next-run',
renderer: 'render_next_run',
flex: 3,
width: 150,
sortable: true,
},
{
header: gettext('Comment'),
dataIndex: 'comment',
renderer: Ext.String.htmlEncode,
flex: 4,
flex: 2,
sortable: true,
},
],

View File

@ -109,7 +109,7 @@ Ext.define('PBS.config.TokenView', {
Ext.create('Proxmox.PermissionView', {
auth_id: selection[0].data.tokenid,
auth_id_name: 'auth_id',
auth_id_name: 'auth-id',
}).show();
},

View File

@ -72,7 +72,7 @@ Ext.define('PBS.config.UserView', {
Ext.create('Proxmox.PermissionView', {
auth_id: selection[0].data.userid,
auth_id_name: 'auth_id',
auth_id_name: 'auth-id',
}).show();
},

View File

@ -26,7 +26,7 @@ Ext.define('PBS.config.VerifyJobView', {
alias: 'widget.pbsVerifyJobView',
stateful: true,
stateId: 'grid-verify-jobs',
stateId: 'grid-verify-jobs-v1',
title: gettext('Verify Jobs'),
@ -227,61 +227,64 @@ Ext.define('PBS.config.VerifyJobView', {
header: gettext('Job ID'),
dataIndex: 'id',
renderer: Ext.String.htmlEncode,
flex: 2,
maxWidth: 220,
minWidth: 75,
flex: 1,
sortable: true,
hidden: true,
},
{
header: gettext('Skip Verified'),
dataIndex: 'ignore-verified',
renderer: Proxmox.Utils.format_boolean,
flex: 2,
width: 100,
sortable: true,
},
{
header: gettext('Re-Verfiy Age'),
header: gettext('Re-Verify After'),
dataIndex: 'outdated-after',
renderer: v => v ? v +' '+ gettext('Days') : gettext('Never'),
flex: 2,
width: 125,
sortable: true,
},
{
header: gettext('Schedule'),
dataIndex: 'schedule',
sortable: true,
flex: 2,
},
{
header: gettext('Status'),
dataIndex: 'last-run-state',
renderer: 'render_verify_status',
flex: 4,
maxWidth: 220,
minWidth: 80,
flex: 1,
},
{
header: gettext('Last Verification'),
dataIndex: 'last-run-endtime',
renderer: 'render_optional_timestamp',
flex: 3,
width: 150,
sortable: true,
},
{
text: gettext('Duration'),
dataIndex: 'duration',
renderer: Proxmox.Utils.render_duration,
flex: 2,
width: 80,
},
{
header: gettext('Status'),
dataIndex: 'last-run-state',
renderer: 'render_verify_status',
flex: 3,
},
{
header: gettext('Next Run'),
dataIndex: 'next-run',
renderer: 'render_next_run',
flex: 3,
width: 150,
sortable: true,
},
{
header: gettext('Comment'),
dataIndex: 'comment',
renderer: Ext.String.htmlEncode,
flex: 4,
flex: 2,
sortable: true,
},
],

View File

@ -14,6 +14,10 @@
background-color: #3892d4;
}
.info-blue {
color: #3892d4;
}
/* make the upper window end visible */
.x-css-shadow {
box-shadow: rgb(136,136,136) 0px -1px 15px !important;

View File

@ -39,6 +39,7 @@ Ext.define('PBS.TaskSummary', {
let state = me.states[cellindex];
let type = me.types[rowindex];
let filterParam = {
limit: 0,
'statusfilter': state,
'typefilter': type,
};
@ -111,7 +112,7 @@ Ext.define('PBS.TaskSummary', {
model: 'proxmox-tasks',
proxy: {
type: 'proxmox',
url: "/api2/json/status/tasks",
url: "/api2/json/nodes/localhost/tasks",
},
},
});

View File

@ -58,10 +58,13 @@ Ext.define('PBS.data.PermissionPathsStore', {
Ext.define('PBS.form.PermissionPathSelector', {
extend: 'Ext.form.field.ComboBox',
xtype: 'pbsPermissionPathSelector',
mixins: ['Proxmox.Mixin.CBind'],
valueField: 'value',
displayField: 'value',
typeAhead: true,
cbind: {
typeAhead: '{editable}',
},
anyMatch: true,
queryMode: 'local',

www/panel/Tasks.js Normal file
View File

@ -0,0 +1,386 @@
Ext.define('PBS.node.Tasks', {
extend: 'Ext.grid.GridPanel',
alias: 'widget.pbsNodeTasks',
stateful: true,
stateId: 'pbs-grid-node-tasks',
loadMask: true,
sortableColumns: false,
controller: {
xclass: 'Ext.app.ViewController',
showTaskLog: function() {
let me = this;
let selection = me.getView().getSelection();
if (selection.length < 1) {
return;
}
let rec = selection[0];
Ext.create('Proxmox.window.TaskViewer', {
upid: rec.data.upid,
endtime: rec.data.endtime,
}).show();
},
updateLayout: function() {
let me = this;
// we want to update the scrollbar on every store load, since the total
// count might be different; the buffered grid plugin does this only on
// scrolling itself, and even reduces the scrollheight again when scrolling up
me.getView().updateLayout();
},
sinceChange: function(field, newval) {
let me = this;
let vm = me.getViewModel();
vm.set('since', newval);
},
untilChange: function(field, newval, oldval) {
let me = this;
let vm = me.getViewModel();
vm.set('until', newval);
},
reload: function() {
let me = this;
let view = me.getView();
view.getStore().load();
},
showFilter: function(btn, pressed) {
let me = this;
let vm = me.getViewModel();
vm.set('showFilter', pressed);
},
init: function(view) {
let me = this;
Proxmox.Utils.monStoreErrors(view, view.getStore(), true);
},
},
listeners: {
itemdblclick: 'showTaskLog',
},
viewModel: {
data: {
typefilter: '',
statusfilter: '',
datastore: '',
showFilter: false,
since: null,
until: null,
},
formulas: {
filterIcon: (get) => 'fa fa-filter' + (get('showFilter') ? ' info-blue' : ''),
extraParams: function(get) {
let me = this;
let params = {};
if (get('typefilter')) {
params.typefilter = get('typefilter');
}
if (get('statusfilter')) {
params.statusfilter = get('statusfilter');
}
if (get('datastore')) {
params.store = get('datastore');
}
if (get('since')) {
params.since = get('since').valueOf()/1000;
}
if (get('until')) {
let until = new Date(get('until').getTime()); // copy object
until.setDate(until.getDate() + 1); // end of the day
params.until = until.valueOf()/1000;
}
me.getView().getStore().load();
return params;
},
},
stores: {
bufferedstore: {
type: 'buffered',
pageSize: 500,
autoLoad: true,
remoteFilter: true,
model: 'proxmox-tasks',
proxy: {
type: 'proxmox',
startParam: 'start',
limitParam: 'limit',
extraParams: '{extraParams}',
url: "/api2/json/nodes/localhost/tasks",
},
listeners: {
prefetch: 'updateLayout',
},
},
},
},
bind: {
store: '{bufferedstore}',
},
dockedItems: [
{
xtype: 'toolbar',
items: [
{
xtype: 'proxmoxButton',
text: gettext('View'),
iconCls: 'fa fa-window-restore',
disabled: true,
handler: 'showTaskLog',
},
{
xtype: 'button',
text: gettext('Reload'),
iconCls: 'fa fa-refresh',
handler: 'reload',
},
'->',
{
xtype: 'button',
enableToggle: true,
bind: {
iconCls: '{filterIcon}',
},
text: gettext('Filter'),
stateful: true,
stateId: 'task-showfilter',
stateEvents: ['toggle'],
applyState: function(state) {
if (state.pressed !== undefined) {
this.setPressed(state.pressed);
}
},
getState: function() {
return {
pressed: this.pressed,
};
},
listeners: {
toggle: 'showFilter',
},
},
],
},
{
xtype: 'toolbar',
dock: 'top',
layout: {
type: 'hbox',
align: 'top',
},
bind: {
hidden: '{!showFilter}',
},
items: [
{
xtype: 'container',
padding: 10,
layout: {
type: 'vbox',
align: 'stretch',
},
defaults: {
labelWidth: 80,
},
// cannot bind the values directly, as it then changes also
// on blur, causing wrong reloads of the store
items: [
{
xtype: 'datefield',
fieldLabel: gettext('Since'),
format: 'Y-m-d',
bind: {
maxValue: '{until}',
},
listeners: {
change: 'sinceChange',
},
},
{
xtype: 'datefield',
fieldLabel: gettext('Until'),
format: 'Y-m-d',
bind: {
minValue: '{since}',
},
listeners: {
change: 'untilChange',
},
},
],
},
{
xtype: 'container',
padding: 10,
layout: {
type: 'vbox',
align: 'stretch',
},
defaults: {
labelWidth: 80,
},
items: [
{
xtype: 'pmxTaskTypeSelector',
fieldLabel: gettext('Task Type'),
emptyText: gettext('All'),
bind: {
value: '{typefilter}',
},
},
{
xtype: 'combobox',
fieldLabel: gettext('Task Result'),
emptyText: gettext('All'),
multiSelect: true,
store: [
['ok', gettext('OK')],
['unknown', Proxmox.Utils.unknownText],
['warning', gettext('Warnings')],
['error', gettext('Errors')],
],
bind: {
value: '{statusfilter}',
},
},
],
},
{
xtype: 'container',
padding: 10,
layout: {
type: 'vbox',
align: 'stretch',
},
defaults: {
labelWidth: 80,
},
items: [
{
xtype: 'pbsDataStoreSelector',
fieldLabel: gettext('Datastore'),
emptyText: gettext('All'),
bind: {
value: '{datastore}',
},
allowBlank: true,
},
],
},
],
},
],
viewConfig: {
trackOver: false,
stripeRows: false, // does not work with getRowClass()
emptyText: gettext('No Tasks found'),
getRowClass: function(record, index) {
let status = record.get('status');
if (status) {
let parsed = Proxmox.Utils.parse_task_status(status);
if (parsed === 'error') {
return "proxmox-invalid-row";
} else if (parsed === 'warning') {
return "proxmox-warning-row";
}
}
return '';
},
},
columns: [
{
header: gettext("Start Time"),
dataIndex: 'starttime',
width: 130,
renderer: function(value) {
return Ext.Date.format(value, "M d H:i:s");
},
},
{
header: gettext("End Time"),
dataIndex: 'endtime',
width: 130,
renderer: function(value, metaData, record) {
if (!value) {
metaData.tdCls = "x-grid-row-loading";
return '';
}
return Ext.Date.format(value, "M d H:i:s");
},
},
{
header: gettext("Duration"),
hidden: true,
width: 80,
renderer: function(value, metaData, record) {
let start = record.data.starttime;
if (start) {
let end = record.data.endtime || Date.now();
let duration = end - start;
if (duration > 0) {
duration /= 1000;
}
return Proxmox.Utils.format_duration_human(duration);
}
return Proxmox.Utils.unknownText;
},
},
{
header: gettext("User name"),
dataIndex: 'user',
width: 150,
},
{
header: gettext("Description"),
dataIndex: 'upid',
flex: 1,
renderer: Proxmox.Utils.render_upid,
},
{
header: gettext("Status"),
dataIndex: 'status',
width: 200,
renderer: function(value, metaData, record) {
if (value === undefined && !record.data.endtime) {
metaData.tdCls = "x-grid-row-loading";
return '';
}
let parsed = Proxmox.Utils.parse_task_status(value);
switch (parsed) {
case 'unknown': return Proxmox.Utils.unknownText;
case 'error': return Proxmox.Utils.errorText + ': ' + value;
case 'ok': // fall-through
case 'warning': // fall-through
default: return value;
}
},
},
],
});
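The since/until filters go out as UNIX epoch seconds, with 'until' pushed to the start of the next day so the whole selected day is included; the same adjustment sketched in Rust (std only, function name is ours):

use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn until_epoch(selected_midnight: SystemTime) -> u64 {
    // mirror until.setDate(until.getDate() + 1) from the view model above
    let end = selected_midnight + Duration::from_secs(24 * 60 * 60);
    end.duration_since(UNIX_EPOCH).map(|d| d.as_secs()).unwrap_or(0)
}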

View File

@ -0,0 +1,25 @@
Ext.define('PBS.panel.XtermJsConsole', {
extend: 'Ext.panel.Panel',
alias: 'widget.pbsXtermJsConsole',
layout: 'fit',
items: [
{
xtype: 'uxiframe',
itemId: 'iframe',
},
],
listeners: {
'afterrender': function() {
let me = this;
let params = {
console: 'shell',
node: 'localhost',
xtermjs: 1,
};
me.getComponent('iframe').load('/?' + Ext.Object.toQueryString(params));
},
},
});

View File

@ -33,7 +33,7 @@ Ext.define('PBS.window.ACLEdit', {
me.items.push({
xtype: 'pbsUserSelector',
fieldLabel: gettext('User'),
name: 'auth_id',
name: 'auth-id',
allowBlank: false,
});
} else if (me.aclType === 'token') {
@ -41,7 +41,7 @@ Ext.define('PBS.window.ACLEdit', {
me.items.push({
xtype: 'pbsTokenSelector',
fieldLabel: gettext('API Token'),
name: 'auth_id',
name: 'auth-id',
allowBlank: false,
});
}

View File

@ -12,6 +12,7 @@ Ext.define('PBS.window.SyncJobEdit', {
subject: gettext('SyncJob'),
fieldDefaults: { labelWidth: 120 },
defaultFocus: 'proxmoxtextfield[name=comment]',
cbindData: function(initialConfig) {
let me = this;
@ -23,6 +24,7 @@ Ext.define('PBS.window.SyncJobEdit', {
me.url = id ? `${baseurl}/${id}` : baseurl;
me.method = id ? 'PUT' : 'POST';
me.autoLoad = !!id;
me.scheduleValue = id ? null : 'hourly';
return { };
},
@ -47,6 +49,32 @@ Ext.define('PBS.window.SyncJobEdit', {
value: '{datastore}',
},
},
{
fieldLabel: gettext('Local Owner'),
xtype: 'pbsUserSelector',
name: 'owner',
allowBlank: true,
value: null,
emptyText: 'backup@pam',
skipEmptyText: true,
cbind: {
deleteEmpty: '{!isCreate}',
},
},
{
fieldLabel: gettext('Remove vanished'),
xtype: 'proxmoxcheckbox',
name: 'remove-vanished',
autoEl: {
tag: 'div',
'data-qtip': gettext('Remove snapshots from local datastore if they vanished from source datastore?'),
},
uncheckedValue: false,
value: false,
},
],
column2: [
{
fieldLabel: gettext('Source Remote'),
xtype: 'pbsRemoteSelector',
@ -59,24 +87,14 @@ Ext.define('PBS.window.SyncJobEdit', {
allowBlank: false,
name: 'remote-store',
},
],
column2: [
{
fieldLabel: gettext('Remove vanished'),
xtype: 'proxmoxcheckbox',
name: 'remove-vanished',
uncheckedValue: false,
value: false,
},
{
fieldLabel: gettext('Schedule'),
fieldLabel: gettext('Sync Schedule'),
xtype: 'pbsCalendarEvent',
name: 'schedule',
value: 'hourly',
emptyText: gettext('none (disabled)'),
cbind: {
deleteEmpty: '{!isCreate}',
value: '{scheduleValue}',
},
},
],