those are not in a hot code path, and it is not really much work to
build them on the go.
It may not matter much, but it is unnecessary. Rust will probably
inline most of it anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
re-use the future we already have for task log rotation to trigger
it.
Move the FileLogger in ApiConfig into an Arc, so that we can actually
update it and have the REST server use the new one.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is internal for now, use the commando socket struct
implementation, and ideally not a new one but the existing ones
created in the proxy and api daemons.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allows extending the use of that socket in the future, e.g., for log
rotate re-open signaling.
To reflect this we use a more general name, and move the commands to
a clearer namespace.
Both are somewhat breaking changes, but the single real-world issue
this should be able to cause is that one won't be able to stop tasks
from older daemons, which still use the old abstract socket name
format.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This is a preparatory step to replace the task control socket with it
and provide a "reopen log file" command for the REST server.
Kept it simple by disallowing registration of new commands after the
socket gets spawned, which avoids the need for locking.
If we really need that, we can always wrap it in an Arc<RwLock<..>> or
something like that, or, even nicer, register at compile time.
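A very rough sketch of the design (struct name, signatures and the
dispatch details are assumptions for illustration, not the actual
implementation):

    use std::collections::HashMap;

    use anyhow::{bail, Error};
    use serde_json::Value;

    // commands are registered up-front; spawn() consumes the struct, so
    // no new commands can be added while the listener task runs and no
    // locking is needed for the handler map
    struct CommandSocket {
        commands: HashMap<String, Box<dyn Fn(Option<&Value>) -> Result<Value, Error> + Send + Sync>>,
    }

    impl CommandSocket {
        fn register_command<F>(&mut self, name: String, handler: F) -> Result<(), Error>
        where
            F: Fn(Option<&Value>) -> Result<Value, Error> + Send + Sync + 'static,
        {
            if self.commands.contains_key(&name) {
                bail!("command '{}' already exists", name);
            }
            self.commands.insert(name, Box::new(handler));
            Ok(())
        }

        fn spawn(self) {
            tokio::spawn(async move {
                // accept connections on the unix socket and dispatch
                // incoming commands via self.commands (omitted here)
                let _commands = self.commands;
            });
        }
    }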
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
again, base idea copied off PVE, but we save the information about
which pending version we already sent a mail for in a separate
object, to keep the API return type APTUpdateInfo clean.
This also makes a few things a bit easier, as we can update the
package status without saving/restoring the notify information.
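A minimal sketch of what such a separate state object could look like
(name and fields are assumptions for illustration):

    use serde::{Deserialize, Serialize};

    // notification state kept separate from the APTUpdateInfo API type
    #[derive(Serialize, Deserialize)]
    struct PkgNotifyState {
        package: String,
        // the pending version we already sent a notification mail for
        notified_version: String,
    }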
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
for verifying a whole datastore. Datastore.Backup now allows verifying
only backups owned by the triggering user.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
in most generic places. this is accompanied by a change in
RpcEnvironment to purposefully break existing call sites.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
instead of pre-rotating 1000 tasks
(which resulted in 2 writes each time an active worker finished),
simply append finished tasks to the archive (which will be rotated)
the page cache should be good enough so that we can get the task logs
fast
since existing installations might have an 'index' file, we still
have to read tasks from there, but only if it exists
this simplifies the TaskListInfoIterator a good amount
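A minimal sketch of the append (path and record format are
placeholders):

    use std::io::Write;

    // append one finished task per line to the archive; rotation of the
    // archive itself is handled elsewhere
    fn append_finished_task(archive: &std::path::Path, line: &str) -> std::io::Result<()> {
        let mut file = std::fs::OpenOptions::new()
            .create(true)
            .append(true)
            .open(archive)?;
        writeln!(file, "{}", line)
    }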
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
commit a4915dfc2b made a wrong fix, as
it did not observe that the last expression was evaluated under the
invariant that we had a last verification result, because if none
could be loaded we already returned true (include).
It thus broke the case for "never re-verify", which is important when
using multiple schedules, a higher-frequency one for new, unverified
snapshots and a low-frequency one to re-verify older snapshots, e.g.,
monthly.
Fix this case again and rework the code to avoid this easy-to-overlook
invariant. Use a nested match to better express the implication of
each setting, and add some comments.
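Roughly the intended logic, as a sketch (function and parameter names
are made up for illustration, not the actual code):

    // decide whether a snapshot needs (re-)verification
    // `last_verify`: Some((ok, verify_time)) if a previous result exists
    // `outdated_after`: re-verify after this many days, None = "never re-verify"
    fn include_snapshot(last_verify: Option<(bool, i64)>, outdated_after: Option<i64>, now: i64) -> bool {
        match last_verify {
            // no previous verification result -> always include
            None => true,
            Some((ok, verify_time)) => {
                if !ok {
                    // last verification failed -> re-verify
                    true
                } else {
                    match outdated_after {
                        // "never re-verify" once verified successfully
                        None => false,
                        Some(days) => now > verify_time + days * 24 * 3600,
                    }
                }
            }
        }
    }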
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and use that in ApiConfig to avoid it being owned by root if the
proxmox-backup-api process creates it first.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
reuse the FileLogger module in append mode.
As it implements Write, which is not thread safe (mutable self), and
we use it in an async context, we need to serialize access using a
mutex.
Try to use the same format we do in pveproxy, namely the one which is
also used in apache or nginx by default.
Use the response extensions to pass up the userid, if we extract it
from a ticket.
The privileged and unprivileged daemons both log to the same file, to
have a unified view and to avoid the need to handle more log files.
We avoid extra inter-process locking by reusing the fact that a write
smaller than PIPE_BUF (4k on Linux) is atomic for files opened with
the 'O_APPEND' flag. For now the logged request path is not yet
guaranteed to be smaller than that; this will be improved in a future
patch.
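A minimal sketch of the write path (using a plain File here instead of
the FileLogger, and made-up names):

    use std::io::Write;
    use std::sync::{Arc, Mutex};

    // serialize access-log writes within the process via a Mutex; between
    // the two daemons we rely on O_APPEND writes below PIPE_BUF (4k)
    // being atomic, so no file locking is needed
    fn log_request_line(logfile: &Arc<Mutex<std::fs::File>>, line: &str) {
        // format the full record first so it goes out in a single write call
        let record = format!("{}\n", line);
        if let Ok(mut file) = logfile.lock() {
            let _ = file.write_all(record.as_bytes());
        }
    }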
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add a generous limit now and return the correct error (414 URI Too
Long). Otherwise we could do pretty large GET requests, 64 KiB and
possibly bigger (at 64 KiB my simple curl test failed due to
shell/curl limitations).
For now allow 3072 characters as the combined length of URI path and
query.
This conforms with the HTTP/1.1 RFCs (e.g., RFC 7231, 6.5.12 and
RFC 2616, 3.2.1), which do not specify any limits, upper or lower, but
require that all server-accessible resources must be reachable without
getting a 414. This is normally fulfilled, as we have various length
limits in place for things which could end up in an URI, e.g.:
* user id: max. 64 chars
* datastore: max. 32 chars
The only known problematic API endpoint is the catalog one, used in
the GUI's pxar file browser:
GET /api2/json/admin/datastore/<id>/catalog?..&filepath=<path>
The <path> is the encoded archive path and can be arbitrarily long.
But this is a flawed design, as even without this new limit one can
easily generate archives which cannot be browsed anymore, as hyper
only accepts requests with at most 64 KiB in the URI.
So we should rather move that to a GET-as-POST call, which has no
such limitations (and would not need to base32 encode the path).
Note: This change was inspired by adding a request access log, which
profits from such limits as we can then rely on certain atomicity
guarantees when writing requests to the log.
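A sketch of such a check (the limit is the one from above, the handler
shape is an assumption):

    use hyper::{Body, Response, StatusCode};

    // reject overly long request targets early with 414 URI Too Long
    const MAX_URI_QUERY_LENGTH: usize = 3072;

    fn check_query_length(path_and_query: &str) -> Option<Response<Body>> {
        if path_and_query.len() > MAX_URI_QUERY_LENGTH {
            let response = Response::builder()
                .status(StatusCode::URI_TOO_LONG)
                .body(Body::from("URI Too Long"))
                .expect("building static response can not fail");
            return Some(response);
        }
        None
    }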
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
needs new proxmox dependency to get the RpcEnvironment changes,
adding client_ip getter and setter.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The 'Ok::<_, Self::Error>(res)' type annotation was from a time when
we could not use async, and had a combinator here which needed
explicit type information. We switched over to async in commit
91e4587343 and, as the type annotation
is already included in the Future type, we can safely drop it.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Used to not require access to the WorkerTask struct outside
the `server` and `api2` modules, so it'll be easier to
separate those backup/server/client parts into separate
crates.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
if the archive file does not exist yet, we cannot rotate it, but it's not
actually an error, so just return Ok(false) to indicate no rotation took
place
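Roughly (the function shape is an assumption):

    use std::io::ErrorKind;
    use std::path::Path;

    // a missing archive means nothing could be rotated, which is fine
    fn rotate_if_needed(archive: &Path, max_size: u64) -> std::io::Result<bool> {
        let metadata = match std::fs::metadata(archive) {
            Ok(metadata) => metadata,
            Err(err) if err.kind() == ErrorKind::NotFound => return Ok(false),
            Err(err) => return Err(err),
        };
        if metadata.len() <= max_size {
            return Ok(false);
        }
        // ... perform the actual rotation here ...
        Ok(true)
    }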
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
while we probably do not add much more to them, it still looks ugly.
If this was done so that adding a world-readable API call is "hard"
and not done by accident, it should rather be a check at build
time. But, IMO, the API permission schema definitions are easy to
review, and not often changed/added - so any wrongly world-readable
API call will normally still be caught.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
when upgrading from a version where we stored all tasks in the 'active' file,
we did not completely account for finished tasks still there
we should update the file when encountering any finished task in
'active', as well as filter them out on the api call (if they get through)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since len() and MAX_INDEX_TASKS are both usize, the subtraction
underflows instead of producing negative values
instead, check the sizes first and set them accordingly
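For illustration, the shape of the problem and the guard (the explicit
check could also be written with saturating_sub):

    const MAX_INDEX_TASKS: usize = 1000;

    // `len - MAX_INDEX_TASKS` wraps around for short lists since both
    // operands are usize, so check the size first
    fn tasks_to_archive(len: usize) -> usize {
        if len > MAX_INDEX_TASKS {
            len - MAX_INDEX_TASKS
        } else {
            0
        }
    }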
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this starts a task once a day at "00:00" that rotates the task log
archive if it is bigger than 500k
if we want, we can make the schedule/size limit/etc. configurable,
but for now it's ok to set fixed values for that
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since there are no users of this anymore and we now have a nicer
TaskListInfoIterator to use, we can drop this function
this also means that 'update_active_workers' does not need to return
a list anymore since we never used that result besides in
read_task_list
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this is an iterator that reads/parses/updates the task list as
necessary and returns the tasks in descending order (newest first)
it does this by using our logrotate iterator and a VecDeque
we can use this to iterate over all tasks, even if they are in the
archive and even if the archive is rotated, while only reading as
much as we need
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of removing tasks beyond the 1000 that are in the index,
write them into an archive file by appending them at the end
this way we can still read them later
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
one for only the active tasks and one for up to 1000 finished tasks
factor out the parsing of a task file (we will need this again later)
and use iterator combinators for easier code
we now sort the tasks in ascending order (this will become important
in a later patch), but reverse it (for now) to keep compatibility
this code also omits converting into an intermediate hash,
since we cannot really have duplicate tasks in this list
(the call is protected by an flock, and it is the only place where we
write into the lists)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
- remove chrono dependency
- depend on proxmox 0.3.8
- remove epoch_now, epoch_now_u64 and epoch_now_f64
- remove tm_editor (moved to proxmox crate)
- use new helpers from proxmox 0.3.8
* epoch_i64 and epoch_f64
* parse_rfc3339
* epoch_to_rfc3339_utc
* strftime_local
- BackupDir changes:
* store epoch and rfc3339 string instead of DateTime
* backup_time_to_string now returns a Result
* remove unnecessary TryFrom<(BackupGroup, i64)> for BackupDir
- DynamicIndexHeader: change ctime to i64
- FixedIndexHeader: change ctime to i64
a range from high to low in Rust results in an empty range
(see the std::ops::Range documentation),
so we need to generate the range 0..data.len() and then reverse it
also, the task log contains a newline at the end, so we have to
remove that (should it exist)
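A sketch of the idea:

    // find the last line of a task log buffer
    fn last_line(data: &[u8]) -> &[u8] {
        // strip the trailing newline, should it exist
        let data = if data.last() == Some(&b'\n') {
            &data[..data.len() - 1]
        } else {
            data
        };
        // `data.len()..0` would be an empty range, so reverse 0..data.len()
        for i in (0..data.len()).rev() {
            if data[i] == b'\n' {
                return &data[i + 1..];
            }
        }
        data
    }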
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when trying to parse the task status, we seek 8k from the end,
which may land in the middle of a line, so the datetime parsing
can fail (when the log message contains ': ')
This patch does a fast search for the last line and avoids the
'lines' iterator.
It's a string-type.
Implement Serialize via Display, Deserialize via FromStr and
add an API_SCHEMA so that it can be used as a type within
the #[api] macro.
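The pattern, roughly (placeholder type; the real implementation and
the API_SCHEMA wiring are omitted):

    use std::fmt::Display;
    use std::str::FromStr;

    use serde::{Deserialize, Deserializer, Serialize, Serializer};

    // placeholder string-type to show the pattern
    struct Upid(String);

    impl Display for Upid {
        fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
            write!(f, "{}", self.0)
        }
    }

    impl FromStr for Upid {
        type Err = std::convert::Infallible;
        fn from_str(s: &str) -> Result<Self, Self::Err> {
            Ok(Upid(s.to_string()))
        }
    }

    // Serialize via Display ...
    impl Serialize for Upid {
        fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
            serializer.collect_str(self)
        }
    }

    // ... and Deserialize via FromStr
    impl<'de> Deserialize<'de> for Upid {
        fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
            let s = String::deserialize(deserializer)?;
            s.parse().map_err(serde::de::Error::custom)
        }
    }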
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
the endtime should be the timestamp of the last log line,
or, if there is no log at all, the starttime
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
representing a state via an enum makes more sense in this case
we also implement FromStr and Display to make it easy to convert
from/to a string
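A compact sketch of the shape (variants simplified; the real state
carries more information):

    use std::fmt;
    use std::str::FromStr;

    // task state as an enum instead of a free-form string
    enum TaskState {
        OK,
        Error(String),
    }

    impl fmt::Display for TaskState {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            match self {
                TaskState::OK => write!(f, "OK"),
                TaskState::Error(msg) => write!(f, "ERROR: {}", msg),
            }
        }
    }

    impl FromStr for TaskState {
        type Err = String;
        fn from_str(s: &str) -> Result<Self, Self::Err> {
            if s == "OK" {
                Ok(TaskState::OK)
            } else if let Some(msg) = s.strip_prefix("ERROR: ") {
                Ok(TaskState::Error(msg.to_string()))
            } else {
                Err(format!("unable to parse task state '{}'", s))
            }
        }
    }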
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Also swap the order of a couple of `.map_err().await` to
`.await.map_err()` since that's generally more efficient.
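For illustration (made-up function):

    // map the error on the awaited Result instead of wrapping the future
    // in a `map_err` combinator first
    async fn fetch_data() -> Result<u64, std::io::Error> {
        Ok(42)
    }

    async fn caller() -> Result<u64, String> {
        // before: fetch_data().map_err(|err| format!("fetch failed - {}", err)).await
        fetch_data().await.map_err(|err| format!("fetch failed - {}", err))
    }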
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
is a helper to spawn an internal tokio task without it showing up
in the task list
it is still tracked for reload and notifies the last_worker_listeners
this enables the console to survive a reload of proxmox-backup-proxy
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of exposing handlebars itself, offer a register_template and
a render_template function ourselves.
render_template checks if the template file was modified since
the last render and reloads it when necessary
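Roughly the idea behind render_template (struct layout and error
handling are assumptions):

    use std::collections::HashMap;
    use std::path::PathBuf;
    use std::time::SystemTime;

    use anyhow::Error;

    struct Templates {
        registry: handlebars::Handlebars<'static>,
        files: HashMap<String, (PathBuf, SystemTime)>,
    }

    impl Templates {
        fn render_template(&mut self, name: &str, data: &serde_json::Value) -> Result<String, Error> {
            let (path, old_mtime) = self
                .files
                .get(name)
                .cloned()
                .ok_or_else(|| anyhow::anyhow!("template '{}' not registered", name))?;

            // re-register the template if the file changed since the last render
            let mtime = std::fs::metadata(&path)?.modified()?;
            if mtime > old_mtime {
                let template = std::fs::read_to_string(&path)?;
                self.registry.register_template_string(name, template)?;
                self.files.insert(name.to_string(), (path, mtime));
            }

            Ok(self.registry.render(name, data)?)
        }
    }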
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Especially helpful for requests not coming from browsers (where the
URL is normally easy to find out).
Makes it easier to detect if one triggered a request with an old
client, or the like.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
it does not make sense to check if the worker is running if we already
have an endtime and state
our 'worker_is_active_local' heuristic returns true for non
process-local tasks, so we got 'running' for all tasks that were not
started by 'our' pid and were still running
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of checking for '1' or 'true', check that the parameter is
present and not '0' or 'false'. this allows using simply
https://foo:8007/?debug
instead of
https://foo:8007/?debug=1
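i.e., roughly:

    // the parameter counts as enabled when present and not explicitly
    // set to '0' or 'false'
    fn debug_enabled(param: Option<&str>) -> bool {
        match param {
            Some(value) => value != "0" && value != "false",
            None => false,
        }
    }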
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when starting a new task, we do two things to keep track of tasks
(in that order):
* updating the 'active' file with a list of tasks with
'update_active_workers'
* updating the WORKER_TASK_LIST
'update_active_workers' also updates the status of running tasks in
the file by checking whether they are still in the WORKER_TASK_LIST
since those two things are not locked, it can happen that
we update the file, and, before updating the WORKER_TASK_LIST,
another thread calls update_active_workers and tries to
get the status from the task log, which won't have any data yet,
so the status is 'unknown'
(we never update that status again, likely for performance reasons,
so we have to fix this here)
by switching the order of the two operations, we make sure that only
tasks which are already inserted in the WORKER_TASK_LIST reach the
'active' file
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>