The entries in a file go from the oldest end-time in the first line to
the newest end-time in the last line. So, just because the first line
is older than the cut-off time, the remaining lines don't necessarily
have to be old enough too. What we can know for sure is that all
rotations of the task archive older than the currently checked one are
definitively up for deletion.
Another possibility would be to check the last line instead, but
scanning backwards is more expensive/complex to do, while only being an
actual improvement in a very specific edge case (it's more likely that
a time cut-off falls somewhere inside a task-log file than exactly on a
file boundary), so stick with the simple first-line check.
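A rough sketch of the resulting first-line check (the line format and
the parse helper below are illustrative only, not the actual
implementation):

    use std::fs::File;
    use std::io::{BufRead, BufReader};

    // hypothetical helper: pull the end-time (epoch) out of a task log line
    fn parse_end_time(line: &str) -> Option<i64> {
        line.split_whitespace().next()?.parse().ok()
    }

    // `rotations` is ordered newest to oldest; returns the index from which
    // on all older rotation files are definitively up for deletion
    fn first_deletable(rotations: &[&str], cutoff: i64) -> std::io::Result<usize> {
        for (idx, path) in rotations.iter().enumerate() {
            let mut line = String::new();
            BufReader::new(File::open(path)?).read_line(&mut line)?;
            if matches!(parse_end_time(&line), Some(t) if t < cutoff) {
                // this file *starts* before the cut-off, but its tail may
                // still be newer - only everything older is certainly expired
                return Ok(idx + 1);
            }
        }
        Ok(rotations.len())
    }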
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by not bubbling up most errors, and continuing on instead. This avoids
stopping the whole cleanup because e.g. one directory was missing.
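The pattern boils down to something like this (directory handling
simplified):

    use std::fs;

    fn cleanup_dirs(dirs: &[&str]) {
        for dir in dirs {
            // log and continue instead of returning the error, so a single
            // missing directory does not abort the whole cleanup run
            if let Err(err) = fs::remove_dir_all(dir) {
                eprintln!("cleanup of {dir:?} failed - {err}");
            }
        }
    }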
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Similar to 874bd545 ("pxar: fix anchored exclusion at archive root"),
but this time for inclusion. Because of the inconsistency, it could
happen that a file included in generate_directory_file_list() got
excluded in add_entry(), e.g. with a .pxarexclude file like
> *
> !/supposed-to-be-included
Reported-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
As serde_json will otherwise read files 1 byte at a time.
Writing is a bit better, but syntactical elements (quotes, braces,
commas) still often show up as single write syscalls, so use a
BufWriter there as well.
Note that while we do store the file in the resulting objects, we do not
need to keep the buffered read/writers as we always `seek` to the
beginning on further file operations.
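The gist of the change, as a sketch (types and error handling
simplified):

    use std::fs::File;
    use std::io::{BufReader, BufWriter, Seek, SeekFrom, Write};

    fn load(file: &mut File) -> Result<serde_json::Value, anyhow::Error> {
        file.seek(SeekFrom::Start(0))?;
        // without the BufReader, serde_json issues one read syscall per byte
        Ok(serde_json::from_reader(BufReader::new(&mut *file))?)
    }

    fn save(file: &mut File, data: &serde_json::Value) -> Result<(), anyhow::Error> {
        file.seek(SeekFrom::Start(0))?;
        file.set_len(0)?;
        // batches the many small writes for quotes, braces and commas
        let mut writer = BufWriter::new(&mut *file);
        serde_json::to_writer(&mut writer, data)?;
        writer.flush()?; // only the buffered writer is dropped, the file is kept
        Ok(())
    }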
Reported-by: Mark Schouten <mark@tuxis.nl>
Link: https://lists.proxmox.com/pipermail/pbs-devel/2022-April/004909.html
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
added a parameter to the cli for importing a tape key via a
json-parameter or by reading an exported paperkey-file or json-file.
For this, I also added a backupkey parameter to the api, but there it
only accepts json.
The cli interprets the parameter first as a json-string, then as a
json-file and last as a paperkey-file.
functionality:
proxmox-tape key paperkey [fingerprint of existing key] > paperkey.backup
proxmox-tape key restore --backupkey paperkey.backup # key from line above
proxmox-tape key restore --backupkey paperkey.json # only the json
proxmox-tape key restore --backupkey '{"kdf": {"Scrypt": ...' # json as string
For importing the key as a paperkey-file, it is irrelevant whether the
paperkey was exported as html or txt.
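Roughly, the cli-side fallback logic looks like this (the paperkey
extraction is heavily simplified - the real exported files contain more
than just the embedded json):

    use anyhow::format_err;

    fn parse_backupkey(param: &str) -> Result<serde_json::Value, anyhow::Error> {
        // 1) try the parameter as a json string directly
        if let Ok(key) = serde_json::from_str(param) {
            return Ok(key);
        }
        // 2) try it as the path to a plain json file
        let contents = std::fs::read_to_string(param)?;
        if let Ok(key) = serde_json::from_str(&contents) {
            return Ok(key);
        }
        // 3) treat it as an exported paperkey (html or txt) and extract
        //    the embedded json, ignoring the surrounding document
        let start = contents
            .find('{')
            .ok_or_else(|| format_err!("no json key found in paperkey file"))?;
        serde_json::Deserializer::from_str(&contents[start..])
            .into_iter::<serde_json::Value>()
            .next()
            .transpose()?
            .ok_or_else(|| format_err!("no json key found in paperkey file"))
    }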
Signed-off-by: Markus Frank <m.frank@proxmox.com>
When we have a previous manifest, we try to download the fidx/didx files
to get the known chunks list. We continue if that fails (which is ok),
but we did not print any error, leading to confusing backup output,
since users would expect that chunks get reused.
Printing the error should at least make it apparent that something did
not work correctly.
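Schematically (the download call below is only a stand-in for the real
fidx/didx requests):

    // stand-in for the real index download of the previous manifest
    fn download_index(archive: &str) -> Result<Vec<u8>, anyhow::Error> {
        anyhow::bail!("connection reset while fetching {archive}")
    }

    fn register_known_chunks(archive: &str) {
        match download_index(archive) {
            Ok(_index) => { /* feed the known chunks into the dedup set */ }
            // continuing is fine, but now it is visible *why* no chunks
            // get reused for this archive
            Err(err) => eprintln!("warning: unable to download previous index {archive} - {err}"),
        }
    }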
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
add support for multi-line comments to node.cfg and the api, similar to
how pve handles multi-line comments
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
the latest changes to this api call changed/removed some things that
were actually necessary for the gui. Re-add those and document them
this time.
The change from u64 to i64 limits us to 8 EiB of datastore size
(2^63 bytes, instead of 16 EiB), but if we ever reach that, we must
adapt most other parts to use 128-bit sizes anyway.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
changed pmxUserSelector to pbsAuthidSelector, because it is currently
not possible to restore with an api token via the gui.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
make ConfigVersionCacheData a #[repr(C)] union to fix its
size and let it transparently `Deref{,Mut}` to its actual
contents
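The pattern, sketched with illustrative field names:

    use std::ops::{Deref, DerefMut};

    #[derive(Clone, Copy)]
    #[repr(C)]
    struct Contents {
        magic: [u8; 8],
        datastore_generation: u64,
        // ...
    }

    #[repr(C)]
    union ConfigVersionCacheData {
        contents: Contents,
        _padding: [u8; 4096], // pins the union to a fixed size
    }

    impl Deref for ConfigVersionCacheData {
        type Target = Contents;
        fn deref(&self) -> &Contents {
            unsafe { &self.contents }
        }
    }

    impl DerefMut for ConfigVersionCacheData {
        fn deref_mut(&mut self) -> &mut Contents {
            unsafe { &mut self.contents }
        }
    }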
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
when we use http to make the api call, we have to parse the parameters
beforehand, otherwise we might send the string "true" instead of the
boolean true and the api rejects it with a 'Parameter verification
error'.
We already have all api call schemas here, so parsing is possible.
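A minimal sketch of the idea (the enum is a stand-in for the real
schema types):

    use serde_json::{json, Value};

    // stand-in for the parameter type information from the api schema
    enum ParamType { Bool, Integer, Text }

    // turn the raw cli string into a properly typed json value, so the
    // request contains e.g. `true` instead of the string "true"
    fn parse_param(raw: &str, ty: &ParamType) -> Result<Value, anyhow::Error> {
        Ok(match ty {
            ParamType::Bool => json!(raw.parse::<bool>()?),
            ParamType::Integer => json!(raw.parse::<i64>()?),
            ParamType::Text => json!(raw),
        })
    }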
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
'objset_id' already contains the 'objset-' prefix, so the error was
"could not parse 'objset-objset-0xFFFF' stat file"
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
this can only really fail for two reasons:
* the format is wrong:
  this should not happen unless the format changed, in which case it
  will happen every time
* the file can't be read:
  this can happen if a user deletes and recreates a dataset manually,
  since the mapped file does not exist anymore but the dataset does
For the second case, delete the mapping from the hashmap, so that the
next call will refresh the mapping with the correct file.
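Sketch of the refresh-on-error handling (names are illustrative):

    use std::collections::HashMap;
    use std::fs;

    struct ZfsStatCache {
        // dataset name -> path of its mapped objset stat file
        stat_files: HashMap<String, String>,
    }

    impl ZfsStatCache {
        fn read_stats(&mut self, dataset: &str) -> Option<String> {
            let path = self.stat_files.get(dataset)?.clone();
            match fs::read_to_string(&path) {
                Ok(content) => Some(content),
                Err(_) => {
                    // the dataset was likely deleted and recreated, so the
                    // mapped file is gone; drop the mapping so the next
                    // call resolves the correct file again
                    self.stat_files.remove(dataset);
                    None
                }
            }
        }
    }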
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
the api should return a 404 error for entries that do not exist
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
when using the 'extjs' formatter, parameter errors are marked in a way
that lets the gui highlight the affected form fields.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
commenting that version_cache.increase_datastore_generation increases
the, well, version is rather superfluous. Also avoid the use of "we",
which is always ambiguous in code comments.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of relying on the content of some configs
previously, we always read and parsed the config file, and only
generated a new config object when the path or the 'verify-new' option
changed.
now, we increase the datastore generation on config save, and if that
changed (or the last load is more than 1 minute in the past), we
always generate a new object.
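Schematically (stand-ins for the shared version cache):

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::time::Instant;

    static DATASTORE_GENERATION: AtomicUsize = AtomicUsize::new(0);

    // called from the config-save path
    fn increase_datastore_generation() -> usize {
        DATASTORE_GENERATION.fetch_add(1, Ordering::AcqRel) + 1
    }

    struct CachedDatastore {
        generation: usize,
        last_update: Instant,
        // ... the actual datastore object lives here
    }

    impl CachedDatastore {
        // reuse the cached object only while the generation is unchanged
        // and the last load is less than a minute in the past
        fn is_current(&self) -> bool {
            self.generation == DATASTORE_GENERATION.load(Ordering::Acquire)
                && self.last_update.elapsed().as_secs() < 60
        }
    }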
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
'when the calendar event triggers' was too vague, it could mean for
the current media-set or for the next one. Apart from that, it was not
technically correct all the time, since we preferably take the start
time of the next media set, if that exists.
The idea here is that we begin the retention when the media set is
finished.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>