Adds the possibility to inspect .blob, .fidx and .didx files. For index
files, a list of the referenced chunks is printed in addition to some
other information. .blob files can be decoded into a file or directly
to stdout. Without decode, the tool just prints the size and encryption
mode of the blob file. Options:
- file: path to the file
- [opt] decode: path to a file or to stdout(-); if specified, the file
will be decoded into the specified location (see the sketch below)
[only for blob files, no effect with index files]
- [opt] keyfile: path to a keyfile, needed if decode is specified and the
data was encrypted
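For illustration, a minimal sketch of how the decode target could be
handled, with '-' meaning stdout (the helper below is a hypothetical
stand-in, not the tool's actual code):

    // hypothetical helper: write decoded blob data either to a file
    // or to stdout when the target is "-"
    fn write_decoded(data: &[u8], target: &str) -> std::io::Result<()> {
        use std::io::Write;
        if target == "-" {
            std::io::stdout().write_all(data)
        } else {
            std::fs::write(target, data)
        }
    }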
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Adds the possibility to inspect chunks and find indexes that reference the
chunk. Options:
- chunk: path to the chunk file
- [opt] decode: path to a file or to stdout(-); if specified, the
chunk will be decoded into the specified location
- [opt] digest: needed when searching for references; if set, it is
also used for verification when decoding
- [opt] keyfile: path to a keyfile, needed if decode is specified and
the data was encrypted
- [opt] reference-filter: path in which to search for indexes that
reference the chunk; can be a group, a snapshot or the whole
datastore; if not specified, no references are searched (see the
sketch after this list)
- [default=true] use-filename-as-digest: use the chunk filename as
digest, if no digest is specified
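The reference search boils down to walking the given path and checking
every index file for the digest. A minimal sketch, assuming a plain
directory walk (the stand-in 'index_references_digest' replaces
proxmox-backup's actual index readers):

    use std::path::{Path, PathBuf};

    // stand-in: the real tool parses .fidx/.didx files with its own
    // readers and compares the stored chunk digests
    fn index_references_digest(_path: &Path, _digest: &[u8; 32]) -> std::io::Result<bool> {
        Ok(false)
    }

    // walk a group, snapshot or datastore directory and collect all
    // index files that reference the given chunk digest
    fn find_references(root: &Path, digest: &[u8; 32]) -> std::io::Result<Vec<PathBuf>> {
        let mut hits = Vec::new();
        for entry in std::fs::read_dir(root)? {
            let path = entry?.path();
            if path.is_dir() {
                hits.extend(find_references(&path, digest)?);
            } else if matches!(
                path.extension().and_then(|e| e.to_str()),
                Some("fidx") | Some("didx")
            ) && index_references_digest(&path, digest)? {
                hits.push(path);
            }
        }
        Ok(hits)
    }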
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
two things wrong with the old code:
* the sort function wants -1, 0 and 1 as a return value for a<b, a==b and a>b
respectively, not a bool (which a < b returns)
* we have to sort the newest backups first, since the first reason is
'keep-last'. until now, we sorted the oldest backup first, resulting
in the older backups getting the 'keep-last' reason
reported by a user in the forum:
https://forum.proxmox.com/threads/prune-ui-and-prune-schedule-simulator-dont-match.94944/
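The required ordering, sketched in Rust for clarity (the actual fix is
in the JavaScript prune code, where the boolean comparator broke
Array.sort()):

    // sort newest first so that 'keep-last' marks the most recent
    // backups; b.cmp(a) yields a proper -1/0/1-style descending order
    fn sort_newest_first(backup_times: &mut [i64]) {
        backup_times.sort_by(|a, b| b.cmp(a));
    }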
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Defined a new struct RemoteConfig (without name and password). This
makes it possible to base64-encode the password in the config, but
still allow plain passwords with the API.
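Roughly this shape (field names are assumptions based on the
description, not copied from the source):

    struct RemoteConfig {
        host: String,
        auth_id: String,
        // ... further settings stored verbatim in the config
    }

    struct Remote {
        name: String,
        // stored base64-encoded in the config file; the API still
        // accepts a plain-text password and encodes it on write
        password: String,
        config: RemoteConfig,
    }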
otherwise a user might get a task log like this:
-----
...
found 7 groups
TASK OK
-----
which could confuse users as to why no snapshots were backed up
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
it seems that for some actions or in some circumstances, two minutes is
simply too short and the command aborts. Increase the default timeout to
10 minutes.
While it should give most commands enough time to finish, in case of a real
failure the procedure now takes up to 5 times longer, but IMHO that's an
OK tradeoff.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this should make the api call much faster, since it is not reading
the whole catalog anymore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
For some parts of the ui, we only need the snapshot list from the
catalog, and reading the whole catalog (which can be several hundred
MiB) is not really necessary.
Instead, we write the list of snapshots into a separate .index file. This file
is generated on demand and is much smaller and thus faster to read.
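A rough sketch of the idea (file name, format and atomicity strategy
are assumptions, not the actual implementation):

    use std::io::Write;

    // write the snapshot list, one entry per line, next to the catalog,
    // so readers can skip parsing the full catalog
    fn write_snapshot_index(snapshots: &[String], path: &str) -> std::io::Result<()> {
        let tmp = format!("{}.tmp", path);
        let mut file = std::fs::File::create(&tmp)?;
        for snap in snapshots {
            writeln!(file, "{}", snap)?;
        }
        // atomic rename so concurrent readers never see a partial file
        std::fs::rename(tmp, path)
    }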
a test for a valid status_page, one with excess data
(in the descriptor as well as in the page as a whole)
and a test with too little data
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if the library sends more data than advertised, simply cut it off,
but if it sends less data, bail out (depending on how much data is
missing, trying to parse it could lead to a panic, so bail out early)
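Schematically (names are illustrative, not the actual parser):

    // `advertised` is what the library claims, `data` what we received
    fn checked_slice(data: &[u8], advertised: usize) -> Result<&[u8], String> {
        if data.len() < advertised {
            // too little data: parsing could panic later, so bail early
            return Err(format!("expected {} bytes, got {}", advertised, data.len()));
        }
        // more data than advertised: simply cut it off
        Ok(&data[..advertised])
    }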
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in 'restore_archive', we reach that 'catalog.commit()' for
* every skipped snapshot (we already call 'commit_if_large' just before)
* every skipped chunk archive (no change in the catalog, since we do not
read the chunk archive in that case)
* after reading a catalog (no change in the catalog)
in all other cases, we call 'commit_if_large' and return early, meaning
that the 'commit' there was executed too often and unnecessarily, so
move it after the loop over the files, before finishing the temporary
database.
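Schematically, the restructured flow (a simplified stand-in, not the
actual code):

    struct Catalog {
        pending: usize,
    }

    impl Catalog {
        fn commit(&mut self) {
            // flush all pending entries to the temporary database
            self.pending = 0;
        }
        fn commit_if_large(&mut self) {
            // size-gated: only flush once enough entries accumulated
            if self.pending > 1000 {
                self.commit();
            }
        }
    }

    fn restore_files(catalog: &mut Catalog, files: &[&str]) {
        for _file in files {
            // ... restore or skip the file, updating catalog.pending ...
            catalog.commit_if_large();
        }
        catalog.commit(); // once, after the loop, before finishing the DB
    }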
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of having a public start/end_chunk_archive and register_chunks,
simply expose a 'register_chunk_archive' method, since we always have
a list of chunks wherever we want to add them
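The narrowed interface, roughly (types and signature are assumed from
the description):

    struct MediaCatalog;

    impl MediaCatalog {
        // callers hand over the complete chunk list at once; the
        // start/register/end steps become private implementation details
        pub fn register_chunk_archive(
            &mut self,
            _uuid: [u8; 16],
            _file_number: u64,
            _chunk_list: &[[u8; 32]],
        ) -> std::io::Result<()> {
            // internally: start archive, register each digest, end archive
            Ok(())
        }
    }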
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of having the grid be as tall as possible and the containing
panel scroll, limit the grid's height to the panel size and scroll the
grid itself.
this has two advantages:
* if a user has many slots, it is now possible to navigate the other
grids to the wanted position
* having the grids scroll means they can use extjs' buffered renderer,
which makes the view much more responsive (in case of hundreds of
slots)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the descriptor length from the library and use that in
'chunks_exact', which panics on length 0. Catch that case
and bail out, since that makes no sense here anyway.
This could prevent a panic in case a library sends wrong data.
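The guard, roughly (simplified, names assumed):

    // chunks_exact() panics when the chunk size is 0, so reject a
    // zero descriptor length before iterating
    fn split_descriptors(data: &[u8], descriptor_len: usize) -> Result<Vec<&[u8]>, String> {
        if descriptor_len == 0 {
            return Err("descriptor length of zero in status page".into());
        }
        Ok(data.chunks_exact(descriptor_len).collect())
    }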
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>