'objset_id' already contains the 'objset-' prefix, so prepending it
again produced errors like
"could not parse 'objset-objset-0xFFFF' stat file"
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
this can only really fail for two reasons:
* the format is wrong:
  this should not happen unless the format changed; in that case it
  will happen every time
* the file can't be read:
  this can happen if a user deletes and recreates a dataset manually,
  since the mapped file does not exist anymore but the dataset does
for the second case, delete the mapping from the hashmap, so that the
next call will refresh the mapping with the correct file
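A minimal sketch of that behavior, assuming hypothetical names for the
cache and the parser (the real code in proxmox-backup may differ):

    use std::collections::HashMap;

    struct ObjsetStats; // placeholder for the parsed stats

    // stand-in for the real stat-file parser
    fn parse_objset_stat(_file: &str) -> Result<ObjsetStats, std::io::Error> {
        unimplemented!()
    }

    // cache maps dataset name -> objset stat file name
    fn get_objset_stats(
        cache: &mut HashMap<String, String>,
        dataset: &str,
    ) -> Option<ObjsetStats> {
        let file = cache.get(dataset)?.clone();
        match parse_objset_stat(&file) {
            Ok(stats) => Some(stats),
            Err(_) => {
                // the file could not be read, e.g. because the dataset
                // was deleted and recreated; drop the stale mapping so
                // the next call refreshes it
                cache.remove(dataset);
                None
            }
        }
    }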
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
and move the comment from the local io_bail in pbs-client/src/pxar/fuse.rs
to its only use
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
rustdoc lints detected that two external hyperlinks were not
clickable.
The shortcut used is only available for internal links; otherwise
one needs to use the Markdown syntax, so either [Text](URL) or <URL>.
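For illustration, a hedged example of the two accepted forms in a
rustdoc comment (the doc text itself is made up):

    /// Illustrative doc comment only.
    ///
    /// External links need [Proxmox](https://www.proxmox.com) or
    /// <https://www.proxmox.com>; the shortcut form `[SomeItem]` only
    /// resolves for intra-doc links to Rust items.
    pub fn example() {}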
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
[ T: commit message text width, mention markdown ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Recently, ZFS removed the pool-global io stats from
/proc/spl/kstat/zfs/POOL/io with no replacement.
To gather stats about the datastores, we now access the objset-specific
entries there. To make that efficient, cache a map of
dataset <-> objset ids, so that we do not have to parse all files each
time.
We update the cache each time we try to get the info for a dataset for
which we do not have a mapping yet.
We cannot update it on datastore add/remove, since that happens in the
proxmox-backup daemon, while we need the info here in
proxmox-backup-proxy.
Sadly, with this we lose the io wait metric, but it seems that this is
no longer tracked in ZFS at all, so there is nothing we can do about
that.
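A minimal sketch of how such a map could be refreshed by scanning the
objset files; the names and the exact file-layout handling are
illustrative:

    use std::collections::HashMap;
    use std::fs;

    // scan /proc/spl/kstat/zfs/<pool>/objset-* and map each dataset
    // name found inside to its stat file name
    fn refresh_objset_map(
        pool: &str,
        map: &mut HashMap<String, String>,
    ) -> std::io::Result<()> {
        let base = format!("/proc/spl/kstat/zfs/{pool}");
        for entry in fs::read_dir(&base)? {
            let entry = entry?;
            let file_name = entry.file_name().to_string_lossy().into_owned();
            if !file_name.starts_with("objset-") {
                continue;
            }
            let content = fs::read_to_string(entry.path())?;
            // each objset file carries a "dataset_name" line; take its
            // data column (assumes no spaces in the dataset name)
            if let Some(dataset) = content
                .lines()
                .find(|line| line.starts_with("dataset_name"))
                .and_then(|line| line.split_whitespace().nth(2))
            {
                map.insert(dataset.to_string(), file_name);
            }
        }
        Ok(())
    }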
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
- imported pbs-api-types/src/common_regex.rs from the old proxmox crate
- use the hex crate to generate/parse hex digests (see the sketch after
  this list)
- remove all references to the proxmox crate (use proxmox-sys and
  proxmox-serde instead)
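A minimal illustration of the hex crate calls in question, assuming
hex as a dependency; the digest bytes are made up:

    fn main() {
        let digest: [u8; 4] = [0xde, 0xad, 0xbe, 0xef]; // made-up digest
        let hex_digest = hex::encode(digest); // generate: "deadbeef"
        let bytes = hex::decode(&hex_digest).unwrap(); // parse it back
        assert_eq!(bytes, digest);
    }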
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
only bits 0-2 are fatal errors; bits 3-7 are used to indicate some
drive conditions. for details, see the smartctl(8) man page.
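A minimal sketch of checking the exit status against that bit layout;
the function name is made up:

    // smartctl(8) returns a bit field: bits 0-2 signal fatal errors
    // (command line parse error, device open failed, command failed),
    // bits 3-7 merely flag drive conditions
    fn smartctl_fatal(exit_code: i32) -> bool {
        exit_code & 0b0000_0111 != 0
    }

    fn main() {
        assert!(smartctl_fatal(0b0000_0010)); // device open failed
        assert!(!smartctl_fatal(0b0100_0000)); // only a condition bit set
    }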
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ Thomas: resolved merge-conflict due to moved run_command ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
in preparation for also getting the file system type from lsblk.
Co-developed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in PVE, the logic for how the wearout gets read from the smartctl
output was changed from a vendor -> id map to a sorted list of
specific attribute field names.
copy that list to pbs (in the same order) and use it to get the
wearout.
in the future we might want to split the disk logic into its own crate
and reuse it in pve.
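A minimal sketch of that lookup; the attribute names shown are a
shortened, illustrative subset, not the full list copied from PVE:

    use std::collections::HashMap;

    // illustrative subset; the real list mirrored from PVE is longer
    // and its order matters
    const WEAROUT_FIELDS: &[&str] = &[
        "Media_Wearout_Indicator",
        "SSD_Life_Left",
        "Wear_Leveling_Count",
    ];

    // attributes: smartctl attribute name -> normalized value
    fn wearout(attributes: &HashMap<String, f64>) -> Option<f64> {
        WEAROUT_FIELDS
            .iter()
            .find_map(|field| attributes.get(*field).copied())
    }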
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
if a datastore or root is not directly on the pool dir
(e.g. the installer creates two sub-datasets, ROOT/pbs-1), the info in
/proc/self/mountinfo returns not the pool but the path to the dataset,
which itself has no iostats in /proc/spl/kstat/zfs/; only the pool
does.
so instead of not gathering data at all, gather the info from the
underlying pool instead. if one has multiple datastores on the same
pool, those rrd stats will now be the same for all of them (instead of
empty), similar to 'normal' directories.
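A minimal sketch of deriving the pool from such a dataset path; the
helper name is made up:

    // a zfs mount source looks like "rpool/ROOT/pbs-1"; the pool is
    // always the first path component
    fn zpool_of_dataset(dataset: &str) -> &str {
        dataset.split('/').next().unwrap_or(dataset)
    }

    fn main() {
        assert_eq!(zpool_of_dataset("rpool/ROOT/pbs-1"), "rpool");
        assert_eq!(zpool_of_dataset("rpool"), "rpool");
    }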
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we test for the config key in the API, so it makes sense to have a
test here too. Actually, it would be better if we had an expected
Value defined here and enforced that it matches, but this is better
than nothing.
Also fix the input for test 1, where tabs got replaced by spaces, as
it fails otherwise.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
not exhaustive of what zfs allows (space is missing), but these
characters can be allowed easily without problems
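A hedged sketch of such a name pattern, assuming the regex crate; this
is illustrative, not the exact regex used in pbs:

    use regex::Regex;

    fn main() {
        // illustrative: a name starts with a letter and may contain
        // alphanumerics plus '-', '_', '.' and ':'; space is left out,
        // as noted above
        let name_re = Regex::new(r"^[a-zA-Z][a-zA-Z0-9\-_.:]*$").unwrap();
        assert!(name_re.is_match("rpool"));
        assert!(name_re.is_match("my-pool_1.0:a"));
        assert!(!name_re.is_match("has space"));
    }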
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
those names are allowed for zpools.
these tests will fail for now, but this will be fixed in the next
commit.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>