79f6a79cfc
This can slow things down considerably on setups with (relatively) high seek times, on the order of doubling backup times if the cache isn't populated with the last backup's chunk inode info.

Effectively, there is nothing known in the codebase that this protects us from. The only case that was theorized about is a really long-running backup job (over 24 hours) that is still running and writing new chunks, not yet indexed anywhere, when an update (or manual action) triggers a reload of the proxy. The theory was that a GC in the new daemon would then not know about the oldest writer in the old one, and thus use a less strict atime limit for chunk sweeping, opening up a window for deleting chunks from the long-running backup.

But this simply cannot happen, as we have a per-datastore, process-wide flock, which is acquired shared by backup jobs and exclusive by GC. Within the same process, GC and backup can both get it, as it has process locking granularity. If there's an old daemon with a writer, that daemon also holds the lock shared, and so no GC in the new process can get exclusive access to it.

So, with that confirmed, we have no need for a "half-assed" verification in the backup finish step. Rather, we plan to add an opt-in "full verify each backup on finish" option (see #2988).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
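As a rough illustration of the flock argument above, here is a minimal, hypothetical Rust sketch (using the `libc` crate; the lock file path and helper names are made up and this is not the actual proxmox-backup locking code). It shows why a shared lock held by a backup writer, even one opened by an older daemon instance, blocks an exclusive lock attempt by GC: flock(2) conflicts attach to open file descriptions, not to a particular process image.

```rust
// Hypothetical sketch, not the proxmox-backup lock helper.
use std::fs::{File, OpenOptions};
use std::io;
use std::os::unix::io::AsRawFd;

/// Open `path` and take a flock(2) lock on it.
/// Shared locks may coexist; an exclusive lock must wait (or fails with
/// EWOULDBLOCK when non-blocking) until every shared holder drops its lock.
fn lock_file(path: &str, exclusive: bool, nonblock: bool) -> io::Result<File> {
    let file = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .open(path)?;

    let mut op = if exclusive { libc::LOCK_EX } else { libc::LOCK_SH };
    if nonblock {
        op |= libc::LOCK_NB;
    }

    let rc = unsafe { libc::flock(file.as_raw_fd(), op) };
    if rc != 0 {
        return Err(io::Error::last_os_error());
    }
    // The lock is held as long as this File (open file description) is alive.
    Ok(file)
}

fn main() -> io::Result<()> {
    // Assumed lock file path, for illustration only.
    let path = "/tmp/datastore.lock";

    // "Old daemon" backup writer: shared lock, kept for the whole backup.
    let _writer_guard = lock_file(path, false, false)?;

    // "New daemon" GC: non-blocking exclusive attempt fails while any
    // writer still holds the shared lock.
    match lock_file(path, true, true) {
        Ok(_guard) => println!("GC got exclusive lock (no writers active)"),
        Err(e) => println!("GC must wait, a writer is still active: {e}"),
    }
    Ok(())
}
```

The example keeps both locks in one process only for simplicity; the conflict behaves identically across processes, which is the case the commit message relies on. The same-process sharing between GC and backup that the message mentions is handled by proxmox-backup's own process-wide lock handling and is not modeled here.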
access
admin
backup
config
node
reader
types
access.rs
admin.rs
backup.rs
config.rs
helpers.rs
node.rs
pull.rs
reader.rs
status.rs
version.rs