backup verify: re-check if we can skip a chunk in the actual verify loop

Fixes a non-negligible performance regression from commit
7f394c807b

While we skip known-verified chunks in the stat-and-inode-sort loop,
those only cover chunks from previously verified indexes. If a chunk
is repeated within the same index, it would get re-verified more often
than required.

So, add the check again explicitly to the read+verify loop.
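
For illustration, a minimal sketch of the pattern, with simplified
stand-in types (VerifyWorker, Digest and verify_index_chunks here are
illustrative placeholders, not the actual proxmox-backup API):

    use std::collections::HashSet;
    use std::sync::{Arc, Mutex};

    type Digest = [u8; 32];

    struct VerifyWorker {
        // shared with the parallel verify workers, which insert digests
        // as soon as they finish verifying a chunk
        verified_chunks: Mutex<HashSet<Digest>>,
    }

    fn verify_index_chunks(worker: &Arc<VerifyWorker>, digests: &[Digest]) {
        for digest in digests {
            // re-check on every iteration: a digest that was unverified
            // when the loop started may have been verified by a parallel
            // worker in the meantime
            if worker.verified_chunks.lock().unwrap().contains(digest) {
                continue; // already verified, skip loading the chunk again
            }
            // ... load the chunk and hand it to a worker thread, which
            // records the digest in `verified_chunks` on success ...
        }
    }

The extra check costs only a short mutex lock per chunk, but saves a
full chunk load and digest computation for every repeated chunk.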

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>

commit 26af61debc
parent e7f94010d3
Author: Thomas Lamprecht <t.lamprecht@proxmox.com>
Date:   2021-04-15 10:00:04 +02:00

1 file changed, 6 insertions(+)

@@ -214,6 +214,12 @@ fn verify_index_chunks(
         let info = index.chunk_info(pos).unwrap();
+        // we must always re-check this here, the parallel workers below alter it!
+        // else we would miss skipping chunks repeated within the same index and re-verify them all
+        if verify_worker.verified_chunks.lock().unwrap().contains(&info.digest) {
+            continue; // already verified
+        }
+
         match verify_worker.datastore.load_chunk(&info.digest) {
             Err(err) => {
                 verify_worker.corrupt_chunks.lock().unwrap().insert(info.digest);