tape/restore: optimize chunk restore behaviour

by checking 'checked_chunks' before trying to write to disk and by
doing the existence check in the parallel handler. This way, we do
not have to check the existence of a chunk multiple times (if
multiple source datastores get restored to the same target
datastore), and we also do not have to wait on the stat before
reading the next chunk.

We have to change the &WorkerTask to an Arc, though; otherwise we
cannot log to the worker from the parallel handler.
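
Roughly what this looks like (WorkerTask here is a minimal stand-in
with just a log method, not the real proxmox-backup type):

    use std::sync::Arc;
    use std::thread;

    // Minimal stand-in for the real WorkerTask; only logging matters here.
    struct WorkerTask;
    impl WorkerTask {
        fn log(&self, msg: &str) {
            println!("{}", msg);
        }
    }

    fn main() {
        let worker = Arc::new(WorkerTask);

        // A plain &WorkerTask cannot be moved into the 'static closure that
        // the parallel handler runs on its own threads; a cloned Arc can, and
        // both the main restore loop and the handler can log through it.
        let worker_clone = Arc::clone(&worker);
        let handle = thread::spawn(move || {
            worker_clone.log("chunk already exists, skipping write");
        });

        handle.join().unwrap();
        worker.log("restore finished");
    }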

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Author:    Dominik Csapak
Date:      2021-05-04 12:21:47 +02:00
Committer: Dietmar Maurer
Parent:    4cba875379
Commit:    49f9aca627
2 changed files with 41 additions and 33 deletions

@@ -1336,7 +1336,7 @@ pub fn catalog_media(
     drive.read_label()?; // skip over labels - we already read them above
     let mut checked_chunks = HashMap::new();
-    restore_media(&worker, &mut drive, &media_id, None, &mut checked_chunks, verbose)?;
+    restore_media(worker, &mut drive, &media_id, None, &mut checked_chunks, verbose)?;
     Ok(())
 },