Compare commits


30 Commits

Author SHA1 Message Date
c4430a937d bump version to 1.0.11-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-18 12:36:28 +01:00
237314ad0d tape: improve catalog consistency checks
Try to check if we read the correct catalog by verifying uuid, media_set_uuid
and seq_nr.

Note: this changes the catalog format again.
2021-03-18 08:43:55 +01:00
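A minimal sketch of what such a check amounts to (hypothetical, simplified names; the real fields appear in the media catalog diff further down):

```rust
// Sketch: when loading a catalog, the media-set label entry must match
// the expected uuid and sequence number, otherwise we are reading the
// wrong catalog.
struct LabelEntry {
    file_number: u64,
    uuid: [u8; 16],
    seq_nr: u64,
}

fn check_media_set_entry(
    entry: &LabelEntry,
    expected_uuid: &[u8; 16],
    expected_seq_nr: u64,
) -> Result<(), String> {
    if &entry.uuid != expected_uuid {
        return Err("got unexpected media set uuid".into());
    }
    if entry.seq_nr != expected_seq_nr {
        return Err("got unexpected media set sequence number".into());
    }
    Ok(())
}
```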
caf76ec592 tools/subscription: ignore ENOENT for apt auth config removal
Deleting a nonexistent file is hardly an error worth mentioning.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 20:12:58 +01:00
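This is the usual "ignore NotFound on delete" pattern; a minimal standalone sketch:

```rust
use std::io::ErrorKind;

// Removing a file that is already gone is treated as success; any other
// error is still propagated.
fn remove_if_exists(path: &std::path::Path) -> std::io::Result<()> {
    match std::fs::remove_file(path) {
        Ok(()) => Ok(()),
        Err(err) if err.kind() == ErrorKind::NotFound => Ok(()), // ENOENT
        Err(err) => Err(err),
    }
}
```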
0af8c26b74 ui: tape/BackupOverview: insert a datastore level
Since we can now back up multiple datastores in the same media set,
we show the datastores as the first level below it.

The final tree structure looks like this:

tapepool A
- media set 1
 - datastore I
  - tape x
   - ct/100
    - ct/100/2020-01-01T00:00:00Z

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:49 +01:00
825dfe7e0d ui: tape/DriveStatus: fix updating pointer+click handler on info widget
We can only do this after it is rendered; the element does not exist
before that.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:39 +01:00
30a0809553 ui: tape/DriveStatus: add erase button
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-17 13:37:17 +01:00
6ee3035523 tape: define magic number for catalog archives 2021-03-17 13:35:23 +01:00
b627ebbf40 tape: improve catalog parser 2021-03-17 11:29:23 +01:00
ef4bdf6b8b tape: proxmox-tape media content - add 'store' attribute 2021-03-17 11:17:54 +01:00
54722acada tape: store datastore name in tape archives and media catalog
So that we can store multiple datastores on a single media set.
Deduplication is now per datastore (not per media set).
2021-03-17 11:08:51 +01:00
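A hedged sketch of the per-datastore indexing this implies (hypothetical type; the real MediaCatalog changes appear in the diffs below):

```rust
use std::collections::{HashMap, HashSet};

// Chunk digests are tracked per datastore, so a chunk is only considered
// a duplicate within its own store.
#[derive(Default)]
struct CatalogSketch {
    chunk_index: HashMap<String, HashSet<[u8; 32]>>, // store => digests
}

impl CatalogSketch {
    fn contains_chunk(&self, store: &str, digest: &[u8; 32]) -> bool {
        self.chunk_index
            .get(store)
            .map_or(false, |digests| digests.contains(digest))
    }

    fn register_chunk(&mut self, store: &str, digest: [u8; 32]) {
        self.chunk_index
            .entry(store.to_string())
            .or_default()
            .insert(digest);
    }
}
```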
0e2bf3aa1d SnapshotReader: add self.datastore_name() helper 2021-03-17 10:16:34 +01:00
365126efa9 tape: PoolWriter - remove unnecessary move_to_eom 2021-03-17 10:16:34 +01:00
03d4c9217d update OnlineHelpInfo.js 2021-03-17 10:16:34 +01:00
8498290848 docs: technically not everything is in rust/js
The whole distro uses quite a bit of C and the like as a base, so
avoid being overly strict here.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
654db565cb docs: features: mention that there are no client/data limits
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
51f83548ed docs: drop uncommon spelled out GCM
Spelling it out does not help users; it's not a common expansion of
GCM, and especially in the AES-256 context it's clear what is meant.
The link to Wikipedia stays, so interested people can still read up on
it, while others get a better overview thanks to the more concise text.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
5847a6bdb5 docs: fix linking, avoid over long text in main feature list
The main feature list should provide a short overview of the, well,
main features. While enterprise support *is* a main and important
feature, this is not the place to describe things like personal
volume/ngo/... offers and the like.

Move parts of it to "Getting Help", which also lacked a mention of
enterprise support and is a good place to describe the customer
portal.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-16 20:37:16 +01:00
313e5e2047 docs: mention support subscription plans
and change enterprise repository section to present tense.

Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-03-16 19:24:23 +01:00
7914e62b10 tools/zip: only add zip64 field when necessary
If neither the offset nor the size exceeds 32 bits, do not add the
zip64 extension field.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 09:13:39 +01:00
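A minimal sketch of the decision rule, assuming the common ZIP convention that 0xFFFFFFFF in a 32-bit field is the sentinel that redirects readers to the zip64 extra field:

```rust
// Values below the sentinel still fit the classic 32-bit ZIP fields, so
// the zip64 extended information field can be skipped entirely.
fn needs_zip64_field(offset: u64, size: u64) -> bool {
    const ZIP32_MAX: u64 = 0xFFFF_FFFF; // sentinel meaning "see zip64 field"
    offset >= ZIP32_MAX || size >= ZIP32_MAX
}
```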
84d3284609 ui: tape/DriveStatus: open task window on click on state
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 09:00:07 +01:00
70fab5b46e ui: tape: convert slot selection on transfer to combogrid
This is much handier than a number field, and the user can instantly
see which slot is an import/export slot.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:57:48 +01:00
e36135031d ui: tape/Restore: let the user choose an owner
So that the tape backup can be restored as any user, provided the
currently logged-in user has the correct permission.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:55:42 +01:00
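The ownership rule, reduced to a hedged standalone sketch (simplified types; the real check in the restore diff below compares Authid values):

```rust
// Restoring as `owner` is allowed if it is the caller itself, one of the
// caller's own API tokens, or if the caller holds PRIV_DATASTORE_MODIFY.
struct AuthId<'a> {
    user: &'a str,
    token: Option<&'a str>,
}

fn may_restore_as(auth: &AuthId, owner: &AuthId, has_modify_priv: bool) -> bool {
    let correct_owner = (owner.user == auth.user && owner.token == auth.token)
        || (owner.token.is_some() && auth.token.is_none() && owner.user == auth.user);
    correct_owner || has_modify_priv
}
```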
5a5ee0326e proxmox-tape: add missing notify-user to 'proxmox-tape restore'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-16 08:54:38 +01:00
776dabfb2e tape: use MB/s for backup speed (to match drive speed specification) 2021-03-16 08:51:49 +01:00
5c4755ad08 tape: speedup backup by doing read/write in parallel 2021-03-16 08:51:49 +01:00
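The core of the read/write split is a bounded channel between a reader thread and the tape writer; a minimal self-contained sketch (stand-in data, not the actual PoolWriter API):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // Small buffer: the reader can stay a few chunks ahead of the writer
    // without loading the whole snapshot into memory.
    let (tx, rx) = sync_channel::<Vec<u8>>(3);

    let reader = thread::spawn(move || {
        for i in 0..10u8 {
            let chunk = vec![i; 1024]; // stand-in for loading a chunk from disk
            if tx.send(chunk).is_err() {
                break; // writer side hung up
            }
        }
    });

    for chunk in rx {
        let _ = chunk.len(); // stand-in for writing the chunk archive to tape
    }
    reader.join().expect("reader thread failed");
}
```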
7c1666289d tools/zip: add missing start_disk field for zip64 extension
It is not optional, even though we give the size explicitly.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-15 12:36:40 +01:00
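For reference, a hedged sketch of the zip64 "extended information" extra field layout per the ZIP specification (header ID 0x0001; original size, compressed size, offset, then the 4-byte start disk the commit refers to):

```rust
// Writes all four zip64 fields; per the commit above, the start-disk
// number is not optional once the declared field size includes it.
fn write_zip64_extra(buf: &mut Vec<u8>, size: u64, compressed_size: u64, offset: u64) {
    buf.extend_from_slice(&0x0001u16.to_le_bytes()); // header id
    buf.extend_from_slice(&28u16.to_le_bytes()); // data size: 8 + 8 + 8 + 4
    buf.extend_from_slice(&size.to_le_bytes());
    buf.extend_from_slice(&compressed_size.to_le_bytes());
    buf.extend_from_slice(&offset.to_le_bytes());
    buf.extend_from_slice(&0u32.to_le_bytes()); // start disk
}
```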
cded320e92 backup info: run rustfmt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-14 19:18:35 +01:00
b31cdec225 update to pxar 0.10
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-12 10:48:09 +01:00
591b120d35 fix feature flag logic in pxar create
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2021-03-12 10:17:51 +01:00
e8913fea12 tape: write_chunk_archive - do not consume partially written chunk at EOT
So that it is re-written to the next tape.
2021-03-12 07:14:50 +01:00
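A hedged sketch of the peek-then-consume pattern this describes (generic stand-ins, not the actual writer types):

```rust
enum WriteOutcome {
    Done,
    EndOfTape,
}

// Advance the iterator only after a chunk was fully written; a chunk hit
// by end-of-tape stays queued and is re-written to the next tape.
fn drain_to_tape<I: Iterator<Item = Vec<u8>>>(
    chunks: &mut std::iter::Peekable<I>,
    mut write: impl FnMut(&[u8]) -> WriteOutcome,
) {
    while let Some(chunk) = chunks.peek() {
        match write(chunk) {
            WriteOutcome::Done => {
                chunks.next(); // consume only after success
            }
            WriteOutcome::EndOfTape => break, // leave the chunk for the next tape
        }
    }
}
```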
28 changed files with 1014 additions and 319 deletions


@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "1.0.10"
version = "1.0.11"
authors = [
"Dietmar Maurer <dietmar@proxmox.com>",
"Dominik Csapak <d.csapak@proxmox.com>",
@ -52,7 +52,7 @@ proxmox = { version = "0.11.0", features = [ "sortable-macro", "api-macro", "web
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.1"
pxar = { version = "0.9.0", features = [ "tokio-io" ] }
pxar = { version = "0.10.0", features = [ "tokio-io" ] }
#pxar = { path = "../pxar", features = [ "tokio-io" ] }
regex = "1.2"
rustyline = "7"

debian/changelog vendored

@ -1,3 +1,16 @@
rust-proxmox-backup (1.0.11-1) unstable; urgency=medium

  * fix feature flag logic in pxar create

  * tools/zip: add missing start_disk field for zip64 extension to improve
    compatibility with some strict archive tools

  * tape: speedup backup by doing read/write in parallel

  * tape: store datastore name in tape archives and media catalog

 -- Proxmox Support Team <support@proxmox.com>  Thu, 18 Mar 2021 12:36:01 +0100

rust-proxmox-backup (1.0.10-1) unstable; urgency=medium

  * tape: improve MediaPool allocation by sorting tapes by creation time and

debian/control vendored

@ -41,8 +41,8 @@ Build-Depends: debhelper (>= 11),
librust-proxmox-0.11+sortable-macro-dev,
librust-proxmox-0.11+websocket-dev,
librust-proxmox-fuse-0.1+default-dev (>= 0.1.1-~~),
librust-pxar-0.9+default-dev,
librust-pxar-0.9+tokio-io-dev,
librust-pxar-0.10+default-dev,
librust-pxar-0.10+tokio-io-dev,
librust-regex-1+default-dev (>= 1.2-~~),
librust-rustyline-7+default-dev,
librust-serde-1+default-dev,


@ -65,10 +65,10 @@ Main Features
:Compression: The ultra-fast Zstandard_ compression is able to compress
several gigabytes of data per second.
:Encryption: Backups can be encrypted on the client-side, using AES-256 in
Galois/Counter Mode (GCM_). This authenticated encryption (AE_) mode
provides very high performance on modern hardware. In addition to client-side
encryption, all data is transferred via a secure TLS connection.
:Encryption: Backups can be encrypted on the client-side, using AES-256 GCM_.
This authenticated encryption (AE_) mode provides very high performance on
modern hardware. In addition to client-side encryption, all data is
transferred via a secure TLS connection.
:Web interface: Manage the Proxmox Backup Server with the integrated, web-based
user interface.
@ -76,8 +76,16 @@ Main Features
:Open Source: No secrets. Proxmox Backup Server is free and open-source
software. The source code is licensed under AGPL, v3.
:Support: Enterprise support will be available from `Proxmox`_ once the beta
phase is over.
:No Limits: Proxmox Backup Server has no artificial limits for backup storage or
backup clients.
:Enterprise Support: Proxmox Server Solutions GmbH offers enterprise support in
the form of `Proxmox Backup Server Subscription Plans
<https://www.proxmox.com/en/proxmox-backup-server/pricing>`_. Users at every
subscription level get access to the Proxmox Backup :ref:`Enterprise
Repository <sysadmin_package_repos_enterprise>`. In addition, with a Basic,
Standard or Premium subscription, users have access to the :ref:`Proxmox
Customer Portal <get_help_enterprise_support>`.
Reasons for Data Backup?
@ -117,8 +125,8 @@ Proxmox Backup Server consists of multiple components:
* A client CLI tool (`proxmox-backup-client`) to access the server easily from
any `Linux amd64` environment
Aside from the web interface, everything is written in the Rust programming
language.
Aside from the web interface, most parts of Proxmox Backup Server are written in
the Rust programming language.
"The Rust programming language helps you write faster, more reliable software.
High-level ergonomics and low-level control are often at odds in programming
@ -134,6 +142,17 @@ language.
Getting Help
------------
.. _get_help_enterprise_support:
Enterprise Support
~~~~~~~~~~~~~~~~~~
Users with a `Proxmox Backup Server Basic, Standard or Premium Subscription Plan
<https://www.proxmox.com/en/proxmox-backup-server/pricing>`_ have access to the
Proxmox Customer Portal. The Customer Portal provides support with guaranteed
response times from the Proxmox developers.
For more information or for volume discounts, please contact office@proxmox.com.
Community Support Forum
~~~~~~~~~~~~~~~~~~~~~~~


@ -69,10 +69,12 @@ Here, the output should be:
f3f6c5a3a67baf38ad178e5ff1ee270c /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
.. _sysadmin_package_repos_enterprise:
`Proxmox Backup`_ Enterprise Repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This will be the default, stable, and recommended repository. It is available for
This is the stable, recommended repository. It is available for
all `Proxmox Backup`_ subscription users. It contains the most stable packages,
and is suitable for production use. The ``pbs-enterprise`` repository is
enabled by default:


@ -1,5 +1,5 @@
use std::path::Path;
use std::sync::Arc;
use std::sync::{Mutex, Arc};
use anyhow::{bail, format_err, Error};
use serde_json::Value;
@ -402,6 +402,8 @@ fn backup_worker(
task_log!(worker, "latest-only: true (only considering latest snapshots)");
}
let datastore_name = datastore.name();
let mut errors = false;
for (group_number, group) in group_list.into_iter().enumerate() {
@ -416,7 +418,7 @@ fn backup_worker(
if latest_only {
progress.group_snapshots = 1;
if let Some(info) = snapshot_list.pop() {
if pool_writer.contains_snapshot(&info.backup_dir.to_string()) {
if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
task_log!(worker, "skip snapshot {}", info.backup_dir);
continue;
}
@ -433,7 +435,7 @@ fn backup_worker(
} else {
progress.group_snapshots = snapshot_list.len() as u64;
for (snapshot_number, info) in snapshot_list.into_iter().enumerate() {
if pool_writer.contains_snapshot(&info.backup_dir.to_string()) {
if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
task_log!(worker, "skip snapshot {}", info.backup_dir);
continue;
}
@ -508,33 +510,48 @@ pub fn backup_snapshot(
}
};
let mut chunk_iter = snapshot_reader.chunk_iterator()?.peekable();
let snapshot_reader = Arc::new(Mutex::new(snapshot_reader));
let (reader_thread, chunk_iter) = pool_writer.spawn_chunk_reader_thread(
datastore.clone(),
snapshot_reader.clone(),
)?;
let mut chunk_iter = chunk_iter.peekable();
loop {
worker.check_abort()?;
// test if we have remaining chunks
if chunk_iter.peek().is_none() {
break;
match chunk_iter.peek() {
None => break,
Some(Ok(_)) => { /* Ok */ },
Some(Err(err)) => bail!("{}", err),
}
let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?;
let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &datastore, &mut chunk_iter)?;
let (leom, _bytes) = pool_writer.append_chunk_archive(worker, &mut chunk_iter, datastore.name())?;
if leom {
pool_writer.set_media_status_full(&uuid)?;
}
}
if let Err(_) = reader_thread.join() {
bail!("chunk reader thread failed");
}
worker.check_abort()?;
let uuid = pool_writer.load_writable_media(worker)?;
worker.check_abort()?;
let snapshot_reader = snapshot_reader.lock().unwrap();
let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
if !done {


@ -432,29 +432,32 @@ pub fn list_content(
.generate_media_set_name(&set.uuid, template)
.unwrap_or_else(|_| set.uuid.to_string());
let catalog = MediaCatalog::open(status_path, &media_id.label.uuid, false, false)?;
let catalog = MediaCatalog::open(status_path, &media_id, false, false)?;
for snapshot in catalog.snapshot_index().keys() {
let backup_dir: BackupDir = snapshot.parse()?;
for (store, content) in catalog.content() {
for snapshot in content.snapshot_index.keys() {
let backup_dir: BackupDir = snapshot.parse()?;
if let Some(ref backup_type) = filter.backup_type {
if backup_dir.group().backup_type() != backup_type { continue; }
if let Some(ref backup_type) = filter.backup_type {
if backup_dir.group().backup_type() != backup_type { continue; }
}
if let Some(ref backup_id) = filter.backup_id {
if backup_dir.group().backup_id() != backup_id { continue; }
}
list.push(MediaContentEntry {
uuid: media_id.label.uuid.clone(),
label_text: media_id.label.label_text.to_string(),
pool: set.pool.clone(),
media_set_name: media_set_name.clone(),
media_set_uuid: set.uuid.clone(),
media_set_ctime: set.ctime,
seq_nr: set.seq_nr,
snapshot: snapshot.to_owned(),
store: store.to_owned(),
backup_time: backup_dir.backup_time(),
});
}
if let Some(ref backup_id) = filter.backup_id {
if backup_dir.group().backup_id() != backup_id { continue; }
}
list.push(MediaContentEntry {
uuid: media_id.label.uuid.clone(),
label_text: media_id.label.label_text.to_string(),
pool: set.pool.clone(),
media_set_name: media_set_name.clone(),
media_set_uuid: set.uuid.clone(),
media_set_ctime: set.ctime,
seq_nr: set.seq_nr,
snapshot: snapshot.to_owned(),
backup_time: backup_dir.backup_time(),
});
}
}


@ -40,6 +40,7 @@ use crate::{
cached_user_info::CachedUserInfo,
acl::{
PRIV_DATASTORE_BACKUP,
PRIV_DATASTORE_MODIFY,
PRIV_TAPE_READ,
},
},
@ -70,11 +71,15 @@ use crate::{
file_formats::{
PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1,
PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1,
MediaContentHeader,
ChunkArchiveHeader,
ChunkArchiveDecoder,
SnapshotArchiveHeader,
},
drive::{
TapeDriver,
@ -105,6 +110,10 @@ pub const ROUTER: Router = Router::new()
type: Userid,
optional: true,
},
owner: {
type: Authid,
optional: true,
},
},
},
returns: {
@ -123,6 +132,7 @@ pub fn restore(
drive: String,
media_set: String,
notify_user: Option<Userid>,
owner: Option<Authid>,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
@ -134,6 +144,18 @@ pub fn restore(
bail!("no permissions on /datastore/{}", store);
}
if let Some(ref owner) = owner {
let correct_owner = owner == &auth_id
|| (owner.is_token()
&& !auth_id.is_token()
&& owner.user() == auth_id.user());
// same permission as changing ownership after syncing
if !correct_owner && privs & PRIV_DATASTORE_MODIFY == 0 {
bail!("no permission to restore as '{}'", owner);
}
}
let privs = user_info.lookup_privs(&auth_id, &["tape", "drive", &drive]);
if (privs & PRIV_TAPE_READ) == 0 {
bail!("no permissions on /tape/drive/{}", drive);
@ -222,6 +244,7 @@ pub fn restore(
&datastore,
&auth_id,
&notify_user,
&owner,
)?;
}
@ -252,6 +275,7 @@ pub fn request_and_restore_media(
datastore: &DataStore,
authid: &Authid,
notify_user: &Option<Userid>,
owner: &Option<Authid>,
) -> Result<(), Error> {
let media_set_uuid = match media_id.media_set_label {
@ -284,7 +308,9 @@ pub fn request_and_restore_media(
}
}
restore_media(worker, &mut drive, &info, Some((datastore, authid)), false)
let restore_owner = owner.as_ref().unwrap_or(authid);
restore_media(worker, &mut drive, &info, Some((datastore, restore_owner)), false)
}
/// Restore complete media content and catalog
@ -340,10 +366,18 @@ fn restore_archive<'a>(
bail!("unexpected content magic (label)");
}
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0 => {
let snapshot = reader.read_exact_allocated(header.size as usize)?;
let snapshot = std::str::from_utf8(&snapshot)
.map_err(|_| format_err!("found snapshot archive with non-utf8 characters in name"))?;
task_log!(worker, "Found snapshot archive: {} {}", current_file_number, snapshot);
bail!("unexpected snapshot archive version (v1.0)");
}
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1 => {
let header_data = reader.read_exact_allocated(header.size as usize)?;
let archive_header: SnapshotArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse snapshot archive header - {}", err))?;
let datastore_name = archive_header.store;
let snapshot = archive_header.snapshot;
task_log!(worker, "File {}: snapshot archive {}:{}", current_file_number, datastore_name, snapshot);
let backup_dir: BackupDir = snapshot.parse()?;
@ -371,7 +405,7 @@ fn restore_archive<'a>(
task_log!(worker, "skip incomplete snapshot {}", backup_dir);
}
Ok(true) => {
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, snapshot)?;
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, &datastore_name, &snapshot)?;
catalog.commit_if_large()?;
}
}
@ -381,17 +415,26 @@ fn restore_archive<'a>(
reader.skip_to_end()?; // read all data
if let Ok(false) = reader.is_incomplete() {
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, snapshot)?;
catalog.register_snapshot(Uuid::from(header.uuid), current_file_number, &datastore_name, &snapshot)?;
catalog.commit_if_large()?;
}
}
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0 => {
bail!("unexpected chunk archive version (v1.0)");
}
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1 => {
let header_data = reader.read_exact_allocated(header.size as usize)?;
task_log!(worker, "Found chunk archive: {}", current_file_number);
let archive_header: ChunkArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse chunk archive header - {}", err))?;
let source_datastore = archive_header.store;
task_log!(worker, "File {}: chunk archive for datastore '{}'", current_file_number, source_datastore);
let datastore = target.as_ref().map(|t| t.0);
if let Some(chunks) = restore_chunk_archive(worker, reader, datastore, verbose)? {
catalog.start_chunk_archive(Uuid::from(header.uuid), current_file_number)?;
catalog.start_chunk_archive(Uuid::from(header.uuid), current_file_number, &source_datastore)?;
for digest in chunks.iter() {
catalog.register_chunk(&digest)?;
}


@ -144,6 +144,8 @@ pub struct MediaContentEntry {
pub seq_nr: u64,
/// Media Pool
pub pool: String,
/// Datastore Name
pub store: String,
/// Backup snapshot
pub snapshot: String,
/// Snapshot creation time (epoch)


@ -3,17 +3,29 @@ use crate::tools;
use anyhow::{bail, format_err, Error};
use std::os::unix::io::RawFd;
use std::path::{PathBuf, Path};
use std::path::{Path, PathBuf};
use proxmox::const_regex;
use super::manifest::MANIFEST_BLOB_NAME;
macro_rules! BACKUP_ID_RE { () => (r"[A-Za-z0-9_][A-Za-z0-9._\-]*") }
macro_rules! BACKUP_TYPE_RE { () => (r"(?:host|vm|ct)") }
macro_rules! BACKUP_TIME_RE { () => (r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z") }
macro_rules! BACKUP_ID_RE {
() => {
r"[A-Za-z0-9_][A-Za-z0-9._\-]*"
};
}
macro_rules! BACKUP_TYPE_RE {
() => {
r"(?:host|vm|ct)"
};
}
macro_rules! BACKUP_TIME_RE {
() => {
r"[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z"
};
}
const_regex!{
const_regex! {
BACKUP_FILE_REGEX = r"^.*\.([fd]idx|blob)$";
BACKUP_TYPE_REGEX = concat!(r"^(", BACKUP_TYPE_RE!(), r")$");
@ -38,7 +50,6 @@ pub struct BackupGroup {
}
impl std::cmp::Ord for BackupGroup {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
let type_order = self.backup_type.cmp(&other.backup_type);
if type_order != std::cmp::Ordering::Equal {
@ -51,7 +62,7 @@ impl std::cmp::Ord for BackupGroup {
(Ok(id_self), Ok(id_other)) => id_self.cmp(&id_other),
(Ok(_), Err(_)) => std::cmp::Ordering::Less,
(Err(_), Ok(_)) => std::cmp::Ordering::Greater,
_ => self.backup_id.cmp(&other.backup_id),
_ => self.backup_id.cmp(&other.backup_id),
}
}
}
@ -63,9 +74,11 @@ impl std::cmp::PartialOrd for BackupGroup {
}
impl BackupGroup {
pub fn new<T: Into<String>, U: Into<String>>(backup_type: T, backup_id: U) -> Self {
Self { backup_type: backup_type.into(), backup_id: backup_id.into() }
Self {
backup_type: backup_type.into(),
backup_id: backup_id.into(),
}
}
pub fn backup_type(&self) -> &str {
@ -76,8 +89,7 @@ impl BackupGroup {
&self.backup_id
}
pub fn group_path(&self) -> PathBuf {
pub fn group_path(&self) -> PathBuf {
let mut relative_path = PathBuf::new();
relative_path.push(&self.backup_type);
@ -88,60 +100,82 @@ impl BackupGroup {
}
pub fn list_backups(&self, base_path: &Path) -> Result<Vec<BackupInfo>, Error> {
let mut list = vec![];
let mut path = base_path.to_owned();
path.push(self.group_path());
tools::scandir(libc::AT_FDCWD, &path, &BACKUP_DATE_REGEX, |l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(
libc::AT_FDCWD,
&path,
&BACKUP_DATE_REGEX,
|l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
let backup_dir = BackupDir::with_rfc3339(&self.backup_type, &self.backup_id, backup_time)?;
let files = list_backup_files(l2_fd, backup_time)?;
let backup_dir =
BackupDir::with_rfc3339(&self.backup_type, &self.backup_id, backup_time)?;
let files = list_backup_files(l2_fd, backup_time)?;
list.push(BackupInfo { backup_dir, files });
list.push(BackupInfo { backup_dir, files });
Ok(())
})?;
Ok(())
},
)?;
Ok(list)
}
pub fn last_successful_backup(&self, base_path: &Path) -> Result<Option<i64>, Error> {
pub fn last_successful_backup(&self, base_path: &Path) -> Result<Option<i64>, Error> {
let mut last = None;
let mut path = base_path.to_owned();
path.push(self.group_path());
tools::scandir(libc::AT_FDCWD, &path, &BACKUP_DATE_REGEX, |l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
let mut manifest_path = PathBuf::from(backup_time);
manifest_path.push(MANIFEST_BLOB_NAME);
use nix::fcntl::{openat, OFlag};
match openat(l2_fd, &manifest_path, OFlag::O_RDONLY, nix::sys::stat::Mode::empty()) {
Ok(rawfd) => {
/* manifest exists --> assume backup was successful */
/* close else this leaks! */
nix::unistd::close(rawfd)?;
},
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => { return Ok(()); }
Err(err) => {
bail!("last_successful_backup: unexpected error - {}", err);
tools::scandir(
libc::AT_FDCWD,
&path,
&BACKUP_DATE_REGEX,
|l2_fd, backup_time, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
}
let timestamp = proxmox::tools::time::parse_rfc3339(backup_time)?;
if let Some(last_timestamp) = last {
if timestamp > last_timestamp { last = Some(timestamp); }
} else {
last = Some(timestamp);
}
let mut manifest_path = PathBuf::from(backup_time);
manifest_path.push(MANIFEST_BLOB_NAME);
Ok(())
})?;
use nix::fcntl::{openat, OFlag};
match openat(
l2_fd,
&manifest_path,
OFlag::O_RDONLY,
nix::sys::stat::Mode::empty(),
) {
Ok(rawfd) => {
/* manifest exists --> assume backup was successful */
/* close else this leaks! */
nix::unistd::close(rawfd)?;
}
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => {
return Ok(());
}
Err(err) => {
bail!("last_successful_backup: unexpected error - {}", err);
}
}
let timestamp = proxmox::tools::time::parse_rfc3339(backup_time)?;
if let Some(last_timestamp) = last {
if timestamp > last_timestamp {
last = Some(timestamp);
}
} else {
last = Some(timestamp);
}
Ok(())
},
)?;
Ok(last)
}
@ -162,7 +196,8 @@ impl std::str::FromStr for BackupGroup {
///
/// This parses strings like `vm/100`.
fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = GROUP_PATH_REGEX.captures(path)
let cap = GROUP_PATH_REGEX
.captures(path)
.ok_or_else(|| format_err!("unable to parse backup group path '{}'", path))?;
Ok(Self {
@ -182,11 +217,10 @@ pub struct BackupDir {
/// Backup timestamp
backup_time: i64,
// backup_time as rfc3339
backup_time_string: String
backup_time_string: String,
}
impl BackupDir {
pub fn new<T, U>(backup_type: T, backup_id: U, backup_time: i64) -> Result<Self, Error>
where
T: Into<String>,
@ -196,21 +230,33 @@ impl BackupDir {
BackupDir::with_group(group, backup_time)
}
pub fn with_rfc3339<T,U,V>(backup_type: T, backup_id: U, backup_time_string: V) -> Result<Self, Error>
pub fn with_rfc3339<T, U, V>(
backup_type: T,
backup_id: U,
backup_time_string: V,
) -> Result<Self, Error>
where
T: Into<String>,
U: Into<String>,
V: Into<String>,
{
let backup_time_string = backup_time_string.into();
let backup_time_string = backup_time_string.into();
let backup_time = proxmox::tools::time::parse_rfc3339(&backup_time_string)?;
let group = BackupGroup::new(backup_type.into(), backup_id.into());
Ok(Self { group, backup_time, backup_time_string })
Ok(Self {
group,
backup_time,
backup_time_string,
})
}
pub fn with_group(group: BackupGroup, backup_time: i64) -> Result<Self, Error> {
let backup_time_string = Self::backup_time_to_string(backup_time)?;
Ok(Self { group, backup_time, backup_time_string })
Ok(Self {
group,
backup_time,
backup_time_string,
})
}
pub fn group(&self) -> &BackupGroup {
@ -225,8 +271,7 @@ impl BackupDir {
&self.backup_time_string
}
pub fn relative_path(&self) -> PathBuf {
pub fn relative_path(&self) -> PathBuf {
let mut relative_path = self.group.group_path();
relative_path.push(self.backup_time_string.clone());
@ -247,7 +292,8 @@ impl std::str::FromStr for BackupDir {
///
/// This parses strings like `host/elsa/2020-06-15T05:18:33Z`.
fn from_str(path: &str) -> Result<Self, Self::Err> {
let cap = SNAPSHOT_PATH_REGEX.captures(path)
let cap = SNAPSHOT_PATH_REGEX
.captures(path)
.ok_or_else(|| format_err!("unable to parse backup snapshot path '{}'", path))?;
BackupDir::with_rfc3339(
@ -276,7 +322,6 @@ pub struct BackupInfo {
}
impl BackupInfo {
pub fn new(base_path: &Path, backup_dir: BackupDir) -> Result<BackupInfo, Error> {
let mut path = base_path.to_owned();
path.push(backup_dir.relative_path());
@ -287,19 +332,24 @@ impl BackupInfo {
}
/// Finds the latest backup inside a backup group
pub fn last_backup(base_path: &Path, group: &BackupGroup, only_finished: bool)
-> Result<Option<BackupInfo>, Error>
{
pub fn last_backup(
base_path: &Path,
group: &BackupGroup,
only_finished: bool,
) -> Result<Option<BackupInfo>, Error> {
let backups = group.list_backups(base_path)?;
Ok(backups.into_iter()
Ok(backups
.into_iter()
.filter(|item| !only_finished || item.is_finished())
.max_by_key(|item| item.backup_dir.backup_time()))
}
pub fn sort_list(list: &mut Vec<BackupInfo>, ascendending: bool) {
if ascendending { // oldest first
if ascendending {
// oldest first
list.sort_unstable_by(|a, b| a.backup_dir.backup_time.cmp(&b.backup_dir.backup_time));
} else { // newest first
} else {
// newest first
list.sort_unstable_by(|a, b| b.backup_dir.backup_time.cmp(&a.backup_dir.backup_time));
}
}
@ -316,31 +366,52 @@ impl BackupInfo {
pub fn list_backup_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
let mut list = Vec::new();
tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_, backup_id, file_type| {
if file_type != nix::dir::Type::Directory { return Ok(()); }
tools::scandir(
libc::AT_FDCWD,
base_path,
&BACKUP_TYPE_REGEX,
|l0_fd, backup_type, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
tools::scandir(
l0_fd,
backup_type,
&BACKUP_ID_REGEX,
|_, backup_id, file_type| {
if file_type != nix::dir::Type::Directory {
return Ok(());
}
list.push(BackupGroup::new(backup_type, backup_id));
list.push(BackupGroup::new(backup_type, backup_id));
Ok(())
})
})?;
Ok(())
},
)
},
)?;
Ok(list)
}
pub fn is_finished(&self) -> bool {
// backup is considered unfinished if there is no manifest
self.files.iter().any(|name| name == super::MANIFEST_BLOB_NAME)
self.files
.iter()
.any(|name| name == super::MANIFEST_BLOB_NAME)
}
}
fn list_backup_files<P: ?Sized + nix::NixPath>(dirfd: RawFd, path: &P) -> Result<Vec<String>, Error> {
fn list_backup_files<P: ?Sized + nix::NixPath>(
dirfd: RawFd,
path: &P,
) -> Result<Vec<String>, Error> {
let mut files = vec![];
tools::scandir(dirfd, path, &BACKUP_FILE_REGEX, |_, filename, file_type| {
if file_type != nix::dir::Type::File { return Ok(()); }
if file_type != nix::dir::Type::File {
return Ok(());
}
files.push(filename.to_owned());
Ok(())
})?;


@ -27,10 +27,12 @@ use proxmox_backup::{
api2::{
self,
types::{
Authid,
DATASTORE_SCHEMA,
DRIVE_NAME_SCHEMA,
MEDIA_LABEL_SCHEMA,
MEDIA_POOL_NAME_SCHEMA,
Userid,
},
},
config::{
@ -863,6 +865,14 @@ async fn backup(mut param: Value) -> Result<(), Error> {
description: "Media set UUID.",
type: String,
},
"notify-user": {
type: Userid,
optional: true,
},
owner: {
type: Authid,
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,


@ -177,12 +177,14 @@ fn list_content(
let options = default_table_format_options()
.sortby("media-set-uuid", false)
.sortby("seq-nr", false)
.sortby("store", false)
.sortby("snapshot", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("label-text"))
.column(ColumnConfig::new("pool"))
.column(ColumnConfig::new("media-set-name"))
.column(ColumnConfig::new("seq-nr"))
.column(ColumnConfig::new("store"))
.column(ColumnConfig::new("snapshot"))
.column(ColumnConfig::new("media-set-uuid"))
;


@ -130,22 +130,22 @@ fn extract_archive(
) -> Result<(), Error> {
let mut feature_flags = Flags::DEFAULT;
if no_xattrs {
feature_flags ^= Flags::WITH_XATTRS;
feature_flags.remove(Flags::WITH_XATTRS);
}
if no_fcaps {
feature_flags ^= Flags::WITH_FCAPS;
feature_flags.remove(Flags::WITH_FCAPS);
}
if no_acls {
feature_flags ^= Flags::WITH_ACL;
feature_flags.remove(Flags::WITH_ACL);
}
if no_device_nodes {
feature_flags ^= Flags::WITH_DEVICE_NODES;
feature_flags.remove(Flags::WITH_DEVICE_NODES);
}
if no_fifos {
feature_flags ^= Flags::WITH_FIFOS;
feature_flags.remove(Flags::WITH_FIFOS);
}
if no_sockets {
feature_flags ^= Flags::WITH_SOCKETS;
feature_flags.remove(Flags::WITH_SOCKETS);
}
let pattern = pattern.unwrap_or_else(Vec::new);
@ -353,22 +353,22 @@ async fn create_archive(
let writer = std::io::BufWriter::with_capacity(1024 * 1024, file);
let mut feature_flags = Flags::DEFAULT;
if no_xattrs {
feature_flags ^= Flags::WITH_XATTRS;
feature_flags.remove(Flags::WITH_XATTRS);
}
if no_fcaps {
feature_flags ^= Flags::WITH_FCAPS;
feature_flags.remove(Flags::WITH_FCAPS);
}
if no_acls {
feature_flags ^= Flags::WITH_ACL;
feature_flags.remove(Flags::WITH_ACL);
}
if no_device_nodes {
feature_flags ^= Flags::WITH_DEVICE_NODES;
feature_flags.remove(Flags::WITH_DEVICE_NODES);
}
if no_fifos {
feature_flags ^= Flags::WITH_FIFOS;
feature_flags.remove(Flags::WITH_FIFOS);
}
if no_sockets {
feature_flags ^= Flags::WITH_SOCKETS;
feature_flags.remove(Flags::WITH_SOCKETS);
}
let writer = pxar::encoder::sync::StandardWriter::new(writer);


@ -752,10 +752,7 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
flags: 0,
uid: stat.st_uid,
gid: stat.st_gid,
mtime: pxar::format::StatxTimestamp {
secs: stat.st_mtime,
nanos: stat.st_mtime_nsec as u32,
},
mtime: pxar::format::StatxTimestamp::new(stat.st_mtime, stat.st_mtime_nsec as u32),
},
..Default::default()
};
@ -768,7 +765,7 @@ fn get_metadata(fd: RawFd, stat: &FileStat, flags: Flags, fs_magic: i64) -> Resu
}
fn get_fcaps(meta: &mut Metadata, fd: RawFd, flags: Flags) -> Result<(), Error> {
if flags.contains(Flags::WITH_FCAPS) {
if !flags.contains(Flags::WITH_FCAPS) {
return Ok(());
}
@ -790,7 +787,7 @@ fn get_xattr_fcaps_acl(
proc_path: &Path,
flags: Flags,
) -> Result<(), Error> {
if flags.contains(Flags::WITH_XATTRS) {
if !flags.contains(Flags::WITH_XATTRS) {
return Ok(());
}
@ -879,7 +876,7 @@ fn get_quota_project_id(
return Ok(());
}
if flags.contains(Flags::WITH_QUOTA_PROJID) {
if !flags.contains(Flags::WITH_QUOTA_PROJID) {
return Ok(());
}
@ -914,7 +911,7 @@ fn get_quota_project_id(
}
fn get_acl(metadata: &mut Metadata, proc_path: &Path, flags: Flags) -> Result<(), Error> {
if flags.contains(Flags::WITH_ACL) {
if !flags.contains(Flags::WITH_ACL) {
return Ok(());
}


@ -14,9 +14,10 @@ use crate::tape::{
TapeWrite,
file_formats::{
PROXMOX_TAPE_BLOCK_SIZE,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1,
PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0,
MediaContentHeader,
ChunkArchiveHeader,
ChunkArchiveEntryHeader,
},
};
@ -36,13 +37,20 @@ pub struct ChunkArchiveWriter<'a> {
impl <'a> ChunkArchiveWriter<'a> {
pub const MAGIC: [u8; 8] = PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0;
pub const MAGIC: [u8; 8] = PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1;
/// Creates a new instance
pub fn new(mut writer: Box<dyn TapeWrite + 'a>, close_on_leom: bool) -> Result<(Self,Uuid), Error> {
pub fn new(
mut writer: Box<dyn TapeWrite + 'a>,
store: &str,
close_on_leom: bool,
) -> Result<(Self,Uuid), Error> {
let header = MediaContentHeader::new(Self::MAGIC, 0);
writer.write_header(&header, &[])?;
let archive_header = ChunkArchiveHeader { store: store.to_string() };
let header_data = serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec();
let header = MediaContentHeader::new(Self::MAGIC, header_data.len() as u32);
writer.write_header(&header, &header_data)?;
let me = Self {
writer: Some(writer),


@ -44,12 +44,22 @@ pub const PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0: [u8; 8] = [42, 5, 191, 60, 176,
pub const PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0: [u8; 8] = [8, 96, 99, 249, 47, 151, 83, 216];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive v1.0")[0..8]
// only used in unreleased version - no longer supported
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0: [u8; 8] = [62, 173, 167, 95, 49, 76, 6, 110];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive v1.1")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1: [u8; 8] = [109, 49, 99, 109, 215, 2, 131, 191];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive Entry v1.0")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0: [u8; 8] = [72, 87, 109, 242, 222, 66, 143, 220];
// openssl::sha::sha256(b"Proxmox Backup Snapshot Archive v1.0")[0..8];
// only used in unreleased version - no longer supported
pub const PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0: [u8; 8] = [9, 182, 2, 31, 125, 232, 114, 133];
// openssl::sha::sha256(b"Proxmox Backup Snapshot Archive v1.1")[0..8];
pub const PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1: [u8; 8] = [218, 22, 21, 208, 17, 226, 154, 98];
// openssl::sha::sha256(b"Proxmox Backup Catalog Archive v1.0")[0..8];
pub const PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0: [u8; 8] = [183, 207, 199, 37, 158, 153, 30, 115];
lazy_static::lazy_static!{
// Map content magic numbers to human readable names.
@ -58,7 +68,10 @@ lazy_static::lazy_static!{
map.insert(&PROXMOX_BACKUP_MEDIA_LABEL_MAGIC_1_0, "Proxmox Backup Tape Label v1.0");
map.insert(&PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, "Proxmox Backup MediaSet Label v1.0");
map.insert(&PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0, "Proxmox Backup Chunk Archive v1.0");
map.insert(&PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_1, "Proxmox Backup Chunk Archive v1.1");
map.insert(&PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, "Proxmox Backup Snapshot Archive v1.0");
map.insert(&PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1, "Proxmox Backup Snapshot Archive v1.1");
map.insert(&PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0, "Proxmox Backup Catalog Archive v1.0");
map
};
}
@ -172,6 +185,13 @@ impl MediaContentHeader {
}
}
#[derive(Deserialize, Serialize)]
/// Header for chunk archives
pub struct ChunkArchiveHeader {
// Datastore name
pub store: String,
}
#[derive(Endian)]
#[repr(C,packed)]
/// Header for data blobs inside a chunk archive
@ -184,6 +204,26 @@ pub struct ChunkArchiveEntryHeader {
pub size: u64,
}
#[derive(Deserialize, Serialize)]
/// Header for snapshot archives
pub struct SnapshotArchiveHeader {
/// Snapshot name
pub snapshot: String,
/// Datastore name
pub store: String,
}
#[derive(Deserialize, Serialize)]
/// Header for Catalog archives
pub struct CatalogArchiveHeader {
/// The uuid of the media the catalog is for
pub uuid: Uuid,
/// The media set uuid the catalog is for
pub media_set_uuid: Uuid,
/// Media sequence number
pub seq_nr: u64,
}
#[derive(Serialize,Deserialize,Clone,Debug)]
/// Media Label
///


@ -12,11 +12,13 @@ use crate::tape::{
SnapshotReader,
file_formats::{
PROXMOX_TAPE_BLOCK_SIZE,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0,
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1,
MediaContentHeader,
SnapshotArchiveHeader,
},
};
/// Write a set of files as `pxar` archive to the tape
///
/// This ignores file attributes like ACLs and xattrs.
@ -31,12 +33,15 @@ pub fn tape_write_snapshot_archive<'a>(
) -> Result<Option<Uuid>, std::io::Error> {
let snapshot = snapshot_reader.snapshot().to_string();
let store = snapshot_reader.datastore_name().to_string();
let file_list = snapshot_reader.file_list();
let header_data = snapshot.as_bytes().to_vec();
let archive_header = SnapshotArchiveHeader { snapshot, store };
let header_data = serde_json::to_string_pretty(&archive_header)?.as_bytes().to_vec();
let header = MediaContentHeader::new(
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0, header_data.len() as u32);
PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_1, header_data.len() as u32);
let content_uuid = header.uuid.into();
let root_metadata = pxar::Metadata::dir_builder(0o0664).build();


@ -26,6 +26,7 @@ use crate::{
/// This makes it easy to iterate over all used chunks and files.
pub struct SnapshotReader {
snapshot: BackupDir,
datastore_name: String,
file_list: Vec<String>,
locked_dir: Dir,
}
@ -42,11 +43,13 @@ impl SnapshotReader {
"snapshot",
"locked by another operation")?;
let datastore_name = datastore.name().to_string();
let manifest = match datastore.load_manifest(&snapshot) {
Ok((manifest, _)) => manifest,
Err(err) => {
bail!("manifest load error on datastore '{}' snapshot '{}' - {}",
datastore.name(), snapshot, err);
datastore_name, snapshot, err);
}
};
@ -60,7 +63,7 @@ impl SnapshotReader {
file_list.push(CLIENT_LOG_BLOB_NAME.to_string());
}
Ok(Self { snapshot, file_list, locked_dir })
Ok(Self { snapshot, datastore_name, file_list, locked_dir })
}
/// Return the snapshot directory
@ -68,6 +71,11 @@ impl SnapshotReader {
&self.snapshot
}
/// Return the datastore name
pub fn datastore_name(&self) -> &str {
&self.datastore_name
}
/// Returns the list of files the snapshot refers to.
pub fn file_list(&self) -> &Vec<String> {
&self.file_list
@ -96,7 +104,6 @@ impl SnapshotReader {
/// Note: The iterator returns a `Result`, and the iterator state is
undefined after the first error. So it makes no sense to continue
/// iteration after the first error.
#[derive(Clone)]
pub struct SnapshotChunkIterator<'a> {
snapshot_reader: &'a SnapshotReader,
todo_list: Vec<String>,


@ -26,9 +26,24 @@ use crate::{
backup::BackupDir,
tape::{
MediaId,
file_formats::MediaSetLabel,
},
};
pub struct DatastoreContent {
pub snapshot_index: HashMap<String, u64>, // snapshot => file_nr
pub chunk_index: HashMap<[u8;32], u64>, // chunk => file_nr
}
impl DatastoreContent {
pub fn new() -> Self {
Self {
chunk_index: HashMap::new(),
snapshot_index: HashMap::new(),
}
}
}
/// The Media Catalog
///
@ -44,13 +59,11 @@ pub struct MediaCatalog {
log_to_stdout: bool,
current_archive: Option<(Uuid, u64)>,
current_archive: Option<(Uuid, u64, String)>, // (uuid, file_nr, store)
last_entry: Option<(Uuid, u64)>,
chunk_index: HashMap<[u8;32], u64>,
snapshot_index: HashMap<String, u64>,
content: HashMap<String, DatastoreContent>,
pending: Vec<u8>,
}
@ -59,8 +72,12 @@ impl MediaCatalog {
/// Magic number for media catalog files.
// openssl::sha::sha256(b"Proxmox Backup Media Catalog v1.0")[0..8]
// Note: this version did not store datastore names (not supported anymore)
pub const PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0: [u8; 8] = [221, 29, 164, 1, 59, 69, 19, 40];
// openssl::sha::sha256(b"Proxmox Backup Media Catalog v1.1")[0..8]
pub const PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1: [u8; 8] = [76, 142, 232, 193, 32, 168, 137, 113];
/// List media with catalogs
pub fn media_with_catalogs(base_path: &Path) -> Result<HashSet<Uuid>, Error> {
let mut catalogs = HashSet::new();
@ -120,11 +137,13 @@ impl MediaCatalog {
/// Open a catalog database, load into memory
pub fn open(
base_path: &Path,
uuid: &Uuid,
media_id: &MediaId,
write: bool,
create: bool,
) -> Result<Self, Error> {
let uuid = &media_id.label.uuid;
let mut path = base_path.to_owned();
path.push(uuid.to_string());
path.set_extension("log");
@ -149,15 +168,14 @@ impl MediaCatalog {
log_to_stdout: false,
current_archive: None,
last_entry: None,
chunk_index: HashMap::new(),
snapshot_index: HashMap::new(),
content: HashMap::new(),
pending: Vec::new(),
};
let found_magic_number = me.load_catalog(&mut file)?;
let found_magic_number = me.load_catalog(&mut file, media_id.media_set_label.as_ref())?;
if !found_magic_number {
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0);
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1);
}
if write {
@ -207,19 +225,18 @@ impl MediaCatalog {
log_to_stdout: false,
current_archive: None,
last_entry: None,
chunk_index: HashMap::new(),
snapshot_index: HashMap::new(),
content: HashMap::new(),
pending: Vec::new(),
};
me.log_to_stdout = log_to_stdout;
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0);
me.pending.extend(&Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1);
me.register_label(&media_id.label.uuid, 0)?;
me.register_label(&media_id.label.uuid, 0, 0)?;
if let Some(ref set) = media_id.media_set_label {
me.register_label(&set.uuid, 1)?;
me.register_label(&set.uuid, set.seq_nr, 1)?;
}
me.commit()?;
@ -265,8 +282,8 @@ impl MediaCatalog {
}
/// Accessor to content list
pub fn snapshot_index(&self) -> &HashMap<String, u64> {
&self.snapshot_index
pub fn content(&self) -> &HashMap<String, DatastoreContent> {
&self.content
}
/// Commit pending changes
@ -319,31 +336,47 @@ impl MediaCatalog {
}
/// Test if the catalog already contain a snapshot
pub fn contains_snapshot(&self, snapshot: &str) -> bool {
self.snapshot_index.contains_key(snapshot)
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
match self.content.get(store) {
None => false,
Some(content) => content.snapshot_index.contains_key(snapshot),
}
}
/// Returns the chunk archive file number
pub fn lookup_snapshot(&self, snapshot: &str) -> Option<u64> {
self.snapshot_index.get(snapshot).copied()
/// Returns the snapshot archive file number
pub fn lookup_snapshot(&self, store: &str, snapshot: &str) -> Option<u64> {
match self.content.get(store) {
None => None,
Some(content) => content.snapshot_index.get(snapshot).copied(),
}
}
/// Test if the catalog already contain a chunk
pub fn contains_chunk(&self, digest: &[u8;32]) -> bool {
self.chunk_index.contains_key(digest)
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
match self.content.get(store) {
None => false,
Some(content) => content.chunk_index.contains_key(digest),
}
}
/// Returns the chunk archive file number
pub fn lookup_chunk(&self, digest: &[u8;32]) -> Option<u64> {
self.chunk_index.get(digest).copied()
pub fn lookup_chunk(&self, store: &str, digest: &[u8;32]) -> Option<u64> {
match self.content.get(store) {
None => None,
Some(content) => content.chunk_index.get(digest).copied(),
}
}
fn check_register_label(&self, file_number: u64) -> Result<(), Error> {
fn check_register_label(&self, file_number: u64, uuid: &Uuid) -> Result<(), Error> {
if file_number >= 2 {
bail!("register label failed: got wrong file number ({} >= 2)", file_number);
}
if file_number == 0 && uuid != &self.uuid {
bail!("register label failed: uuid does not match");
}
if self.current_archive.is_some() {
bail!("register label failed: inside chunk archive");
}
@ -363,15 +396,21 @@ impl MediaCatalog {
/// Register media labels (file 0 and 1)
pub fn register_label(
&mut self,
uuid: &Uuid, // Uuid form MediaContentHeader
uuid: &Uuid, // Media/MediaSet Uuid
seq_nr: u64, // only used for media set labels
file_number: u64,
) -> Result<(), Error> {
self.check_register_label(file_number)?;
self.check_register_label(file_number, uuid)?;
if file_number == 0 && seq_nr != 0 {
bail!("register_label failed - seq_nr should be 0 - internal error");
}
let entry = LabelEntry {
file_number,
uuid: *uuid.as_bytes(),
seq_nr,
};
if self.log_to_stdout {
@ -395,9 +434,9 @@ impl MediaCatalog {
digest: &[u8;32],
) -> Result<(), Error> {
let file_number = match self.current_archive {
let (file_number, store) = match self.current_archive {
None => bail!("register_chunk failed: no archive started"),
Some((_, file_number)) => file_number,
Some((_, file_number, ref store)) => (file_number, store),
};
if self.log_to_stdout {
@ -407,7 +446,12 @@ impl MediaCatalog {
self.pending.push(b'C');
self.pending.extend(digest);
self.chunk_index.insert(*digest, file_number);
match self.content.get_mut(store) {
None => bail!("storage {} not registered - internal error", store),
Some(content) => {
content.chunk_index.insert(*digest, file_number);
}
}
Ok(())
}
@ -440,24 +484,29 @@ impl MediaCatalog {
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
) -> Result<(), Error> {
store: &str,
) -> Result<(), Error> {
self.check_start_chunk_archive(file_number)?;
let entry = ChunkArchiveStart {
file_number,
uuid: *uuid.as_bytes(),
store_name_len: u8::try_from(store.len())?,
};
if self.log_to_stdout {
println!("A|{}|{}", file_number, uuid.to_string());
println!("A|{}|{}|{}", file_number, uuid.to_string(), store);
}
self.pending.push(b'A');
unsafe { self.pending.write_le_value(entry)?; }
self.pending.extend(store.as_bytes());
self.current_archive = Some((uuid, file_number));
self.content.entry(store.to_string()).or_insert(DatastoreContent::new());
self.current_archive = Some((uuid, file_number, store.to_string()));
Ok(())
}
@ -466,7 +515,7 @@ impl MediaCatalog {
match self.current_archive {
None => bail!("end_chunk archive failed: not started"),
Some((ref expected_uuid, expected_file_number)) => {
Some((ref expected_uuid, expected_file_number, ..)) => {
if uuid != expected_uuid {
bail!("end_chunk_archive failed: got unexpected uuid");
}
@ -476,7 +525,6 @@ impl MediaCatalog {
}
}
}
Ok(())
}
@ -485,7 +533,7 @@ impl MediaCatalog {
match self.current_archive.take() {
None => bail!("end_chunk_archive failed: not started"),
Some((uuid, file_number)) => {
Some((uuid, file_number, ..)) => {
let entry = ChunkArchiveEnd {
file_number,
@ -539,6 +587,7 @@ impl MediaCatalog {
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
snapshot: &str,
) -> Result<(), Error> {
@ -547,26 +596,36 @@ impl MediaCatalog {
let entry = SnapshotEntry {
file_number,
uuid: *uuid.as_bytes(),
store_name_len: u8::try_from(store.len())?,
name_len: u16::try_from(snapshot.len())?,
};
if self.log_to_stdout {
println!("S|{}|{}|{}", file_number, uuid.to_string(), snapshot);
println!("S|{}|{}|{}:{}", file_number, uuid.to_string(), store, snapshot);
}
self.pending.push(b'S');
unsafe { self.pending.write_le_value(entry)?; }
self.pending.extend(store.as_bytes());
self.pending.push(b':');
self.pending.extend(snapshot.as_bytes());
self.snapshot_index.insert(snapshot.to_string(), file_number);
let content = self.content.entry(store.to_string())
.or_insert(DatastoreContent::new());
content.snapshot_index.insert(snapshot.to_string(), file_number);
self.last_entry = Some((uuid, file_number));
Ok(())
}
fn load_catalog(&mut self, file: &mut File) -> Result<bool, Error> {
fn load_catalog(
&mut self,
file: &mut File,
media_set_label: Option<&MediaSetLabel>,
) -> Result<bool, Error> {
let mut file = BufReader::new(file);
let mut found_magic_number = false;
@ -581,7 +640,11 @@ impl MediaCatalog {
Ok(true) => { /* OK */ }
Err(err) => bail!("read failed - {}", err),
}
if magic != Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0 {
if magic == Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_0 {
// only used in unreleased versions
bail!("old catalog format (v1.0) is no longer supported");
}
if magic != Self::PROXMOX_BACKUP_MEDIA_CATALOG_MAGIC_1_1 {
bail!("wrong magic number");
}
found_magic_number = true;
@ -597,23 +660,35 @@ impl MediaCatalog {
match entry_type[0] {
b'C' => {
let file_number = match self.current_archive {
let (file_number, store) = match self.current_archive {
None => bail!("register_chunk failed: no archive started"),
Some((_, file_number)) => file_number,
Some((_, file_number, ref store)) => (file_number, store),
};
let mut digest = [0u8; 32];
file.read_exact(&mut digest)?;
self.chunk_index.insert(digest, file_number);
match self.content.get_mut(store) {
None => bail!("storage {} not registered - internal error", store),
Some(content) => {
content.chunk_index.insert(digest, file_number);
}
}
}
b'A' => {
let entry: ChunkArchiveStart = unsafe { file.read_le_value()? };
let file_number = entry.file_number;
let uuid = Uuid::from(entry.uuid);
let store_name_len = entry.store_name_len as usize;
let store = file.read_exact_allocated(store_name_len)?;
let store = std::str::from_utf8(&store)?;
self.check_start_chunk_archive(file_number)?;
self.current_archive = Some((uuid, file_number));
}
self.content.entry(store.to_string())
.or_insert(DatastoreContent::new());
self.current_archive = Some((uuid, file_number, store.to_string()));
}
b'E' => {
let entry: ChunkArchiveEnd = unsafe { file.read_le_value()? };
let file_number = entry.file_number;
@ -627,15 +702,26 @@ impl MediaCatalog {
b'S' => {
let entry: SnapshotEntry = unsafe { file.read_le_value()? };
let file_number = entry.file_number;
let name_len = entry.name_len;
let store_name_len = entry.store_name_len as usize;
let name_len = entry.name_len as usize;
let uuid = Uuid::from(entry.uuid);
let snapshot = file.read_exact_allocated(name_len.into())?;
let store = file.read_exact_allocated(store_name_len + 1)?;
if store[store_name_len] != b':' {
bail!("parse-error: missing separator in SnapshotEntry");
}
let store = std::str::from_utf8(&store[..store_name_len])?;
let snapshot = file.read_exact_allocated(name_len)?;
let snapshot = std::str::from_utf8(&snapshot)?;
self.check_register_snapshot(file_number, snapshot)?;
self.snapshot_index.insert(snapshot.to_string(), file_number);
let content = self.content.entry(store.to_string())
.or_insert(DatastoreContent::new());
content.snapshot_index.insert(snapshot.to_string(), file_number);
self.last_entry = Some((uuid, file_number));
}
@ -644,7 +730,18 @@ impl MediaCatalog {
let file_number = entry.file_number;
let uuid = Uuid::from(entry.uuid);
self.check_register_label(file_number)?;
self.check_register_label(file_number, &uuid)?;
if file_number == 1 {
if let Some(set) = media_set_label {
if set.uuid != uuid {
bail!("got unexpected media set uuid");
}
if set.seq_nr != entry.seq_nr {
bail!("got unexpected media set sequence number");
}
}
}
self.last_entry = Some((uuid, file_number));
}
@ -693,9 +790,9 @@ impl MediaSetCatalog {
}
/// Test if the catalog already contain a snapshot
pub fn contains_snapshot(&self, snapshot: &str) -> bool {
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
for catalog in self.catalog_list.values() {
if catalog.contains_snapshot(snapshot) {
if catalog.contains_snapshot(store, snapshot) {
return true;
}
}
@ -703,9 +800,9 @@ impl MediaSetCatalog {
}
/// Test if the catalog already contain a chunk
pub fn contains_chunk(&self, digest: &[u8;32]) -> bool {
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
for catalog in self.catalog_list.values() {
if catalog.contains_chunk(digest) {
if catalog.contains_chunk(store, digest) {
return true;
}
}
@ -720,6 +817,7 @@ impl MediaSetCatalog {
struct LabelEntry {
file_number: u64,
uuid: [u8;16],
seq_nr: u64, // only used for media set labels
}
#[derive(Endian)]
@ -727,6 +825,8 @@ struct LabelEntry {
struct ChunkArchiveStart {
file_number: u64,
uuid: [u8;16],
store_name_len: u8,
/* datastore name follows */
}
#[derive(Endian)]
@ -741,6 +841,7 @@ struct ChunkArchiveEnd{
struct SnapshotEntry{
file_number: u64,
uuid: [u8;16],
store_name_len: u8,
name_len: u16,
/* snapshot name follows */
/* datastore name, ':', snapshot name follows */
}


@ -1,8 +1,9 @@
use std::collections::HashSet;
use std::path::Path;
use std::time::SystemTime;
use std::sync::{Arc, Mutex};
use anyhow::{bail, Error};
use anyhow::{bail, format_err, Error};
use proxmox::tools::Uuid;
@ -10,6 +11,7 @@ use crate::{
task_log,
backup::{
DataStore,
DataBlob,
},
server::WorkerTask,
tape::{
@ -18,7 +20,6 @@ use crate::{
COMMIT_BLOCK_SIZE,
TapeWrite,
SnapshotReader,
SnapshotChunkIterator,
MediaPool,
MediaId,
MediaCatalog,
@ -38,32 +39,196 @@ use crate::{
config::tape_encryption_keys::load_key_configs,
};
/// Helper to build and query sets of catalogs
pub struct CatalogBuilder {
// read only part
media_set_catalog: MediaSetCatalog,
// catalog to modify (latest in set)
catalog: Option<MediaCatalog>,
}
impl CatalogBuilder {
/// Test if the catalog already contains a snapshot
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_snapshot(store, snapshot) {
return true;
}
}
self.media_set_catalog.contains_snapshot(store, snapshot)
}
/// Test if the catalog already contains a chunk
pub fn contains_chunk(&self, store: &str, digest: &[u8;32]) -> bool {
if let Some(ref catalog) = self.catalog {
if catalog.contains_chunk(store, digest) {
return true;
}
}
self.media_set_catalog.contains_chunk(store, digest)
}
/// Add a new catalog, move the old on to the read-only set
pub fn append_catalog(&mut self, new_catalog: MediaCatalog) -> Result<(), Error> {
// append current catalog to read-only set
if let Some(catalog) = self.catalog.take() {
self.media_set_catalog.append_catalog(catalog)?;
}
// remove read-only version from set (in case it is there)
self.media_set_catalog.remove_catalog(&new_catalog.uuid());
self.catalog = Some(new_catalog);
Ok(())
}
/// Register a snapshot
pub fn register_snapshot(
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
snapshot: &str,
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.register_snapshot(uuid, file_number, store, snapshot)?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Register a chunk archive
pub fn register_chunk_archive(
&mut self,
uuid: Uuid, // Uuid from MediaContentHeader
file_number: u64,
store: &str,
chunk_list: &[[u8; 32]],
) -> Result<(), Error> {
match self.catalog {
Some(ref mut catalog) => {
catalog.start_chunk_archive(uuid, file_number, store)?;
for digest in chunk_list {
catalog.register_chunk(digest)?;
}
catalog.end_chunk_archive()?;
}
None => bail!("no catalog loaded - internal error"),
}
Ok(())
}
/// Commit the catalog changes
pub fn commit(&mut self) -> Result<(), Error> {
if let Some(ref mut catalog) = self.catalog {
catalog.commit()?;
}
Ok(())
}
}
/// Chunk iterator which use a separate thread to read chunks
///
/// The iterator skips duplicate chunks and chunks already in the
/// catalog.
pub struct NewChunksIterator {
rx: std::sync::mpsc::Receiver<Result<Option<([u8; 32], DataBlob)>, Error>>,
}
impl NewChunksIterator {
/// Creates the iterator, spawning a new thread
///
/// Make sure to join() the returned thread handle.
pub fn spawn(
datastore: Arc<DataStore>,
snapshot_reader: Arc<Mutex<SnapshotReader>>,
catalog_builder: Arc<Mutex<CatalogBuilder>>,
) -> Result<(std::thread::JoinHandle<()>, Self), Error> {
let (tx, rx) = std::sync::mpsc::sync_channel(3);
let reader_thread = std::thread::spawn(move || {
let snapshot_reader = snapshot_reader.lock().unwrap();
let mut chunk_index: HashSet<[u8;32]> = HashSet::new();
let datastore_name = snapshot_reader.datastore_name();
let result: Result<(), Error> = proxmox::try_block!({
let mut chunk_iter = snapshot_reader.chunk_iterator()?;
loop {
let digest = match chunk_iter.next() {
None => {
tx.send(Ok(None)).unwrap();
break;
}
Some(digest) => digest?,
};
if chunk_index.contains(&digest) {
continue;
}
if catalog_builder.lock().unwrap().contains_chunk(&datastore_name, &digest) {
continue;
};
let blob = datastore.load_chunk(&digest)?;
//println!("LOAD CHUNK {}", proxmox::tools::digest_to_hex(&digest));
tx.send(Ok(Some((digest, blob)))).unwrap();
chunk_index.insert(digest);
}
Ok(())
});
if let Err(err) = result {
tx.send(Err(err)).unwrap();
}
});
Ok((reader_thread, Self { rx }))
}
}
// We do not use Receiver::into_iter(). The manual implementation
// returns a simpler type.
impl Iterator for NewChunksIterator {
type Item = Result<([u8; 32], DataBlob), Error>;
fn next(&mut self) -> Option<Self::Item> {
match self.rx.recv() {
Ok(Ok(None)) => None,
Ok(Ok(Some((digest, blob)))) => Some(Ok((digest, blob))),
Ok(Err(err)) => Some(Err(err)),
Err(_) => Some(Err(format_err!("reader thread failed"))),
}
}
}
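
Because every item is a `Result` and a closed channel maps to a "reader thread failed" error, a consumer can propagate reader-thread failures with plain `?`. A hedged sketch, assuming `datastore`, `snapshot_reader` and `catalog_builder` are already set up as above:

// drain the iterator; errors from the reader thread surface as Err items
let (handle, chunk_iter) = NewChunksIterator::spawn(datastore, snapshot_reader, catalog_builder)?;
for item in chunk_iter {
    let (_digest, _blob) = item?;
    // ... write the chunk into the current chunk archive ...
}
handle.join().map_err(|_| format_err!("chunk reader thread panicked"))?;
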
struct PoolWriterState {
drive: Box<dyn TapeDriver>,
catalog: MediaCatalog,
// tell if we already moved to EOM
at_eom: bool,
// bytes written after the last tape flush/sync
bytes_written: usize,
}
impl PoolWriterState {
fn commit(&mut self) -> Result<(), Error> {
self.drive.sync()?; // sync all data to the tape
self.catalog.commit()?; // then commit the catalog
self.bytes_written = 0;
Ok(())
}
}
/// Helper to manage a backup job, writing several tapes of a pool
pub struct PoolWriter {
pool: MediaPool,
drive_name: String,
status: Option<PoolWriterState>,
media_set_catalog: MediaSetCatalog,
catalog_builder: Arc<Mutex<CatalogBuilder>>,
notify_email: Option<String>,
}
@@ -88,20 +253,23 @@ impl PoolWriter {
// load all catalogs read-only at start
for media_uuid in pool.current_media_list()? {
let media_info = pool.lookup_media(media_uuid).unwrap();
let media_catalog = MediaCatalog::open(
Path::new(TAPE_STATUS_DIR),
&media_uuid,
media_info.id(),
false,
false,
)?;
media_set_catalog.append_catalog(media_catalog)?;
}
let catalog_builder = CatalogBuilder { media_set_catalog, catalog: None };
Ok(Self {
pool,
drive_name: drive_name.to_string(),
status: None,
media_set_catalog,
catalog_builder: Arc::new(Mutex::new(catalog_builder)),
notify_email,
})
}
@@ -116,13 +284,8 @@ impl PoolWriter {
Ok(())
}
pub fn contains_snapshot(&self, snapshot: &str) -> bool {
if let Some(PoolWriterState { ref catalog, .. }) = self.status {
if catalog.contains_snapshot(snapshot) {
return true;
}
}
self.media_set_catalog.contains_snapshot(snapshot)
pub fn contains_snapshot(&self, store: &str, snapshot: &str) -> bool {
self.catalog_builder.lock().unwrap().contains_snapshot(store, snapshot)
}
/// Eject media and drop PoolWriterState (close drive)
@@ -188,16 +351,17 @@ impl PoolWriter {
/// This is done automatically during a backup session, but needs to
/// be called explicitly before dropping the PoolWriter
pub fn commit(&mut self) -> Result<(), Error> {
if let Some(ref mut status) = self.status {
status.commit()?;
if let Some(PoolWriterState {ref mut drive, .. }) = self.status {
drive.sync()?; // sync all data to the tape
}
self.catalog_builder.lock().unwrap().commit()?; // then commit the catalog
Ok(())
}
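
The ordering in commit() is deliberate: the tape is synced before the catalog is committed, so an interruption between the two steps can at worst lose catalog entries for data that is already on tape; the catalog can never claim data the tape does not hold. A schematic restatement of that invariant, with hypothetical parameter plumbing:

// payload first, index second - never the other way around
fn commit_in_order(
    drive: &mut dyn TapeDriver,
    catalog_builder: &Mutex<CatalogBuilder>,
) -> Result<(), Error> {
    drive.sync()?;                             // 1. make the data durable on tape
    catalog_builder.lock().unwrap().commit()?; // 2. then persist the catalog
    Ok(())
}
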
/// Load a writable media into the drive
pub fn load_writable_media(&mut self, worker: &WorkerTask) -> Result<Uuid, Error> {
let last_media_uuid = match self.status {
Some(PoolWriterState { ref catalog, .. }) => Some(catalog.uuid().clone()),
let last_media_uuid = match self.catalog_builder.lock().unwrap().catalog {
Some(ref catalog) => Some(catalog.uuid().clone()),
None => None,
};
@@ -217,13 +381,11 @@ impl PoolWriter {
task_log!(worker, "allocated new writable media '{}'", media.label_text());
// remove read-only catalog (we store a writable version in status)
self.media_set_catalog.remove_catalog(&media_uuid);
if let Some(PoolWriterState {mut drive, catalog, .. }) = self.status.take() {
self.media_set_catalog.append_catalog(catalog)?;
task_log!(worker, "eject current media");
drive.eject_media()?;
if let Some(PoolWriterState {mut drive, .. }) = self.status.take() {
if last_media_uuid.is_some() {
task_log!(worker, "eject current media");
drive.eject_media()?;
}
}
let (drive_config, _digest) = crate::config::drive::config()?;
@@ -249,6 +411,8 @@ impl PoolWriter {
media.id(),
)?;
self.catalog_builder.lock().unwrap().append_catalog(catalog)?;
let media_set = media.media_set_label().clone().unwrap();
let encrypt_fingerprint = media_set
@@ -258,19 +422,11 @@ impl PoolWriter {
drive.set_encryption(encrypt_fingerprint)?;
self.status = Some(PoolWriterState { drive, catalog, at_eom: false, bytes_written: 0 });
self.status = Some(PoolWriterState { drive, at_eom: false, bytes_written: 0 });
Ok(media_uuid)
}
/// uuid of currently loaded BackupMedia
pub fn current_media_uuid(&self) -> Result<&Uuid, Error> {
match self.status {
Some(PoolWriterState { ref catalog, ..}) => Ok(catalog.uuid()),
None => bail!("PoolWriter - no media loaded"),
}
}
/// Move to EOM (if not already there), then create a new snapshot
/// archive writing the specified files (as .pxar) into it. On
/// success, this returns 'Ok(true)' and the media catalog gets
@@ -308,9 +464,10 @@ impl PoolWriter {
match tape_write_snapshot_archive(writer.as_mut(), snapshot_reader)? {
Some(content_uuid) => {
status.catalog.register_snapshot(
self.catalog_builder.lock().unwrap().register_snapshot(
content_uuid,
current_file_number,
&snapshot_reader.datastore_name().to_string(),
&snapshot_reader.snapshot().to_string(),
)?;
(true, writer.bytes_written())
@@ -324,7 +481,7 @@ impl PoolWriter {
let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE;
if !done || request_sync {
status.commit()?;
self.commit()?;
}
Ok((done, bytes_written))
@@ -337,8 +494,8 @@ impl PoolWriter {
pub fn append_chunk_archive(
&mut self,
worker: &WorkerTask,
datastore: &DataStore,
chunk_iter: &mut std::iter::Peekable<SnapshotChunkIterator>,
chunk_iter: &mut std::iter::Peekable<NewChunksIterator>,
store: &str,
) -> Result<(bool, usize), Error> {
let status = match self.status {
@@ -363,10 +520,8 @@ impl PoolWriter {
let (saved_chunks, content_uuid, leom, bytes_written) = write_chunk_archive(
worker,
writer,
datastore,
chunk_iter,
&self.media_set_catalog,
&status.catalog,
store,
MAX_CHUNK_ARCHIVE_SIZE,
)?;
@@ -374,43 +529,48 @@ impl PoolWriter {
let elapsed = start_time.elapsed()?.as_secs_f64();
worker.log(format!(
"wrote {} chunks ({:.2} MiB at {:.2} MiB/s)",
"wrote {} chunks ({:.2} MB at {:.2} MB/s)",
saved_chunks.len(),
bytes_written as f64 / (1024.0*1024.0),
(bytes_written as f64)/(1024.0*1024.0*elapsed),
bytes_written as f64 /1_000_000.0,
(bytes_written as f64)/(1_000_000.0*elapsed),
));
let request_sync = status.bytes_written >= COMMIT_BLOCK_SIZE;
// register chunks in media_catalog
status.catalog.start_chunk_archive(content_uuid, current_file_number)?;
for digest in saved_chunks {
status.catalog.register_chunk(&digest)?;
}
status.catalog.end_chunk_archive()?;
self.catalog_builder.lock().unwrap()
.register_chunk_archive(content_uuid, current_file_number, store, &saved_chunks)?;
if leom || request_sync {
status.commit()?;
self.commit()?;
}
Ok((leom, bytes_written))
}
pub fn spawn_chunk_reader_thread(
&self,
datastore: Arc<DataStore>,
snapshot_reader: Arc<Mutex<SnapshotReader>>,
) -> Result<(std::thread::JoinHandle<()>, NewChunksIterator), Error> {
NewChunksIterator::spawn(
datastore,
snapshot_reader,
Arc::clone(&self.catalog_builder),
)
}
}
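
A hedged sketch of how these pieces compose in a backup worker, possibly spanning several tapes; `pool_writer`, `worker`, `datastore`, `snapshot_reader` and the "store1" name are assumptions, and error handling is abbreviated:

let (reader_thread, chunk_iter) =
    pool_writer.spawn_chunk_reader_thread(datastore, snapshot_reader)?;
let mut chunk_iter = chunk_iter.peekable();
loop {
    pool_writer.load_writable_media(&worker)?;
    // writes until MAX_CHUNK_ARCHIVE_SIZE, logical end-of-medium, or no chunks left
    let (_leom, _bytes) =
        pool_writer.append_chunk_archive(&worker, &mut chunk_iter, "store1")?;
    if chunk_iter.peek().is_none() {
        break; // every new chunk has been written
    }
    // a chunk that did not fit was *not* consumed (the peek/next split below),
    // so the next iteration writes it to the next archive or tape
}
if reader_thread.join().is_err() {
    bail!("chunk reader thread panicked");
}
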
/// Write up to <max_size> bytes of chunks
fn write_chunk_archive<'a>(
_worker: &WorkerTask,
writer: Box<dyn 'a + TapeWrite>,
datastore: &DataStore,
chunk_iter: &mut std::iter::Peekable<SnapshotChunkIterator>,
media_set_catalog: &MediaSetCatalog,
media_catalog: &MediaCatalog,
chunk_iter: &mut std::iter::Peekable<NewChunksIterator>,
store: &str,
max_size: usize,
) -> Result<(Vec<[u8;32]>, Uuid, bool, usize), Error> {
let (mut writer, content_uuid) = ChunkArchiveWriter::new(writer, true)?;
let mut chunk_index: HashSet<[u8;32]> = HashSet::new();
let (mut writer, content_uuid) = ChunkArchiveWriter::new(writer, store, true)?;
// we want to get the chunk list in the correct order
let mut chunk_list: Vec<[u8;32]> = Vec::new();
@@ -418,26 +578,21 @@ fn write_chunk_archive<'a>(
let mut leom = false;
loop {
let digest = match chunk_iter.next() {
let (digest, blob) = match chunk_iter.peek() {
None => break,
Some(digest) => digest?,
Some(Ok((digest, blob))) => (digest, blob),
Some(Err(err)) => bail!("{}", err),
};
if media_catalog.contains_chunk(&digest)
|| chunk_index.contains(&digest)
|| media_set_catalog.contains_chunk(&digest)
{
continue;
}
let blob = datastore.load_chunk(&digest)?;
//println!("CHUNK {} size {}", proxmox::tools::digest_to_hex(&digest), blob.raw_size());
//println!("CHUNK {} size {}", proxmox::tools::digest_to_hex(digest), blob.raw_size());
match writer.try_write_chunk(&digest, &blob) {
Ok(true) => {
chunk_index.insert(digest);
chunk_list.push(digest);
Ok(true) => {
chunk_list.push(*digest);
chunk_iter.next(); // consume
}
Ok(false) => {
// Note: we do not consume the chunk (no chunk_iter.next()),
// so it stays in the iterator and is written to the next medium
leom = true;
break;
}
@@ -501,7 +656,7 @@ fn update_media_set_label(
if new_set.encryption_key_fingerprint != media_set_label.encryption_key_fingerprint {
bail!("detected changed encryption fingerprint - internal error");
}
media_catalog = MediaCatalog::open(status_path, &media_id.label.uuid, true, false)?;
media_catalog = MediaCatalog::open(status_path, &media_id, true, false)?;
} else {
worker.log(
format!("wrinting new media set label (overwrite '{}/{}')",
@@ -515,7 +670,6 @@ fn update_media_set_label(
}
// todo: verify last content/media_catalog somehow?
drive.move_to_eom()?; // just to be sure
Ok(media_catalog)
}

View File

@@ -318,8 +318,11 @@ pub fn update_apt_auth(key: Option<String>, password: Option<String>) -> Result<
replace_file(auth_conf, conf.as_bytes(), file_opts)
.map_err(|e| format_err!("Error saving apt auth config - {}", e))?;
}
_ => nix::unistd::unlink(auth_conf)
.map_err(|e| format_err!("Error clearing apt auth config - {}", e))?,
_ => match nix::unistd::unlink(auth_conf) {
Ok(()) => Ok(()),
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => Ok(()), // ignore a missing file
Err(err) => Err(err),
}.map_err(|e| format_err!("Error clearing apt auth config - {}", e))?,
}
Ok(())
}
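
The same "ignore ENOENT" idiom extracted into a standalone helper for clarity; a sketch against the nix API used here (errno values wrapped in `nix::Error::Sys`), with a hypothetical function name:

// unlink a file, treating "already gone" as success
fn unlink_if_exists(path: &str) -> nix::Result<()> {
    match nix::unistd::unlink(path) {
        Ok(()) => Ok(()),
        Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => Ok(()),
        Err(err) => Err(err),
    }
}
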

View File

@@ -80,6 +80,7 @@ struct Zip64FieldWithOffset {
uncompressed_size: u64,
compressed_size: u64,
offset: u64,
start_disk: u32,
}
#[derive(Endian)]
@@ -300,10 +301,26 @@ impl ZipEntry {
let filename_len = filename.len();
let header_size = size_of::<CentralDirectoryFileHeader>();
let zip_field_size = size_of::<Zip64FieldWithOffset>();
let size: usize = header_size + filename_len + zip_field_size;
let mut size: usize = header_size + filename_len;
let (date, time) = epoch_to_dos(self.mtime);
let (compressed_size, uncompressed_size, offset, need_zip64) = if self.compressed_size
>= (u32::MAX as u64)
|| self.uncompressed_size >= (u32::MAX as u64)
|| self.offset >= (u32::MAX as u64)
{
size += zip_field_size;
(0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, true)
} else {
(
self.compressed_size as u32,
self.uncompressed_size as u32,
self.offset as u32,
false,
)
};
write_struct(
&mut buf,
CentralDirectoryFileHeader {
@@ -315,32 +332,35 @@ impl ZipEntry {
time,
date,
crc32: self.crc32,
compressed_size: 0xFFFFFFFF,
uncompressed_size: 0xFFFFFFFF,
compressed_size,
uncompressed_size,
filename_len: filename_len as u16,
extra_field_len: zip_field_size as u16,
extra_field_len: if need_zip64 { zip_field_size as u16 } else { 0 },
comment_len: 0,
start_disk: 0,
internal_flags: 0,
external_flags: (self.mode as u32) << 16 | (!self.is_file as u32) << 4,
offset: 0xFFFFFFFF,
offset,
},
)
.await?;
buf.write_all(filename).await?;
write_struct(
&mut buf,
Zip64FieldWithOffset {
field_type: 1,
field_size: 3 * 8,
uncompressed_size: self.uncompressed_size,
compressed_size: self.compressed_size,
offset: self.offset,
},
)
.await?;
if need_zip64 {
write_struct(
&mut buf,
Zip64FieldWithOffset {
field_type: 1,
field_size: 3 * 8 + 4,
uncompressed_size: self.uncompressed_size,
compressed_size: self.compressed_size,
offset: self.offset,
start_disk: 0,
},
)
.await?;
}
Ok(size)
}
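
This implements the standard ZIP64 rule: if any of the three 32-bit central-directory fields (compressed size, uncompressed size, local-header offset) would overflow, all three are written as the 0xFFFFFFFF sentinel and the real 64-bit values move into the Zip64 extended-information extra field, whose payload here is 3×8 bytes plus a 4-byte start-disk number (hence field_size 28). A condensed sketch of just the decision:

// true if the entry needs a Zip64 extra field in the central directory
fn needs_zip64(compressed_size: u64, uncompressed_size: u64, offset: u64) -> bool {
    compressed_size >= u32::MAX as u64
        || uncompressed_size >= u32::MAX as u64
        || offset >= u32::MAX as u64
}
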

View File

@@ -47,10 +47,18 @@ const proxmoxOnlineHelpInfo = {
"link": "/docs/package-repositories.html#sysadmin-package-repositories",
"title": "Debian Package Repositories"
},
"sysadmin-package-repos-enterprise": {
"link": "/docs/package-repositories.html#sysadmin-package-repos-enterprise",
"title": "`Proxmox Backup`_ Enterprise Repository"
},
"get-help": {
"link": "/docs/introduction.html#get-help",
"title": "Getting Help"
},
"get-help-enterprise-support": {
"link": "/docs/introduction.html#get-help-enterprise-support",
"title": "Enterprise Support"
},
"chapter-zfs": {
"link": "/docs/sysadmin.html#chapter-zfs",
"title": "ZFS on Linux"

View File

@@ -273,3 +273,7 @@ span.snapshot-comment-column {
height: 20px;
background-image:url(../images/icon-tape-drive.svg);
}
.info-pointer div.right-aligned {
cursor: pointer;
}

View File

@@ -127,9 +127,16 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
},
});
list.result.data.sort((a, b) => a.snapshot.localeCompare(b.snapshot));
list.result.data.sort(function(a, b) {
let storeRes = a.store.localeCompare(b.store);
if (storeRes === 0) {
return a.snapshot.localeCompare(b.snapshot);
} else {
return storeRes;
}
});
let tapes = {};
let stores = {};
for (let entry of list.result.data) {
entry.text = entry.snapshot;
@@ -140,9 +147,19 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
entry.iconCls = `fa ${iconCls}`;
}
let store = entry.store;
let tape = entry['label-text'];
if (tapes[tape] === undefined) {
tapes[tape] = {
if (stores[store] === undefined) {
stores[store] = {
text: store,
'media-set-uuid': entry['media-set-uuid'],
iconCls: 'fa fa-database',
tapes: {},
};
}
if (stores[store].tapes[tape] === undefined) {
stores[store].tapes[tape] = {
text: tape,
'media-set-uuid': entry['media-set-uuid'],
'seq-nr': entry['seq-nr'],
@@ -153,7 +170,7 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
}
let [type, group, _id] = PBS.Utils.parse_snapshot_id(entry.snapshot);
let children = tapes[tape].children;
let children = stores[store].tapes[tape].children;
let text = `${type}/${group}`;
if (children.length < 1 || children[children.length - 1].text !== text) {
children.push({
@@ -167,8 +184,13 @@ Ext.define('PBS.TapeManagement.BackupOverview', {
children[children.length - 1].children.push(entry);
}
for (const tape of Object.values(tapes)) {
node.appendChild(tape);
let storeList = Object.values(stores);
let expand = storeList.length === 1;
for (const store of storeList) {
store.children = Object.values(store.tapes);
store.expanded = expand;
delete store.tapes;
node.appendChild(store);
}
if (list.result.data.length === 0) {

View File

@@ -11,6 +11,29 @@ Ext.define('pbs-slot-model', {
idProperty: 'entry-id',
});
Ext.define('PBS.TapeManagement.FreeSlotSelector', {
extend: 'Proxmox.form.ComboGrid',
alias: 'widget.pbsFreeSlotSelector',
valueField: 'id',
displayField: 'id',
listConfig: {
columns: [
{
dataIndex: 'id',
text: gettext('ID'),
flex: 1,
},
{
dataIndex: 'type',
text: gettext('Type'),
flex: 1,
},
],
},
});
Ext.define('PBS.TapeManagement.ChangerStatus', {
extend: 'Ext.panel.Panel',
alias: 'widget.pbsChangerStatus',
@@ -40,9 +63,12 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
fieldLabel: gettext('From Slot'),
},
{
xtype: 'proxmoxintegerfield',
xtype: 'pbsFreeSlotSelector',
name: 'to',
fieldLabel: gettext('To Slot'),
store: {
data: me.free_slots,
},
},
],
listeners: {
@@ -73,9 +99,12 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
fieldLabel: gettext('From Slot'),
},
{
xtype: 'proxmoxintegerfield',
xtype: 'pbsFreeSlotSelector',
name: 'to',
fieldLabel: gettext('To Slot'),
store: {
data: me.free_slots.concat(me.free_ie_slots),
},
},
],
listeners: {
@@ -340,6 +369,14 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
me.reload_full(false);
},
free_slots: [],
updateFreeSlots: function(free_slots, free_ie_slots) {
let me = this;
me.free_slots = free_slots;
me.free_ie_slots = free_ie_slots;
},
reload_full: async function(use_cache) {
let me = this;
let view = me.getView();
@@ -399,6 +436,9 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
drive_entries[entry['changer-drivenum'] || 0] = entry;
}
let free_slots = [];
let free_ie_slots = [];
for (let entry of status.result.data) {
let type = entry['entry-kind'];
@@ -414,6 +454,19 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
entry['is-labeled'] = false;
}
if (!entry['label-text'] && type !== 'drive') {
if (type === 'slot') {
free_slots.push({
id: entry['entry-id'],
type,
});
} else {
free_ie_slots.push({
id: entry['entry-id'],
type,
});
}
}
data[type].push(entry);
}
@@ -433,6 +486,8 @@ Ext.define('PBS.TapeManagement.ChangerStatus', {
// manually fire selectionchange to update button status
me.lookup('drives').getSelectionModel().fireEvent('selectionchange', me);
me.updateFreeSlots(free_slots, free_ie_slots);
if (!use_cache) {
Proxmox.Utils.setErrorMask(view);
}

View File

@@ -84,6 +84,24 @@ Ext.define('PBS.TapeManagement.DriveStatus', {
}).show();
},
erase: function() {
let me = this;
let view = me.getView();
let driveid = view.drive;
PBS.Utils.driveCommand(driveid, 'erase-media', {
waitMsgTarget: view,
method: 'POST',
success: function(response) {
Ext.create('Proxmox.window.TaskProgress', {
upid: response.result.data,
taskDone: function() {
me.reload();
},
}).show();
},
});
},
ejectMedia: function() {
let me = this;
let view = me.getView();
@@ -193,6 +211,18 @@ Ext.define('PBS.TapeManagement.DriveStatus', {
disabled: '{!online}',
},
},
{
text: gettext('Erase'),
xtype: 'proxmoxButton',
handler: 'erase',
iconCls: 'fa fa-trash-o',
dangerous: true,
confirmMsg: gettext('Are you sure you want to erase the inserted tape?'),
disabled: true,
bind: {
disabled: '{!online}',
},
},
{
text: gettext('Catalog'),
xtype: 'proxmoxButton',
@@ -400,6 +430,7 @@ Ext.define('PBS.TapeManagement.DriveInfoPanel', {
},
{
xtype: 'pmxInfoWidget',
reference: 'statewidget',
title: gettext('State'),
bind: {
data: {
@@ -409,6 +440,23 @@ Ext.define('PBS.TapeManagement.DriveInfoPanel', {
},
],
clickState: function(e, t, eOpts) {
let me = this;
let vm = me.getViewModel();
let drive = vm.get('drive');
if (t.classList.contains('right-aligned')) {
let upid = drive.state;
if (!upid || !upid.startsWith("UPID")) {
return;
}
Ext.create('Proxmox.window.TaskViewer', {
autoShow: true,
upid,
});
}
},
updateData: function(store) {
let me = this;
if (!store) {
@@ -422,6 +470,37 @@ Ext.define('PBS.TapeManagement.DriveInfoPanel', {
let vm = me.getViewModel();
vm.set('drive', record.data);
vm.notify();
me.updatePointer();
},
updatePointer: function() {
let me = this;
let stateWidget = me.down('pmxInfoWidget[reference=statewidget]');
let stateEl = stateWidget.getEl();
if (!stateEl) {
setTimeout(function() {
me.updatePointer();
}, 100);
return;
}
let vm = me.getViewModel();
let drive = vm.get('drive');
if (drive.state) {
stateEl.addCls('info-pointer');
} else {
stateEl.removeCls('info-pointer');
}
},
listeners: {
afterrender: function() {
let me = this;
let stateWidget = me.down('pmxInfoWidget[reference=statewidget]');
let stateEl = stateWidget.getEl();
stateEl.on('click', me.clickState, me);
},
},
initComponent: function() {
@@ -430,12 +509,12 @@ Ext.define('PBS.TapeManagement.DriveInfoPanel', {
throw "no drive given";
}
me.callParent();
let tapeStore = Ext.ComponentQuery.query('navigationtree')[0].tapestore;
me.mon(tapeStore, 'load', me.updateData, me);
if (tapeStore.isLoaded()) {
me.updateData(tapeStore);
}
me.callParent();
},
});

View File

@@ -51,5 +51,15 @@ Ext.define('PBS.TapeManagement.TapeRestoreWindow', {
skipEmptyText: true,
renderer: Ext.String.htmlEncode,
},
{
xtype: 'pbsUserSelector',
name: 'owner',
fieldLabel: gettext('Owner'),
emptyText: gettext('Current User'),
value: null,
allowBlank: true,
skipEmptyText: true,
renderer: Ext.String.htmlEncode,
},
],
});