Compare commits
98 Commits
SHA1
----
2d87f2fb73
4c81273274
73b8f6793e
663ef85992
e92c75815b
6dbad5b4b5
bff7e3f3e4
83abc7497d
8bc5eebeb8
1433b96ba0
be1a8c94ae
4606f34353
7bb720cb4d
c4d8542ec1
9700d5374a
05e90d6463
55118ca18e
f70d8091d3
a3c709ef21
4917f1e2d4
93829fc680
5605ca5619
e49f0c03d9
0098b712a5
5fb694e8c0
583a68a446
e6604cf391
43cfb3c35a
8a16c571d2
314652a499
6b68e5d597
cafd51bf42
eaff09f483
9b93c62044
5d90860688
5ba83ed099
50bf10ad56
16d444c979
fa9c9be737
2e7014e31d
a84050c1f0
7c9835465e
ec00200411
956e5fec1f
b107fdb99a
7320e9ff4b
c4d2d54a6d
1142350e8d
d735b31345
e211fee562
8c15560b68
327e93711f
a076571470
ff50c07ebf
179145dc24
6bd0a00c46
f6e28f4e62
37f1b7dd8d
60e6ee46de
2260f065d4
6eff8dec4f
7e25b9aaaa
f867ef9c4a
fc8920e35d
7f3b0f67e7
844660036b
efcac39d34
cb4b721cb0
7956877f14
2241c6795f
43e60ceb41
b760d8a23f
2c1592263d
616533823c
913dddea85
3530430365
a4ba60be8f
99e98f605c
935ee97b17
6b9bfd7fe9
dd519bbad1
35fe981c7d
b6570abe79
54813c650e
781106f8c5
96f35520a0
490560e0c6
52f53d8280
27b8a3f671
abf9b6da42
0c9209b04c
edebd52374
61205f00fb
a303e00289
af9f72e9d8
5176346b30
731eeef25b
a65e3e4bc0
Cargo.toml
@@ -1,6 +1,6 @@
 [package]
 name = "proxmox-backup"
-version = "1.0.3"
+version = "1.0.6"
 authors = [
     "Dietmar Maurer <dietmar@proxmox.com>",
     "Dominik Csapak <d.csapak@proxmox.com>",
@@ -48,7 +48,7 @@ percent-encoding = "2.1"
 pin-utils = "0.1.0"
 pin-project = "0.4"
 pathpatterns = "0.1.2"
-proxmox = { version = "0.7.1", features = [ "sortable-macro", "api-macro", "websocket" ] }
+proxmox = { version = "0.8.1", features = [ "sortable-macro", "api-macro", "websocket" ] }
 #proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
 #proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
 proxmox-fuse = "0.1.0"
80
README.rst
@@ -53,3 +53,83 @@ Setup:
 Note: 2. may be skipped if you already added the PVE or PBS package repository
 
 You are now able to build using the Makefile or cargo itself.
+
+
+Design Notes
+============
+
+Here are some random thoughts about the software design (unless I find a better place).
+
+
+Large chunk sizes
+-----------------
+
+It is important to note that large chunk sizes are crucial for
+performance. We have a multi-user system, where different people can do
+different operations on a datastore at the same time, and most operations
+involve reading a series of chunks.
+
+So what is the maximal theoretical speed we can get when reading a
+series of chunks? Reading a chunk sequence needs the following steps:
+
+- seek to the first chunk's start location
+- read the chunk data
+- seek to the next chunk's start location
+- read the chunk data
+- ...
+
+Let's use the following disk performance metrics:
+
+:AST: Average Seek Time (seconds)
+:MRS: Maximum sequential Read Speed (bytes/second)
+:ACS: Average Chunk Size (bytes)
+
+The maximum read performance you can get is::
+
+  MAX(ACS) = ACS / (AST + ACS/MRS)
+
+Please note that chunk data is likely to be arranged sequentially on
+disk, but this is still a best-case assumption.
+
+For a typical rotational disk, we assume the following values::
+
+  AST: 10ms
+  MRS: 170MB/s
+
+  MAX(4MB)  = 115.37 MB/s
+  MAX(1MB)  =  61.85 MB/s
+  MAX(64KB) =   6.02 MB/s
+  MAX(4KB)  =   0.39 MB/s
+  MAX(1KB)  =   0.10 MB/s
+
+Modern SSDs are much faster; let's assume the following::
+
+  max IOPS: 20000 => AST = 0.00005
+  MRS: 500MB/s
+
+  MAX(4MB)  = 474 MB/s
+  MAX(1MB)  = 465 MB/s
+  MAX(64KB) = 354 MB/s
+  MAX(4KB)  =  67 MB/s
+  MAX(1KB)  =  18 MB/s
+
+Also, the average chunk size directly determines the number of chunks
+produced by a backup::
+
+  CHUNK_COUNT = BACKUP_SIZE / ACS
+
+Here are some statistics from my developer workstation::
+
+  Disk Usage:    65 GB
+  Directories:   58971
+  Files:         726314
+  Files < 64KB:  617541
+
+As you see, there are really many small files. If we did file-level
+deduplication, i.e. generated one chunk per file, we would end up with
+more than 700000 chunks.
+
+Instead, our current algorithm only produces large chunks, with an
+average chunk size of 4MB. With the above data, this produces about
+15000 chunks (a factor of 50 fewer).
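The throughput formula above is easy to sanity-check. Below is a small, self-contained Rust program (not part of the diff; the function and constant names are mine) that reproduces the rotational-disk table, assuming binary chunk sizes (MiB = 2^20 bytes), a decimal MRS (170 MB/s = 170e6 bytes/s), and results reported in MiB/s:

```rust
// Sanity check for MAX(ACS) = ACS / (AST + ACS/MRS).
// Assumptions (mine, chosen to match the README's numbers): chunk sizes
// are binary (MiB = 2^20 bytes), MRS is decimal (170 MB/s = 170e6 B/s),
// and results are reported in MiB/s.
const MIB: f64 = 1_048_576.0;

/// Maximum sustained read speed in bytes/second for a given average
/// chunk size `acs` (bytes), average seek time `ast` (seconds) and
/// maximum sequential read speed `mrs` (bytes/second).
fn max_read(acs: f64, ast: f64, mrs: f64) -> f64 {
    acs / (ast + acs / mrs)
}

fn main() {
    // Rotational disk: AST = 10ms, MRS = 170 MB/s.
    for &(label, acs) in &[("4MB", 4.0 * MIB), ("1MB", MIB), ("64KB", 65536.0)] {
        println!("MAX({}) = {:.2} MiB/s", label, max_read(acs, 0.01, 170e6) / MIB);
    }
}
```

With these unit conventions the output matches the README's 115.37 / 61.85 / 6.02 MB/s figures to within rounding.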
35
debian/changelog
vendored
@@ -1,3 +1,38 @@
+rust-proxmox-backup (1.0.6-1) unstable; urgency=medium
+
+  * stricter handling of file descriptors, fixes some cases where some could
+    leak
+
+  * ui: fix various usages of the findRecord method, ensuring it matches
+    exactly
+
+  * garbage collection: improve task log format
+
+  * verification: improve progress log, make it similar to what's logged on
+    pull (sync)
+
+  * datastore: move manifest locking to /run. This avoids issues with
+    filesystems which cannot natively handle removing in-use files ("delete
+    on last close") and instead create a virtual, internal replacement file
+    to work around that, as done, for example, by NFS or CIFS (samba).
+
+ -- Proxmox Support Team <support@proxmox.com>  Fri, 11 Dec 2020 12:51:33 +0100
+
+rust-proxmox-backup (1.0.5-1) unstable; urgency=medium
+
+  * client: restore: print meta information exclusively to standard error
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 25 Nov 2020 15:29:58 +0100
+
+rust-proxmox-backup (1.0.4-1) unstable; urgency=medium
+
+  * fingerprint: add bytes() accessor
+
+  * ui: fix broken gettext use
+
+  * cli: move more commands into "snapshot" sub-command
+
+ -- Proxmox Support Team <support@proxmox.com>  Wed, 25 Nov 2020 06:37:41 +0100
+
 rust-proxmox-backup (1.0.3-1) unstable; urgency=medium
 
   * client: inform user when automatically using the default encryption key
10
debian/control
vendored
@@ -35,10 +35,10 @@ Build-Depends: debhelper (>= 11),
  librust-percent-encoding-2+default-dev (>= 2.1-~~),
  librust-pin-project-0.4+default-dev,
  librust-pin-utils-0.1+default-dev,
- librust-proxmox-0.7+api-macro-dev (>= 0.7.1-~~),
- librust-proxmox-0.7+default-dev (>= 0.7.1-~~),
- librust-proxmox-0.7+sortable-macro-dev (>= 0.7.1-~~),
- librust-proxmox-0.7+websocket-dev (>= 0.7.1-~~),
+ librust-proxmox-0.8+api-macro-dev (>= 0.8.1-~~),
+ librust-proxmox-0.8+default-dev (>= 0.8.1-~~),
+ librust-proxmox-0.8+sortable-macro-dev (>= 0.8.1-~~),
+ librust-proxmox-0.8+websocket-dev (>= 0.8.1-~~),
  librust-proxmox-fuse-0.1+default-dev,
  librust-pxar-0.6+default-dev (>= 0.6.1-~~),
  librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),
@@ -113,6 +113,8 @@ Depends: fonts-font-awesome,
  proxmox-widget-toolkit (>= 2.3-6),
  pve-xtermjs (>= 4.7.0-1),
  smartmontools,
+ mtx,
+ mt-st,
  ${misc:Depends},
  ${shlibs:Depends},
 Recommends: zfsutils-linux,
2
debian/control.in
vendored
@@ -12,6 +12,8 @@ Depends: fonts-font-awesome,
  proxmox-widget-toolkit (>= 2.3-6),
  pve-xtermjs (>= 4.7.0-1),
  smartmontools,
+ mtx,
+ mt-st,
  ${misc:Depends},
  ${shlibs:Depends},
 Recommends: zfsutils-linux,
3
debian/proxmox-backup-server.install
vendored
@@ -11,8 +11,7 @@ usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-daily-update
 usr/sbin/proxmox-backup-manager
 usr/share/javascript/proxmox-backup/index.hbs
 usr/share/javascript/proxmox-backup/css/ext6-pbs.css
-usr/share/javascript/proxmox-backup/images/logo-128.png
-usr/share/javascript/proxmox-backup/images/proxmox_logo.png
+usr/share/javascript/proxmox-backup/images
 usr/share/javascript/proxmox-backup/js/proxmox-backup-gui.js
 usr/share/man/man1/proxmox-backup-manager.1
 usr/share/man/man1/proxmox-backup-proxy.1
@@ -392,11 +392,11 @@ periodic recovery tests to ensure that you can access the data in
 case of problems.
 
 First, you need to find the snapshot which you want to restore. The snapshot
-command provides a list of all the snapshots on the server:
+list command provides a list of all the snapshots on the server:
 
 .. code-block:: console
 
-    # proxmox-backup-client snapshots
+    # proxmox-backup-client snapshot list
     ┌────────────────────────────────┬─────────────┬────────────────────────────────────┐
     │ snapshot                       │ size        │ files                              │
     ╞════════════════════════════════╪═════════════╪════════════════════════════════════╡
@@ -581,7 +581,7 @@ command:
 
 .. code-block:: console
 
-    # proxmox-backup-client forget <snapshot>
+    # proxmox-backup-client snapshot forget <snapshot>
 
 
 .. caution:: This command removes all archives in this backup
@@ -9,6 +9,7 @@ pub mod types;
 pub mod version;
 pub mod ping;
 pub mod pull;
+pub mod tape;
 mod helpers;
 
 use proxmox::api::router::SubdirMap;
@@ -27,6 +28,7 @@ pub const SUBDIRS: SubdirMap = &[
     ("pull", &pull::ROUTER),
     ("reader", &reader::ROUTER),
     ("status", &status::ROUTER),
+    ("tape", &tape::ROUTER),
     ("version", &version::ROUTER),
 ];
@@ -181,6 +181,7 @@ fn create_ticket(
 }
 
 #[api(
+    protected: true,
     input: {
         properties: {
             userid: {
@@ -195,7 +196,6 @@ fn create_ticket(
     description: "Anybody is allowed to change their own password. In addition, users with 'Permissions:Modify' privilege may change any password.",
     permission: &Permission::Anybody,
     },
-
 )]
 /// Change user password
 ///
@@ -215,7 +215,7 @@ fn change_password(
 
     let mut allowed = userid == current_user;
 
-    if userid == "root@pam" { allowed = true; }
+    if current_user == "root@pam" { allowed = true; }
 
     if !allowed {
         let user_info = CachedUserInfo::new()?;
@@ -249,10 +249,7 @@ pub fn create_user(password: Option<String>, param: Value) -> Result<(), Error>
             },
         },
     },
-    returns: {
-        description: "The user configuration (with config digest).",
-        type: user::User,
-    },
+    returns: { type: user::User },
     access: {
         permission: &Permission::Or(&[
             &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
@@ -468,10 +465,7 @@ pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error>
             },
         },
     },
-    returns: {
-        description: "Get API token metadata (with config digest).",
-        type: user::ApiToken,
-    },
+    returns: { type: user::ApiToken },
     access: {
         permission: &Permission::Or(&[
             &Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
|
@ -687,12 +687,12 @@ pub fn verify(
|
||||
}
|
||||
res
|
||||
} else if let Some(backup_group) = backup_group {
|
||||
let (_count, failed_dirs) = verify_backup_group(
|
||||
let failed_dirs = verify_backup_group(
|
||||
datastore,
|
||||
&backup_group,
|
||||
verified_chunks,
|
||||
corrupt_chunks,
|
||||
None,
|
||||
&mut StoreProgress::new(1),
|
||||
worker.clone(),
|
||||
worker.upid(),
|
||||
None,
|
||||
@ -975,10 +975,7 @@ pub fn garbage_collection_status(
|
||||
returns: {
|
||||
description: "List the accessible datastores.",
|
||||
type: Array,
|
||||
items: {
|
||||
description: "Datastore name and description.",
|
||||
type: DataStoreListItem,
|
||||
},
|
||||
items: { type: DataStoreListItem },
|
||||
},
|
||||
access: {
|
||||
permission: &Permission::Anybody,
|
||||
|
@@ -5,9 +5,15 @@ pub mod datastore;
 pub mod remote;
 pub mod sync;
 pub mod verify;
+pub mod drive;
+pub mod changer;
+pub mod media_pool;
 
 const SUBDIRS: SubdirMap = &[
+    ("changer", &changer::ROUTER),
     ("datastore", &datastore::ROUTER),
+    ("drive", &drive::ROUTER),
+    ("media-pool", &media_pool::ROUTER),
     ("remote", &remote::ROUTER),
     ("sync", &sync::ROUTER),
     ("verify", &verify::ROUTER)
243
src/api2/config/changer.rs
Normal file
@@ -0,0 +1,243 @@
use anyhow::{bail, Error};
use serde_json::Value;

use proxmox::api::{api, Router, RpcEnvironment};

use crate::{
    config,
    api2::types::{
        PROXMOX_CONFIG_DIGEST_SCHEMA,
        CHANGER_ID_SCHEMA,
        LINUX_DRIVE_PATH_SCHEMA,
        DriveListEntry,
        ScsiTapeChanger,
        LinuxTapeDrive,
    },
    tape::{
        linux_tape_changer_list,
        check_drive_path,
        lookup_drive,
    },
};

#[api(
    protected: true,
    input: {
        properties: {
            name: {
                schema: CHANGER_ID_SCHEMA,
            },
            path: {
                schema: LINUX_DRIVE_PATH_SCHEMA,
            },
        },
    },
)]
/// Create a new changer device
pub fn create_changer(
    name: String,
    path: String,
) -> Result<(), Error> {

    let _lock = config::drive::lock()?;

    let (mut config, _digest) = config::drive::config()?;

    let linux_changers = linux_tape_changer_list();

    check_drive_path(&linux_changers, &path)?;

    if config.sections.get(&name).is_some() {
        bail!("Entry '{}' already exists", name);
    }

    let item = ScsiTapeChanger {
        name: name.clone(),
        path,
    };

    config.set_data(&name, "changer", &item)?;

    config::drive::save_config(&config)?;

    Ok(())
}

#[api(
    input: {
        properties: {
            name: {
                schema: CHANGER_ID_SCHEMA,
            },
        },
    },
    returns: {
        type: ScsiTapeChanger,
    },
)]
/// Get tape changer configuration
pub fn get_config(
    name: String,
    _param: Value,
    mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<ScsiTapeChanger, Error> {

    let (config, digest) = config::drive::config()?;

    let data: ScsiTapeChanger = config.lookup("changer", &name)?;

    rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();

    Ok(data)
}

#[api(
    input: {
        properties: {},
    },
    returns: {
        description: "The list of configured changers (with config digest).",
        type: Array,
        items: {
            type: DriveListEntry,
        },
    },
)]
/// List changers
pub fn list_changers(
    _param: Value,
    mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<DriveListEntry>, Error> {

    let (config, digest) = config::drive::config()?;

    let linux_changers = linux_tape_changer_list();

    let changer_list: Vec<ScsiTapeChanger> = config.convert_to_typed_array("changer")?;

    let mut list = Vec::new();

    for changer in changer_list {
        let mut entry = DriveListEntry {
            name: changer.name,
            path: changer.path.clone(),
            changer: None,
            vendor: None,
            model: None,
            serial: None,
        };
        if let Some(info) = lookup_drive(&linux_changers, &changer.path) {
            entry.vendor = Some(info.vendor.clone());
            entry.model = Some(info.model.clone());
            entry.serial = Some(info.serial.clone());
        }

        list.push(entry);
    }

    rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
    Ok(list)
}

#[api(
    protected: true,
    input: {
        properties: {
            name: {
                schema: CHANGER_ID_SCHEMA,
            },
            path: {
                schema: LINUX_DRIVE_PATH_SCHEMA,
                optional: true,
            },
            digest: {
                schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
                optional: true,
            },
        },
    },
)]
/// Update a tape changer configuration
pub fn update_changer(
    name: String,
    path: Option<String>,
    digest: Option<String>,
    _param: Value,
) -> Result<(), Error> {

    let _lock = config::drive::lock()?;

    let (mut config, expected_digest) = config::drive::config()?;

    if let Some(ref digest) = digest {
        let digest = proxmox::tools::hex_to_digest(digest)?;
        crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
    }

    let mut data: ScsiTapeChanger = config.lookup("changer", &name)?;

    if let Some(path) = path {
        let changers = linux_tape_changer_list();
        check_drive_path(&changers, &path)?;
        data.path = path;
    }

    config.set_data(&name, "changer", &data)?;

    config::drive::save_config(&config)?;

    Ok(())
}

#[api(
    protected: true,
    input: {
        properties: {
            name: {
                schema: CHANGER_ID_SCHEMA,
            },
        },
    },
)]
/// Delete a tape changer configuration
pub fn delete_changer(name: String, _param: Value) -> Result<(), Error> {

    let _lock = config::drive::lock()?;

    let (mut config, _digest) = config::drive::config()?;

    match config.sections.get(&name) {
        Some((section_type, _)) => {
            if section_type != "changer" {
                bail!("Entry '{}' exists, but is not a changer device", name);
            }
            config.sections.remove(&name);
        },
        None => bail!("Delete changer '{}' failed - no such entry", name),
    }

    let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;
    for drive in drive_list {
        if let Some(changer) = drive.changer {
            if changer == name {
                bail!("Delete changer '{}' failed - used by drive '{}'", name, drive.name);
            }
        }
    }

    config::drive::save_config(&config)?;

    Ok(())
}

const ITEM_ROUTER: Router = Router::new()
    .get(&API_METHOD_GET_CONFIG)
    .put(&API_METHOD_UPDATE_CHANGER)
    .delete(&API_METHOD_DELETE_CHANGER);


pub const ROUTER: Router = Router::new()
    .get(&API_METHOD_LIST_CHANGERS)
    .post(&API_METHOD_CREATE_CHANGER)
    .match_all("name", &ITEM_ROUTER);
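update_changer() above guards concurrent edits with a config digest: the client reads the config together with its digest and passes the digest back on update, and the server bails out if the file changed in between. A minimal std-only sketch of that optimistic-locking pattern (stand-ins are mine: std's DefaultHasher instead of the SHA-256 digest proxmox uses, and a plain string instead of the parsed config):

```rust
// Sketch of the digest check behind detect_modified_configuration_file():
// reject an update whose digest no longer matches the file on disk.
// DefaultHasher is a stand-in (mine) for the real SHA-256 digest.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn digest(config: &str) -> u64 {
    let mut h = DefaultHasher::new();
    config.hash(&mut h);
    h.finish()
}

/// Fail if the configuration changed since `expected` was taken.
fn detect_modified(current: u64, expected: u64) -> Result<(), String> {
    if current != expected {
        return Err("detected modified configuration - file changed by other user?".into());
    }
    Ok(())
}

fn main() {
    let v1 = "changer: sl3\n\tpath /dev/sg0\n";
    let d1 = digest(v1);
    assert!(detect_modified(digest(v1), d1).is_ok());

    // Another writer modified the file; the stale digest is rejected.
    let v2 = "changer: sl3\n\tpath /dev/sg1\n";
    assert!(detect_modified(digest(v2), d1).is_err());
}
```

The digest is optional in the API schema, so a client may skip the check and overwrite unconditionally.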
@@ -151,10 +151,7 @@ pub fn create_datastore(param: Value) -> Result<(), Error> {
             },
         },
     },
-    returns: {
-        description: "The datastore configuration (with config digest).",
-        type: datastore::DataStoreConfig,
-    },
+    returns: { type: datastore::DataStoreConfig },
     access: {
         permission: &Permission::Privilege(&["datastore", "{name}"], PRIV_DATASTORE_AUDIT, false),
     },
297
src/api2/config/drive.rs
Normal file
@@ -0,0 +1,297 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use serde_json::Value;

use proxmox::api::{api, Router, RpcEnvironment};

use crate::{
    config,
    api2::types::{
        PROXMOX_CONFIG_DIGEST_SCHEMA,
        DRIVE_ID_SCHEMA,
        CHANGER_ID_SCHEMA,
        CHANGER_DRIVE_ID_SCHEMA,
        LINUX_DRIVE_PATH_SCHEMA,
        DriveListEntry,
        LinuxTapeDrive,
        ScsiTapeChanger,
    },
    tape::{
        linux_tape_device_list,
        check_drive_path,
        lookup_drive,
    },
};

#[api(
    protected: true,
    input: {
        properties: {
            name: {
                schema: DRIVE_ID_SCHEMA,
            },
            path: {
                schema: LINUX_DRIVE_PATH_SCHEMA,
            },
            changer: {
                schema: CHANGER_ID_SCHEMA,
                optional: true,
            },
            "changer-drive-id": {
                schema: CHANGER_DRIVE_ID_SCHEMA,
                optional: true,
            },
        },
    },
)]
/// Create a new drive
pub fn create_drive(param: Value) -> Result<(), Error> {

    let _lock = config::drive::lock()?;

    let (mut config, _digest) = config::drive::config()?;

    let item: LinuxTapeDrive = serde_json::from_value(param)?;

    let linux_drives = linux_tape_device_list();

    check_drive_path(&linux_drives, &item.path)?;

    if config.sections.get(&item.name).is_some() {
        bail!("Entry '{}' already exists", item.name);
    }

    config.set_data(&item.name, "linux", &item)?;

    config::drive::save_config(&config)?;

    Ok(())
}

#[api(
    input: {
        properties: {
            name: {
                schema: DRIVE_ID_SCHEMA,
            },
        },
    },
    returns: {
        type: LinuxTapeDrive,
    },
)]
/// Get drive configuration
pub fn get_config(
    name: String,
    _param: Value,
    mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<LinuxTapeDrive, Error> {

    let (config, digest) = config::drive::config()?;

    let data: LinuxTapeDrive = config.lookup("linux", &name)?;

    rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();

    Ok(data)
}

#[api(
    input: {
        properties: {},
    },
    returns: {
        description: "The list of configured drives (with config digest).",
        type: Array,
        items: {
            type: DriveListEntry,
        },
    },
)]
/// List drives
pub fn list_drives(
    _param: Value,
    mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<DriveListEntry>, Error> {

    let (config, digest) = config::drive::config()?;

    let linux_drives = linux_tape_device_list();

    let drive_list: Vec<LinuxTapeDrive> = config.convert_to_typed_array("linux")?;

    let mut list = Vec::new();

    for drive in drive_list {
        let mut entry = DriveListEntry {
            name: drive.name,
            path: drive.path.clone(),
            changer: drive.changer,
            vendor: None,
            model: None,
            serial: None,
        };
        if let Some(info) = lookup_drive(&linux_drives, &drive.path) {
            entry.vendor = Some(info.vendor.clone());
            entry.model = Some(info.model.clone());
            entry.serial = Some(info.serial.clone());
        }

        list.push(entry);
    }

    rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
    Ok(list)
}

#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
#[serde(rename_all = "kebab-case")]
/// Deletable property name
pub enum DeletableProperty {
    /// Delete the changer property.
    changer,
    /// Delete the changer-drive-id property.
    changer_drive_id,
}

#[api(
    protected: true,
    input: {
        properties: {
            name: {
                schema: DRIVE_ID_SCHEMA,
            },
            path: {
                schema: LINUX_DRIVE_PATH_SCHEMA,
                optional: true,
            },
            changer: {
                schema: CHANGER_ID_SCHEMA,
                optional: true,
            },
            "changer-drive-id": {
                schema: CHANGER_DRIVE_ID_SCHEMA,
                optional: true,
            },
            delete: {
                description: "List of properties to delete.",
                type: Array,
                optional: true,
                items: {
                    type: DeletableProperty,
                }
            },
            digest: {
                schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
                optional: true,
            },
        },
    },
)]
/// Update a drive configuration
pub fn update_drive(
    name: String,
    path: Option<String>,
    changer: Option<String>,
    changer_drive_id: Option<u64>,
    delete: Option<Vec<DeletableProperty>>,
    digest: Option<String>,
    _param: Value,
) -> Result<(), Error> {

    let _lock = config::drive::lock()?;

    let (mut config, expected_digest) = config::drive::config()?;

    if let Some(ref digest) = digest {
        let digest = proxmox::tools::hex_to_digest(digest)?;
        crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
    }

    let mut data: LinuxTapeDrive = config.lookup("linux", &name)?;

    if let Some(delete) = delete {
        for delete_prop in delete {
            match delete_prop {
                DeletableProperty::changer => {
                    data.changer = None;
                    data.changer_drive_id = None;
                },
                DeletableProperty::changer_drive_id => { data.changer_drive_id = None; },
            }
        }
    }

    if let Some(path) = path {
        let linux_drives = linux_tape_device_list();
        check_drive_path(&linux_drives, &path)?;
        data.path = path;
    }

    if let Some(changer) = changer {
        let _: ScsiTapeChanger = config.lookup("changer", &changer)?;
        data.changer = Some(changer);
    }

    if let Some(changer_drive_id) = changer_drive_id {
        if changer_drive_id == 0 {
            data.changer_drive_id = None;
        } else {
            if data.changer.is_none() {
                bail!("Option 'changer-drive-id' requires option 'changer'.");
            }
            data.changer_drive_id = Some(changer_drive_id);
        }
    }

    config.set_data(&name, "linux", &data)?;

    config::drive::save_config(&config)?;

    Ok(())
}

#[api(
    protected: true,
    input: {
        properties: {
            name: {
                schema: DRIVE_ID_SCHEMA,
            },
        },
    },
)]
/// Delete a drive configuration
pub fn delete_drive(name: String, _param: Value) -> Result<(), Error> {

    let _lock = config::drive::lock()?;

    let (mut config, _digest) = config::drive::config()?;

    match config.sections.get(&name) {
        Some((section_type, _)) => {
            if section_type != "linux" {
                bail!("Entry '{}' exists, but is not a linux tape drive", name);
            }
            config.sections.remove(&name);
        },
        None => bail!("Delete drive '{}' failed - no such drive", name),
    }

    config::drive::save_config(&config)?;

    Ok(())
}

const ITEM_ROUTER: Router = Router::new()
    .get(&API_METHOD_GET_CONFIG)
    .put(&API_METHOD_UPDATE_DRIVE)
    .delete(&API_METHOD_DELETE_DRIVE);


pub const ROUTER: Router = Router::new()
    .get(&API_METHOD_LIST_DRIVES)
    .post(&API_METHOD_CREATE_DRIVE)
    .match_all("name", &ITEM_ROUTER);
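update_drive() above enforces an inter-option constraint: changer-drive-id is only meaningful when a changer is configured, and a value of 0 clears the assignment. A minimal std-only sketch of that validation logic (the struct and function names are mine, stand-ins for the real LinuxTapeDrive config):

```rust
// Minimal sketch of update_drive's changer-drive-id handling:
// id 0 means "unset"; a non-zero id requires a changer to be set.
// DriveConfig is a stand-in (mine) for the real LinuxTapeDrive struct.
#[derive(Debug, Default)]
struct DriveConfig {
    changer: Option<String>,
    changer_drive_id: Option<u64>,
}

fn set_changer_drive_id(data: &mut DriveConfig, id: u64) -> Result<(), String> {
    if id == 0 {
        data.changer_drive_id = None; // 0 clears the field
    } else {
        if data.changer.is_none() {
            return Err("Option 'changer-drive-id' requires option 'changer'.".into());
        }
        data.changer_drive_id = Some(id);
    }
    Ok(())
}

fn main() {
    let mut data = DriveConfig::default();
    // Without a changer, a non-zero id is rejected.
    assert!(set_changer_drive_id(&mut data, 1).is_err());

    data.changer = Some("sl3".into());
    assert!(set_changer_drive_id(&mut data, 1).is_ok());
    assert_eq!(data.changer_drive_id, Some(1));

    // 0 clears the assignment again.
    assert!(set_changer_drive_id(&mut data, 0).is_ok());
    assert_eq!(data.changer_drive_id, None);
}
```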
252
src/api2/config/media_pool.rs
Normal file
@@ -0,0 +1,252 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};

use proxmox::{
    api::{
        api,
        Router,
        RpcEnvironment,
    },
};

use crate::{
    api2::types::{
        DRIVE_ID_SCHEMA,
        MEDIA_POOL_NAME_SCHEMA,
        MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
        MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
        MEDIA_RETENTION_POLICY_SCHEMA,
        MediaPoolConfig,
    },
    config::{
        self,
        drive::{
            check_drive_exists,
        },
    },
};

#[api(
    input: {
        properties: {
            name: {
                schema: MEDIA_POOL_NAME_SCHEMA,
            },
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
            allocation: {
                schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
                optional: true,
            },
            retention: {
                schema: MEDIA_RETENTION_POLICY_SCHEMA,
                optional: true,
            },
            template: {
                schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
                optional: true,
            },
        },
    },
)]
/// Create a new media pool
pub fn create_pool(
    name: String,
    drive: String,
    allocation: Option<String>,
    retention: Option<String>,
    template: Option<String>,
) -> Result<(), Error> {

    let _lock = config::media_pool::lock()?;

    let (mut config, _digest) = config::media_pool::config()?;

    if config.sections.get(&name).is_some() {
        bail!("Media pool '{}' already exists", name);
    }

    let (drive_config, _) = config::drive::config()?;
    check_drive_exists(&drive_config, &drive)?;

    let item = MediaPoolConfig {
        name: name.clone(),
        drive,
        allocation,
        retention,
        template,
    };

    config.set_data(&name, "pool", &item)?;

    config::media_pool::save_config(&config)?;

    Ok(())
}

#[api(
    returns: {
        description: "The list of configured media pools (with config digest).",
        type: Array,
        items: {
            type: MediaPoolConfig,
        },
    },
)]
/// List media pools
pub fn list_pools(
    mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<MediaPoolConfig>, Error> {

    let (config, digest) = config::media_pool::config()?;

    let list = config.convert_to_typed_array("pool")?;

    rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();

    Ok(list)
}

#[api(
    input: {
        properties: {
            name: {
                schema: MEDIA_POOL_NAME_SCHEMA,
            },
        },
    },
    returns: {
        type: MediaPoolConfig,
    },
)]
/// Get media pool configuration
pub fn get_config(name: String) -> Result<MediaPoolConfig, Error> {

    let (config, _digest) = config::media_pool::config()?;

    let data: MediaPoolConfig = config.lookup("pool", &name)?;

    Ok(data)
}

#[api()]
#[derive(Serialize, Deserialize)]
#[allow(non_camel_case_types)]
/// Deletable property name
pub enum DeletableProperty {
    /// Delete media set allocation policy.
    allocation,
    /// Delete pool retention policy
    retention,
    /// Delete media set naming template
    template,
}

#[api(
    input: {
        properties: {
            name: {
                schema: MEDIA_POOL_NAME_SCHEMA,
            },
            drive: {
                schema: DRIVE_ID_SCHEMA,
                optional: true,
            },
            allocation: {
                schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
                optional: true,
            },
            retention: {
                schema: MEDIA_RETENTION_POLICY_SCHEMA,
                optional: true,
            },
            template: {
                schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
                optional: true,
            },
            delete: {
                description: "List of properties to delete.",
                type: Array,
                optional: true,
                items: {
                    type: DeletableProperty,
                }
            },
        },
    },
)]
/// Update media pool settings
pub fn update_pool(
    name: String,
    drive: Option<String>,
    allocation: Option<String>,
    retention: Option<String>,
    template: Option<String>,
    delete: Option<Vec<DeletableProperty>>,
) -> Result<(), Error> {

    let _lock = config::media_pool::lock()?;

    let (mut config, _digest) = config::media_pool::config()?;

    let mut data: MediaPoolConfig = config.lookup("pool", &name)?;

    if let Some(delete) = delete {
        for delete_prop in delete {
            match delete_prop {
                DeletableProperty::allocation => { data.allocation = None; },
                DeletableProperty::retention => { data.retention = None; },
                DeletableProperty::template => { data.template = None; },
            }
        }
    }

    if let Some(drive) = drive { data.drive = drive; }
    if allocation.is_some() { data.allocation = allocation; }
    if retention.is_some() { data.retention = retention; }
    if template.is_some() { data.template = template; }

    config.set_data(&name, "pool", &data)?;

    config::media_pool::save_config(&config)?;

    Ok(())
}

#[api(
    input: {
        properties: {
            name: {
                schema: MEDIA_POOL_NAME_SCHEMA,
            },
        },
    },
)]
/// Delete a media pool configuration
pub fn delete_pool(name: String) -> Result<(), Error> {

    let _lock = config::media_pool::lock()?;

    let (mut config, _digest) = config::media_pool::config()?;

    match config.sections.get(&name) {
        Some(_) => { config.sections.remove(&name); },
        None => bail!("delete pool '{}' failed - no such pool", name),
|
||||
}
|
||||
|
||||
config::media_pool::save_config(&config)?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
const ITEM_ROUTER: Router = Router::new()
|
||||
.get(&API_METHOD_GET_CONFIG)
|
||||
.put(&API_METHOD_UPDATE_POOL)
|
||||
.delete(&API_METHOD_DELETE_POOL);
|
||||
|
||||
|
||||
pub const ROUTER: Router = Router::new()
|
||||
.get(&API_METHOD_LIST_POOLS)
|
||||
.post(&API_METHOD_CREATE_POOL)
|
||||
.match_all("name", &ITEM_ROUTER);
|
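The `update_pool` endpoint above follows a common pattern: properties named in an optional `delete` list are cleared first, then any newly supplied values are applied, so one call can both reset and set fields. A minimal standalone sketch of that pattern (hypothetical `PoolConfig`/`Deletable` types, not the crate's own):

```rust
// Standalone sketch of the update pattern: deletions first, then new values.
#[derive(Debug, PartialEq)]
struct PoolConfig {
    drive: String,
    retention: Option<String>,
    template: Option<String>,
}

enum Deletable { Retention, Template }

fn update(
    data: &mut PoolConfig,
    retention: Option<String>,
    template: Option<String>,
    delete: Option<Vec<Deletable>>,
) {
    if let Some(delete) = delete {
        for prop in delete {
            match prop {
                Deletable::Retention => data.retention = None,
                Deletable::Template => data.template = None,
            }
        }
    }
    // new values, if any, are applied after the deletions
    if retention.is_some() { data.retention = retention; }
    if template.is_some() { data.template = template; }
}

fn main() {
    let mut cfg = PoolConfig {
        drive: "drive0".to_string(),
        retention: Some("14d".to_string()),
        template: Some("%id%".to_string()),
    };
    // one call clears the template and sets a new retention value
    update(&mut cfg, Some("30d".to_string()), None, Some(vec![Deletable::Template]));
    assert_eq!(cfg.retention.as_deref(), Some("30d"));
    assert_eq!(cfg.template, None);
    assert_eq!(cfg.drive, "drive0");
}
```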
@@ -19,10 +19,7 @@ use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
     returns: {
         description: "The list of configured remotes (with config digest).",
         type: Array,
-        items: {
-            type: remote::Remote,
-            description: "Remote configuration (without password).",
-        },
+        items: { type: remote::Remote },
     },
     access: {
         description: "List configured remotes filtered by Remote.Audit privileges",
@@ -124,10 +121,7 @@ pub fn create_remote(password: String, param: Value) -> Result<(), Error> {
             },
         },
     },
-    returns: {
-        description: "The remote configuration (with config digest).",
-        type: remote::Remote,
-    },
+    returns: { type: remote::Remote },
     access: {
         permission: &Permission::Privilege(&["remote", "{name}"], PRIV_REMOTE_AUDIT, false),
     }
@@ -347,10 +341,7 @@ pub async fn remote_client(remote: remote::Remote) -> Result<HttpClient, Error>
     returns: {
         description: "List the accessible datastores.",
         type: Array,
-        items: {
-            description: "Datastore name and description.",
-            type: DataStoreListItem,
-        },
+        items: { type: DataStoreListItem },
     },
 )]
 /// List datastores of a remote.cfg entry
@@ -182,10 +182,7 @@ pub fn create_sync_job(
             },
         },
     },
-    returns: {
-        description: "The sync job configuration.",
-        type: sync::SyncJobConfig,
-    },
+    returns: { type: sync::SyncJobConfig },
     access: {
         description: "Limited to sync job entries where user has Datastore.Audit on target datastore, and Remote.Audit on source remote.",
         permission: &Permission::Anybody,
@@ -127,10 +127,7 @@ pub fn create_verification_job(
             },
         },
     },
-    returns: {
-        description: "The verification job configuration.",
-        type: verify::VerificationJobConfig,
-    },
+    returns: { type: verify::VerificationJobConfig },
     access: {
         permission: &Permission::Anybody,
         description: "Requires Datastore.Audit or Datastore.Verify on job's datastore.",
@@ -6,7 +6,6 @@ use futures::future::{FutureExt, TryFutureExt};
 use hyper::body::Body;
 use hyper::http::request::Parts;
 use hyper::upgrade::Upgraded;
-use nix::fcntl::{fcntl, FcntlArg, FdFlag};
 use serde_json::{json, Value};
 use tokio::io::{AsyncBufReadExt, BufReader};

@@ -145,18 +144,10 @@ async fn termproxy(
         move |worker| async move {
             // move inside the worker so that it survives and does not close the port
-            // remove CLOEXEC from listenere so that we can reuse it in termproxy
-            let fd = listener.as_raw_fd();
-            let mut flags = match fcntl(fd, FcntlArg::F_GETFD) {
-                Ok(bits) => FdFlag::from_bits_truncate(bits),
-                Err(err) => bail!("could not get fd: {}", err),
-            };
-            flags.remove(FdFlag::FD_CLOEXEC);
-            if let Err(err) = fcntl(fd, FcntlArg::F_SETFD(flags)) {
-                bail!("could not set fd: {}", err);
-            }
+            tools::fd_change_cloexec(listener.as_raw_fd(), false)?;

             let mut arguments: Vec<&str> = Vec::new();
-            let fd_string = fd.to_string();
+            let fd_string = listener.as_raw_fd().to_string();
             arguments.push(&fd_string);
             arguments.extend_from_slice(&[
                 "--path",
@@ -102,10 +102,7 @@ pub fn list_network_devices(
             },
         },
     },
-    returns: {
-        description: "The network interface configuration (with config digest).",
-        type: Interface,
-    },
+    returns: { type: Interface },
     access: {
         permission: &Permission::Privilege(&["system", "network", "interfaces", "{name}"], PRIV_SYS_AUDIT, false),
     },
@@ -135,7 +132,6 @@ pub fn read_interface(iface: String) -> Result<Value, Error> {
            schema: NETWORK_INTERFACE_NAME_SCHEMA,
        },
        "type": {
            description: "Interface type.",
            type: NetworkInterfaceType,
            optional: true,
        },
@@ -388,7 +384,6 @@ pub enum DeletableProperty {
            schema: NETWORK_INTERFACE_NAME_SCHEMA,
        },
        "type": {
            description: "Interface type. If specified, need to match the current type.",
            type: NetworkInterfaceType,
            optional: true,
        },
@@ -73,10 +73,7 @@ pub fn check_subscription(
             },
         },
     },
-    returns: {
-        description: "Subscription status.",
-        type: SubscriptionInfo,
-    },
+    returns: { type: SubscriptionInfo },
     access: {
         permission: &Permission::Anybody,
     },
@@ -166,7 +166,6 @@ fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
        },
        user: {
            type: Userid,
            description: "The user who started the task.",
        },
        tokenid: {
            type: Tokenname,
src/api2/tape/changer.rs (new file, 163 lines)
@@ -0,0 +1,163 @@
use std::path::Path;

use anyhow::Error;
use serde_json::Value;

use proxmox::api::{api, Router, SubdirMap};
use proxmox::list_subdirs_api_method;

use crate::{
    config,
    api2::types::{
        CHANGER_ID_SCHEMA,
        ScsiTapeChanger,
        TapeDeviceInfo,
        MtxStatusEntry,
        MtxEntryKind,
    },
    tape::{
        TAPE_STATUS_DIR,
        ElementStatus,
        OnlineStatusMap,
        Inventory,
        MediaStateDatabase,
        linux_tape_changer_list,
        mtx_status,
        mtx_status_to_online_set,
        mtx_transfer,
    },
};


#[api(
    input: {
        properties: {
            name: {
                schema: CHANGER_ID_SCHEMA,
            },
        },
    },
    returns: {
        description: "A status entry for each drive and slot.",
        type: Array,
        items: {
            type: MtxStatusEntry,
        },
    },
)]
/// Get tape changer status
pub fn get_status(name: String) -> Result<Vec<MtxStatusEntry>, Error> {

    let (config, _digest) = config::drive::config()?;

    let data: ScsiTapeChanger = config.lookup("changer", &name)?;

    let status = mtx_status(&data.path)?;

    let state_path = Path::new(TAPE_STATUS_DIR);
    let inventory = Inventory::load(state_path)?;

    let mut map = OnlineStatusMap::new(&config)?;
    let online_set = mtx_status_to_online_set(&status, &inventory);
    map.update_online_status(&name, online_set)?;

    let mut state_db = MediaStateDatabase::load(state_path)?;
    state_db.update_online_status(&map)?;

    let mut list = Vec::new();

    for (id, drive_status) in status.drives.iter().enumerate() {
        let entry = MtxStatusEntry {
            entry_kind: MtxEntryKind::Drive,
            entry_id: id as u64,
            changer_id: match &drive_status.status {
                ElementStatus::Empty => None,
                ElementStatus::Full => Some(String::new()),
                ElementStatus::VolumeTag(tag) => Some(tag.to_string()),
            },
            loaded_slot: drive_status.loaded_slot,
        };
        list.push(entry);
    }

    for (id, slot_status) in status.slots.iter().enumerate() {
        let entry = MtxStatusEntry {
            entry_kind: MtxEntryKind::Slot,
            entry_id: id as u64 + 1,
            changer_id: match &slot_status {
                ElementStatus::Empty => None,
                ElementStatus::Full => Some(String::new()),
                ElementStatus::VolumeTag(tag) => Some(tag.to_string()),
            },
            loaded_slot: None,
        };
        list.push(entry);
    }

    Ok(list)
}

#[api(
    input: {
        properties: {
            name: {
                schema: CHANGER_ID_SCHEMA,
            },
            from: {
                description: "Source slot number",
                minimum: 1,
            },
            to: {
                description: "Destination slot number",
                minimum: 1,
            },
        },
    },
)]
/// Transfers media from one slot to another
pub fn transfer(
    name: String,
    from: u64,
    to: u64,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    let data: ScsiTapeChanger = config.lookup("changer", &name)?;

    mtx_transfer(&data.path, from, to)?;

    Ok(())
}

#[api(
    input: {
        properties: {},
    },
    returns: {
        description: "The list of autodetected tape changers.",
        type: Array,
        items: {
            type: TapeDeviceInfo,
        },
    },
)]
/// Scan for SCSI tape changers
pub fn scan_changers(_param: Value) -> Result<Vec<TapeDeviceInfo>, Error> {

    let list = linux_tape_changer_list();

    Ok(list)
}

const SUBDIRS: SubdirMap = &[
    (
        "scan",
        &Router::new()
            .get(&API_METHOD_SCAN_CHANGERS)
    ),
];

pub const ROUTER: Router = Router::new()
    .get(&list_subdirs_api_method!(SUBDIRS))
    .subdirs(SUBDIRS);
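`get_status` above flattens the mtx element status into one list: drives are numbered from 0, storage slots from 1 (matching mtx numbering), and the volume tag only exists for labeled media. A standalone sketch of that mapping (hypothetical local types, not the crate's own):

```rust
// Hypothetical standalone types; the real ones live in crate::tape.
enum ElementStatus { Empty, Full, VolumeTag(String) }

// An empty element has no label; a full one without a barcode maps to "".
fn changer_id(status: &ElementStatus) -> Option<String> {
    match status {
        ElementStatus::Empty => None,
        ElementStatus::Full => Some(String::new()),
        ElementStatus::VolumeTag(tag) => Some(tag.clone()),
    }
}

fn main() {
    let slots = vec![
        ElementStatus::VolumeTag("tape1".to_string()),
        ElementStatus::Empty,
    ];
    // slot entry ids start at 1 (mtx numbering); drive ids start at 0
    let slot_ids: Vec<u64> = (0..slots.len()).map(|i| i as u64 + 1).collect();
    assert_eq!(slot_ids, vec![1, 2]);
    assert_eq!(changer_id(&slots[0]).as_deref(), Some("tape1"));
    assert_eq!(changer_id(&slots[1]), None);
}
```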
src/api2/tape/drive.rs (new file, 816 lines)
@@ -0,0 +1,816 @@
use std::path::Path;
use std::sync::Arc;

use anyhow::{bail, Error};
use serde_json::Value;

use proxmox::{
    sortable,
    identity,
    list_subdirs_api_method,
    tools::Uuid,
    sys::error::SysError,
    api::{
        api,
        RpcEnvironment,
        Router,
        SubdirMap,
    },
};

use crate::{
    config::{
        self,
        drive::check_drive_exists,
    },
    api2::types::{
        UPID_SCHEMA,
        DRIVE_ID_SCHEMA,
        MEDIA_LABEL_SCHEMA,
        MEDIA_POOL_NAME_SCHEMA,
        Authid,
        LinuxTapeDrive,
        ScsiTapeChanger,
        TapeDeviceInfo,
        MediaLabelInfoFlat,
        LabelUuidMap,
    },
    server::WorkerTask,
    tape::{
        TAPE_STATUS_DIR,
        TapeDriver,
        MediaChange,
        Inventory,
        MediaStateDatabase,
        MediaId,
        mtx_load,
        mtx_unload,
        linux_tape_device_list,
        open_drive,
        media_changer,
        update_changer_online_status,
        file_formats::{
            DriveLabel,
            MediaSetLabel,
        },
    },
};

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
            slot: {
                description: "Source slot number",
                minimum: 1,
            },
        },
    },
)]
/// Load media via changer from slot
pub fn load_slot(
    drive: String,
    slot: u64,
    _param: Value,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    let drive_config: LinuxTapeDrive = config.lookup("linux", &drive)?;

    let changer: ScsiTapeChanger = match drive_config.changer {
        Some(ref changer) => config.lookup("changer", changer)?,
        None => bail!("drive '{}' has no associated changer", drive),
    };

    let drivenum = drive_config.changer_drive_id.unwrap_or(0);

    mtx_load(&changer.path, slot, drivenum)
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
            "changer-id": {
                schema: MEDIA_LABEL_SCHEMA,
            },
        },
    },
)]
/// Load media with specified label
///
/// Issue a media load request to the associated changer device.
pub fn load_media(drive: String, changer_id: String) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    let (mut changer, _) = media_changer(&config, &drive, false)?;

    changer.load_media(&changer_id)?;

    Ok(())
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
            slot: {
                description: "Target slot number. If omitted, defaults to the slot that the drive was loaded from.",
                minimum: 1,
                optional: true,
            },
        },
    },
)]
/// Unload media via changer
pub fn unload(
    drive: String,
    slot: Option<u64>,
    _param: Value,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    let mut drive_config: LinuxTapeDrive = config.lookup("linux", &drive)?;

    let changer: ScsiTapeChanger = match drive_config.changer {
        Some(ref changer) => config.lookup("changer", changer)?,
        None => bail!("drive '{}' has no associated changer", drive),
    };

    let drivenum: u64 = 0;

    if let Some(slot) = slot {
        mtx_unload(&changer.path, slot, drivenum)
    } else {
        drive_config.unload_media()
    }
}

#[api(
    input: {
        properties: {},
    },
    returns: {
        description: "The list of autodetected tape drives.",
        type: Array,
        items: {
            type: TapeDeviceInfo,
        },
    },
)]
/// Scan tape drives
pub fn scan_drives(_param: Value) -> Result<Vec<TapeDeviceInfo>, Error> {

    let list = linux_tape_device_list();

    Ok(list)
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
            fast: {
                description: "Use fast erase.",
                type: bool,
                optional: true,
                default: true,
            },
        },
    },
    returns: {
        schema: UPID_SCHEMA,
    },
)]
/// Erase media
pub fn erase_media(
    drive: String,
    fast: Option<bool>,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {

    let (config, _digest) = config::drive::config()?;

    check_drive_exists(&config, &drive)?; // early check before starting worker

    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

    let upid_str = WorkerTask::new_thread(
        "erase-media",
        Some(drive.clone()),
        auth_id,
        true,
        move |_worker| {
            let mut drive = open_drive(&config, &drive)?;
            drive.erase_media(fast.unwrap_or(true))?;
            Ok(())
        }
    )?;

    Ok(upid_str.into())
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
        },
    },
    returns: {
        schema: UPID_SCHEMA,
    },
)]
/// Rewind tape
pub fn rewind(
    drive: String,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {

    let (config, _digest) = config::drive::config()?;

    check_drive_exists(&config, &drive)?; // early check before starting worker

    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

    let upid_str = WorkerTask::new_thread(
        "rewind-media",
        Some(drive.clone()),
        auth_id,
        true,
        move |_worker| {
            let mut drive = open_drive(&config, &drive)?;
            drive.rewind()?;
            Ok(())
        }
    )?;

    Ok(upid_str.into())
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
        },
    },
)]
/// Eject/Unload drive media
pub fn eject_media(drive: String) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    let (mut changer, _) = media_changer(&config, &drive, false)?;

    if !changer.eject_on_unload() {
        let mut drive = open_drive(&config, &drive)?;
        drive.eject_media()?;
    }

    changer.unload_media()?;

    Ok(())
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
            "changer-id": {
                schema: MEDIA_LABEL_SCHEMA,
            },
            pool: {
                schema: MEDIA_POOL_NAME_SCHEMA,
                optional: true,
            },
        },
    },
    returns: {
        schema: UPID_SCHEMA,
    },
)]
/// Label media
///
/// Write a new media label to the media in 'drive'. The media is
/// assigned to the specified 'pool', or else to the free media pool.
///
/// Note: The media needs to be empty (you may want to erase it first).
pub fn label_media(
    drive: String,
    pool: Option<String>,
    changer_id: String,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {

    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

    if let Some(ref pool) = pool {
        let (pool_config, _digest) = config::media_pool::config()?;

        if pool_config.sections.get(pool).is_none() {
            bail!("no such pool ('{}')", pool);
        }
    }

    let (config, _digest) = config::drive::config()?;

    let upid_str = WorkerTask::new_thread(
        "label-media",
        Some(drive.clone()),
        auth_id,
        true,
        move |worker| {

            let mut drive = open_drive(&config, &drive)?;

            drive.rewind()?;

            match drive.read_next_file() {
                Ok(Some(_file)) => bail!("media is not empty (erase first)"),
                Ok(None) => { /* EOF mark at BOT, assume tape is empty */ },
                Err(err) => {
                    if err.is_errno(nix::errno::Errno::ENOSPC) || err.is_errno(nix::errno::Errno::EIO) {
                        /* assume tape is empty */
                    } else {
                        bail!("media read error - {}", err);
                    }
                }
            }

            let ctime = proxmox::tools::time::epoch_i64();
            let label = DriveLabel {
                changer_id: changer_id.to_string(),
                uuid: Uuid::generate(),
                ctime,
            };

            write_media_label(worker, &mut drive, label, pool)
        }
    )?;

    Ok(upid_str.into())
}

fn write_media_label(
    worker: Arc<WorkerTask>,
    drive: &mut Box<dyn TapeDriver>,
    label: DriveLabel,
    pool: Option<String>,
) -> Result<(), Error> {

    drive.label_tape(&label)?;

    let mut media_set_label = None;

    if let Some(ref pool) = pool {
        // assign media to pool by writing special media set label
        worker.log(format!("Label media '{}' for pool '{}'", label.changer_id, pool));
        let set = MediaSetLabel::with_data(&pool, [0u8; 16].into(), 0, label.ctime);

        drive.write_media_set_label(&set)?;
        media_set_label = Some(set);
    } else {
        worker.log(format!("Label media '{}' (no pool assignment)", label.changer_id));
    }

    let media_id = MediaId { label, media_set_label };

    let mut inventory = Inventory::load(Path::new(TAPE_STATUS_DIR))?;
    inventory.store(media_id.clone())?;

    drive.rewind()?;

    match drive.read_label() {
        Ok(Some(info)) => {
            if info.label.uuid != media_id.label.uuid {
                bail!("verify label failed - got wrong label uuid");
            }
            if let Some(ref pool) = pool {
                match info.media_set_label {
                    Some((set, _)) => {
                        if set.uuid != [0u8; 16].into() {
                            bail!("verify media set label failed - got wrong set uuid");
                        }
                        if &set.pool != pool {
                            bail!("verify media set label failed - got wrong pool");
                        }
                    }
                    None => {
                        bail!("verify media set label failed (missing set label)");
                    }
                }
            }
        },
        Ok(None) => bail!("verify label failed (got empty media)"),
        Err(err) => bail!("verify label failed - {}", err),
    };

    drive.rewind()?;

    Ok(())
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
        },
    },
    returns: {
        type: MediaLabelInfoFlat,
    },
)]
/// Read media label
pub fn read_label(drive: String) -> Result<MediaLabelInfoFlat, Error> {

    let (config, _digest) = config::drive::config()?;

    let mut drive = open_drive(&config, &drive)?;

    let info = drive.read_label()?;

    let info = match info {
        Some(info) => {
            let mut flat = MediaLabelInfoFlat {
                uuid: info.label.uuid.to_string(),
                changer_id: info.label.changer_id.clone(),
                ctime: info.label.ctime,
                media_set_ctime: None,
                media_set_uuid: None,
                pool: None,
                seq_nr: None,
            };
            if let Some((set, _)) = info.media_set_label {
                flat.pool = Some(set.pool.clone());
                flat.seq_nr = Some(set.seq_nr);
                flat.media_set_uuid = Some(set.uuid.to_string());
                flat.media_set_ctime = Some(set.ctime);
            }
            flat
        }
        None => {
            bail!("Media is empty (no label).");
        }
    };

    Ok(info)
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
        },
    },
    returns: {
        description: "The list of media labels with associated media Uuid (if any).",
        type: Array,
        items: {
            type: LabelUuidMap,
        },
    },
)]
/// List known media labels (Changer Inventory)
///
/// Note: Only useful for drives with associated changer device.
///
/// This method queries the changer to get a list of media labels.
///
/// Note: This updates the media online status.
pub fn inventory(
    drive: String,
) -> Result<Vec<LabelUuidMap>, Error> {

    let (config, _digest) = config::drive::config()?;

    let (changer, changer_name) = media_changer(&config, &drive, false)?;

    let changer_id_list = changer.list_media_changer_ids()?;

    let state_path = Path::new(TAPE_STATUS_DIR);

    let mut inventory = Inventory::load(state_path)?;
    let mut state_db = MediaStateDatabase::load(state_path)?;

    update_changer_online_status(&config, &mut inventory, &mut state_db, &changer_name, &changer_id_list)?;

    let mut list = Vec::new();

    for changer_id in changer_id_list.iter() {
        if changer_id.starts_with("CLN") {
            // skip cleaning unit
            continue;
        }

        let changer_id = changer_id.to_string();

        if let Some(media_id) = inventory.find_media_by_changer_id(&changer_id) {
            list.push(LabelUuidMap { changer_id, uuid: Some(media_id.label.uuid.to_string()) });
        } else {
            list.push(LabelUuidMap { changer_id, uuid: None });
        }
    }

    Ok(list)
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
            "read-all-labels": {
                description: "Load all tapes and try read labels (even if already inventoried)",
                type: bool,
                optional: true,
            },
        },
    },
    returns: {
        schema: UPID_SCHEMA,
    },
)]
/// Update inventory
///
/// Note: Only useful for drives with associated changer device.
///
/// This method queries the changer to get a list of media labels. It
/// then loads any unknown media into the drive, reads the label, and
/// stores the result in the media database.
///
/// Note: This updates the media online status.
pub fn update_inventory(
    drive: String,
    read_all_labels: Option<bool>,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {

    let (config, _digest) = config::drive::config()?;

    check_drive_exists(&config, &drive)?; // early check before starting worker

    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

    let upid_str = WorkerTask::new_thread(
        "inventory-update",
        Some(drive.clone()),
        auth_id,
        true,
        move |worker| {

            let (mut changer, changer_name) = media_changer(&config, &drive, false)?;

            let changer_id_list = changer.list_media_changer_ids()?;
            if changer_id_list.is_empty() {
                worker.log(format!("changer device does not list any media labels"));
            }

            let state_path = Path::new(TAPE_STATUS_DIR);

            let mut inventory = Inventory::load(state_path)?;
            let mut state_db = MediaStateDatabase::load(state_path)?;

            update_changer_online_status(&config, &mut inventory, &mut state_db, &changer_name, &changer_id_list)?;

            for changer_id in changer_id_list.iter() {
                if changer_id.starts_with("CLN") {
                    worker.log(format!("skip cleaning unit '{}'", changer_id));
                    continue;
                }

                let changer_id = changer_id.to_string();

                if !read_all_labels.unwrap_or(false) {
                    if let Some(_) = inventory.find_media_by_changer_id(&changer_id) {
                        worker.log(format!("media '{}' already inventoried", changer_id));
                        continue;
                    }
                }

                if let Err(err) = changer.load_media(&changer_id) {
                    worker.warn(format!("unable to load media '{}' - {}", changer_id, err));
                    continue;
                }

                let mut drive = open_drive(&config, &drive)?;
                match drive.read_label() {
                    Err(err) => {
                        worker.warn(format!("unable to read label from media '{}' - {}", changer_id, err));
                    }
                    Ok(None) => {
                        worker.log(format!("media '{}' is empty", changer_id));
                    }
                    Ok(Some(info)) => {
                        if changer_id != info.label.changer_id {
                            worker.warn(format!("label changer ID mismatch ({} != {})", changer_id, info.label.changer_id));
                            continue;
                        }
                        worker.log(format!("inventorize media '{}' with uuid '{}'", changer_id, info.label.uuid));
                        inventory.store(info.into())?;
                    }
                }
            }
            Ok(())
        }
    )?;

    Ok(upid_str.into())
}


#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
            },
            pool: {
                schema: MEDIA_POOL_NAME_SCHEMA,
                optional: true,
            },
        },
    },
    returns: {
        schema: UPID_SCHEMA,
    },
)]
/// Label media with barcodes from changer device
pub fn barcode_label_media(
    drive: String,
    pool: Option<String>,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {

    if let Some(ref pool) = pool {
        let (pool_config, _digest) = config::media_pool::config()?;

        if pool_config.sections.get(pool).is_none() {
            bail!("no such pool ('{}')", pool);
        }
    }

    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

    let upid_str = WorkerTask::new_thread(
        "barcode-label-media",
        Some(drive.clone()),
        auth_id,
        true,
        move |worker| {
            barcode_label_media_worker(worker, drive, pool)
        }
    )?;

    Ok(upid_str.into())
}

fn barcode_label_media_worker(
    worker: Arc<WorkerTask>,
    drive: String,
    pool: Option<String>,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    let (mut changer, changer_name) = media_changer(&config, &drive, false)?;

    let changer_id_list = changer.list_media_changer_ids()?;

    let state_path = Path::new(TAPE_STATUS_DIR);

    let mut inventory = Inventory::load(state_path)?;
    let mut state_db = MediaStateDatabase::load(state_path)?;

    update_changer_online_status(&config, &mut inventory, &mut state_db, &changer_name, &changer_id_list)?;

    if changer_id_list.is_empty() {
        bail!("changer device does not list any media labels");
    }

    for changer_id in changer_id_list {
        if changer_id.starts_with("CLN") { continue; }

        inventory.reload()?;
        if inventory.find_media_by_changer_id(&changer_id).is_some() {
            worker.log(format!("media '{}' already inventoried (already labeled)", changer_id));
            continue;
        }

        worker.log(format!("checking/loading media '{}'", changer_id));

        if let Err(err) = changer.load_media(&changer_id) {
            worker.warn(format!("unable to load media '{}' - {}", changer_id, err));
            continue;
        }

        let mut drive = open_drive(&config, &drive)?;
        drive.rewind()?;

        match drive.read_next_file() {
            Ok(Some(_file)) => {
                worker.log(format!("media '{}' is not empty (erase first)", changer_id));
                continue;
            }
            Ok(None) => { /* EOF mark at BOT, assume tape is empty */ },
            Err(err) => {
                if err.is_errno(nix::errno::Errno::ENOSPC) || err.is_errno(nix::errno::Errno::EIO) {
                    /* assume tape is empty */
                } else {
                    worker.warn(format!("media '{}' read error (maybe not empty - erase first)", changer_id));
                    continue;
                }
            }
        }

        let ctime = proxmox::tools::time::epoch_i64();
        let label = DriveLabel {
            changer_id: changer_id.to_string(),
            uuid: Uuid::generate(),
            ctime,
        };

        write_media_label(worker.clone(), &mut drive, label, pool.clone())?
    }

    Ok(())
}

#[sortable]
pub const SUBDIRS: SubdirMap = &sorted!([
    (
        "barcode-label-media",
        &Router::new()
            .put(&API_METHOD_BARCODE_LABEL_MEDIA)
    ),
    (
        "eject-media",
        &Router::new()
            .put(&API_METHOD_EJECT_MEDIA)
    ),
    (
        "erase-media",
        &Router::new()
            .put(&API_METHOD_ERASE_MEDIA)
    ),
    (
        "inventory",
        &Router::new()
            .get(&API_METHOD_INVENTORY)
            .put(&API_METHOD_UPDATE_INVENTORY)
    ),
    (
        "label-media",
        &Router::new()
            .put(&API_METHOD_LABEL_MEDIA)
    ),
    (
        "load-slot",
        &Router::new()
            .put(&API_METHOD_LOAD_SLOT)
    ),
    (
        "read-label",
        &Router::new()
            .get(&API_METHOD_READ_LABEL)
    ),
    (
        "rewind",
        &Router::new()
            .put(&API_METHOD_REWIND)
    ),
    (
        "scan",
        &Router::new()
            .get(&API_METHOD_SCAN_DRIVES)
    ),
    (
        "unload",
        &Router::new()
            .put(&API_METHOD_UNLOAD)
    ),
]);

pub const ROUTER: Router = Router::new()
    .get(&list_subdirs_api_method!(SUBDIRS))
    .subdirs(SUBDIRS);
15  src/api2/tape/mod.rs  Normal file
@@ -0,0 +1,15 @@
use proxmox::api::router::SubdirMap;
use proxmox::api::Router;
use proxmox::list_subdirs_api_method;

pub mod drive;
pub mod changer;

pub const SUBDIRS: SubdirMap = &[
    ("changer", &changer::ROUTER),
    ("drive", &drive::ROUTER),
];

pub const ROUTER: Router = Router::new()
    .get(&list_subdirs_api_method!(SUBDIRS))
    .subdirs(SUBDIRS);
@@ -20,6 +20,9 @@ pub use userid::Userid;
pub use userid::Authid;
pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GROUP_ID_SCHEMA};

mod tape;
pub use tape::*;

// File names: may not contain slashes, may not start with "."
pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {
    if name.starts_with('.') {
39  src/api2/types/tape/device.rs  Normal file
@@ -0,0 +1,39 @@
use ::serde::{Deserialize, Serialize};

use proxmox::api::api;

#[api()]
#[derive(Debug,Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Kind of device
pub enum DeviceKind {
    /// Tape changer (Autoloader, Robot)
    Changer,
    /// Normal SCSI tape device
    Tape,
}

#[api(
    properties: {
        kind: {
            type: DeviceKind,
        },
    },
)]
#[derive(Debug,Serialize,Deserialize)]
/// Tape device information
pub struct TapeDeviceInfo {
    pub kind: DeviceKind,
    /// Path to the Linux device node
    pub path: String,
    /// Serial number (autodetected)
    pub serial: String,
    /// Vendor (autodetected)
    pub vendor: String,
    /// Model (autodetected)
    pub model: String,
    /// Device major number
    pub major: u32,
    /// Device minor number
    pub minor: u32,
}
169  src/api2/types/tape/drive.rs  Normal file
@@ -0,0 +1,169 @@
//! Types for tape drive API

use serde::{Deserialize, Serialize};

use proxmox::api::{
    api,
    schema::{Schema, IntegerSchema, StringSchema},
};

use crate::api2::types::PROXMOX_SAFE_ID_FORMAT;

pub const DRIVE_ID_SCHEMA: Schema = StringSchema::new("Drive Identifier.")
    .format(&PROXMOX_SAFE_ID_FORMAT)
    .min_length(3)
    .max_length(32)
    .schema();

pub const CHANGER_ID_SCHEMA: Schema = StringSchema::new("Tape Changer Identifier.")
    .format(&PROXMOX_SAFE_ID_FORMAT)
    .min_length(3)
    .max_length(32)
    .schema();

pub const LINUX_DRIVE_PATH_SCHEMA: Schema = StringSchema::new(
    "The path to a LINUX non-rewinding SCSI tape device (i.e. '/dev/nst0')")
    .schema();

pub const SCSI_CHANGER_PATH_SCHEMA: Schema = StringSchema::new(
    "Path to Linux generic SCSI device (i.e. '/dev/sg4')")
    .schema();

pub const MEDIA_LABEL_SCHEMA: Schema = StringSchema::new("Media Label/Barcode.")
    .format(&PROXMOX_SAFE_ID_FORMAT)
    .min_length(3)
    .max_length(32)
    .schema();

pub const CHANGER_DRIVE_ID_SCHEMA: Schema = IntegerSchema::new(
    "Associated changer drive number (requires option changer)")
    .minimum(0)
    .maximum(8)
    .default(0)
    .schema();

#[api(
    properties: {
        name: {
            schema: DRIVE_ID_SCHEMA,
        }
    }
)]
#[derive(Serialize,Deserialize)]
/// Simulate tape drives (only for test and debug)
#[serde(rename_all = "kebab-case")]
pub struct VirtualTapeDrive {
    pub name: String,
    /// Path to directory
    pub path: String,
    /// Virtual tape size
    #[serde(skip_serializing_if="Option::is_none")]
    pub max_size: Option<usize>,
}

#[api(
    properties: {
        name: {
            schema: DRIVE_ID_SCHEMA,
        },
        path: {
            schema: LINUX_DRIVE_PATH_SCHEMA,
        },
        changer: {
            schema: CHANGER_ID_SCHEMA,
            optional: true,
        },
        "changer-drive-id": {
            schema: CHANGER_DRIVE_ID_SCHEMA,
            optional: true,
        },
    }
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Linux SCSI tape driver
pub struct LinuxTapeDrive {
    pub name: String,
    pub path: String,
    #[serde(skip_serializing_if="Option::is_none")]
    pub changer: Option<String>,
    #[serde(skip_serializing_if="Option::is_none")]
    pub changer_drive_id: Option<u64>,
}

#[api(
    properties: {
        name: {
            schema: CHANGER_ID_SCHEMA,
        },
        path: {
            schema: SCSI_CHANGER_PATH_SCHEMA,
        },
    }
)]
#[derive(Serialize,Deserialize)]
/// SCSI tape changer
pub struct ScsiTapeChanger {
    pub name: String,
    pub path: String,
}

#[api()]
#[derive(Serialize,Deserialize)]
/// Drive list entry
pub struct DriveListEntry {
    /// Drive name
    pub name: String,
    /// Path to the Linux device node
    pub path: String,
    /// Associated changer device
    #[serde(skip_serializing_if="Option::is_none")]
    pub changer: Option<String>,
    /// Vendor (autodetected)
    #[serde(skip_serializing_if="Option::is_none")]
    pub vendor: Option<String>,
    /// Model (autodetected)
    #[serde(skip_serializing_if="Option::is_none")]
    pub model: Option<String>,
    /// Serial number (autodetected)
    #[serde(skip_serializing_if="Option::is_none")]
    pub serial: Option<String>,
}

#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "lowercase")]
/// Mtx Entry Kind
pub enum MtxEntryKind {
    /// Drive
    Drive,
    /// Slot
    Slot,
}

#[api(
    properties: {
        "entry-kind": {
            type: MtxEntryKind,
        },
        "changer-id": {
            schema: MEDIA_LABEL_SCHEMA,
            optional: true,
        },
    },
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Mtx Status Entry
pub struct MtxStatusEntry {
    pub entry_kind: MtxEntryKind,
    /// The ID of the slot or drive
    pub entry_id: u64,
    /// The media label (volume tag) if the slot/drive is full
    #[serde(skip_serializing_if="Option::is_none")]
    pub changer_id: Option<String>,
    /// The slot the drive was loaded from
    #[serde(skip_serializing_if="Option::is_none")]
    pub loaded_slot: Option<u64>,
}
96  src/api2/types/tape/media.rs  Normal file
@@ -0,0 +1,96 @@
use ::serde::{Deserialize, Serialize};

use proxmox::api::api;

use super::{
    MediaStatus,
};

#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "lowercase")]
/// Media location
pub enum MediaLocationKind {
    /// Ready for use (inside tape library)
    Online,
    /// Locally available, but needs to be mounted (insert into tape
    /// drive)
    Offline,
    /// Media is inside a Vault
    Vault,
}

#[api(
    properties: {
        location: {
            type: MediaLocationKind,
        },
        status: {
            type: MediaStatus,
        },
    },
)]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media list entry
pub struct MediaListEntry {
    /// Media changer ID
    pub changer_id: String,
    /// Media Uuid
    pub uuid: String,
    pub location: MediaLocationKind,
    /// Media location hint (vault name, changer name)
    pub location_hint: Option<String>,
    pub status: MediaStatus,
    /// Expired flag
    pub expired: bool,
    /// Media set name
    #[serde(skip_serializing_if="Option::is_none")]
    pub media_set_name: Option<String>,
    /// Media set uuid
    #[serde(skip_serializing_if="Option::is_none")]
    pub media_set_uuid: Option<String>,
    /// Media set seq_nr
    #[serde(skip_serializing_if="Option::is_none")]
    pub seq_nr: Option<u64>,
    /// Media Pool
    #[serde(skip_serializing_if="Option::is_none")]
    pub pool: Option<String>,
}

#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Media label info
pub struct MediaLabelInfoFlat {
    /// Unique ID
    pub uuid: String,
    /// Media Changer ID or Barcode
    pub changer_id: String,
    /// Creation time stamp
    pub ctime: i64,
    // All MediaSet properties are optional here
    /// MediaSet Pool
    #[serde(skip_serializing_if="Option::is_none")]
    pub pool: Option<String>,
    /// MediaSet Uuid. We use the all-zero Uuid to reserve an empty media for a specific pool
    #[serde(skip_serializing_if="Option::is_none")]
    pub media_set_uuid: Option<String>,
    /// MediaSet media sequence number
    #[serde(skip_serializing_if="Option::is_none")]
    pub seq_nr: Option<u64>,
    /// MediaSet Creation time stamp
    #[serde(skip_serializing_if="Option::is_none")]
    pub media_set_ctime: Option<i64>,
}

#[api()]
#[derive(Serialize,Deserialize)]
#[serde(rename_all = "kebab-case")]
/// Label with optional Uuid
pub struct LabelUuidMap {
    /// Changer ID (label)
    pub changer_id: String,
    /// Associated Uuid (if any)
    pub uuid: Option<String>,
}
154  src/api2/types/tape/media_pool.rs  Normal file
@@ -0,0 +1,154 @@
//! Types for tape media pool API
//!
//! Note: Both MediaSetPolicy and RetentionPolicy are complex enums,
//! so we cannot use them directly for the API. Instead, we represent
//! them as String.

use anyhow::Error;
use std::str::FromStr;
use serde::{Deserialize, Serialize};

use proxmox::api::{
    api,
    schema::{Schema, StringSchema, ApiStringFormat},
};

use crate::{
    tools::systemd::time::{
        CalendarEvent,
        TimeSpan,
        parse_time_span,
        parse_calendar_event,
    },
    api2::types::{
        DRIVE_ID_SCHEMA,
        PROXMOX_SAFE_ID_FORMAT,
        SINGLE_LINE_COMMENT_FORMAT,
    },
};

pub const MEDIA_POOL_NAME_SCHEMA: Schema = StringSchema::new("Media pool name.")
    .format(&PROXMOX_SAFE_ID_FORMAT)
    .min_length(2)
    .max_length(32)
    .schema();

pub const MEDIA_SET_NAMING_TEMPLATE_SCHEMA: Schema = StringSchema::new(
    "Media set naming template.")
    .format(&SINGLE_LINE_COMMENT_FORMAT)
    .min_length(2)
    .max_length(64)
    .schema();

pub const MEDIA_SET_ALLOCATION_POLICY_FORMAT: ApiStringFormat =
    ApiStringFormat::VerifyFn(|s| { MediaSetPolicy::from_str(s)?; Ok(()) });

pub const MEDIA_SET_ALLOCATION_POLICY_SCHEMA: Schema = StringSchema::new(
    "Media set allocation policy.")
    .format(&MEDIA_SET_ALLOCATION_POLICY_FORMAT)
    .schema();

/// Media set allocation policy
pub enum MediaSetPolicy {
    /// Try to use the current media set
    ContinueCurrent,
    /// Each backup job creates a new media set
    AlwaysCreate,
    /// Create a new set when the specified CalendarEvent triggers
    CreateAt(CalendarEvent),
}

impl std::str::FromStr for MediaSetPolicy {
    type Err = Error;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if s == "continue" {
            return Ok(MediaSetPolicy::ContinueCurrent);
        }
        if s == "always" {
            return Ok(MediaSetPolicy::AlwaysCreate);
        }

        let event = parse_calendar_event(s)?;

        Ok(MediaSetPolicy::CreateAt(event))
    }
}

pub const MEDIA_RETENTION_POLICY_FORMAT: ApiStringFormat =
    ApiStringFormat::VerifyFn(|s| { RetentionPolicy::from_str(s)?; Ok(()) });

pub const MEDIA_RETENTION_POLICY_SCHEMA: Schema = StringSchema::new(
    "Media retention policy.")
    .format(&MEDIA_RETENTION_POLICY_FORMAT)
    .schema();

/// Media retention policy
pub enum RetentionPolicy {
    /// Always overwrite media
    OverwriteAlways,
    /// Protect data for the timespan specified
    ProtectFor(TimeSpan),
    /// Never overwrite data
    KeepForever,
}

impl std::str::FromStr for RetentionPolicy {
    type Err = Error;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if s == "overwrite" {
            return Ok(RetentionPolicy::OverwriteAlways);
        }
        if s == "keep" {
            return Ok(RetentionPolicy::KeepForever);
        }

        let time_span = parse_time_span(s)?;

        Ok(RetentionPolicy::ProtectFor(time_span))
    }
}

#[api(
    properties: {
        name: {
            schema: MEDIA_POOL_NAME_SCHEMA,
        },
        drive: {
            schema: DRIVE_ID_SCHEMA,
        },
        allocation: {
            schema: MEDIA_SET_ALLOCATION_POLICY_SCHEMA,
            optional: true,
        },
        retention: {
            schema: MEDIA_RETENTION_POLICY_SCHEMA,
            optional: true,
        },
        template: {
            schema: MEDIA_SET_NAMING_TEMPLATE_SCHEMA,
            optional: true,
        },
    }
)]
#[derive(Serialize,Deserialize)]
/// Media pool configuration
pub struct MediaPoolConfig {
    /// The pool name
    pub name: String,
    /// The associated drive
    pub drive: String,
    /// Media Set allocation policy
    #[serde(skip_serializing_if="Option::is_none")]
    pub allocation: Option<String>,
    /// Media retention policy
    #[serde(skip_serializing_if="Option::is_none")]
    pub retention: Option<String>,
    /// Media set naming template (default "%id%")
    ///
    /// The template is UTF8 text, and can include strftime time
    /// format specifications.
    #[serde(skip_serializing_if="Option::is_none")]
    pub template: Option<String>,
}
21  src/api2/types/tape/media_status.rs  Normal file
@@ -0,0 +1,21 @@
use ::serde::{Deserialize, Serialize};

use proxmox::api::api;

#[api()]
#[derive(Debug, PartialEq, Copy, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
/// Media status
pub enum MediaStatus {
    /// Media is ready to be written
    Writable,
    /// Media is full (contains data)
    Full,
    /// Media is marked as unknown, needs rescan
    Unknown,
    /// Media is marked as damaged
    Damaged,
    /// Media is marked as retired
    Retired,
}
16  src/api2/types/tape/mod.rs  Normal file
@@ -0,0 +1,16 @@
//! Types for tape backup API

mod device;
pub use device::*;

mod drive;
pub use drive::*;

mod media_pool;
pub use media_pool::*;

mod media_status;
pub use media_status::*;

mod media;
pub use media::*;
@@ -419,12 +419,10 @@ impl<'a> TryFrom<&'a str> for &'a TokennameRef {
}

/// A complete user id consisting of a user name and a realm
#[derive(Clone, Debug, Hash)]
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Userid {
    data: String,
    name_len: usize,
    //name: Username,
    //realm: Realm,
}

impl Userid {
@@ -460,14 +458,6 @@ lazy_static! {
    pub static ref ROOT_USERID: Userid = Userid::new("root@pam".to_string(), 4);
}

impl Eq for Userid {}

impl PartialEq for Userid {
    fn eq(&self, rhs: &Self) -> bool {
        self.data == rhs.data && self.name_len == rhs.name_len
    }
}

impl From<Authid> for Userid {
    fn from(authid: Authid) -> Self {
        authid.user
@@ -247,6 +247,9 @@ pub use prune::*;
mod datastore;
pub use datastore::*;

mod store_progress;
pub use store_progress::*;

mod verify;
pub use verify::*;
@@ -145,20 +145,6 @@ impl BackupGroup {

        Ok(last)
    }

    pub fn list_groups(base_path: &Path) -> Result<Vec<BackupGroup>, Error> {
        let mut list = Vec::new();

        tools::scandir(libc::AT_FDCWD, base_path, &BACKUP_TYPE_REGEX, |l0_fd, backup_type, file_type| {
            if file_type != nix::dir::Type::Directory { return Ok(()); }
            tools::scandir(l0_fd, backup_type, &BACKUP_ID_REGEX, |_l1_fd, backup_id, file_type| {
                if file_type != nix::dir::Type::Directory { return Ok(()); }
                list.push(BackupGroup::new(backup_type, backup_id));
                Ok(())
            })
        })?;
        Ok(list)
    }
}

impl std::fmt::Display for BackupGroup {
@@ -301,7 +301,7 @@ impl ChunkStore {
            last_percentage = percentage;
            crate::task_log!(
                worker,
                "percentage done: phase2 {}% (processed {} chunks)",
                "processed {}% ({} chunks)",
                percentage,
                chunk_count,
            );
@@ -47,6 +47,15 @@ pub struct Fingerprint {
    bytes: [u8; 32],
}

impl Fingerprint {
    pub fn new(bytes: [u8; 32]) -> Self {
        Self { bytes }
    }
    pub fn bytes(&self) -> &[u8; 32] {
        &self.bytes
    }
}

/// Display as short key ID
impl Display for Fingerprint {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
@@ -126,9 +135,7 @@ impl CryptConfig {
    }

    pub fn fingerprint(&self) -> Fingerprint {
        Fingerprint {
            bytes: self.compute_digest(&FINGERPRINT_INPUT)
        }
        Fingerprint::new(self.compute_digest(&FINGERPRINT_INPUT))
    }

    pub fn data_crypter(&self, iv: &[u8; 16], mode: Mode) -> Result<Crypter, Error> {
@@ -3,6 +3,7 @@ use std::io::{self, Write};
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use std::convert::TryFrom;
use std::str::FromStr;
use std::time::Duration;
use std::fs::File;
@@ -243,7 +244,7 @@ impl DataStore {
        let (_guard, _manifest_guard);
        if !force {
            _guard = lock_dir_noblock(&full_path, "snapshot", "possibly running or in use")?;
            _manifest_guard = self.lock_manifest(backup_dir);
            _manifest_guard = self.lock_manifest(backup_dir)?;
        }

        log::info!("removing backup snapshot {:?}", full_path);
@@ -256,6 +257,12 @@ impl DataStore {
            )
        })?;

        // the manifest does not exist anymore, we do not need to keep the lock
        if let Ok(path) = self.manifest_lock_path(backup_dir) {
            // ignore errors
            let _ = std::fs::remove_file(path);
        }

        Ok(())
    }
@@ -379,7 +386,7 @@ impl DataStore {

        use walkdir::WalkDir;

        let walker = WalkDir::new(&base).same_file_system(true).into_iter();
        let walker = WalkDir::new(&base).into_iter();

        // make sure we skip .chunks (and other hidden files to keep it simple)
        fn is_hidden(entry: &walkdir::DirEntry) -> bool {
@@ -474,30 +481,40 @@ impl DataStore {
        let mut done = 0;
        let mut last_percentage: usize = 0;

        let mut strange_paths_count: u64 = 0;

        for img in image_list {

            worker.check_abort()?;
            tools::fail_on_shutdown()?;

            let path = self.chunk_store.relative_path(&img);
            match std::fs::File::open(&path) {
            if let Some(backup_dir_path) = img.parent() {
                let backup_dir_path = backup_dir_path.strip_prefix(self.base_path())?;
                if let Some(backup_dir_str) = backup_dir_path.to_str() {
                    if BackupDir::from_str(backup_dir_str).is_err() {
                        strange_paths_count += 1;
                    }
                }
            }

            match std::fs::File::open(&img) {
                Ok(file) => {
                    if let Ok(archive_type) = archive_type(&img) {
                        if archive_type == ArchiveType::FixedIndex {
                            let index = FixedIndexReader::new(file).map_err(|e| {
                                format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
                                format_err!("can't read index '{}' - {}", img.to_string_lossy(), e)
                            })?;
                            self.index_mark_used_chunks(index, &img, status, worker)?;
                        } else if archive_type == ArchiveType::DynamicIndex {
                            let index = DynamicIndexReader::new(file).map_err(|e| {
                                format_err!("can't read index '{}' - {}", path.to_string_lossy(), e)
                                format_err!("can't read index '{}' - {}", img.to_string_lossy(), e)
                            })?;
                            self.index_mark_used_chunks(index, &img, status, worker)?;
                        }
                    }
                }
                Err(err) if err.kind() == io::ErrorKind::NotFound => (), // ignore vanished files
                Err(err) => bail!("can't open index {} - {}", path.to_string_lossy(), err),
                Err(err) => bail!("can't open index {} - {}", img.to_string_lossy(), err),
            }
            done += 1;

@@ -505,7 +522,7 @@ impl DataStore {
            if percentage > last_percentage {
                crate::task_log!(
                    worker,
                    "percentage done: phase1 {}% ({} of {} index files)",
                    "marked {}% ({} of {} index files)",
                    percentage,
                    done,
                    image_count,
@@ -514,6 +531,15 @@ impl DataStore {
            }
        }

        if strange_paths_count > 0 {
            crate::task_log!(
                worker,
                "found (and marked) {} index files outside of expected directory scheme",
                strange_paths_count,
            );
        }

        Ok(())
    }
@@ -678,13 +704,32 @@ impl DataStore {
        ))
    }

    /// Returns the filename to lock a manifest
    ///
    /// Also creates the basedir. The lockfile is located in
    /// '/run/proxmox-backup/locks/{datastore}/{type}/{id}/{timestamp}.index.json.lck'
    fn manifest_lock_path(
        &self,
        backup_dir: &BackupDir,
    ) -> Result<String, Error> {
        let mut path = format!(
            "/run/proxmox-backup/locks/{}/{}/{}",
            self.name(),
            backup_dir.group().backup_type(),
            backup_dir.group().backup_id(),
        );
        std::fs::create_dir_all(&path)?;
        use std::fmt::Write;
        write!(path, "/{}{}", backup_dir.backup_time_string(), &MANIFEST_LOCK_NAME)?;

        Ok(path)
    }

    fn lock_manifest(
        &self,
        backup_dir: &BackupDir,
    ) -> Result<File, Error> {
        let mut path = self.base_path();
        path.push(backup_dir.relative_path());
        path.push(&MANIFEST_LOCK_NAME);
        let path = self.manifest_lock_path(backup_dir)?;

        // update_manifest should never take a long time, so if someone else has
        // the lock we can simply block a bit and should get it soon
@@ -739,3 +784,4 @@ impl DataStore {
        self.verify_new
    }
}
@@ -3,7 +3,7 @@ use std::io::{Seek, SeekFrom};

use super::chunk_stat::*;
use super::chunk_store::*;
use super::{IndexFile, ChunkReadInfo};
use super::{ChunkReadInfo, IndexFile};
use crate::tools;

use std::fs::File;
@@ -69,8 +69,7 @@ impl FixedIndexReader {

        let header_size = std::mem::size_of::<FixedIndexHeader>();

        let rawfd = file.as_raw_fd();
        let stat = match nix::sys::stat::fstat(rawfd) {
        let stat = match nix::sys::stat::fstat(file.as_raw_fd()) {
            Ok(stat) => stat,
            Err(err) => bail!("fstat failed - {}", err),
        };
@@ -94,7 +93,6 @@ impl FixedIndexReader {
        let index_length = ((size + chunk_size - 1) / chunk_size) as usize;
        let index_size = index_length * 32;

        let expected_index_size = (stat.st_size as usize) - header_size;
        if index_size != expected_index_size {
            bail!(
@@ -150,7 +148,7 @@ impl FixedIndexReader {
        println!("ChunkSize: {}", self.chunk_size);

        let mut ctime_str = self.ctime.to_string();
        if let Ok(s) = proxmox::tools::time::strftime_local("%c",self.ctime) {
        if let Ok(s) = proxmox::tools::time::strftime_local("%c", self.ctime) {
            ctime_str = s;
        }

@@ -215,7 +213,7 @@ impl IndexFile for FixedIndexReader {

        Some((
            (offset / self.chunk_size as u64) as usize,
            offset & (self.chunk_size - 1) as u64, // fast modulo, valid for 2^x chunk_size
        ))
    }
}
64  src/backup/store_progress.rs  Normal file
@@ -0,0 +1,64 @@
#[derive(Debug, Default)]
/// Tracker for progress of operations iterating over `Datastore` contents.
pub struct StoreProgress {
    /// Completed groups
    pub done_groups: u64,
    /// Total groups
    pub total_groups: u64,
    /// Completed snapshots within current group
    pub done_snapshots: u64,
    /// Total snapshots in current group
    pub group_snapshots: u64,
}

impl StoreProgress {
    pub fn new(total_groups: u64) -> Self {
        StoreProgress {
            total_groups,
            .. Default::default()
        }
    }

    /// Calculates an interpolated relative progress based on current counters.
    pub fn percentage(&self) -> f64 {
        let per_groups = (self.done_groups as f64) / (self.total_groups as f64);
        if self.group_snapshots == 0 {
            per_groups
        } else {
            let per_snapshots = (self.done_snapshots as f64) / (self.group_snapshots as f64);
            per_groups + (1.0 / self.total_groups as f64) * per_snapshots
        }
    }
}

impl std::fmt::Display for StoreProgress {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        if self.group_snapshots == 0 {
            write!(
                f,
                "{:.2}% ({} of {} groups)",
                self.percentage() * 100.0,
                self.done_groups,
                self.total_groups,
            )
        } else if self.total_groups == 1 {
            write!(
                f,
                "{:.2}% ({} of {} snapshots)",
                self.percentage() * 100.0,
                self.done_snapshots,
                self.group_snapshots,
            )
        } else {
            write!(
                f,
                "{:.2}% ({} of {} groups, {} of {} group snapshots)",
                self.percentage() * 100.0,
                self.done_groups,
                self.total_groups,
                self.done_snapshots,
                self.group_snapshots,
            )
        }
    }
}
@ -10,6 +10,7 @@ use crate::{
|
||||
api2::types::*,
|
||||
backup::{
|
||||
DataStore,
|
||||
StoreProgress,
|
||||
DataBlob,
|
||||
BackupGroup,
|
||||
BackupDir,
|
||||
@ -425,11 +426,11 @@ pub fn verify_backup_group(
|
||||
group: &BackupGroup,
|
||||
verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
|
||||
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
|
||||
progress: Option<(usize, usize)>, // (done, snapshot_count)
|
||||
progress: &mut StoreProgress,
|
||||
worker: Arc<dyn TaskState + Send + Sync>,
|
||||
upid: &UPID,
|
||||
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
|
||||
) -> Result<(usize, Vec<String>), Error> {
|
||||
) -> Result<Vec<String>, Error> {
|
||||
|
||||
let mut errors = Vec::new();
|
||||
let mut list = match group.list_backups(&datastore.base_path()) {
|
||||
@ -442,19 +443,17 @@ pub fn verify_backup_group(
|
||||
group,
|
||||
err,
|
||||
);
|
||||
return Ok((0, errors));
|
||||
return Ok(errors);
|
||||
}
|
||||
};
|
||||
|
||||
task_log!(worker, "verify group {}:{}", datastore.name(), group);
|
||||
let snapshot_count = list.len();
|
||||
task_log!(worker, "verify group {}:{} ({} snapshots)", datastore.name(), group, snapshot_count);
|
||||
|
||||
        let (done, snapshot_count) = progress.unwrap_or((0, list.len()));

    progress.group_snapshots = snapshot_count as u64;

        let mut count = 0;
    BackupInfo::sort_list(&mut list, false); // newest first
        for info in list {
            count += 1;

    for (pos, info) in list.into_iter().enumerate() {
        if !verify_backup_dir(
            datastore.clone(),
            &info.backup_dir,
@ -466,20 +465,15 @@ pub fn verify_backup_group(
        )? {
            errors.push(info.backup_dir.to_string());
        }
            if snapshot_count != 0 {
                let pos = done + count;
                let percentage = ((pos as f64) * 100.0)/(snapshot_count as f64);
                task_log!(
                    worker,
                    "percentage done: {:.2}% ({} of {} snapshots)",
                    percentage,
                    pos,
                    snapshot_count,
                );
            }
        progress.done_snapshots = pos as u64 + 1;
        task_log!(
            worker,
            "percentage done: {}",
            progress
        );
    }

    Ok((count, errors))
    Ok(errors)
}

/// Verify all (owned) backups inside a datastore
@ -533,7 +527,7 @@ pub fn verify_all_backups(
        }
    };

    let mut list = match BackupGroup::list_groups(&datastore.base_path()) {
    let mut list = match BackupInfo::list_backup_groups(&datastore.base_path()) {
        Ok(list) => list
            .into_iter()
            .filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
@ -551,34 +545,33 @@ pub fn verify_all_backups(

    list.sort_unstable();

    let mut snapshot_count = 0;
    for group in list.iter() {
        snapshot_count += group.list_backups(&datastore.base_path())?.len();
    }

    // start with 16384 chunks (up to 65GB)
    let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));

    // start with 64 chunks since we assume there are few corrupt ones
    let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));

    task_log!(worker, "found {} snapshots", snapshot_count);
    let group_count = list.len();
    task_log!(worker, "found {} groups", group_count);

    let mut done = 0;
    for group in list {
        let (count, mut group_errors) = verify_backup_group(
    let mut progress = StoreProgress::new(group_count as u64);

    for (pos, group) in list.into_iter().enumerate() {
        progress.done_groups = pos as u64;
        progress.done_snapshots = 0;
        progress.group_snapshots = 0;

        let mut group_errors = verify_backup_group(
            datastore.clone(),
            &group,
            verified_chunks.clone(),
            corrupt_chunks.clone(),
            Some((done, snapshot_count)),
            &mut progress,
            worker.clone(),
            upid,
            filter,
        )?;
        errors.append(&mut group_errors);

        done += count;
    }

    Ok(errors)
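The refactor above replaces the hand-rolled `done`/`count` bookkeeping with a `StoreProgress` value that is updated per group and per snapshot and logged through its `Display` impl. A minimal sketch of how such a two-level counter can derive an overall percentage; the field names follow the diff, but the `percentage` helper and the `Display` formatting are illustrative assumptions, not the actual proxmox-backup implementation:

```rust
// Two-level (groups x snapshots) progress counter, modeled on the
// StoreProgress fields used in the diff. Finished groups count fully;
// the group currently being verified contributes its own fraction.
struct StoreProgress {
    total_groups: u64,
    done_groups: u64,
    group_snapshots: u64, // snapshots in the current group
    done_snapshots: u64,  // snapshots finished in the current group
}

impl StoreProgress {
    fn new(total_groups: u64) -> Self {
        StoreProgress { total_groups, done_groups: 0, group_snapshots: 0, done_snapshots: 0 }
    }

    /// Overall completion in percent.
    fn percentage(&self) -> f64 {
        if self.total_groups == 0 {
            return 100.0;
        }
        let current = if self.group_snapshots == 0 {
            0.0
        } else {
            self.done_snapshots as f64 / self.group_snapshots as f64
        };
        (self.done_groups as f64 + current) / self.total_groups as f64 * 100.0
    }
}

impl std::fmt::Display for StoreProgress {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        write!(
            f,
            "{:.2}% ({} of {} groups, {} of {} snapshots in current group)",
            self.percentage(),
            self.done_groups,
            self.total_groups,
            self.done_snapshots,
            self.group_snapshots,
        )
    }
}

fn main() {
    let mut p = StoreProgress::new(4);
    p.done_groups = 1;      // one group fully verified
    p.group_snapshots = 10; // current group has 10 snapshots
    p.done_snapshots = 5;   // 5 of them done
    // (1 + 5/10) / 4 * 100 = 37.5
    println!("{}", p);
}
```

With 4 groups, 1 finished, and 5 of 10 snapshots done in the current group, this yields (1 + 0.5) / 4 = 37.5%, which is the kind of "percentage done" line the new `task_log!` call emits.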
@ -38,6 +38,7 @@ async fn run() -> Result<(), Error> {

    proxmox_backup::rrd::create_rrdb_dir()?;
    proxmox_backup::server::jobstate::create_jobstate_dir()?;
    proxmox_backup::tape::create_tape_status_dir()?;

    if let Err(err) = generate_auth_key() {
        bail!("unable to generate auth key - {}", err);
@ -53,7 +53,6 @@ use proxmox_backup::backup::{
    ChunkStream,
    CryptConfig,
    CryptMode,
    DataBlob,
    DynamicIndexReader,
    FixedChunkStream,
    FixedIndexReader,
@ -456,112 +455,6 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro
    Ok(())
}

#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            group: {
                type: String,
                description: "Backup group.",
                optional: true,
            },
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        }
    }
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {

    let repo = extract_repository_from_value(&param)?;

    let output_format = get_output_format(&param);

    let client = connect(&repo)?;

    let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
        Some(path.parse()?)
    } else {
        None
    };

    let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;

    record_repository(&repo);

    let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
        let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
        let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
        Ok(snapshot.relative_path().to_str().unwrap().to_owned())
    };

    let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
        let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
        let mut filenames = Vec::new();
        for file in &item.files {
            filenames.push(file.filename.to_string());
        }
        Ok(tools::format::render_backup_file_list(&filenames[..]))
    };

    let options = default_table_format_options()
        .sortby("backup-type", false)
        .sortby("backup-id", false)
        .sortby("backup-time", false)
        .column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
        .column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
        .column(ColumnConfig::new("files").renderer(render_files))
        ;

    let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;

    format_and_print_result_full(&mut data, info, &output_format, &options);

    Ok(Value::Null)
}

#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            snapshot: {
                type: String,
                description: "Snapshot path.",
            },
        }
    }
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {

    let repo = extract_repository_from_value(&param)?;

    let path = tools::required_string_param(&param, "snapshot")?;
    let snapshot: BackupDir = path.parse()?;

    let mut client = connect(&repo)?;

    let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());

    let result = client.delete(&path, Some(json!({
        "backup-type": snapshot.group().backup_type(),
        "backup-id": snapshot.group().backup_id(),
        "backup-time": snapshot.backup_time(),
    }))).await?;

    record_repository(&repo);

    Ok(result)
}

#[api(
    input: {
        properties: {
@ -655,58 +548,6 @@ async fn api_version(param: Value) -> Result<(), Error> {
    Ok(())
}


#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            snapshot: {
                type: String,
                description: "Snapshot path.",
            },
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        }
    }
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {

    let repo = extract_repository_from_value(&param)?;

    let path = tools::required_string_param(&param, "snapshot")?;
    let snapshot: BackupDir = path.parse()?;

    let output_format = get_output_format(&param);

    let client = connect(&repo)?;

    let path = format!("api2/json/admin/datastore/{}/files", repo.store());

    let mut result = client.get(&path, Some(json!({
        "backup-type": snapshot.group().backup_type(),
        "backup-id": snapshot.group().backup_id(),
        "backup-time": snapshot.backup_time(),
    }))).await?;

    record_repository(&repo);

    let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;

    let mut data: Value = result["data"].take();

    let options = default_table_format_options();

    format_and_print_result_full(&mut data, info, &output_format, &options);

    Ok(Value::Null)
}

#[api(
    input: {
        properties: {
@ -803,7 +644,7 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
        (None, None) => None,
        (Some(_), Some(_)) => bail!("--keyfile and --keyfd are mutually exclusive"),
        (Some(keyfile), None) => {
            println!("Using encryption key file: {}", keyfile);
            eprintln!("Using encryption key file: {}", keyfile);
            Some(file_get_contents(keyfile)?)
        },
        (None, Some(fd)) => {
@ -813,7 +654,7 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
                .map_err(|err| {
                    format_err!("error reading encryption key from fd {}: {}", fd, err)
                })?;
            println!("Using encryption key from file descriptor");
            eprintln!("Using encryption key from file descriptor");
            Some(data)
        }
    };
@ -822,7 +663,7 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
        // no parameters:
        (None, None) => match key::read_optional_default_encryption_key()? {
            Some(key) => {
                println!("Encrypting with default encryption key!");
                eprintln!("Encrypting with default encryption key!");
                (Some(key), CryptMode::Encrypt)
            },
            None => (None, CryptMode::None),
@ -835,7 +676,7 @@ fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Err
        (None, Some(crypt_mode)) => match key::read_optional_default_encryption_key()? {
            None => bail!("--crypt-mode without --keyfile and no default key file available"),
            Some(key) => {
                println!("Encrypting with default encryption key!");
                eprintln!("Encrypting with default encryption key!");
                (Some(key), crypt_mode)
            },
        }
@ -1416,7 +1257,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
        None => None,
        Some(key) => {
            let (key, _, fingerprint) = decrypt_key(&key, &key::get_encryption_key_password)?;
            println!("Encryption key fingerprint: '{}'", fingerprint);
            eprintln!("Encryption key fingerprint: '{}'", fingerprint);
            Some(Arc::new(CryptConfig::new(key)?))
        }
    };
@ -1529,81 +1370,6 @@ async fn restore(param: Value) -> Result<Value, Error> {
    Ok(Value::Null)
}

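The repeated `println!` to `eprintln!` substitutions above move key-handling notices from stdout to stderr, so stdout stays reserved for actual output (restored data, JSON) and remains safely pipeable. The convention in isolation; the JSON payload is a made-up example, not real client output:

```rust
// Status messages go to stderr, machine-readable data to stdout, so that
// `command | jq .` still parses cleanly even when diagnostics are printed.
fn data_line() -> String {
    // Hypothetical payload for illustration only.
    String::from("{\"backup-id\":\"vm-100\"}")
}

fn main() {
    eprintln!("Using encryption key from file descriptor"); // diagnostic -> stderr
    println!("{}", data_line());                            // data -> stdout
}
```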
#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            snapshot: {
                type: String,
                description: "Group/Snapshot path.",
            },
            logfile: {
                type: String,
                description: "The path to the log file you want to upload.",
            },
            keyfile: {
                schema: KEYFILE_SCHEMA,
                optional: true,
            },
            "keyfd": {
                schema: KEYFD_SCHEMA,
                optional: true,
            },
            "crypt-mode": {
                type: CryptMode,
                optional: true,
            },
        }
    }
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {

    let logfile = tools::required_string_param(&param, "logfile")?;
    let repo = extract_repository_from_value(&param)?;

    let snapshot = tools::required_string_param(&param, "snapshot")?;
    let snapshot: BackupDir = snapshot.parse()?;

    let mut client = connect(&repo)?;

    let (keydata, crypt_mode) = keyfile_parameters(&param)?;

    let crypt_config = match keydata {
        None => None,
        Some(key) => {
            let (key, _created, _) = decrypt_key(&key, &key::get_encryption_key_password)?;
            let crypt_config = CryptConfig::new(key)?;
            Some(Arc::new(crypt_config))
        }
    };

    let data = file_get_contents(logfile)?;

    // fixme: howto sign log?
    let blob = match crypt_mode {
        CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
        CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
    };

    let raw_data = blob.into_inner();

    let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());

    let args = json!({
        "backup-type": snapshot.group().backup_type(),
        "backup-id": snapshot.group().backup_id(),
        "backup-time": snapshot.backup_time(),
    });

    let body = hyper::Body::from(raw_data);

    client.upload("application/octet-stream", body, &path, Some(args)).await
}

const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
    &ApiHandler::Async(&prune),
    &ObjectSchema::new(
@ -2041,26 +1807,9 @@ fn main() {
        .completion_cb("repository", complete_repository)
        .completion_cb("keyfile", tools::complete_file_name);

    let upload_log_cmd_def = CliCommand::new(&API_METHOD_UPLOAD_LOG)
        .arg_param(&["snapshot", "logfile"])
        .completion_cb("snapshot", complete_backup_snapshot)
        .completion_cb("logfile", tools::complete_file_name)
        .completion_cb("keyfile", tools::complete_file_name)
        .completion_cb("repository", complete_repository);

    let list_cmd_def = CliCommand::new(&API_METHOD_LIST_BACKUP_GROUPS)
        .completion_cb("repository", complete_repository);

    let snapshots_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
        .arg_param(&["group"])
        .completion_cb("group", complete_backup_group)
        .completion_cb("repository", complete_repository);

    let forget_cmd_def = CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
        .arg_param(&["snapshot"])
        .completion_cb("repository", complete_repository)
        .completion_cb("snapshot", complete_backup_snapshot);

    let garbage_collect_cmd_def = CliCommand::new(&API_METHOD_START_GARBAGE_COLLECTION)
        .completion_cb("repository", complete_repository);

@ -2071,11 +1820,6 @@ fn main() {
        .completion_cb("archive-name", complete_archive_name)
        .completion_cb("target", tools::complete_file_name);

    let files_cmd_def = CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
        .arg_param(&["snapshot"])
        .completion_cb("repository", complete_repository)
        .completion_cb("snapshot", complete_backup_snapshot);

    let prune_cmd_def = CliCommand::new(&API_METHOD_PRUNE)
        .arg_param(&["group"])
        .completion_cb("group", complete_backup_group)
@ -2101,16 +1845,13 @@ fn main() {

    let cmd_def = CliCommandMap::new()
        .insert("backup", backup_cmd_def)
        .insert("upload-log", upload_log_cmd_def)
        .insert("forget", forget_cmd_def)
        .insert("garbage-collect", garbage_collect_cmd_def)
        .insert("list", list_cmd_def)
        .insert("login", login_cmd_def)
        .insert("logout", logout_cmd_def)
        .insert("prune", prune_cmd_def)
        .insert("restore", restore_cmd_def)
        .insert("snapshots", snapshots_cmd_def)
        .insert("files", files_cmd_def)
        .insert("snapshot", snapshot_mgtm_cli())
        .insert("status", status_cmd_def)
        .insert("key", key::cli())
        .insert("mount", mount_cmd_def())
@ -2120,7 +1861,13 @@ fn main() {
        .insert("task", task_mgmt_cli())
        .insert("version", version_cmd_def)
        .insert("benchmark", benchmark_cmd_def)
        .insert("change-owner", change_owner_cmd_def);
        .insert("change-owner", change_owner_cmd_def)

        .alias(&["files"], &["snapshot", "files"])
        .alias(&["forget"], &["snapshot", "forget"])
        .alias(&["upload-log"], &["snapshot", "upload-log"])
        .alias(&["snapshots"], &["snapshot", "list"])
        ;

    let rpcenv = CliEnvironment::new();
    run_cli_command(cmd_def, rpcenv, Some(|future| {
@ -10,8 +10,6 @@ use proxmox_backup::tools;
use proxmox_backup::config;
use proxmox_backup::api2::{self, types::* };
use proxmox_backup::client::*;
use proxmox_backup::tools::ticket::Ticket;
use proxmox_backup::auth_helpers::*;

mod proxmox_backup_manager;
use proxmox_backup_manager::*;
@ -51,27 +49,6 @@ pub async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {
    Ok(())
}

fn connect() -> Result<HttpClient, Error> {

    let uid = nix::unistd::Uid::current();

    let mut options = HttpClientOptions::new()
        .prefix(Some("proxmox-backup".to_string()))
        .verify_cert(false); // not required for connection to localhost

    let client = if uid.is_root() {
        let ticket = Ticket::new("PBS", Userid::root_userid())?
            .sign(private_auth_key(), None)?;
        options = options.password(Some(ticket));
        HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
    } else {
        options = options.ticket_cache(true).interactive(true);
        HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
    };

    Ok(client)
}

#[api(
    input: {
        properties: {
@ -92,7 +69,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {

    let store = tools::required_string_param(&param, "store")?;

    let mut client = connect()?;
    let mut client = connect_to_localhost()?;

    let path = format!("api2/json/admin/datastore/{}/gc", store);

@ -123,7 +100,7 @@ async fn garbage_collection_status(param: Value) -> Result<Value, Error> {

    let store = tools::required_string_param(&param, "store")?;

    let client = connect()?;
    let client = connect_to_localhost()?;

    let path = format!("api2/json/admin/datastore/{}/gc", store);

@ -183,7 +160,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {

    let output_format = get_output_format(&param);

    let client = connect()?;
    let client = connect_to_localhost()?;

    let limit = param["limit"].as_u64().unwrap_or(50) as usize;
    let running = !param["all"].as_bool().unwrap_or(false);
@ -222,7 +199,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {

    let upid = tools::required_string_param(&param, "upid")?;

    let client = connect()?;
    let client = connect_to_localhost()?;

    display_task_log(client, upid, true).await?;

@ -243,7 +220,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {

    let upid_str = tools::required_string_param(&param, "upid")?;

    let mut client = connect()?;
    let mut client = connect_to_localhost()?;

    let path = format!("api2/json/nodes/localhost/tasks/{}", tools::percent_encode_component(upid_str));
    let _ = client.delete(&path, None).await?;
@ -302,7 +279,7 @@ async fn pull_datastore(

    let output_format = get_output_format(&param);

    let mut client = connect()?;
    let mut client = connect_to_localhost()?;

    let mut args = json!({
        "store": local_store,
@ -342,7 +319,7 @@ async fn verify(

    let output_format = get_output_format(&param);

    let mut client = connect()?;
    let mut client = connect_to_localhost()?;

    let args = json!({});

src/bin/proxmox-tape.rs (new file, 471 lines)
@ -0,0 +1,471 @@

use anyhow::{format_err, Error};
use serde_json::{json, Value};

use proxmox::{
    api::{
        api,
        cli::*,
        ApiHandler,
        RpcEnvironment,
        section_config::SectionConfigData,
    },
};

use proxmox_backup::{
    tools::format::render_epoch,
    server::{
        UPID,
        worker_is_active_local,
    },
    api2::{
        self,
        types::{
            DRIVE_ID_SCHEMA,
            MEDIA_LABEL_SCHEMA,
            MEDIA_POOL_NAME_SCHEMA,
        },
    },
    config::{
        self,
        drive::complete_drive_name,
        media_pool::complete_pool_name,
    },
    tape::{
        complete_media_changer_id,
    },
};

mod proxmox_tape;
use proxmox_tape::*;

// Note: local workers should print logs to stdout, so there is no need
// to fetch/display logs. We just wait for the worker to finish.
pub async fn wait_for_local_worker(upid_str: &str) -> Result<(), Error> {

    let upid: UPID = upid_str.parse()?;

    let sleep_duration = core::time::Duration::new(0, 100_000_000);

    loop {
        if worker_is_active_local(&upid) {
            tokio::time::delay_for(sleep_duration).await;
        } else {
            break;
        }
    }
    Ok(())
}

fn lookup_drive_name(
    param: &Value,
    config: &SectionConfigData,
) -> Result<String, Error> {

    let drive = param["drive"]
        .as_str()
        .map(String::from)
        .or_else(|| std::env::var("PROXMOX_TAPE_DRIVE").ok())
        .or_else(|| {

            let mut drive_names = Vec::new();

            for (name, (section_type, _)) in config.sections.iter() {

                if !(section_type == "linux" || section_type == "virtual") { continue; }
                drive_names.push(name);
            }

            if drive_names.len() == 1 {
                Some(drive_names[0].to_owned())
            } else {
                None
            }
        })
        .ok_or_else(|| format_err!("unable to get (default) drive name"))?;

    Ok(drive)
}
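`lookup_drive_name` resolves the drive in three steps: the explicit `drive` parameter, then the `PROXMOX_TAPE_DRIVE` environment variable, then, only if exactly one drive is configured, that single entry. The same `Option::or_else` chaining can be sketched in isolation; the types here are simplified stand-ins for the real `Value` and `SectionConfigData`:

```rust
// Illustrative stand-in for the fallback chain in lookup_drive_name:
// explicit parameter -> environment variable -> unique configured entry.
fn resolve_drive(
    param: Option<&str>,
    env: Option<&str>,
    configured: &[&str],
) -> Option<String> {
    param
        .map(String::from)
        .or_else(|| env.map(String::from))
        .or_else(|| {
            // Only fall back to the config when the choice is unambiguous.
            if configured.len() == 1 {
                Some(configured[0].to_string())
            } else {
                None
            }
        })
}

fn main() {
    // Explicit parameter always wins.
    assert_eq!(resolve_drive(Some("lto5"), None, &[]), Some("lto5".into()));
    // Environment variable is the next fallback.
    assert_eq!(resolve_drive(None, Some("env0"), &["a", "b"]), Some("env0".into()));
    // A single configured drive is used implicitly.
    assert_eq!(resolve_drive(None, None, &["only"]), Some("only".into()));
    // Multiple configured drives are ambiguous -> None (an error upstream).
    assert_eq!(resolve_drive(None, None, &["a", "b"]), None);
}
```

Each `or_else` closure only runs when the previous stage produced `None`, so the cheap checks happen first and the config scan is deferred until actually needed.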

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
                optional: true,
            },
            fast: {
                description: "Use fast erase.",
                type: bool,
                optional: true,
                default: true,
            },
        },
    },
)]
/// Erase media
async fn erase_media(
    mut param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    param["drive"] = lookup_drive_name(&param, &config)?.into();

    let info = &api2::tape::drive::API_METHOD_ERASE_MEDIA;

    let result = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    wait_for_local_worker(result.as_str().unwrap()).await?;

    Ok(())
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
                optional: true,
            },
        },
    },
)]
/// Rewind tape
async fn rewind(
    mut param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    param["drive"] = lookup_drive_name(&param, &config)?.into();

    let info = &api2::tape::drive::API_METHOD_REWIND;

    let result = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    wait_for_local_worker(result.as_str().unwrap()).await?;

    Ok(())
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
                optional: true,
            },
        },
    },
)]
/// Eject/Unload drive media
fn eject_media(
    mut param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    param["drive"] = lookup_drive_name(&param, &config)?.into();

    let info = &api2::tape::drive::API_METHOD_EJECT_MEDIA;

    match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    Ok(())
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
                optional: true,
            },
            "changer-id": {
                schema: MEDIA_LABEL_SCHEMA,
            },
        },
    },
)]
/// Load media
fn load_media(
    mut param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    param["drive"] = lookup_drive_name(&param, &config)?.into();

    let info = &api2::tape::drive::API_METHOD_LOAD_MEDIA;

    match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    Ok(())
}

#[api(
    input: {
        properties: {
            pool: {
                schema: MEDIA_POOL_NAME_SCHEMA,
                optional: true,
            },
            drive: {
                schema: DRIVE_ID_SCHEMA,
                optional: true,
            },
            "changer-id": {
                schema: MEDIA_LABEL_SCHEMA,
            },
        },
    },
)]
/// Label media
async fn label_media(
    mut param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    param["drive"] = lookup_drive_name(&param, &config)?.into();

    let info = &api2::tape::drive::API_METHOD_LABEL_MEDIA;

    let result = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    wait_for_local_worker(result.as_str().unwrap()).await?;

    Ok(())
}

#[api(
    input: {
        properties: {
            drive: {
                schema: DRIVE_ID_SCHEMA,
                optional: true,
            },
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        },
    },
)]
/// Read media label
fn read_label(
    mut param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    param["drive"] = lookup_drive_name(&param, &config)?.into();

    let output_format = get_output_format(&param);
    let info = &api2::tape::drive::API_METHOD_READ_LABEL;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("changer-id"))
        .column(ColumnConfig::new("uuid"))
        .column(ColumnConfig::new("ctime").renderer(render_epoch))
        .column(ColumnConfig::new("pool"))
        .column(ColumnConfig::new("media-set-uuid"))
        .column(ColumnConfig::new("media-set-ctime").renderer(render_epoch))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}

#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
            drive: {
                schema: DRIVE_ID_SCHEMA,
                optional: true,
            },
            "read-labels": {
                description: "Load unknown tapes and try read labels",
                type: bool,
                optional: true,
            },
            "read-all-labels": {
                description: "Load all tapes and try read labels (even if already inventoried)",
                type: bool,
                optional: true,
            },
        },
    },
)]
/// List (and update) media labels (Changer Inventory)
async fn inventory(
    read_labels: Option<bool>,
    read_all_labels: Option<bool>,
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);

    let (config, _digest) = config::drive::config()?;
    let drive = lookup_drive_name(&param, &config)?;

    let do_read = read_labels.unwrap_or(false) || read_all_labels.unwrap_or(false);

    if do_read {
        let mut param = json!({
            "drive": &drive,
        });
        if let Some(true) = read_all_labels {
            param["read-all-labels"] = true.into();
        }
        let info = &api2::tape::drive::API_METHOD_UPDATE_INVENTORY;
        let result = match info.handler {
            ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
            _ => unreachable!(),
        };
        wait_for_local_worker(result.as_str().unwrap()).await?;
    }

    let info = &api2::tape::drive::API_METHOD_INVENTORY;

    let param = json!({ "drive": &drive });
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("changer-id"))
        .column(ColumnConfig::new("uuid"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}

#[api(
    input: {
        properties: {
            pool: {
                schema: MEDIA_POOL_NAME_SCHEMA,
                optional: true,
            },
            drive: {
                schema: DRIVE_ID_SCHEMA,
                optional: true,
            },
        },
    },
)]
/// Label media with barcodes from changer device
async fn barcode_label_media(
    mut param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let (config, _digest) = config::drive::config()?;

    param["drive"] = lookup_drive_name(&param, &config)?.into();

    let info = &api2::tape::drive::API_METHOD_BARCODE_LABEL_MEDIA;

    let result = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    wait_for_local_worker(result.as_str().unwrap()).await?;

    Ok(())
}

fn main() {

    let cmd_def = CliCommandMap::new()
        .insert(
            "barcode-label",
            CliCommand::new(&API_METHOD_BARCODE_LABEL_MEDIA)
                .completion_cb("drive", complete_drive_name)
                .completion_cb("pool", complete_pool_name)
        )
        .insert(
            "rewind",
            CliCommand::new(&API_METHOD_REWIND)
                .completion_cb("drive", complete_drive_name)
        )
        .insert(
            "erase",
            CliCommand::new(&API_METHOD_ERASE_MEDIA)
                .completion_cb("drive", complete_drive_name)
        )
        .insert(
            "eject",
            CliCommand::new(&API_METHOD_EJECT_MEDIA)
                .completion_cb("drive", complete_drive_name)
        )
        .insert(
            "inventory",
            CliCommand::new(&API_METHOD_INVENTORY)
                .completion_cb("drive", complete_drive_name)
        )
        .insert(
            "read-label",
            CliCommand::new(&API_METHOD_READ_LABEL)
                .completion_cb("drive", complete_drive_name)
        )
        .insert(
            "label",
            CliCommand::new(&API_METHOD_LABEL_MEDIA)
                .completion_cb("drive", complete_drive_name)
                .completion_cb("pool", complete_pool_name)

        )
        .insert("changer", changer_commands())
        .insert("drive", drive_commands())
        .insert("pool", pool_commands())
        .insert(
            "load-media",
            CliCommand::new(&API_METHOD_LOAD_MEDIA)
                .arg_param(&["changer-id"])
                .completion_cb("drive", complete_drive_name)
                .completion_cb("changer-id", complete_media_changer_id)
        )
        ;

    let mut rpcenv = CliEnvironment::new();
    rpcenv.set_auth_id(Some(String::from("root@pam")));

    proxmox_backup::tools::runtime::main(run_async_cli_command(cmd_def, rpcenv));
}
@ -372,7 +372,6 @@ fn create_master_key() -> Result<(), Error> {
        },
        "output-format": {
            type: PaperkeyFormat,
            description: "Output format. Text or Html.",
            optional: true,
        },
    },

@ -8,6 +8,8 @@ mod task;
pub use task::*;
mod catalog;
pub use catalog::*;
mod snapshot;
pub use snapshot::*;

pub mod key;

@@ -1,22 +1,21 @@
-use std::path::PathBuf;
-use std::sync::Arc;
-use std::os::unix::io::RawFd;
-use std::path::Path;
-use std::ffi::OsStr;
+use std::collections::HashMap;
+use std::ffi::OsStr;
+use std::hash::BuildHasher;
+use std::os::unix::io::AsRawFd;
+use std::path::{Path, PathBuf};
+use std::sync::Arc;
 
 use anyhow::{bail, format_err, Error};
+use futures::future::FutureExt;
+use futures::select;
+use futures::stream::{StreamExt, TryStreamExt};
+use nix::unistd::{fork, ForkResult};
 use serde_json::Value;
 use tokio::signal::unix::{signal, SignalKind};
-use nix::unistd::{fork, ForkResult, pipe};
-use futures::select;
-use futures::future::FutureExt;
-use futures::stream::{StreamExt, TryStreamExt};
 
 use proxmox::{sortable, identity};
 use proxmox::api::{ApiHandler, ApiMethod, RpcEnvironment, schema::*, cli::*};
+use proxmox::tools::fd::Fd;
 
 use proxmox_backup::tools;
 use proxmox_backup::backup::{
@@ -143,24 +142,24 @@ fn mount(
 
     // Process should be deamonized.
     // Make sure to fork before the async runtime is instantiated to avoid troubles.
-    let pipe = pipe()?;
+    let (pr, pw) = proxmox_backup::tools::pipe()?;
     match unsafe { fork() } {
         Ok(ForkResult::Parent { .. }) => {
-            nix::unistd::close(pipe.1).unwrap();
+            drop(pw);
             // Blocks the parent process until we are ready to go in the child
-            let _res = nix::unistd::read(pipe.0, &mut [0]).unwrap();
+            let _res = nix::unistd::read(pr.as_raw_fd(), &mut [0]).unwrap();
             Ok(Value::Null)
         }
         Ok(ForkResult::Child) => {
-            nix::unistd::close(pipe.0).unwrap();
+            drop(pr);
             nix::unistd::setsid().unwrap();
-            proxmox_backup::tools::runtime::main(mount_do(param, Some(pipe.1)))
+            proxmox_backup::tools::runtime::main(mount_do(param, Some(pw)))
         }
         Err(_) => bail!("failed to daemonize process"),
     }
 }
 
-async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
+async fn mount_do(param: Value, pipe: Option<Fd>) -> Result<Value, Error> {
     let repo = extract_repository_from_value(&param)?;
     let archive_name = tools::required_string_param(&param, "archive-name")?;
     let client = connect(&repo)?;
@@ -235,8 +234,8 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
         }
         // Signal the parent process that we are done with the setup and it can
         // terminate.
-        nix::unistd::write(pipe, &[0u8])?;
-        nix::unistd::close(pipe).unwrap();
+        nix::unistd::write(pipe.as_raw_fd(), &[0u8])?;
+        let _: Fd = pipe;
     }
 
     Ok(())
src/bin/proxmox_backup_client/snapshot.rs (new file, 416 lines)
@@ -0,0 +1,416 @@
use std::sync::Arc;

use anyhow::Error;
use serde_json::{json, Value};

use proxmox::{
    api::{api, cli::*},
    tools::fs::file_get_contents,
};

use proxmox_backup::{
    tools,
    api2::types::*,
    backup::{
        CryptMode,
        CryptConfig,
        DataBlob,
        BackupGroup,
        decrypt_key,
    }
};

use crate::{
    REPO_URL_SCHEMA,
    KEYFILE_SCHEMA,
    KEYFD_SCHEMA,
    BackupDir,
    api_datastore_list_snapshots,
    complete_backup_snapshot,
    complete_backup_group,
    complete_repository,
    connect,
    extract_repository_from_value,
    record_repository,
    keyfile_parameters,
};

#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            group: {
                type: String,
                description: "Backup group.",
                optional: true,
            },
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        }
    }
)]
/// List backup snapshots.
async fn list_snapshots(param: Value) -> Result<Value, Error> {

    let repo = extract_repository_from_value(&param)?;

    let output_format = get_output_format(&param);

    let client = connect(&repo)?;

    let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
        Some(path.parse()?)
    } else {
        None
    };

    let mut data = api_datastore_list_snapshots(&client, repo.store(), group).await?;

    record_repository(&repo);

    let render_snapshot_path = |_v: &Value, record: &Value| -> Result<String, Error> {
        let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
        let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
        Ok(snapshot.relative_path().to_str().unwrap().to_owned())
    };

    let render_files = |_v: &Value, record: &Value| -> Result<String, Error> {
        let item: SnapshotListItem = serde_json::from_value(record.to_owned())?;
        let mut filenames = Vec::new();
        for file in &item.files {
            filenames.push(file.filename.to_string());
        }
        Ok(tools::format::render_backup_file_list(&filenames[..]))
    };

    let options = default_table_format_options()
        .sortby("backup-type", false)
        .sortby("backup-id", false)
        .sortby("backup-time", false)
        .column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
        .column(ColumnConfig::new("size").renderer(tools::format::render_bytes_human_readable))
        .column(ColumnConfig::new("files").renderer(render_files))
        ;

    let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOTS;

    format_and_print_result_full(&mut data, info, &output_format, &options);

    Ok(Value::Null)
}

#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            snapshot: {
                type: String,
                description: "Snapshot path.",
            },
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        }
    }
)]
/// List snapshot files.
async fn list_snapshot_files(param: Value) -> Result<Value, Error> {

    let repo = extract_repository_from_value(&param)?;

    let path = tools::required_string_param(&param, "snapshot")?;
    let snapshot: BackupDir = path.parse()?;

    let output_format = get_output_format(&param);

    let client = connect(&repo)?;

    let path = format!("api2/json/admin/datastore/{}/files", repo.store());

    let mut result = client.get(&path, Some(json!({
        "backup-type": snapshot.group().backup_type(),
        "backup-id": snapshot.group().backup_id(),
        "backup-time": snapshot.backup_time(),
    }))).await?;

    record_repository(&repo);

    let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_LIST_SNAPSHOT_FILES;

    let mut data: Value = result["data"].take();

    let options = default_table_format_options();

    format_and_print_result_full(&mut data, info, &output_format, &options);

    Ok(Value::Null)
}

#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            snapshot: {
                type: String,
                description: "Snapshot path.",
            },
        }
    }
)]
/// Forget (remove) backup snapshots.
async fn forget_snapshots(param: Value) -> Result<Value, Error> {

    let repo = extract_repository_from_value(&param)?;

    let path = tools::required_string_param(&param, "snapshot")?;
    let snapshot: BackupDir = path.parse()?;

    let mut client = connect(&repo)?;

    let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());

    let result = client.delete(&path, Some(json!({
        "backup-type": snapshot.group().backup_type(),
        "backup-id": snapshot.group().backup_id(),
        "backup-time": snapshot.backup_time(),
    }))).await?;

    record_repository(&repo);

    Ok(result)
}

#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            snapshot: {
                type: String,
                description: "Group/Snapshot path.",
            },
            logfile: {
                type: String,
                description: "The path to the log file you want to upload.",
            },
            keyfile: {
                schema: KEYFILE_SCHEMA,
                optional: true,
            },
            "keyfd": {
                schema: KEYFD_SCHEMA,
                optional: true,
            },
            "crypt-mode": {
                type: CryptMode,
                optional: true,
            },
        }
    }
)]
/// Upload backup log file.
async fn upload_log(param: Value) -> Result<Value, Error> {

    let logfile = tools::required_string_param(&param, "logfile")?;
    let repo = extract_repository_from_value(&param)?;

    let snapshot = tools::required_string_param(&param, "snapshot")?;
    let snapshot: BackupDir = snapshot.parse()?;

    let mut client = connect(&repo)?;

    let (keydata, crypt_mode) = keyfile_parameters(&param)?;

    let crypt_config = match keydata {
        None => None,
        Some(key) => {
            let (key, _created, _) = decrypt_key(&key, &crate::key::get_encryption_key_password)?;
            let crypt_config = CryptConfig::new(key)?;
            Some(Arc::new(crypt_config))
        }
    };

    let data = file_get_contents(logfile)?;

    // fixme: howto sign log?
    let blob = match crypt_mode {
        CryptMode::None | CryptMode::SignOnly => DataBlob::encode(&data, None, true)?,
        CryptMode::Encrypt => DataBlob::encode(&data, crypt_config.as_ref().map(Arc::as_ref), true)?,
    };

    let raw_data = blob.into_inner();

    let path = format!("api2/json/admin/datastore/{}/upload-backup-log", repo.store());

    let args = json!({
        "backup-type": snapshot.group().backup_type(),
        "backup-id": snapshot.group().backup_id(),
        "backup-time": snapshot.backup_time(),
    });

    let body = hyper::Body::from(raw_data);

    client.upload("application/octet-stream", body, &path, Some(args)).await
}

#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            snapshot: {
                type: String,
                description: "Snapshot path.",
            },
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        }
    }
)]
/// Show notes
async fn show_notes(param: Value) -> Result<Value, Error> {
    let repo = extract_repository_from_value(&param)?;
    let path = tools::required_string_param(&param, "snapshot")?;

    let snapshot: BackupDir = path.parse()?;
    let client = connect(&repo)?;

    let path = format!("api2/json/admin/datastore/{}/notes", repo.store());

    let args = json!({
        "backup-type": snapshot.group().backup_type(),
        "backup-id": snapshot.group().backup_id(),
        "backup-time": snapshot.backup_time(),
    });

    let output_format = get_output_format(&param);

    let mut result = client.get(&path, Some(args)).await?;

    let notes = result["data"].take();

    if output_format == "text" {
        if let Some(notes) = notes.as_str() {
            println!("{}", notes);
        }
    } else {
        format_and_print_result(
            &json!({
                "notes": notes,
            }),
            &output_format,
        );
    }

    Ok(Value::Null)
}

#[api(
    input: {
        properties: {
            repository: {
                schema: REPO_URL_SCHEMA,
                optional: true,
            },
            snapshot: {
                type: String,
                description: "Snapshot path.",
            },
            notes: {
                type: String,
                description: "The Notes.",
            },
        }
    }
)]
/// Update Notes
async fn update_notes(param: Value) -> Result<Value, Error> {
    let repo = extract_repository_from_value(&param)?;
    let path = tools::required_string_param(&param, "snapshot")?;
    let notes = tools::required_string_param(&param, "notes")?;

    let snapshot: BackupDir = path.parse()?;
    let mut client = connect(&repo)?;

    let path = format!("api2/json/admin/datastore/{}/notes", repo.store());

    let args = json!({
        "backup-type": snapshot.group().backup_type(),
        "backup-id": snapshot.group().backup_id(),
        "backup-time": snapshot.backup_time(),
        "notes": notes,
    });

    client.put(&path, Some(args)).await?;

    Ok(Value::Null)
}

fn notes_cli() -> CliCommandMap {
    CliCommandMap::new()
        .insert(
            "show",
            CliCommand::new(&API_METHOD_SHOW_NOTES)
                .arg_param(&["snapshot"])
                .completion_cb("snapshot", complete_backup_snapshot),
        )
        .insert(
            "update",
            CliCommand::new(&API_METHOD_UPDATE_NOTES)
                .arg_param(&["snapshot", "notes"])
                .completion_cb("snapshot", complete_backup_snapshot),
        )
}

pub fn snapshot_mgtm_cli() -> CliCommandMap {
    CliCommandMap::new()
        .insert("notes", notes_cli())
        .insert(
            "list",
            CliCommand::new(&API_METHOD_LIST_SNAPSHOTS)
                .arg_param(&["group"])
                .completion_cb("group", complete_backup_group)
                .completion_cb("repository", complete_repository)
        )
        .insert(
            "files",
            CliCommand::new(&API_METHOD_LIST_SNAPSHOT_FILES)
                .arg_param(&["snapshot"])
                .completion_cb("repository", complete_repository)
                .completion_cb("snapshot", complete_backup_snapshot)
        )
        .insert(
            "forget",
            CliCommand::new(&API_METHOD_FORGET_SNAPSHOTS)
                .arg_param(&["snapshot"])
                .completion_cb("repository", complete_repository)
                .completion_cb("snapshot", complete_backup_snapshot)
        )
        .insert(
            "upload-log",
            CliCommand::new(&API_METHOD_UPLOAD_LOG)
                .arg_param(&["snapshot", "logfile"])
                .completion_cb("snapshot", complete_backup_snapshot)
                .completion_cb("logfile", tools::complete_file_name)
                .completion_cb("keyfile", tools::complete_file_name)
                .completion_cb("repository", complete_repository)
        )
}
src/bin/proxmox_tape/changer.rs (new file, 219 lines)
@@ -0,0 +1,219 @@
use anyhow::{Error};
use serde_json::Value;

use proxmox::{
    api::{
        api,
        cli::*,
        RpcEnvironment,
        ApiHandler,
    },
};

use proxmox_backup::{
    api2::{
        self,
        types::{
            CHANGER_ID_SCHEMA,
        },
    },
    tape::{
        complete_changer_path,
    },
    config::{
        drive::{
            complete_drive_name,
            complete_changer_name,
        }
    },
};

pub fn changer_commands() -> CommandLineInterface {

    let cmd_def = CliCommandMap::new()
        .insert("scan", CliCommand::new(&API_METHOD_SCAN_FOR_CHANGERS))
        .insert("list", CliCommand::new(&API_METHOD_LIST_CHANGERS))
        .insert("config",
                CliCommand::new(&API_METHOD_GET_CONFIG)
                .arg_param(&["name"])
                .completion_cb("name", complete_changer_name)
        )
        .insert(
            "remove",
            CliCommand::new(&api2::config::changer::API_METHOD_DELETE_CHANGER)
                .arg_param(&["name"])
                .completion_cb("name", complete_changer_name)
        )
        .insert(
            "create",
            CliCommand::new(&api2::config::changer::API_METHOD_CREATE_CHANGER)
                .arg_param(&["name"])
                .completion_cb("name", complete_drive_name)
                .completion_cb("path", complete_changer_path)
        )
        .insert(
            "update",
            CliCommand::new(&api2::config::changer::API_METHOD_UPDATE_CHANGER)
                .arg_param(&["name"])
                .completion_cb("name", complete_changer_name)
                .completion_cb("path", complete_changer_path)
        )
        .insert("status",
                CliCommand::new(&API_METHOD_GET_STATUS)
                .arg_param(&["name"])
                .completion_cb("name", complete_changer_name)
        )
        .insert("transfer",
                CliCommand::new(&api2::tape::changer::API_METHOD_TRANSFER)
                .arg_param(&["name"])
                .completion_cb("name", complete_changer_name)
        )
        ;

    cmd_def.into()
}

#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        },
    },
)]
/// List changers
fn list_changers(
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);
    let info = &api2::config::changer::API_METHOD_LIST_CHANGERS;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("name"))
        .column(ColumnConfig::new("path"))
        .column(ColumnConfig::new("vendor"))
        .column(ColumnConfig::new("model"))
        .column(ColumnConfig::new("serial"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}

#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        },
    },
)]
/// Scan for SCSI tape changers
fn scan_for_changers(
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);
    let info = &api2::tape::changer::API_METHOD_SCAN_CHANGERS;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("path"))
        .column(ColumnConfig::new("vendor"))
        .column(ColumnConfig::new("model"))
        .column(ColumnConfig::new("serial"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}

#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
            name: {
                schema: CHANGER_ID_SCHEMA,
            },
        },
    },
)]
/// Get tape changer configuration
fn get_config(
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);
    let info = &api2::config::changer::API_METHOD_GET_CONFIG;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("name"))
        .column(ColumnConfig::new("path"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}

#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
            name: {
                schema: CHANGER_ID_SCHEMA,
            },
        },
    },
)]
/// Get tape changer status
fn get_status(
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);
    let info = &api2::tape::changer::API_METHOD_GET_STATUS;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("entry-kind"))
        .column(ColumnConfig::new("entry-id"))
        .column(ColumnConfig::new("changer-id"))
        .column(ColumnConfig::new("loaded-slot"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}
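The CLI wrappers above (list_changers, scan_for_changers, get_config, get_status) all follow one dispatch pattern: the proxmox `ApiMethod` stores its handler as an enum, and the wrapper matches on the `Sync` variant and calls it directly, treating any other variant as unreachable. A reduced sketch of that pattern, with local stand-in types (`ApiHandler`, `ApiMethod`, and the `i64` handler signature are simplifications, not the actual proxmox definitions):

```rust
// Local stand-ins mirroring the proxmox dispatch types in miniature.
type SyncFn = fn(i64) -> Result<i64, String>;

enum ApiHandler {
    Sync(SyncFn),
    Async, // placeholder for the async variant, unused in this sketch
}

struct ApiMethod {
    handler: ApiHandler,
}

fn double(v: i64) -> Result<i64, String> {
    Ok(v * 2)
}

fn main() {
    let info = ApiMethod { handler: ApiHandler::Sync(double) };

    // Same shape as `ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?`
    // in the diff: these config endpoints are always synchronous, so any other
    // variant is a programming error.
    let data = match info.handler {
        ApiHandler::Sync(handler) => handler(21).expect("handler failed"),
        _ => unreachable!(),
    };

    assert_eq!(data, 42);
    println!("dispatched result: {}", data);
}
```

Calling the handler in-process like this lets the CLI reuse the API implementation and then format the returned data locally with `format_and_print_result_full`, instead of going through HTTP.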
src/bin/proxmox_tape/drive.rs (new file, 188 lines)
@@ -0,0 +1,188 @@
use anyhow::Error;
use serde_json::Value;

use proxmox::{
    api::{
        api,
        cli::*,
        RpcEnvironment,
        ApiHandler,
    },
};

use proxmox_backup::{
    api2::{
        self,
        types::{
            DRIVE_ID_SCHEMA,
        },
    },
    tape::{
        complete_drive_path,
    },
    config::drive::{
        complete_drive_name,
        complete_changer_name,
        complete_linux_drive_name,
    },
};

pub fn drive_commands() -> CommandLineInterface {

    let cmd_def = CliCommandMap::new()
        .insert("scan", CliCommand::new(&API_METHOD_SCAN_FOR_DRIVES))
        .insert("list", CliCommand::new(&API_METHOD_LIST_DRIVES))
        .insert("config",
                CliCommand::new(&API_METHOD_GET_CONFIG)
                .arg_param(&["name"])
                .completion_cb("name", complete_linux_drive_name)
        )
        .insert(
            "remove",
            CliCommand::new(&api2::config::drive::API_METHOD_DELETE_DRIVE)
                .arg_param(&["name"])
                .completion_cb("name", complete_linux_drive_name)
        )
        .insert(
            "create",
            CliCommand::new(&api2::config::drive::API_METHOD_CREATE_DRIVE)
                .arg_param(&["name"])
                .completion_cb("name", complete_drive_name)
                .completion_cb("path", complete_drive_path)
                .completion_cb("changer", complete_changer_name)
        )
        .insert(
            "update",
            CliCommand::new(&api2::config::drive::API_METHOD_UPDATE_DRIVE)
                .arg_param(&["name"])
                .completion_cb("name", complete_linux_drive_name)
                .completion_cb("path", complete_drive_path)
                .completion_cb("changer", complete_changer_name)
        )
        .insert(
            "load",
            CliCommand::new(&api2::tape::drive::API_METHOD_LOAD_SLOT)
                .arg_param(&["drive"])
                .completion_cb("drive", complete_linux_drive_name)
        )
        .insert(
            "unload",
            CliCommand::new(&api2::tape::drive::API_METHOD_UNLOAD)
                .arg_param(&["drive"])
                .completion_cb("drive", complete_linux_drive_name)
        )
        ;

    cmd_def.into()
}

#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        },
    },
)]
/// List drives
fn list_drives(
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);
    let info = &api2::config::drive::API_METHOD_LIST_DRIVES;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("name"))
        .column(ColumnConfig::new("path"))
        .column(ColumnConfig::new("changer"))
        .column(ColumnConfig::new("vendor"))
        .column(ColumnConfig::new("model"))
        .column(ColumnConfig::new("serial"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}

#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        },
    }
)]
/// Scan for drives
fn scan_for_drives(
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);
    let info = &api2::tape::drive::API_METHOD_SCAN_DRIVES;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("path"))
        .column(ColumnConfig::new("vendor"))
        .column(ColumnConfig::new("model"))
        .column(ColumnConfig::new("serial"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}


#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
            name: {
                schema: DRIVE_ID_SCHEMA,
            },
        },
    },
)]
/// Get pool configuration
fn get_config(
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);
    let info = &api2::config::drive::API_METHOD_GET_CONFIG;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("name"))
        .column(ColumnConfig::new("path"))
        .column(ColumnConfig::new("changer"))
        .column(ColumnConfig::new("changer-drive-id"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}
src/bin/proxmox_tape/mod.rs (new file, 8 lines)
@@ -0,0 +1,8 @@
mod changer;
pub use changer::*;

mod drive;
pub use drive::*;

mod pool;
pub use pool::*;
src/bin/proxmox_tape/pool.rs (new file, 137 lines)
@@ -0,0 +1,137 @@
use anyhow::{Error};
use serde_json::Value;

use proxmox::{
    api::{
        api,
        cli::*,
        RpcEnvironment,
        ApiHandler,
    },
};

use proxmox_backup::{
    api2::{
        self,
        types::{
            MEDIA_POOL_NAME_SCHEMA,
        },
    },
    config::{
        drive::{
            complete_drive_name,
        },
        media_pool::{
            complete_pool_name,
        },
    },
};

pub fn pool_commands() -> CommandLineInterface {

    let cmd_def = CliCommandMap::new()
        .insert("list", CliCommand::new(&API_METHOD_LIST_POOLS))
        .insert("config",
                CliCommand::new(&API_METHOD_GET_CONFIG)
                .arg_param(&["name"])
                .completion_cb("name", complete_pool_name)
        )
        .insert(
            "remove",
            CliCommand::new(&api2::config::media_pool::API_METHOD_DELETE_POOL)
                .arg_param(&["name"])
                .completion_cb("name", complete_pool_name)
        )
        .insert(
            "create",
            CliCommand::new(&api2::config::media_pool::API_METHOD_CREATE_POOL)
                .arg_param(&["name"])
                .completion_cb("name", complete_pool_name)
                .completion_cb("drive", complete_drive_name)
        )
        .insert(
            "update",
            CliCommand::new(&api2::config::media_pool::API_METHOD_UPDATE_POOL)
                .arg_param(&["name"])
                .completion_cb("name", complete_pool_name)
                .completion_cb("drive", complete_drive_name)
        )
        ;

    cmd_def.into()
}

#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
        },
    },
)]
/// List media pool
fn list_pools(
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);
    let info = &api2::config::media_pool::API_METHOD_LIST_POOLS;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("name"))
        .column(ColumnConfig::new("drive"))
        .column(ColumnConfig::new("allocation"))
        .column(ColumnConfig::new("retention"))
        .column(ColumnConfig::new("template"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}

#[api(
    input: {
        properties: {
            "output-format": {
                schema: OUTPUT_FORMAT,
                optional: true,
            },
            name: {
                schema: MEDIA_POOL_NAME_SCHEMA,
            },
        },
    },
)]
/// Get media pool configuration
fn get_config(
    param: Value,
    rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {

    let output_format = get_output_format(&param);
    let info = &api2::config::media_pool::API_METHOD_GET_CONFIG;
    let mut data = match info.handler {
        ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
        _ => unreachable!(),
    };

    let options = default_table_format_options()
        .column(ColumnConfig::new("name"))
        .column(ColumnConfig::new("drive"))
        .column(ColumnConfig::new("allocation"))
        .column(ColumnConfig::new("retention"))
        .column(ColumnConfig::new("template"))
        ;

    format_and_print_result_full(&mut data, info.returns, &output_format, &options);

    Ok(())
}
@@ -3,6 +3,16 @@
 //! This library implements the client side to access the backups
 //! server using https.
 
+use anyhow::Error;
+
+use crate::{
+    api2::types::{Userid, Authid},
+    tools::ticket::Ticket,
+    auth_helpers::private_auth_key,
+};
+
+
 mod merge_known_chunks;
 pub mod pipe_to_stream;
 
@@ -31,3 +41,27 @@ mod backup_specification;
 pub use backup_specification::*;
 
 pub mod pull;
+
+/// Connect to localhost:8007 as root@pam
+///
+/// This automatically creates a ticket if run as 'root' user.
+pub fn connect_to_localhost() -> Result<HttpClient, Error> {
+
+    let uid = nix::unistd::Uid::current();
+
+    let mut options = HttpClientOptions::new()
+        .prefix(Some("proxmox-backup".to_string()))
+        .verify_cert(false); // not required for connection to localhost
+
+    let client = if uid.is_root() {
+        let ticket = Ticket::new("PBS", Userid::root_userid())?
+            .sign(private_auth_key(), None)?;
+        options = options.password(Some(ticket));
+        HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
+    } else {
+        options = options.ticket_cache(true).interactive(true);
+        HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
+    };
+
+    Ok(client)
+}
@ -395,7 +395,7 @@ pub async fn pull_group(
|
||||
tgt_store: Arc<DataStore>,
|
||||
group: &BackupGroup,
|
||||
delete: bool,
|
||||
progress: Option<(usize, usize)>, // (groups_done, group_count)
|
||||
progress: &mut StoreProgress,
|
||||
) -> Result<(), Error> {
|
||||
|
||||
let path = format!("api2/json/admin/datastore/{}/snapshots", src_repo.store());
|
||||
@@ -418,18 +418,10 @@ pub async fn pull_group(

     let mut remote_snapshots = std::collections::HashSet::new();

-    let (per_start, per_group) = if let Some((groups_done, group_count)) = progress {
-        let per_start = (groups_done as f64)/(group_count as f64);
-        let per_group = 1.0/(group_count as f64);
-        (per_start, per_group)
-    } else {
-        (0.0, 1.0)
-    };
-
     // start with 16384 chunks (up to 65GB)
     let downloaded_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*64)));

-    let snapshot_count = list.len();
+    progress.group_snapshots = list.len() as u64;

     for (pos, item) in list.into_iter().enumerate() {
         let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time)?;
@@ -469,9 +461,8 @@ pub async fn pull_group(

         let result = pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot, downloaded_chunks.clone()).await;

-        let percentage = (pos as f64)/(snapshot_count as f64);
-        let percentage = per_start + percentage*per_group;
-        worker.log(format!("percentage done: {:.2}%", percentage*100.0));
+        progress.done_snapshots = pos as u64 + 1;
+        worker.log(format!("percentage done: {}", progress));

         result?; // stop on error
     }
@@ -507,6 +498,8 @@ pub async fn pull_store(

     let mut list: Vec<GroupListItem> = serde_json::from_value(result["data"].take())?;

+    worker.log(format!("found {} groups to sync", list.len()));
+
     list.sort_unstable_by(|a, b| {
         let type_order = a.backup_type.cmp(&b.backup_type);
         if type_order == std::cmp::Ordering::Equal {
@@ -523,9 +516,13 @@ pub async fn pull_store(
         new_groups.insert(BackupGroup::new(&item.backup_type, &item.backup_id));
     }

-    let group_count = list.len();
+    let mut progress = StoreProgress::new(list.len() as u64);

-    for (groups_done, item) in list.into_iter().enumerate() {
+    for (done, item) in list.into_iter().enumerate() {
+        progress.done_groups = done as u64;
+        progress.done_snapshots = 0;
+        progress.group_snapshots = 0;
+
         let group = BackupGroup::new(&item.backup_type, &item.backup_id);

         let (owner, _lock_guard) = match tgt_store.create_locked_backup_group(&group, &auth_id) {
@@ -551,7 +548,7 @@ pub async fn pull_store(
                 tgt_store.clone(),
                 &group,
                 delete,
-                Some((groups_done, group_count)),
+                &mut progress,
             ).await {
                 worker.log(format!(
                     "sync group {}/{} failed - {}",
@@ -565,7 +562,7 @@ pub async fn pull_store(

     if delete {
         let result: Result<(), Error> = proxmox::try_block!({
-            let local_groups = BackupGroup::list_groups(&tgt_store.base_path())?;
+            let local_groups = BackupInfo::list_backup_groups(&tgt_store.base_path())?;
             for local_group in local_groups {
                 if new_groups.contains(&local_group) { continue; }
                 worker.log(format!("delete vanished group '{}/{}'", local_group.backup_type(), local_group.backup_id()));
@@ -24,6 +24,8 @@ pub mod sync;
 pub mod token_shadow;
 pub mod user;
 pub mod verify;
+pub mod drive;
+pub mod media_pool;

 /// Check configuration directory permissions
 ///
src/config/drive.rs (new file, 145 lines)
@@ -0,0 +1,145 @@
use std::collections::HashMap;

use anyhow::{bail, Error};
use lazy_static::lazy_static;

use proxmox::{
    api::{
        schema::*,
        section_config::{
            SectionConfig,
            SectionConfigData,
            SectionConfigPlugin,
        },
    },
    tools::fs::{
        open_file_locked,
        replace_file,
        CreateOptions,
    },
};

use crate::{
    api2::types::{
        DRIVE_ID_SCHEMA,
        VirtualTapeDrive,
        LinuxTapeDrive,
        ScsiTapeChanger,
    },
};

lazy_static! {
    pub static ref CONFIG: SectionConfig = init();
}

fn init() -> SectionConfig {
    let mut config = SectionConfig::new(&DRIVE_ID_SCHEMA);

    let obj_schema = match VirtualTapeDrive::API_SCHEMA {
        Schema::Object(ref obj_schema) => obj_schema,
        _ => unreachable!(),
    };
    let plugin = SectionConfigPlugin::new("virtual".to_string(), Some("name".to_string()), obj_schema);
    config.register_plugin(plugin);

    let obj_schema = match LinuxTapeDrive::API_SCHEMA {
        Schema::Object(ref obj_schema) => obj_schema,
        _ => unreachable!(),
    };
    let plugin = SectionConfigPlugin::new("linux".to_string(), Some("name".to_string()), obj_schema);
    config.register_plugin(plugin);

    let obj_schema = match ScsiTapeChanger::API_SCHEMA {
        Schema::Object(ref obj_schema) => obj_schema,
        _ => unreachable!(),
    };
    let plugin = SectionConfigPlugin::new("changer".to_string(), Some("name".to_string()), obj_schema);
    config.register_plugin(plugin);

    config
}

pub const DRIVE_CFG_FILENAME: &str = "/etc/proxmox-backup/tape.cfg";
pub const DRIVE_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.tape.lck";

pub fn lock() -> Result<std::fs::File, Error> {
    open_file_locked(DRIVE_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)
}

pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {

    let content = proxmox::tools::fs::file_read_optional_string(DRIVE_CFG_FILENAME)?;
    let content = content.unwrap_or(String::from(""));

    let digest = openssl::sha::sha256(content.as_bytes());
    let data = CONFIG.parse(DRIVE_CFG_FILENAME, &content)?;
    Ok((data, digest))
}

pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
    let raw = CONFIG.write(DRIVE_CFG_FILENAME, &config)?;

    let backup_user = crate::backup::backup_user()?;
    let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
    // set the correct owner/group/permissions while saving file
    // owner(rw) = root, group(r)= backup
    let options = CreateOptions::new()
        .perm(mode)
        .owner(nix::unistd::ROOT)
        .group(backup_user.gid);

    replace_file(DRIVE_CFG_FILENAME, raw.as_bytes(), options)?;

    Ok(())
}

pub fn check_drive_exists(config: &SectionConfigData, drive: &str) -> Result<(), Error> {
    match config.sections.get(drive) {
        Some((section_type, _)) => {
            if !(section_type == "linux" || section_type == "virtual") {
                bail!("Entry '{}' exists, but is not a tape drive", drive);
            }
        }
        None => bail!("Drive '{}' does not exist", drive),
    }
    Ok(())
}

// shell completion helper

/// List all drive names
pub fn complete_drive_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
    match config() {
        Ok((data, _digest)) => data.sections.iter()
            .map(|(id, _)| id.to_string())
            .collect(),
        Err(_) => return vec![],
    }
}

/// List Linux tape drives
pub fn complete_linux_drive_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
    match config() {
        Ok((data, _digest)) => data.sections.iter()
            .filter(|(_id, (section_type, _))| {
                section_type == "linux"
            })
            .map(|(id, _)| id.to_string())
            .collect(),
        Err(_) => return vec![],
    }
}

/// List Scsi tape changer names
pub fn complete_changer_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
    match config() {
        Ok((data, _digest)) => data.sections.iter()
            .filter(|(_id, (section_type, _))| {
                section_type == "changer"
            })
            .map(|(id, _)| id.to_string())
            .collect(),
        Err(_) => return vec![],
    }
}
src/config/media_pool.rs (new file, 88 lines)
@@ -0,0 +1,88 @@
use std::collections::HashMap;

use anyhow::Error;
use lazy_static::lazy_static;

use proxmox::{
    api::{
        schema::*,
        section_config::{
            SectionConfig,
            SectionConfigData,
            SectionConfigPlugin,
        }
    },
    tools::fs::{
        open_file_locked,
        replace_file,
        CreateOptions,
    },
};

use crate::{
    api2::types::{
        MEDIA_POOL_NAME_SCHEMA,
        MediaPoolConfig,
    },
};

lazy_static! {
    static ref CONFIG: SectionConfig = init();
}

fn init() -> SectionConfig {
    let mut config = SectionConfig::new(&MEDIA_POOL_NAME_SCHEMA);

    let obj_schema = match MediaPoolConfig::API_SCHEMA {
        Schema::Object(ref obj_schema) => obj_schema,
        _ => unreachable!(),
    };
    let plugin = SectionConfigPlugin::new("pool".to_string(), Some("name".to_string()), obj_schema);
    config.register_plugin(plugin);

    config
}

pub const MEDIA_POOL_CFG_FILENAME: &'static str = "/etc/proxmox-backup/media-pool.cfg";
pub const MEDIA_POOL_CFG_LOCKFILE: &'static str = "/etc/proxmox-backup/.media-pool.lck";

pub fn lock() -> Result<std::fs::File, Error> {
    open_file_locked(MEDIA_POOL_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)
}

pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {

    let content = proxmox::tools::fs::file_read_optional_string(MEDIA_POOL_CFG_FILENAME)?;
    let content = content.unwrap_or(String::from(""));

    let digest = openssl::sha::sha256(content.as_bytes());
    let data = CONFIG.parse(MEDIA_POOL_CFG_FILENAME, &content)?;
    Ok((data, digest))
}

pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
    let raw = CONFIG.write(MEDIA_POOL_CFG_FILENAME, &config)?;

    let backup_user = crate::backup::backup_user()?;
    let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
    // set the correct owner/group/permissions while saving file
    // owner(rw) = root, group(r)= backup
    let options = CreateOptions::new()
        .perm(mode)
        .owner(nix::unistd::ROOT)
        .group(backup_user.gid);

    replace_file(MEDIA_POOL_CFG_FILENAME, raw.as_bytes(), options)?;

    Ok(())
}

// shell completion helper

/// List existing pool names
pub fn complete_pool_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
    match config() {
        Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
        Err(_) => return vec![],
    }
}
@@ -1,14 +1,16 @@
 use std::path::Path;
 use std::process::Command;
 use std::collections::HashMap;
+use std::os::unix::io::{AsRawFd, FromRawFd};

 use anyhow::{Error, bail, format_err};
 use lazy_static::lazy_static;
-use nix::sys::socket::{socket, AddressFamily, SockType, SockFlag};
 use nix::ioctl_read_bad;
+use nix::sys::socket::{socket, AddressFamily, SockType, SockFlag};
 use regex::Regex;

 use proxmox::*; // for IP macros
+use proxmox::tools::fd::Fd;

 pub static IPV4_REVERSE_MASK: &[&str] = &[
     "0.0.0.0",
@@ -133,8 +135,24 @@ pub fn get_network_interfaces() -> Result<HashMap<String, bool>, Error> {

     let lines = raw.lines();

-    let sock = socket(AddressFamily::Inet, SockType::Datagram, SockFlag::empty(), None)
-        .or_else(|_| socket(AddressFamily::Inet6, SockType::Datagram, SockFlag::empty(), None))?;
+    let sock = unsafe {
+        Fd::from_raw_fd(
+            socket(
+                AddressFamily::Inet,
+                SockType::Datagram,
+                SockFlag::empty(),
+                None,
+            )
+            .or_else(|_| {
+                socket(
+                    AddressFamily::Inet6,
+                    SockType::Datagram,
+                    SockFlag::empty(),
+                    None,
+                )
+            })?,
+        )
+    };

     let mut interface_list = HashMap::new();

@@ -146,7 +164,7 @@ pub fn get_network_interfaces() -> Result<HashMap<String, bool>, Error> {
         for (i, b) in std::ffi::CString::new(ifname)?.as_bytes_with_nul().iter().enumerate() {
             if i < (libc::IFNAMSIZ-1) { req.ifr_name[i] = *b as libc::c_uchar; }
         }
-        let res = unsafe { get_interface_flags(sock, &mut req)? };
+        let res = unsafe { get_interface_flags(sock.as_raw_fd(), &mut req)? };
         if res != 0 {
             bail!("ioctl get_interface_flags for '{}' failed ({})", ifname, res);
         }
@@ -30,3 +30,5 @@ pub mod auth_helpers;
 pub mod auth;

 pub mod rrd;
+
+pub mod tape;
@@ -8,6 +8,7 @@ use nix::fcntl::OFlag;
 use nix::sys::stat::{mkdirat, Mode};

 use proxmox::sys::error::SysError;
+use proxmox::tools::fd::BorrowedFd;
 use pxar::Metadata;

 use crate::pxar::tools::{assert_single_path_component, perms_from_metadata};
@@ -35,7 +36,11 @@ impl PxarDir {
         }
     }

-    fn create_dir(&mut self, parent: RawFd, allow_existing_dirs: bool) -> Result<RawFd, Error> {
+    fn create_dir(
+        &mut self,
+        parent: RawFd,
+        allow_existing_dirs: bool,
+    ) -> Result<BorrowedFd, Error> {
         match mkdirat(
             parent,
             self.file_name.as_os_str(),
@@ -52,7 +57,7 @@ impl PxarDir {
         self.open_dir(parent)
     }

-    fn open_dir(&mut self, parent: RawFd) -> Result<RawFd, Error> {
+    fn open_dir(&mut self, parent: RawFd) -> Result<BorrowedFd, Error> {
         let dir = Dir::openat(
             parent,
             self.file_name.as_os_str(),
@@ -60,14 +65,14 @@ impl PxarDir {
             Mode::empty(),
         )?;

-        let fd = dir.as_raw_fd();
+        let fd = BorrowedFd::new(&dir);
         self.dir = Some(dir);

         Ok(fd)
     }

-    pub fn try_as_raw_fd(&self) -> Option<RawFd> {
-        self.dir.as_ref().map(AsRawFd::as_raw_fd)
+    pub fn try_as_borrowed_fd(&self) -> Option<BorrowedFd> {
+        self.dir.as_ref().map(BorrowedFd::new)
     }

     pub fn metadata(&self) -> &Metadata {
@@ -119,32 +124,39 @@ impl PxarDirStack {
         Ok(out)
     }

-    pub fn last_dir_fd(&mut self, allow_existing_dirs: bool) -> Result<RawFd, Error> {
+    pub fn last_dir_fd(&mut self, allow_existing_dirs: bool) -> Result<BorrowedFd, Error> {
         // should not be possible given the way we use it:
         assert!(!self.dirs.is_empty(), "PxarDirStack underrun");

+        let dirs_len = self.dirs.len();
         let mut fd = self.dirs[self.created - 1]
-            .try_as_raw_fd()
-            .ok_or_else(|| format_err!("lost track of directory file descriptors"))?;
-        while self.created < self.dirs.len() {
-            fd = self.dirs[self.created].create_dir(fd, allow_existing_dirs)?;
+            .try_as_borrowed_fd()
+            .ok_or_else(|| format_err!("lost track of directory file descriptors"))?
+            .as_raw_fd();
+
+        while self.created < dirs_len {
+            fd = self.dirs[self.created]
+                .create_dir(fd, allow_existing_dirs)?
+                .as_raw_fd();
             self.created += 1;
         }

-        Ok(fd)
+        self.dirs[self.created - 1]
+            .try_as_borrowed_fd()
+            .ok_or_else(|| format_err!("lost track of directory file descriptors"))
     }

     pub fn create_last_dir(&mut self, allow_existing_dirs: bool) -> Result<(), Error> {
-        let _: RawFd = self.last_dir_fd(allow_existing_dirs)?;
+        let _: BorrowedFd = self.last_dir_fd(allow_existing_dirs)?;
         Ok(())
     }

-    pub fn root_dir_fd(&self) -> Result<RawFd, Error> {
+    pub fn root_dir_fd(&self) -> Result<BorrowedFd, Error> {
         // should not be possible given the way we use it:
         assert!(!self.dirs.is_empty(), "PxarDirStack underrun");

         self.dirs[0]
-            .try_as_raw_fd()
+            .try_as_borrowed_fd()
             .ok_or_else(|| format_err!("lost track of directory file descriptors"))
     }
 }
@@ -277,11 +277,11 @@ impl Extractor {
             .map_err(|err| format_err!("unexpected end of directory entry: {}", err))?
             .ok_or_else(|| format_err!("broken pxar archive (directory stack underrun)"))?;

-        if let Some(fd) = dir.try_as_raw_fd() {
+        if let Some(fd) = dir.try_as_borrowed_fd() {
             metadata::apply(
                 self.feature_flags,
                 dir.metadata(),
-                fd,
+                fd.as_raw_fd(),
                 &CString::new(dir.file_name().as_bytes())?,
                 &mut self.on_error,
             )
@@ -298,6 +298,7 @@ impl Extractor {
     fn parent_fd(&mut self) -> Result<RawFd, Error> {
         self.dir_stack
             .last_dir_fd(self.allow_existing_dirs)
+            .map(|d| d.as_raw_fd())
             .map_err(|err| format_err!("failed to get parent directory file descriptor: {}", err))
     }

@@ -325,7 +326,7 @@ impl Extractor {
         let root = self.dir_stack.root_dir_fd()?;
         let target = CString::new(link.as_bytes())?;
         nix::unistd::linkat(
-            Some(root),
+            Some(root.as_raw_fd()),
             target.as_c_str(),
             Some(parent),
             file_name,
@@ -4,7 +4,7 @@ use proxmox::try_block;

 use crate::{
     api2::types::*,
-    backup::{compute_prune_info, BackupGroup, DataStore, PruneOptions},
+    backup::{compute_prune_info, BackupInfo, DataStore, PruneOptions},
     server::jobstate::Job,
     server::WorkerTask,
     task_log,
@@ -43,7 +43,7 @@ pub fn do_prune_job(

     let base_path = datastore.base_path();

-    let groups = BackupGroup::list_groups(&base_path)?;
+    let groups = BackupInfo::list_backup_groups(&base_path)?;
     for group in groups {
         let list = group.list_backups(&base_path)?;
         let mut prune_info = compute_prune_info(list, &prune_options)?;
@@ -347,6 +347,9 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
     let mut finish_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN)?;
     let had_index_file = !finish_list.is_empty();

+    // We use filter_map because one negative case wants to *move* the data into `finish_list`,
+    // clippy doesn't quite catch this!
+    #[allow(clippy::unnecessary_filter_map)]
     let mut active_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?
         .into_iter()
         .filter_map(|info| {
@@ -601,7 +604,7 @@ impl WorkerTask {
         path.push(upid.to_string());

         let logger_options = FileLogOptions {
-            to_stdout: to_stdout,
+            to_stdout,
             exclusive: true,
             prefix_time: true,
             read: true,
src/tape/changer/email.rs (new file, 57 lines)
@@ -0,0 +1,57 @@
use anyhow::Error;

use proxmox::tools::email::sendmail;

use super::MediaChange;

/// Send email to a person to request a manual media change
pub struct ChangeMediaEmail {
    drive: String,
    to: String,
}

impl ChangeMediaEmail {

    pub fn new(drive: &str, to: &str) -> Self {
        Self {
            drive: String::from(drive),
            to: String::from(to),
        }
    }
}

impl MediaChange for ChangeMediaEmail {

    fn load_media(&mut self, changer_id: &str) -> Result<(), Error> {

        let subject = format!("Load Media '{}' request for drive '{}'", changer_id, self.drive);

        let mut text = String::new();

        text.push_str("Please insert the requested media into the backup drive.\n\n");

        text.push_str(&format!("Drive: {}\n", self.drive));
        text.push_str(&format!("Media: {}\n", changer_id));

        sendmail(
            &[&self.to],
            &subject,
            Some(&text),
            None,
            None,
            None,
        )?;

        Ok(())
    }

    fn unload_media(&mut self) -> Result<(), Error> {
        /* ignore ? */
        Ok(())
    }

    fn list_media_changer_ids(&self) -> Result<Vec<String>, Error> {
        Ok(Vec::new())
    }
}
src/tape/changer/linux_tape.rs (new file, 145 lines)
@@ -0,0 +1,145 @@
use anyhow::{bail, Error};

use crate::{
    tape::changer::{
        MediaChange,
        MtxStatus,
        ElementStatus,
        mtx_status,
        mtx_load,
        mtx_unload,
    },
    api2::types::{
        ScsiTapeChanger,
        LinuxTapeDrive,
    },
};

fn unload_to_free_slot(drive_name: &str, path: &str, status: &MtxStatus, drivenum: u64) -> Result<(), Error> {

    if drivenum >= status.drives.len() as u64 {
        bail!("unload drive '{}' got unexpected drive number '{}' - changer only has '{}' drives",
              drive_name, drivenum, status.drives.len());
    }
    let drive_status = &status.drives[drivenum as usize];
    if let Some(slot) = drive_status.loaded_slot {
        mtx_unload(path, slot, drivenum)
    } else {
        let mut free_slot = None;
        for i in 0..status.slots.len() {
            if let ElementStatus::Empty = status.slots[i] {
                free_slot = Some((i+1) as u64);
                break;
            }
        }
        if let Some(slot) = free_slot {
            mtx_unload(path, slot, drivenum)
        } else {
            bail!("drive '{}' unload failure - no free slot", drive_name);
        }
    }
}

impl MediaChange for LinuxTapeDrive {

    fn load_media(&mut self, changer_id: &str) -> Result<(), Error> {

        if changer_id.starts_with("CLN") {
            bail!("unable to load media '{}' (seems to be a cleaning unit)", changer_id);
        }

        let (config, _digest) = crate::config::drive::config()?;

        let changer: ScsiTapeChanger = match self.changer {
            Some(ref changer) => config.lookup("changer", changer)?,
            None => bail!("drive '{}' has no associated changer", self.name),
        };

        let status = mtx_status(&changer.path)?;

        let drivenum = self.changer_drive_id.unwrap_or(0);

        // already loaded?
        for (i, drive_status) in status.drives.iter().enumerate() {
            if let ElementStatus::VolumeTag(ref tag) = drive_status.status {
                if *tag == changer_id {
                    if i as u64 != drivenum {
                        bail!("unable to load media '{}' - media in wrong drive ({} != {})",
                              changer_id, i, drivenum);
                    }
                    return Ok(())
                }
            }
            if i as u64 == drivenum {
                match drive_status.status {
                    ElementStatus::Empty => { /* OK */ },
                    _ => unload_to_free_slot(&self.name, &changer.path, &status, drivenum as u64)?,
                }
            }
        }

        let mut slot = None;
        for (i, element_status) in status.slots.iter().enumerate() {
            if let ElementStatus::VolumeTag(tag) = element_status {
                if *tag == changer_id {
                    slot = Some(i+1);
                    break;
                }
            }
        }

        let slot = match slot {
            None => bail!("unable to find media '{}' (offline?)", changer_id),
            Some(slot) => slot,
        };

        mtx_load(&changer.path, slot as u64, drivenum as u64)
    }

    fn unload_media(&mut self) -> Result<(), Error> {
        let (config, _digest) = crate::config::drive::config()?;

        let changer: ScsiTapeChanger = match self.changer {
            Some(ref changer) => config.lookup("changer", changer)?,
            None => return Ok(()),
        };

        let drivenum = self.changer_drive_id.unwrap_or(0);

        let status = mtx_status(&changer.path)?;

        unload_to_free_slot(&self.name, &changer.path, &status, drivenum)
    }

    fn eject_on_unload(&self) -> bool {
        true
    }

    fn list_media_changer_ids(&self) -> Result<Vec<String>, Error> {
        let (config, _digest) = crate::config::drive::config()?;

        let changer: ScsiTapeChanger = match self.changer {
            Some(ref changer) => config.lookup("changer", changer)?,
            None => return Ok(Vec::new()),
        };

        let status = mtx_status(&changer.path)?;

        let mut list = Vec::new();

        for drive_status in status.drives.iter() {
            if let ElementStatus::VolumeTag(ref tag) = drive_status.status {
                list.push(tag.clone());
            }
        }

        for element_status in status.slots.iter() {
            if let ElementStatus::VolumeTag(ref tag) = element_status {
                list.push(tag.clone());
            }
        }

        Ok(list)
    }
}
src/tape/changer/mod.rs (new file, 35 lines)
@@ -0,0 +1,35 @@
mod email;
pub use email::*;

mod parse_mtx_status;
pub use parse_mtx_status::*;

mod mtx_wrapper;
pub use mtx_wrapper::*;

mod linux_tape;
pub use linux_tape::*;

use anyhow::Error;

/// Interface to media change devices
pub trait MediaChange {

    /// Load media into drive
    ///
    /// This unloads first if the drive is already loaded with another media.
    fn load_media(&mut self, changer_id: &str) -> Result<(), Error>;

    /// Unload media from drive
    ///
    /// This is a nop on drives without autoloader.
    fn unload_media(&mut self) -> Result<(), Error>;

    /// Returns true if unload_media automatically ejects drive media
    fn eject_on_unload(&self) -> bool {
        false
    }

    /// List media changer IDs (barcodes)
    fn list_media_changer_ids(&self) -> Result<Vec<String>, Error>;
}
src/tape/changer/mtx_wrapper.rs (new file, 99 lines)
@@ -0,0 +1,99 @@
use std::collections::HashSet;

use anyhow::Error;

use proxmox::tools::Uuid;

use crate::{
    tools::run_command,
    tape::{
        Inventory,
        changer::{
            MtxStatus,
            ElementStatus,
            parse_mtx_status,
        },
    },
};

/// Run 'mtx status' and return parsed result.
pub fn mtx_status(path: &str) -> Result<MtxStatus, Error> {

    let mut command = std::process::Command::new("mtx");
    command.args(&["-f", path, "status"]);

    let output = run_command(command, None)?;

    let status = parse_mtx_status(&output)?;

    Ok(status)
}

/// Run 'mtx load'
pub fn mtx_load(
    path: &str,
    slot: u64,
    drivenum: u64,
) -> Result<(), Error> {

    let mut command = std::process::Command::new("mtx");
    command.args(&["-f", path, "load", &slot.to_string(), &drivenum.to_string()]);
    run_command(command, None)?;

    Ok(())
}

/// Run 'mtx unload'
pub fn mtx_unload(
    path: &str,
    slot: u64,
    drivenum: u64,
) -> Result<(), Error> {

    let mut command = std::process::Command::new("mtx");
    command.args(&["-f", path, "unload", &slot.to_string(), &drivenum.to_string()]);
    run_command(command, None)?;

    Ok(())
}

/// Run 'mtx transfer'
pub fn mtx_transfer(
    path: &str,
    from_slot: u64,
    to_slot: u64,
) -> Result<(), Error> {

    let mut command = std::process::Command::new("mtx");
    command.args(&["-f", path, "transfer", &from_slot.to_string(), &to_slot.to_string()]);

    run_command(command, None)?;

    Ok(())
}

/// Extract the list of online media from MtxStatus
///
/// Returns a HashSet containing all found media Uuid
pub fn mtx_status_to_online_set(status: &MtxStatus, inventory: &Inventory) -> HashSet<Uuid> {

    let mut online_set = HashSet::new();

    for drive_status in status.drives.iter() {
        if let ElementStatus::VolumeTag(ref changer_id) = drive_status.status {
            if let Some(media_id) = inventory.find_media_by_changer_id(changer_id) {
                online_set.insert(media_id.label.uuid.clone());
            }
        }
    }

    for slot_status in status.slots.iter() {
        if let ElementStatus::VolumeTag(ref changer_id) = slot_status {
            if let Some(media_id) = inventory.find_media_by_changer_id(changer_id) {
                online_set.insert(media_id.label.uuid.clone());
            }
        }
    }

    online_set
}
src/tape/changer/parse_mtx_status.rs (new file, 161 lines)
@@ -0,0 +1,161 @@
use anyhow::Error;

use nom::{
    bytes::complete::{take_while, tag},
};

use crate::tools::nom::{
    parse_complete, multispace0, multispace1, parse_u64,
    parse_failure, parse_error, IResult,
};

pub enum ElementStatus {
    Empty,
    Full,
    VolumeTag(String),
}

pub struct DriveStatus {
    pub loaded_slot: Option<u64>,
    pub status: ElementStatus,
}

pub struct MtxStatus {
    pub drives: Vec<DriveStatus>,
    pub slots: Vec<ElementStatus>,
}

// Recognizes one line
fn next_line(i: &str) -> IResult<&str, &str> {
    let (i, line) = take_while(|c| (c != '\n'))(i)?;
    if i.is_empty() {
        Ok((i, line))
    } else {
        Ok((&i[1..], line))
    }
}

fn parse_storage_changer(i: &str) -> IResult<&str, ()> {

    let (i, _) = multispace0(i)?;
    let (i, _) = tag("Storage Changer")(i)?;
    let (i, _) = next_line(i)?; // skip

    Ok((i, ()))
}

fn parse_drive_status(i: &str) -> IResult<&str, DriveStatus> {

    let mut loaded_slot = None;

    if i.starts_with("Empty") {
        return Ok((&i[5..], DriveStatus { loaded_slot, status: ElementStatus::Empty }));
    }
    let (mut i, _) = tag("Full (")(i)?;

    if i.starts_with("Storage Element ") {
        let n = &i[16..];
        let (n, id) = parse_u64(n)?;
        loaded_slot = Some(id);
        let (n, _) = tag(" Loaded")(n)?;
        i = n;
    } else {
        let (n, _) = take_while(|c| !(c == ')' || c == '\n'))(i)?; // skip to ')'
        i = n;
    }

    let (i, _) = tag(")")(i)?;

    if i.starts_with(":VolumeTag = ") {
        let i = &i[13..];
        let (i, tag) = take_while(|c| !(c == ' ' || c == ':' || c == '\n'))(i)?;
        let (i, _) = take_while(|c| c != '\n')(i)?; // skip to eol
        return Ok((i, DriveStatus { loaded_slot, status: ElementStatus::VolumeTag(tag.to_string()) }));
    }

    let (i, _) = take_while(|c| c != '\n')(i)?; // skip

    Ok((i, DriveStatus { loaded_slot, status: ElementStatus::Full }))
}

fn parse_slot_status(i: &str) -> IResult<&str, ElementStatus> {
    if i.starts_with("Empty") {
        return Ok((&i[5..], ElementStatus::Empty));
    }
    if i.starts_with("Full ") {
        let mut n = &i[5..];

        if n.starts_with(":VolumeTag=") {
            n = &n[11..];
            let (n, tag) = take_while(|c| !(c == ' ' || c == ':' || c == '\n'))(n)?;
            let (n, _) = take_while(|c| c != '\n')(n)?; // skip to eol
            return Ok((n, ElementStatus::VolumeTag(tag.to_string())));
        }
        let (n, _) = take_while(|c| c != '\n')(n)?; // skip

        return Ok((n, ElementStatus::Full));
    }

    Err(parse_error(i, "unexpected element status"))
}

fn parse_data_transfer_element(i: &str) -> IResult<&str, (u64, DriveStatus)> {

    let (i, _) = tag("Data Transfer Element")(i)?;
    let (i, _) = multispace1(i)?;
    let (i, id) = parse_u64(i)?;
    let (i, _) = nom::character::complete::char(':')(i)?;
    let (i, element_status) = parse_drive_status(i)?;
    let (i, _) = nom::character::complete::newline(i)?;

    Ok((i, (id, element_status)))
}

fn parse_storage_element(i: &str) -> IResult<&str, (u64, ElementStatus)> {

    let (i, _) = multispace1(i)?;
    let (i, _) = tag("Storage Element")(i)?;
    let (i, _) = multispace1(i)?;
    let (i, id) = parse_u64(i)?;
    let (i, _) = nom::character::complete::char(':')(i)?;
    let (i, element_status) = parse_slot_status(i)?;
    let (i, _) = nom::character::complete::newline(i)?;

    Ok((i, (id, element_status)))
}

fn parse_status(i: &str) -> IResult<&str, MtxStatus> {

    let (mut i, _) = parse_storage_changer(i)?;

    let mut drives = Vec::new();
    while let Ok((n, (id, drive_status))) = parse_data_transfer_element(i) {
        if id != drives.len() as u64 {
            return Err(parse_failure(i, "unexpected drive number"));
        }
        i = n;
        drives.push(drive_status);
    }

    let mut slots = Vec::new();
    while let Ok((n, (id, element_status))) = parse_storage_element(i) {
        if id != (slots.len() as u64 + 1) {
            return Err(parse_failure(i, "unexpected slot number"));
        }
        i = n;
        slots.push(element_status);
    }

    let status = MtxStatus { drives, slots };

    Ok((i, status))
}

/// Parses the output from 'mtx status'
pub fn parse_mtx_status(i: &str) -> Result<MtxStatus, Error> {

    let status = parse_complete("mtx status", i, parse_status)?;

    Ok(status)
}
232
src/tape/drive/linux_list_drives.rs
Normal file
@ -0,0 +1,232 @@
use std::path::{Path, PathBuf};
use std::collections::HashMap;

use anyhow::{bail, Error};

use crate::{
    api2::types::{
        DeviceKind,
        TapeDeviceInfo,
    },
    tools::fs::scan_subdir,
};

/// List linux tape changer devices
pub fn linux_tape_changer_list() -> Vec<TapeDeviceInfo> {

    lazy_static::lazy_static!{
        static ref SCSI_GENERIC_NAME_REGEX: regex::Regex =
            regex::Regex::new(r"^sg\d+$").unwrap();
    }

    let mut list = Vec::new();

    let dir_iter = match scan_subdir(
        libc::AT_FDCWD,
        "/sys/class/scsi_generic",
        &SCSI_GENERIC_NAME_REGEX)
    {
        Err(_) => return list,
        Ok(iter) => iter,
    };

    for item in dir_iter {
        let item = match item {
            Err(_) => continue,
            Ok(item) => item,
        };

        let name = item.file_name().to_str().unwrap().to_string();

        let mut sys_path = PathBuf::from("/sys/class/scsi_generic");
        sys_path.push(&name);

        let device = match udev::Device::from_syspath(&sys_path) {
            Err(_) => continue,
            Ok(device) => device,
        };

        let devnum = match device.devnum() {
            None => continue,
            Some(devnum) => devnum,
        };

        let parent = match device.parent() {
            None => continue,
            Some(parent) => parent,
        };

        match parent.attribute_value("type") {
            Some(type_osstr) => {
                if type_osstr != "8" {
                    continue;
                }
            }
            _ => { continue; }
        }

        // let mut test_path = sys_path.clone();
        // test_path.push("device/scsi_changer");
        // if !test_path.exists() { continue; }

        let _dev_path = match device.devnode().map(Path::to_owned) {
            None => continue,
            Some(dev_path) => dev_path,
        };

        let serial = match device.property_value("ID_SCSI_SERIAL")
            .map(std::ffi::OsString::from)
            .and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
        {
            None => continue,
            Some(serial) => serial,
        };

        let vendor = device.property_value("ID_VENDOR")
            .map(std::ffi::OsString::from)
            .and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
            .unwrap_or(String::from("unknown"));

        let model = device.property_value("ID_MODEL")
            .map(std::ffi::OsString::from)
            .and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
            .unwrap_or(String::from("unknown"));

        let dev_path = format!("/dev/tape/by-id/scsi-{}", serial);

        if PathBuf::from(&dev_path).exists() {
            list.push(TapeDeviceInfo {
                kind: DeviceKind::Changer,
                path: dev_path,
                serial,
                vendor,
                model,
                major: unsafe { libc::major(devnum) },
                minor: unsafe { libc::minor(devnum) },
            });
        }
    }

    list
}

/// List linux tape devices (non-rewinding)
pub fn linux_tape_device_list() -> Vec<TapeDeviceInfo> {

    lazy_static::lazy_static!{
        static ref NST_TAPE_NAME_REGEX: regex::Regex =
            regex::Regex::new(r"^nst\d+$").unwrap();
    }

    let mut list = Vec::new();

    let dir_iter = match scan_subdir(
        libc::AT_FDCWD,
        "/sys/class/scsi_tape",
        &NST_TAPE_NAME_REGEX)
    {
        Err(_) => return list,
        Ok(iter) => iter,
    };

    for item in dir_iter {
        let item = match item {
            Err(_) => continue,
            Ok(item) => item,
        };

        let name = item.file_name().to_str().unwrap().to_string();

        let mut sys_path = PathBuf::from("/sys/class/scsi_tape");
        sys_path.push(&name);

        let device = match udev::Device::from_syspath(&sys_path) {
            Err(_) => continue,
            Ok(device) => device,
        };

        let devnum = match device.devnum() {
            None => continue,
            Some(devnum) => devnum,
        };

        let _dev_path = match device.devnode().map(Path::to_owned) {
            None => continue,
            Some(dev_path) => dev_path,
        };

        let serial = match device.property_value("ID_SCSI_SERIAL")
            .map(std::ffi::OsString::from)
            .and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
        {
            None => continue,
            Some(serial) => serial,
        };

        let vendor = device.property_value("ID_VENDOR")
            .map(std::ffi::OsString::from)
            .and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
            .unwrap_or(String::from("unknown"));

        let model = device.property_value("ID_MODEL")
            .map(std::ffi::OsString::from)
            .and_then(|s| if let Ok(s) = s.into_string() { Some(s) } else { None })
            .unwrap_or(String::from("unknown"));

        let dev_path = format!("/dev/tape/by-id/scsi-{}-nst", serial);

        if PathBuf::from(&dev_path).exists() {
            list.push(TapeDeviceInfo {
                kind: DeviceKind::Tape,
                path: dev_path,
                serial,
                vendor,
                model,
                major: unsafe { libc::major(devnum) },
                minor: unsafe { libc::minor(devnum) },
            });
        }
    }

    list
}

/// Lookup the tape device info for a given device path
pub fn lookup_drive<'a>(
    drives: &'a [TapeDeviceInfo],
    path: &str,
) -> Option<&'a TapeDeviceInfo> {

    if let Ok(stat) = nix::sys::stat::stat(path) {

        let major = unsafe { libc::major(stat.st_rdev) };
        let minor = unsafe { libc::minor(stat.st_rdev) };

        drives.iter().find(|d| d.major == major && d.minor == minor)
    } else {
        None
    }
}

/// Make sure path is a linux tape device
pub fn check_drive_path(
    drives: &[TapeDeviceInfo],
    path: &str,
) -> Result<(), Error> {
    if lookup_drive(drives, path).is_none() {
        bail!("path '{}' is not a linux (non-rewinding) tape device", path);
    }
    Ok(())
}

// shell completion helper

/// List changer device paths
pub fn complete_changer_path(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
    linux_tape_changer_list().iter().map(|v| v.path.clone()).collect()
}

/// List tape device paths
pub fn complete_drive_path(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
    linux_tape_device_list().iter().map(|v| v.path.clone()).collect()
}
153
src/tape/drive/linux_mtio.rs
Normal file
@ -0,0 +1,153 @@
//! Linux Magnetic Tape Driver ioctl definitions
//!
//! from: /usr/include/x86_64-linux-gnu/sys/mtio.h
//!
//! also see: man 4 st

#[repr(C)]
pub struct mtop {
    pub mt_op: MTCmd,          /* Operations defined below. */
    pub mt_count: libc::c_int, /* How many of them. */
}

#[repr(i16)]
#[allow(dead_code)] // do not warn about unused commands
pub enum MTCmd {
    MTRESET = 0,         /* +reset drive in case of problems */
    MTFSF = 1,           /* forward space over FileMark,
                          * position at first record of next file */
    MTBSF = 2,           /* backward space FileMark (position before FM) */
    MTFSR = 3,           /* forward space record */
    MTBSR = 4,           /* backward space record */
    MTWEOF = 5,          /* write an end-of-file record (mark) */
    MTREW = 6,           /* rewind */
    MTOFFL = 7,          /* rewind and put the drive offline (eject?) */
    MTNOP = 8,           /* no op, set status only (read with MTIOCGET) */
    MTRETEN = 9,         /* retension tape */
    MTBSFM = 10,         /* +backward space FileMark, position at FM */
    MTFSFM = 11,         /* +forward space FileMark, position at FM */
    MTEOM = 12,          /* goto end of recorded media (for appending files).
                          * MTEOM positions after the last FM, ready for
                          * appending another file. */
    MTERASE = 13,        /* erase tape -- be careful! */
    MTRAS1 = 14,         /* run self test 1 (nondestructive) */
    MTRAS2 = 15,         /* run self test 2 (destructive) */
    MTRAS3 = 16,         /* reserved for self test 3 */
    MTSETBLK = 20,       /* set block length (SCSI) */
    MTSETDENSITY = 21,   /* set tape density (SCSI) */
    MTSEEK = 22,         /* seek to block (Tandberg, etc.) */
    MTTELL = 23,         /* tell block (Tandberg, etc.) */
    MTSETDRVBUFFER = 24, /* set the drive buffering according to SCSI-2 */

    /* ordinary buffered operation with code 1 */
    MTFSS = 25,          /* space forward over setmarks */
    MTBSS = 26,          /* space backward over setmarks */
    MTWSM = 27,          /* write setmarks */

    MTLOCK = 28,         /* lock the drive door */
    MTUNLOCK = 29,       /* unlock the drive door */
    MTLOAD = 30,         /* execute the SCSI load command */
    MTUNLOAD = 31,       /* execute the SCSI unload command */
    MTCOMPRESSION = 32,  /* control compression with SCSI mode page 15 */
    MTSETPART = 33,      /* Change the active tape partition */
    MTMKPART = 34,       /* Format the tape with one or two partitions */
    MTWEOFI = 35,        /* write an end-of-file record (mark) in immediate mode */
}

//#define MTIOCTOP _IOW('m', 1, struct mtop) /* Do a mag tape op. */
nix::ioctl_write_ptr!(mtioctop, b'm', 1, mtop);

// from: /usr/include/x86_64-linux-gnu/sys/mtio.h
#[derive(Default, Debug)]
#[repr(C)]
pub struct mtget {
    pub mt_type: libc::c_long,  /* Type of magtape device. */
    pub mt_resid: libc::c_long, /* Residual count: (not sure)
                                   number of bytes ignored, or
                                   number of files not skipped, or
                                   number of records not skipped. */
    /* The following registers are device dependent. */
    pub mt_dsreg: libc::c_long, /* Status register. */
    pub mt_gstat: libc::c_long, /* Generic (device independent) status. */
    pub mt_erreg: libc::c_long, /* Error register. */
    /* The next two fields are not always used. */
    pub mt_fileno: i32,         /* Number of current file on tape. */
    pub mt_blkno: i32,          /* Current block number. */
}

//#define MTIOCGET _IOR('m', 2, struct mtget) /* Get tape status. */
nix::ioctl_read!(mtiocget, b'm', 2, mtget);

#[repr(C)]
#[allow(dead_code)]
pub struct mtpos {
    pub mt_blkno: libc::c_long, /* current block number */
}

//#define MTIOCPOS _IOR('m', 3, struct mtpos) /* Get tape position.*/
nix::ioctl_read!(mtiocpos, b'm', 3, mtpos);

pub const MT_ST_BLKSIZE_MASK: libc::c_long = 0x0ffffff;
pub const MT_ST_BLKSIZE_SHIFT: usize = 0;
pub const MT_ST_DENSITY_MASK: libc::c_long = 0xff000000;
pub const MT_ST_DENSITY_SHIFT: usize = 24;

pub const MT_TYPE_ISSCSI1: libc::c_long = 0x71; /* Generic ANSI SCSI-1 tape unit. */
pub const MT_TYPE_ISSCSI2: libc::c_long = 0x72; /* Generic ANSI SCSI-2 tape unit. */

// Generic Mag Tape (device independent) status macros for examining mt_gstat -- HP-UX compatible
// from: /usr/include/x86_64-linux-gnu/sys/mtio.h
bitflags::bitflags!{
    pub struct GMTStatusFlags: libc::c_long {
        const EOF = 0x80000000;
        const BOT = 0x40000000;
        const EOT = 0x20000000;
        const SM = 0x10000000;  /* DDS setmark */
        const EOD = 0x08000000; /* DDS EOD */
        const WR_PROT = 0x04000000;

        const ONLINE = 0x01000000;
        const D_6250 = 0x00800000;
        const D_1600 = 0x00400000;
        const D_800 = 0x00200000;
        const DRIVE_OPEN = 0x00040000; /* Door open (no tape). */
        const IM_REP_EN = 0x00010000;  /* Immediate report mode.*/
        const END_OF_STREAM = 0b00000001;
    }
}

#[repr(i32)]
#[allow(non_camel_case_types, dead_code)]
pub enum SetDrvBufferCmd {
    MT_ST_BOOLEANS = 0x10000000,
    MT_ST_SETBOOLEANS = 0x30000000,
    MT_ST_CLEARBOOLEANS = 0x40000000,
    MT_ST_WRITE_THRESHOLD = 0x20000000,
    MT_ST_DEF_BLKSIZE = 0x50000000,
    MT_ST_DEF_OPTIONS = 0x60000000,
    MT_ST_SET_TIMEOUT = 0x70000000,
    MT_ST_SET_LONG_TIMEOUT = 0x70100000,
    MT_ST_SET_CLN = 0x80000000u32 as i32,
}

bitflags::bitflags!{
    pub struct SetDrvBufferOptions: i32 {
        const MT_ST_BUFFER_WRITES = 0x1;
        const MT_ST_ASYNC_WRITES = 0x2;
        const MT_ST_READ_AHEAD = 0x4;
        const MT_ST_DEBUGGING = 0x8;
        const MT_ST_TWO_FM = 0x10;
        const MT_ST_FAST_MTEOM = 0x20;
        const MT_ST_AUTO_LOCK = 0x40;
        const MT_ST_DEF_WRITES = 0x80;
        const MT_ST_CAN_BSR = 0x100;
        const MT_ST_NO_BLKLIMS = 0x200;
        const MT_ST_CAN_PARTITIONS = 0x400;
        const MT_ST_SCSI2LOGICAL = 0x800;
        const MT_ST_SYSV = 0x1000;
        const MT_ST_NOWAIT = 0x2000;
        const MT_ST_SILI = 0x4000;
    }
}
446
src/tape/drive/linux_tape.rs
Normal file
@ -0,0 +1,446 @@
use std::fs::{OpenOptions, File};
use std::os::unix::fs::OpenOptionsExt;
use std::os::unix::io::AsRawFd;
use std::convert::TryFrom;

use anyhow::{bail, format_err, Error};
use nix::fcntl::{fcntl, FcntlArg, OFlag};

use proxmox::sys::error::SysResult;
use proxmox::tools::Uuid;

use crate::{
    tape::{
        TapeRead,
        TapeWrite,
        drive::{
            LinuxTapeDrive,
            TapeDriver,
            linux_mtio::*,
        },
        file_formats::{
            PROXMOX_TAPE_BLOCK_SIZE,
            MediaSetLabel,
            MediaContentHeader,
            PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
        },
        helpers::{
            BlockedReader,
            BlockedWriter,
        },
    }
};

#[derive(Debug)]
pub enum TapeDensity {
    None, // no tape loaded
    LTO2,
    LTO3,
    LTO4,
    LTO5,
    LTO6,
    LTO7,
    LTO7M8,
    LTO8,
}

impl TryFrom<u8> for TapeDensity {
    type Error = Error;

    fn try_from(value: u8) -> Result<Self, Self::Error> {
        let density = match value {
            0x00 => TapeDensity::None,
            0x42 => TapeDensity::LTO2,
            0x44 => TapeDensity::LTO3,
            0x46 => TapeDensity::LTO4,
            0x58 => TapeDensity::LTO5,
            0x5a => TapeDensity::LTO6,
            0x5c => TapeDensity::LTO7,
            0x5d => TapeDensity::LTO7M8,
            0x5e => TapeDensity::LTO8,
            _ => bail!("unknown tape density code 0x{:02x}", value),
        };
        Ok(density)
    }
}

#[derive(Debug)]
pub struct DriveStatus {
    pub blocksize: u32,
    pub density: TapeDensity,
    pub status: GMTStatusFlags,
    pub file_number: i32,
    pub block_number: i32,
}

impl DriveStatus {
    pub fn tape_is_ready(&self) -> bool {
        self.status.contains(GMTStatusFlags::ONLINE) &&
            !self.status.contains(GMTStatusFlags::DRIVE_OPEN)
    }
}

impl LinuxTapeDrive {

    /// This needs to lock the drive
    pub fn open(&self) -> Result<LinuxTapeHandle, Error> {

        let file = OpenOptions::new()
            .read(true)
            .write(true)
            .custom_flags(libc::O_NONBLOCK)
            .open(&self.path)?;

        // clear O_NONBLOCK from now on.

        let flags = fcntl(file.as_raw_fd(), FcntlArg::F_GETFL)
            .into_io_result()?;

        let mut flags = OFlag::from_bits_truncate(flags);
        flags.remove(OFlag::O_NONBLOCK);

        fcntl(file.as_raw_fd(), FcntlArg::F_SETFL(flags))
            .into_io_result()?;

        if !tape_is_linux_tape_device(&file) {
            bail!("file {:?} is not a linux tape device", self.path);
        }

        let handle = LinuxTapeHandle { drive_name: self.name.clone(), file };

        let drive_status = handle.get_drive_status()?;
        println!("drive status: {:?}", drive_status);

        if !drive_status.tape_is_ready() {
            bail!("tape not ready (no tape loaded)");
        }

        if drive_status.blocksize == 0 {
            eprintln!("device is variable block size");
        } else {
            if drive_status.blocksize != PROXMOX_TAPE_BLOCK_SIZE as u32 {
                eprintln!("device is in fixed block size mode with wrong size ({} bytes)", drive_status.blocksize);
                eprintln!("trying to set variable block size mode...");
                if handle.set_block_size(0).is_err() {
                    bail!("setting variable block size mode failed - device uses wrong blocksize.");
                }
            } else {
                eprintln!("device is in fixed block size mode ({} bytes)", drive_status.blocksize);
            }
        }

        // Only root can set driver options, so we cannot
        // handle.set_default_options()?;

        Ok(handle)
    }
}

pub struct LinuxTapeHandle {
    drive_name: String,
    file: File,
    //_lock: File,
}

impl LinuxTapeHandle {

    /// Return the drive name (useful for log and debug)
    pub fn drive_name(&self) -> &str {
        &self.drive_name
    }

    /// Set all options we need/want
    pub fn set_default_options(&self) -> Result<(), Error> {

        let mut opts = SetDrvBufferOptions::empty();

        // fixme: ? man st(4) claims we need to clear this for reliable multivolume
        opts.set(SetDrvBufferOptions::MT_ST_BUFFER_WRITES, true);

        // fixme: ? man st(4) claims we need to clear this for reliable multivolume
        opts.set(SetDrvBufferOptions::MT_ST_ASYNC_WRITES, true);

        opts.set(SetDrvBufferOptions::MT_ST_READ_AHEAD, true);

        self.set_drive_buffer_options(opts)
    }

    /// call MTSETDRVBUFFER to set boolean options
    ///
    /// Note: this uses MT_ST_BOOLEANS, so missing options are cleared!
    pub fn set_drive_buffer_options(&self, opts: SetDrvBufferOptions) -> Result<(), Error> {

        let cmd = mtop {
            mt_op: MTCmd::MTSETDRVBUFFER,
            mt_count: (SetDrvBufferCmd::MT_ST_BOOLEANS as i32) | opts.bits(),
        };
        unsafe {
            mtioctop(self.file.as_raw_fd(), &cmd)
        }.map_err(|err| format_err!("MTSETDRVBUFFER options failed - {}", err))?;

        Ok(())
    }

    /// This flushes the driver's buffer as a side effect. Should be
    /// used before reading status with MTIOCGET.
    fn mtnop(&self) -> Result<(), Error> {

        let cmd = mtop { mt_op: MTCmd::MTNOP, mt_count: 1, };

        unsafe {
            mtioctop(self.file.as_raw_fd(), &cmd)
        }.map_err(|err| format_err!("MTNOP failed - {}", err))?;

        Ok(())
    }

    /// Set tape compression feature
    pub fn set_compression(&self, on: bool) -> Result<(), Error> {

        let cmd = mtop { mt_op: MTCmd::MTCOMPRESSION, mt_count: if on { 1 } else { 0 } };

        unsafe {
            mtioctop(self.file.as_raw_fd(), &cmd)
        }.map_err(|err| format_err!("set compression to {} failed - {}", on, err))?;

        Ok(())
    }

    /// Write a single EOF mark
    pub fn write_eof_mark(&self) -> Result<(), Error> {
        tape_write_eof_mark(&self.file)?;
        Ok(())
    }

    /// Set the drive's block length to the value specified.
    ///
    /// A block length of zero sets the drive to variable block
    /// size mode.
    pub fn set_block_size(&self, block_length: usize) -> Result<(), Error> {

        if block_length > 256*1024*1024 {
            bail!("block_length too large (> max linux scsi block length)");
        }

        let cmd = mtop { mt_op: MTCmd::MTSETBLK, mt_count: block_length as i32 };

        unsafe {
            mtioctop(self.file.as_raw_fd(), &cmd)
        }.map_err(|err| format_err!("MTSETBLK failed - {}", err))?;

        Ok(())
    }

    /// Get Tape configuration with MTIOCGET ioctl
    pub fn get_drive_status(&self) -> Result<DriveStatus, Error> {

        self.mtnop()?;

        let mut status = mtget::default();

        if let Err(err) = unsafe { mtiocget(self.file.as_raw_fd(), &mut status) } {
            bail!("MTIOCGET failed - {}", err);
        }

        println!("{:?}", status);

        let gmt = GMTStatusFlags::from_bits_truncate(status.mt_gstat);

        let blocksize;

        if status.mt_type == MT_TYPE_ISSCSI1 || status.mt_type == MT_TYPE_ISSCSI2 {
            blocksize = ((status.mt_dsreg & MT_ST_BLKSIZE_MASK) >> MT_ST_BLKSIZE_SHIFT) as u32;
        } else {
            bail!("got unsupported tape type {}", status.mt_type);
        }

        let density = ((status.mt_dsreg & MT_ST_DENSITY_MASK) >> MT_ST_DENSITY_SHIFT) as u8;

        let density = TapeDensity::try_from(density)?;

        Ok(DriveStatus {
            blocksize,
            density,
            status: gmt,
            file_number: status.mt_fileno,
            block_number: status.mt_blkno,
        })
    }

}


impl TapeDriver for LinuxTapeHandle {

    fn sync(&mut self) -> Result<(), Error> {

        println!("SYNC/FLUSH TAPE");
        // MTWEOF with count 0 => flush
        let cmd = mtop { mt_op: MTCmd::MTWEOF, mt_count: 0 };

        unsafe {
            mtioctop(self.file.as_raw_fd(), &cmd)
        }.map_err(|err| proxmox::io_format_err!("MT sync failed - {}", err))?;

        Ok(())
    }

    /// Go to the end of the recorded media (for appending files).
    fn move_to_eom(&mut self) -> Result<(), Error> {

        let cmd = mtop { mt_op: MTCmd::MTEOM, mt_count: 1, };

        unsafe {
            mtioctop(self.file.as_raw_fd(), &cmd)
        }.map_err(|err| format_err!("MTEOM failed - {}", err))?;

        Ok(())
    }

    fn rewind(&mut self) -> Result<(), Error> {

        let cmd = mtop { mt_op: MTCmd::MTREW, mt_count: 1, };

        unsafe {
            mtioctop(self.file.as_raw_fd(), &cmd)
        }.map_err(|err| format_err!("tape rewind failed - {}", err))?;

        Ok(())
    }

    fn current_file_number(&mut self) -> Result<usize, Error> {
        let mut status = mtget::default();

        self.mtnop()?;

        if let Err(err) = unsafe { mtiocget(self.file.as_raw_fd(), &mut status) } {
            bail!("current_file_number MTIOCGET failed - {}", err);
        }

        if status.mt_fileno < 0 {
            bail!("current_file_number failed (got {})", status.mt_fileno);
        }
        Ok(status.mt_fileno as usize)
    }

    fn erase_media(&mut self, fast: bool) -> Result<(), Error> {

        self.rewind()?; // important - erase from BOT

        let cmd = mtop { mt_op: MTCmd::MTERASE, mt_count: if fast { 0 } else { 1 } };

        unsafe {
            mtioctop(self.file.as_raw_fd(), &cmd)
        }.map_err(|err| format_err!("MTERASE failed - {}", err))?;

        Ok(())
    }

    fn read_next_file<'a>(&'a mut self) -> Result<Option<Box<dyn TapeRead + 'a>>, std::io::Error> {
        match BlockedReader::open(&mut self.file)? {
            Some(reader) => Ok(Some(Box::new(reader))),
            None => Ok(None),
        }
    }

    fn write_file<'a>(&'a mut self) -> Result<Box<dyn TapeWrite + 'a>, std::io::Error> {

        let handle = TapeWriterHandle {
            writer: BlockedWriter::new(&mut self.file),
        };

        Ok(Box::new(handle))
    }

    fn write_media_set_label(&mut self, media_set_label: &MediaSetLabel) -> Result<Uuid, Error> {

        let file_number = self.current_file_number()?;
        if file_number != 1 {
            bail!("write_media_set_label failed - got wrong file number ({} != 1)", file_number);
        }

        let mut handle = TapeWriterHandle {
            writer: BlockedWriter::new(&mut self.file),
        };
        let raw = serde_json::to_string_pretty(&serde_json::to_value(media_set_label)?)?;

        let header = MediaContentHeader::new(PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, raw.len() as u32);
        handle.write_header(&header, raw.as_bytes())?;
        handle.finish(false)?;

        self.sync()?; // sync data to tape

        Ok(Uuid::from(header.uuid))
    }

    /// Rewind and put the drive off line (Eject media).
    fn eject_media(&mut self) -> Result<(), Error> {
        let cmd = mtop { mt_op: MTCmd::MTOFFL, mt_count: 1 };

        unsafe {
            mtioctop(self.file.as_raw_fd(), &cmd)
        }.map_err(|err| format_err!("MTOFFL failed - {}", err))?;

        Ok(())
    }
}

/// Write a single EOF mark without flushing buffers
fn tape_write_eof_mark(file: &File) -> Result<(), std::io::Error> {

    println!("WRITE EOF MARK");
    let cmd = mtop { mt_op: MTCmd::MTWEOFI, mt_count: 1 };

    unsafe {
        mtioctop(file.as_raw_fd(), &cmd)
    }.map_err(|err| proxmox::io_format_err!("MTWEOFI failed - {}", err))?;

    Ok(())
}

fn tape_is_linux_tape_device(file: &File) -> bool {

    let devnum = match nix::sys::stat::fstat(file.as_raw_fd()) {
        Ok(stat) => stat.st_rdev,
        _ => return false,
    };

    let major = unsafe { libc::major(devnum) };
    let minor = unsafe { libc::minor(devnum) };

    if major != 9 { return false; } // The st driver uses major device number 9
    if (minor & 128) == 0 {
        eprintln!("Detected rewinding tape. Please use non-rewinding tape devices (/dev/nstX).");
        return false;
    }

    true
}

/// like BlockedWriter, but writes EOF mark on finish
pub struct TapeWriterHandle<'a> {
    writer: BlockedWriter<&'a mut File>,
}

impl TapeWrite for TapeWriterHandle<'_> {

    fn write_all(&mut self, data: &[u8]) -> Result<bool, std::io::Error> {
        self.writer.write_all(data)
    }

    fn bytes_written(&self) -> usize {
        self.writer.bytes_written()
    }

    fn finish(&mut self, incomplete: bool) -> Result<bool, std::io::Error> {
        println!("FINISH TAPE HANDLE");
        let leof = self.writer.finish(incomplete)?;
        tape_write_eof_mark(self.writer.writer_ref_mut())?;
        Ok(leof)
    }

    fn logical_end_of_media(&self) -> bool {
        self.writer.logical_end_of_media()
    }
}
299
src/tape/drive/mod.rs
Normal file
@ -0,0 +1,299 @@
|
||||
mod virtual_tape;
|
||||
mod linux_mtio;
|
||||
mod linux_tape;
|
||||
|
||||
mod linux_list_drives;
|
||||
pub use linux_list_drives::*;
|
||||
|
||||
use anyhow::{bail, format_err, Error};
|
||||
use ::serde::{Deserialize, Serialize};
|
||||
|
||||
use proxmox::tools::Uuid;
|
||||
use proxmox::tools::io::ReadExt;
|
||||
use proxmox::api::section_config::SectionConfigData;
|
||||
|
||||
use crate::{
|
||||
api2::types::{
|
||||
VirtualTapeDrive,
|
||||
LinuxTapeDrive,
|
||||
},
|
||||
tape::{
|
||||
TapeWrite,
|
||||
TapeRead,
|
||||
file_formats::{
|
||||
PROXMOX_BACKUP_DRIVE_LABEL_MAGIC_1_0,
|
||||
PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
|
||||
DriveLabel,
|
||||
MediaSetLabel,
|
||||
MediaContentHeader,
|
||||
},
|
||||
changer::{
|
||||
MediaChange,
|
||||
ChangeMediaEmail,
|
||||
},
|
||||
},
|
||||
};
|
||||
|
||||
#[derive(Serialize,Deserialize)]
|
||||
pub struct MediaLabelInfo {
|
||||
pub label: DriveLabel,
|
||||
pub label_uuid: Uuid,
|
||||
#[serde(skip_serializing_if="Option::is_none")]
|
||||
pub media_set_label: Option<(MediaSetLabel, Uuid)>
|
||||
}
|
||||
|
||||
/// Tape driver interface
|
||||
pub trait TapeDriver {
|
||||
|
||||
/// Flush all data to the tape
|
||||
fn sync(&mut self) -> Result<(), Error>;
|
||||
|
||||
/// Rewind the tape
|
||||
fn rewind(&mut self) -> Result<(), Error>;
|
||||
|
||||
/// Move to end of recorded data
|
||||
///
|
||||
/// We assume this flushes the tape write buffer.
|
||||
fn move_to_eom(&mut self) -> Result<(), Error>;
|
||||
|
||||
/// Current file number
|
||||
fn current_file_number(&mut self) -> Result<usize, Error>;
|
||||
|
||||
/// Completely erase the media
|
||||
fn erase_media(&mut self, fast: bool) -> Result<(), Error>;
|
||||
|
||||
/// Read/Open the next file
|
||||
fn read_next_file<'a>(&'a mut self) -> Result<Option<Box<dyn TapeRead + 'a>>, std::io::Error>;
|
||||
|
||||
/// Write/Append a new file
|
||||
fn write_file<'a>(&'a mut self) -> Result<Box<dyn TapeWrite + 'a>, std::io::Error>;
|
||||
|
||||
/// Write label to tape (erase tape content)
|
||||
///
|
||||
/// This returns the MediaContentHeader uuid (not the media uuid).
|
||||
fn label_tape(&mut self, label: &DriveLabel) -> Result<Uuid, Error> {
|
||||
|
||||
self.rewind()?;
|
||||
|
||||
self.erase_media(true)?;
|
||||
|
||||
let raw = serde_json::to_string_pretty(&serde_json::to_value(&label)?)?;
|
||||
|
||||
let header = MediaContentHeader::new(PROXMOX_BACKUP_DRIVE_LABEL_MAGIC_1_0, raw.len() as u32);
|
||||
let content_uuid = header.content_uuid();
|
||||
|
||||
{
|
||||
let mut writer = self.write_file()?;
|
||||
writer.write_header(&header, raw.as_bytes())?;
|
||||
writer.finish(false)?;
|
||||
}
|
||||
|
||||
self.sync()?; // sync data to tape
|
||||
|
||||
Ok(content_uuid)
|
||||
}

/// Write the media set label to tape
///
/// This returns the MediaContentHeader uuid (not the media uuid).
fn write_media_set_label(&mut self, media_set_label: &MediaSetLabel) -> Result<Uuid, Error>;

/// Read the media label
///
/// This tries to read both media labels (label and media_set_label).
fn read_label(&mut self) -> Result<Option<MediaLabelInfo>, Error> {

    self.rewind()?;

    let (label, label_uuid) = {
        let mut reader = match self.read_next_file()? {
            None => return Ok(None), // tape is empty
            Some(reader) => reader,
        };

        let header: MediaContentHeader = unsafe { reader.read_le_value()? };
        header.check(PROXMOX_BACKUP_DRIVE_LABEL_MAGIC_1_0, 1, 64*1024)?;
        let data = reader.read_exact_allocated(header.size as usize)?;

        let label: DriveLabel = serde_json::from_slice(&data)
            .map_err(|err| format_err!("unable to parse drive label - {}", err))?;

        // make sure we read the EOF marker
        if reader.skip_to_end()? != 0 {
            bail!("got unexpected data after label");
        }

        (label, Uuid::from(header.uuid))
    };

    let mut info = MediaLabelInfo { label, label_uuid, media_set_label: None };

    // try to read MediaSet label
    let mut reader = match self.read_next_file()? {
        None => return Ok(Some(info)),
        Some(reader) => reader,
    };

    let header: MediaContentHeader = unsafe { reader.read_le_value()? };
    header.check(PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, 1, 64*1024)?;
    let data = reader.read_exact_allocated(header.size as usize)?;

    let media_set_label: MediaSetLabel = serde_json::from_slice(&data)
        .map_err(|err| format_err!("unable to parse media set label - {}", err))?;

    // make sure we read the EOF marker
    if reader.skip_to_end()? != 0 {
        bail!("got unexpected data after media set label");
    }

    info.media_set_label = Some((media_set_label, Uuid::from(header.uuid)));

    Ok(Some(info))
}

/// Eject media
fn eject_media(&mut self) -> Result<(), Error>;
}

/// Get the media changer (name + MediaChange) associated with a tape drive.
///
/// If allow_email is set, returns a ChangeMediaEmail instance for
/// standalone tape drives (changer name set to "").
pub fn media_changer(
    config: &SectionConfigData,
    drive: &str,
    allow_email: bool,
) -> Result<(Box<dyn MediaChange>, String), Error> {

    match config.sections.get(drive) {
        Some((section_type_name, config)) => {
            match section_type_name.as_ref() {
                "virtual" => {
                    let tape = VirtualTapeDrive::deserialize(config)?;
                    Ok((Box::new(tape), drive.to_string()))
                }
                "linux" => {
                    let tape = LinuxTapeDrive::deserialize(config)?;
                    match tape.changer {
                        Some(ref changer_name) => {
                            let changer_name = changer_name.to_string();
                            Ok((Box::new(tape), changer_name))
                        }
                        None => {
                            if !allow_email {
                                bail!("drive '{}' has no changer device", drive);
                            }
                            let to = "root@localhost"; // fixme
                            let changer = ChangeMediaEmail::new(drive, to);
                            Ok((Box::new(changer), String::new()))
                        },
                    }
                }
                _ => bail!("drive type '{}' not implemented!", section_type_name),
            }
        }
        None => {
            bail!("no such drive '{}'", drive);
        }
    }
}

pub fn open_drive(
    config: &SectionConfigData,
    drive: &str,
) -> Result<Box<dyn TapeDriver>, Error> {

    match config.sections.get(drive) {
        Some((section_type_name, config)) => {
            match section_type_name.as_ref() {
                "virtual" => {
                    let tape = VirtualTapeDrive::deserialize(config)?;
                    let handle = tape.open()
                        .map_err(|err| format_err!("open drive '{}' ({}) failed - {}", drive, tape.path, err))?;
                    Ok(Box::new(handle))
                }
                "linux" => {
                    let tape = LinuxTapeDrive::deserialize(config)?;
                    let handle = tape.open()
                        .map_err(|err| format_err!("open drive '{}' ({}) failed - {}", drive, tape.path, err))?;
                    Ok(Box::new(handle))
                }
                _ => bail!("drive type '{}' not implemented!", section_type_name),
            }
        }
        None => {
            bail!("no such drive '{}'", drive);
        }
    }
}

/// Requests a specific 'media' to be inserted into 'drive'. Within a
/// loop, this then tries to read the media label and waits until it
/// finds the requested media.
///
/// Returns a handle to the opened drive and the media labels.
pub fn request_and_load_media(
    config: &SectionConfigData,
    drive: &str,
    label: &DriveLabel,
) -> Result<(
    Box<dyn TapeDriver>,
    MediaLabelInfo,
), Error> {

    match config.sections.get(drive) {
        Some((section_type_name, config)) => {
            match section_type_name.as_ref() {
                "virtual" => {
                    let mut drive = VirtualTapeDrive::deserialize(config)?;

                    let changer_id = label.changer_id.clone();

                    drive.load_media(&changer_id)?;

                    let mut handle = drive.open()?;

                    if let Ok(Some(info)) = handle.read_label() {
                        println!("found media label {} ({})", info.label.changer_id, info.label.uuid.to_string());
                        if info.label.uuid == label.uuid {
                            return Ok((Box::new(handle), info));
                        }
                    }
                    bail!("read label failed (label all tapes first)");
                }
                "linux" => {
                    let tape = LinuxTapeDrive::deserialize(config)?;

                    let id = label.changer_id.clone();

                    println!("Please insert media '{}' into drive '{}'", id, drive);

                    loop {
                        let mut handle = match tape.open() {
                            Ok(handle) => handle,
                            Err(_) => {
                                eprintln!("tape open failed - trying again in 5 seconds");
                                std::thread::sleep(std::time::Duration::from_millis(5_000));
                                continue;
                            }
                        };

                        if let Ok(Some(info)) = handle.read_label() {
                            println!("found media label {} ({})", info.label.changer_id, info.label.uuid.to_string());
                            if info.label.uuid == label.uuid {
                                return Ok((Box::new(handle), info));
                            }
                        }

                        println!("read label failed - trying again in 5 seconds");
                        std::thread::sleep(std::time::Duration::from_millis(5_000));
                    }
                }
                _ => bail!("drive type '{}' not implemented!", section_type_name),
            }
        }
        None => {
            bail!("no such drive '{}'", drive);
        }
    }
}

424	src/tape/drive/virtual_tape.rs	Normal file
@ -0,0 +1,424 @@
// Note: this is only for testing and debugging

use std::fs::File;
use std::io;

use anyhow::{bail, format_err, Error};
use serde::{Serialize, Deserialize};

use proxmox::tools::{
    Uuid,
    fs::{replace_file, CreateOptions},
};

use crate::{
    tape::{
        TapeWrite,
        TapeRead,
        changer::MediaChange,
        drive::{
            VirtualTapeDrive,
            TapeDriver,
        },
        file_formats::{
            MediaSetLabel,
            MediaContentHeader,
            PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0,
        },
        helpers::{
            EmulateTapeReader,
            EmulateTapeWriter,
            BlockedReader,
            BlockedWriter,
        },
    },
};

impl VirtualTapeDrive {

    /// This needs to lock the drive
    pub fn open(&self) -> Result<VirtualTapeHandle, Error> {
        let mut lock_path = std::path::PathBuf::from(&self.path);
        lock_path.push(".drive.lck");

        let timeout = std::time::Duration::new(10, 0);
        let lock = proxmox::tools::fs::open_file_locked(&lock_path, timeout, true)?;

        Ok(VirtualTapeHandle {
            _lock: lock,
            max_size: self.max_size.unwrap_or(64*1024*1024),
            path: std::path::PathBuf::from(&self.path),
        })
    }
}

#[derive(Serialize,Deserialize)]
struct VirtualTapeStatus {
    name: String,
    pos: usize,
}

#[derive(Serialize,Deserialize)]
struct VirtualDriveStatus {
    current_tape: Option<VirtualTapeStatus>,
}

#[derive(Serialize,Deserialize)]
struct TapeIndex {
    files: usize,
}

pub struct VirtualTapeHandle {
    path: std::path::PathBuf,
    max_size: usize,
    _lock: File,
}

impl VirtualTapeHandle {

    pub fn insert_tape(&self, _tape_filename: &str) {
        unimplemented!();
    }

    pub fn eject_tape(&self) {
        unimplemented!();
    }

    fn status_file_path(&self) -> std::path::PathBuf {
        let mut path = self.path.clone();
        path.push("drive-status.json");
        path
    }

    fn tape_index_path(&self, tape_name: &str) -> std::path::PathBuf {
        let mut path = self.path.clone();
        path.push(format!("tape-{}.json", tape_name));
        path
    }

    fn tape_file_path(&self, tape_name: &str, pos: usize) -> std::path::PathBuf {
        let mut path = self.path.clone();
        path.push(format!("tapefile-{}-{}.json", pos, tape_name));
        path
    }

    fn load_tape_index(&self, tape_name: &str) -> Result<TapeIndex, Error> {
        let path = self.tape_index_path(tape_name);
        let raw = proxmox::tools::fs::file_get_contents(&path)?;
        if raw.is_empty() {
            return Ok(TapeIndex { files: 0 });
        }
        let data: TapeIndex = serde_json::from_slice(&raw)?;
        Ok(data)
    }

    fn store_tape_index(&self, tape_name: &str, index: &TapeIndex) -> Result<(), Error> {
        let path = self.tape_index_path(tape_name);
        let raw = serde_json::to_string_pretty(&serde_json::to_value(index)?)?;

        let options = CreateOptions::new();
        replace_file(&path, raw.as_bytes(), options)?;
        Ok(())
    }

    fn truncate_tape(&self, tape_name: &str, pos: usize) -> Result<usize, Error> {
        let mut index = self.load_tape_index(tape_name)?;

        if index.files <= pos {
            return Ok(index.files);
        }

        for i in pos..index.files {
            let path = self.tape_file_path(tape_name, i);
            let _ = std::fs::remove_file(path);
        }

        index.files = pos;

        self.store_tape_index(tape_name, &index)?;

        Ok(index.files)
    }

    fn load_status(&self) -> Result<VirtualDriveStatus, Error> {
        let path = self.status_file_path();

        let default = serde_json::to_value(VirtualDriveStatus {
            current_tape: None,
        })?;

        let data = proxmox::tools::fs::file_get_json(&path, Some(default))?;
        let status: VirtualDriveStatus = serde_json::from_value(data)?;
        Ok(status)
    }

    fn store_status(&self, status: &VirtualDriveStatus) -> Result<(), Error> {
        let path = self.status_file_path();
        let raw = serde_json::to_string_pretty(&serde_json::to_value(status)?)?;

        let options = CreateOptions::new();
        replace_file(&path, raw.as_bytes(), options)?;
        Ok(())
    }
}

impl TapeDriver for VirtualTapeHandle {

    fn sync(&mut self) -> Result<(), Error> {
        Ok(()) // do nothing for now
    }

    fn current_file_number(&mut self) -> Result<usize, Error> {
        let status = self.load_status()
            .map_err(|err| format_err!("current_file_number failed: {}", err.to_string()))?;

        match status.current_tape {
            Some(VirtualTapeStatus { pos, .. }) => Ok(pos),
            None => bail!("current_file_number failed: drive is empty (no tape loaded)."),
        }
    }

    fn read_next_file(&mut self) -> Result<Option<Box<dyn TapeRead>>, io::Error> {
        let mut status = self.load_status()
            .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;

        match status.current_tape {
            Some(VirtualTapeStatus { ref name, ref mut pos }) => {

                let index = self.load_tape_index(name)
                    .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;

                if *pos >= index.files {
                    return Ok(None); // EOM
                }

                let path = self.tape_file_path(name, *pos);
                let file = std::fs::OpenOptions::new()
                    .read(true)
                    .open(path)?;

                *pos += 1;
                self.store_status(&status)
                    .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;

                let reader = Box::new(file);
                let reader = Box::new(EmulateTapeReader::new(reader));

                match BlockedReader::open(reader)? {
                    Some(reader) => Ok(Some(Box::new(reader))),
                    None => Ok(None),
                }
            }
            None => proxmox::io_bail!("drive is empty (no tape loaded)."),
        }
    }

    fn write_file(&mut self) -> Result<Box<dyn TapeWrite>, io::Error> {
        let mut status = self.load_status()
            .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;

        match status.current_tape {
            Some(VirtualTapeStatus { ref name, ref mut pos }) => {

                let mut index = self.load_tape_index(name)
                    .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;

                for i in *pos..index.files {
                    let path = self.tape_file_path(name, i);
                    let _ = std::fs::remove_file(path);
                }

                let mut used_space = 0;
                for i in 0..*pos {
                    let path = self.tape_file_path(name, i);
                    used_space += path.metadata()?.len() as usize;
                }
                index.files = *pos + 1;

                self.store_tape_index(name, &index)
                    .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;

                let path = self.tape_file_path(name, *pos);
                let file = std::fs::OpenOptions::new()
                    .write(true)
                    .create(true)
                    .truncate(true)
                    .open(path)?;

                *pos = index.files;

                self.store_status(&status)
                    .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;

                let mut free_space = 0;
                if used_space < self.max_size {
                    free_space = self.max_size - used_space;
                }

                let writer = Box::new(file);
                let writer = Box::new(EmulateTapeWriter::new(writer, free_space));
                let writer = Box::new(BlockedWriter::new(writer));

                Ok(writer)
            }
            None => proxmox::io_bail!("drive is empty (no tape loaded)."),
        }
    }

    fn move_to_eom(&mut self) -> Result<(), Error> {
        let mut status = self.load_status()?;
        match status.current_tape {
            Some(VirtualTapeStatus { ref name, ref mut pos }) => {

                let index = self.load_tape_index(name)
                    .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;

                *pos = index.files;
                self.store_status(&status)
                    .map_err(|err| io::Error::new(io::ErrorKind::Other, err.to_string()))?;

                Ok(())
            }
            None => bail!("drive is empty (no tape loaded)."),
        }
    }

    fn rewind(&mut self) -> Result<(), Error> {
        let mut status = self.load_status()?;
        match status.current_tape {
            Some(ref mut tape_status) => {
                tape_status.pos = 0;
                self.store_status(&status)?;
                Ok(())
            }
            None => bail!("drive is empty (no tape loaded)."),
        }
    }

    fn erase_media(&mut self, _fast: bool) -> Result<(), Error> {
        let mut status = self.load_status()?;
        match status.current_tape {
            Some(VirtualTapeStatus { ref name, ref mut pos }) => {
                *pos = self.truncate_tape(name, 0)?;
                self.store_status(&status)?;
                Ok(())
            }
            None => bail!("drive is empty (no tape loaded)."),
        }
    }

    fn write_media_set_label(&mut self, media_set_label: &MediaSetLabel) -> Result<Uuid, Error> {

        let mut status = self.load_status()?;
        match status.current_tape {
            Some(VirtualTapeStatus { ref name, ref mut pos }) => {
                *pos = self.truncate_tape(name, 1)?;
                let pos = *pos;
                self.store_status(&status)?;

                if pos == 0 {
                    bail!("media is empty (no label).");
                }
                if pos != 1 {
                    bail!("write_media_set_label: truncate failed - got wrong pos '{}'", pos);
                }

                let raw = serde_json::to_string_pretty(&serde_json::to_value(media_set_label)?)?;
                let header = MediaContentHeader::new(PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0, raw.len() as u32);

                {
                    let mut writer = self.write_file()?;
                    writer.write_header(&header, raw.as_bytes())?;
                    writer.finish(false)?;
                }

                Ok(Uuid::from(header.uuid))
            }
            None => bail!("drive is empty (no tape loaded)."),
        }
    }

    fn eject_media(&mut self) -> Result<(), Error> {
        let status = VirtualDriveStatus {
            current_tape: None,
        };
        self.store_status(&status)
    }
}

impl MediaChange for VirtualTapeHandle {

    /// Try to load media
    ///
    /// We automatically create an empty virtual tape here (if it does
    /// not exist already)
    fn load_media(&mut self, label: &str) -> Result<(), Error> {
        let name = format!("tape-{}.json", label);
        let mut path = self.path.clone();
        path.push(&name);
        if !path.exists() {
            eprintln!("unable to find tape {} - creating file {:?}", label, path);
            let index = TapeIndex { files: 0 };
            self.store_tape_index(label, &index)?;
        }

        let status = VirtualDriveStatus {
            current_tape: Some(VirtualTapeStatus {
                name: label.to_string(),
                pos: 0,
            }),
        };
        self.store_status(&status)
    }

    fn unload_media(&mut self) -> Result<(), Error> {
        self.eject_media()?;
        Ok(())
    }

    fn eject_on_unload(&self) -> bool {
        true
    }

    fn list_media_changer_ids(&self) -> Result<Vec<String>, Error> {
        let mut list = Vec::new();
        for entry in std::fs::read_dir(&self.path)? {
            let entry = entry?;
            let path = entry.path();
            if path.is_file() && path.extension() == Some(std::ffi::OsStr::new("json")) {
                if let Some(name) = path.file_stem() {
                    if let Some(name) = name.to_str() {
                        if name.starts_with("tape-") {
                            list.push(name[5..].to_string());
                        }
                    }
                }
            }
        }
        Ok(list)
    }
}

impl MediaChange for VirtualTapeDrive {

    fn load_media(&mut self, changer_id: &str) -> Result<(), Error> {
        let mut handle = self.open()?;
        handle.load_media(changer_id)
    }

    fn unload_media(&mut self) -> Result<(), Error> {
        let mut handle = self.open()?;
        handle.eject_media()?;
        Ok(())
    }

    fn eject_on_unload(&self) -> bool {
        true
    }

    fn list_media_changer_ids(&self) -> Result<Vec<String>, Error> {
        let handle = self.open()?;
        handle.list_media_changer_ids()
    }
}

190	src/tape/file_formats.rs	Normal file
@ -0,0 +1,190 @@
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};
use endian_trait::Endian;
use bitflags::bitflags;

use proxmox::tools::Uuid;

/// We use 256KB blocksize (always)
pub const PROXMOX_TAPE_BLOCK_SIZE: usize = 256*1024;

// openssl::sha::sha256(b"Proxmox Tape Block Header v1.0")[0..8]
pub const PROXMOX_TAPE_BLOCK_HEADER_MAGIC_1_0: [u8; 8] = [220, 189, 175, 202, 235, 160, 165, 40];

// openssl::sha::sha256(b"Proxmox Backup Content Header v1.0")[0..8];
pub const PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0: [u8; 8] = [99, 238, 20, 159, 205, 242, 155, 12];
// openssl::sha::sha256(b"Proxmox Backup Tape Label v1.0")[0..8];
pub const PROXMOX_BACKUP_DRIVE_LABEL_MAGIC_1_0: [u8; 8] = [42, 5, 191, 60, 176, 48, 170, 57];
// openssl::sha::sha256(b"Proxmox Backup MediaSet Label v1.0")[0..8]
pub const PROXMOX_BACKUP_MEDIA_SET_LABEL_MAGIC_1_0: [u8; 8] = [8, 96, 99, 249, 47, 151, 83, 216];

// openssl::sha::sha256(b"Proxmox Backup Chunk Archive v1.0")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_MAGIC_1_0: [u8; 8] = [62, 173, 167, 95, 49, 76, 6, 110];
// openssl::sha::sha256(b"Proxmox Backup Chunk Archive Entry v1.0")[0..8]
pub const PROXMOX_BACKUP_CHUNK_ARCHIVE_ENTRY_MAGIC_1_0: [u8; 8] = [72, 87, 109, 242, 222, 66, 143, 220];

// openssl::sha::sha256(b"Proxmox Backup Snapshot Archive v1.0")[0..8];
pub const PROXMOX_BACKUP_SNAPSHOT_ARCHIVE_MAGIC_1_0: [u8; 8] = [9, 182, 2, 31, 125, 232, 114, 133];

/// Tape Block Header with data payload
///
/// Note: this struct is large, so it should never be allocated on the
/// stack; we use an unsized type to enforce heap allocation.
///
/// Tape data blocks are always read/written with a fixed size
/// (PROXMOX_TAPE_BLOCK_SIZE). But they may contain less data, so the
/// header has an additional size field. For streams of blocks, there
/// is a sequence number ('seq_nr') which may be used for additional
/// error checking.
#[repr(C,packed)]
pub struct BlockHeader {
    pub magic: [u8; 8],
    pub flags: BlockHeaderFlags,
    /// size as 3 bytes unsigned, little endian
    pub size: [u8; 3],
    /// block sequence number
    pub seq_nr: u32,
    pub payload: [u8],
}

bitflags! {
    pub struct BlockHeaderFlags: u8 {
        /// Marks the last block in a stream.
        const END_OF_STREAM = 0b00000001;
        /// Mark multivolume streams (when set in the last block)
        const INCOMPLETE    = 0b00000010;
    }
}

#[derive(Endian)]
#[repr(C,packed)]
pub struct ChunkArchiveEntryHeader {
    pub magic: [u8; 8],
    pub digest: [u8; 32],
    pub size: u64,
}

#[derive(Endian, Copy, Clone, Debug)]
#[repr(C,packed)]
pub struct MediaContentHeader {
    pub magic: [u8; 8],
    pub content_magic: [u8; 8],
    pub uuid: [u8; 16],
    pub ctime: i64,
    pub size: u32,
    pub part_number: u8,
    pub reserved_0: u8,
    pub reserved_1: u8,
    pub reserved_2: u8,
}

impl MediaContentHeader {

    pub fn new(content_magic: [u8; 8], size: u32) -> Self {
        let uuid = *proxmox::tools::uuid::Uuid::generate()
            .into_inner();
        Self {
            magic: PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0,
            content_magic,
            uuid,
            ctime: proxmox::tools::time::epoch_i64(),
            size,
            part_number: 0,
            reserved_0: 0,
            reserved_1: 0,
            reserved_2: 0,
        }
    }

    pub fn check(&self, content_magic: [u8; 8], min_size: u32, max_size: u32) -> Result<(), Error> {
        if self.magic != PROXMOX_BACKUP_CONTENT_HEADER_MAGIC_1_0 {
            bail!("MediaContentHeader: wrong magic");
        }
        if self.content_magic != content_magic {
            bail!("MediaContentHeader: wrong content magic");
        }
        if self.size < min_size || self.size > max_size {
            bail!("MediaContentHeader: got unexpected size");
        }
        Ok(())
    }

    pub fn content_uuid(&self) -> Uuid {
        Uuid::from(self.uuid)
    }
}

#[derive(Serialize,Deserialize,Clone,Debug)]
pub struct DriveLabel {
    /// Unique ID
    pub uuid: Uuid,
    /// Media Changer ID or Barcode
    pub changer_id: String,
    /// Creation time stamp
    pub ctime: i64,
}

#[derive(Serialize,Deserialize,Clone,Debug)]
pub struct MediaSetLabel {
    pub pool: String,
    /// MediaSet Uuid. We use the all-zero Uuid to reserve an empty media for a specific pool
    pub uuid: Uuid,
    /// MediaSet media sequence number
    pub seq_nr: u64,
    /// Creation time stamp
    pub ctime: i64,
}

impl MediaSetLabel {

    pub fn with_data(pool: &str, uuid: Uuid, seq_nr: u64, ctime: i64) -> Self {
        Self {
            pool: pool.to_string(),
            uuid,
            seq_nr,
            ctime,
        }
    }
}

impl BlockHeader {

    pub const SIZE: usize = PROXMOX_TAPE_BLOCK_SIZE;

    /// Allocates a new instance on the heap
    pub fn new() -> Box<Self> {
        use std::alloc::{alloc_zeroed, Layout};

        let mut buffer = unsafe {
            let ptr = alloc_zeroed(
                Layout::from_size_align(Self::SIZE, std::mem::align_of::<u64>())
                    .unwrap(),
            );
            Box::from_raw(
                std::slice::from_raw_parts_mut(ptr, Self::SIZE - 16)
                    as *mut [u8] as *mut Self
            )
        };
        buffer.magic = PROXMOX_TAPE_BLOCK_HEADER_MAGIC_1_0;
        buffer
    }

    pub fn set_size(&mut self, size: usize) {
        let size = size.to_le_bytes();
        self.size.copy_from_slice(&size[..3]);
    }

    pub fn size(&self) -> usize {
        (self.size[0] as usize) + ((self.size[1] as usize)<<8) + ((self.size[2] as usize)<<16)
    }

    pub fn set_seq_nr(&mut self, seq_nr: u32) {
        self.seq_nr = seq_nr.to_le();
    }

    pub fn seq_nr(&self) -> u32 {
        u32::from_le(self.seq_nr)
    }
}

309	src/tape/helpers/blocked_reader.rs	Normal file
@ -0,0 +1,309 @@

use std::io::Read;

use crate::tape::{
    TapeRead,
    tape_device_read_block,
    file_formats::{
        PROXMOX_TAPE_BLOCK_HEADER_MAGIC_1_0,
        BlockHeader,
        BlockHeaderFlags,
    },
};

/// Read a block stream generated by 'BlockedWriter'.
///
/// This struct implements 'TapeRead'. It always reads whole blocks from
/// the underlying reader, and does additional error checks:
///
/// - check magic number (detect streams not written by 'BlockedWriter')
/// - check block size
/// - check block sequence numbers
///
/// The reader consumes the EOF mark after the data stream (if read to
/// the end of the stream).
pub struct BlockedReader<R> {
    reader: R,
    buffer: Box<BlockHeader>,
    seq_nr: u32,
    found_end_marker: bool,
    incomplete: bool,
    got_eod: bool,
    read_error: bool,
    read_pos: usize,
}

impl <R: Read> BlockedReader<R> {

    /// Create a new BlockedReader instance.
    ///
    /// This tries to read the first block, and returns None if we are
    /// at EOT.
    pub fn open(mut reader: R) -> Result<Option<Self>, std::io::Error> {

        let mut buffer = BlockHeader::new();

        if !Self::read_block_frame(&mut buffer, &mut reader)? {
            return Ok(None);
        }

        let (_size, found_end_marker) = Self::check_buffer(&buffer, 0)?;

        let mut incomplete = false;
        if found_end_marker {
            incomplete = buffer.flags.contains(BlockHeaderFlags::INCOMPLETE);
        }
        Ok(Some(Self {
            reader,
            buffer,
            found_end_marker,
            incomplete,
            seq_nr: 1,
            got_eod: false,
            read_error: false,
            read_pos: 0,
        }))
    }

    fn check_buffer(buffer: &BlockHeader, seq_nr: u32) -> Result<(usize, bool), std::io::Error> {

        if buffer.magic != PROXMOX_TAPE_BLOCK_HEADER_MAGIC_1_0 {
            proxmox::io_bail!("detected tape block with wrong magic number - not written by proxmox tape");
        }

        if seq_nr != buffer.seq_nr() {
            proxmox::io_bail!(
                "detected tape block with wrong sequence number ({} != {})",
                seq_nr, buffer.seq_nr())
        }

        let size = buffer.size();
        let found_end_marker = buffer.flags.contains(BlockHeaderFlags::END_OF_STREAM);

        if size > buffer.payload.len() {
            proxmox::io_bail!("detected tape block with wrong payload size ({} > {})", size, buffer.payload.len());
        } else if size == 0 {
            if !found_end_marker {
                proxmox::io_bail!("detected tape block with zero payload size");
            }
        }

        Ok((size, found_end_marker))
    }

    fn read_block_frame(buffer: &mut BlockHeader, reader: &mut R) -> Result<bool, std::io::Error> {

        let data = unsafe {
            std::slice::from_raw_parts_mut(
                (buffer as *mut BlockHeader) as *mut u8,
                BlockHeader::SIZE,
            )
        };

        tape_device_read_block(reader, data)
    }

    fn read_block(&mut self) -> Result<usize, std::io::Error> {

        if !Self::read_block_frame(&mut self.buffer, &mut self.reader)? {
            self.got_eod = true;
            self.read_pos = self.buffer.payload.len();
            if !self.found_end_marker {
                proxmox::io_bail!("detected tape stream without end marker");
            }
            return Ok(0); // EOD
        }

        let (size, found_end_marker) = Self::check_buffer(&self.buffer, self.seq_nr)?;
        self.seq_nr += 1;

        if found_end_marker { // consume EOF mark
            self.found_end_marker = true;
            self.incomplete = self.buffer.flags.contains(BlockHeaderFlags::INCOMPLETE);
            let mut tmp_buf = [0u8; 512]; // use a small buffer for testing EOF
            if tape_device_read_block(&mut self.reader, &mut tmp_buf)? {
                proxmox::io_bail!("detected tape block after stream end marker");
            } else {
                self.got_eod = true;
            }
        }

        self.read_pos = 0;

        Ok(size)
    }
}

impl <R: Read> TapeRead for BlockedReader<R> {

    fn is_incomplete(&self) -> Result<bool, std::io::Error> {
        if !self.got_eod {
            proxmox::io_bail!("is_incomplete failed: EOD not reached");
        }
        if !self.found_end_marker {
            proxmox::io_bail!("is_incomplete failed: no end marker found");
        }

        Ok(self.incomplete)
    }

    fn has_end_marker(&self) -> Result<bool, std::io::Error> {
        if !self.got_eod {
            proxmox::io_bail!("has_end_marker failed: EOD not reached");
        }

        Ok(self.found_end_marker)
    }
}

impl <R: Read> Read for BlockedReader<R> {

    fn read(&mut self, buffer: &mut [u8]) -> Result<usize, std::io::Error> {

        if self.read_error {
            proxmox::io_bail!("detected read after error - internal error");
        }

        let mut buffer_size = self.buffer.size();
        let mut rest = (buffer_size as isize) - (self.read_pos as isize);

        if rest <= 0 && !self.got_eod { // try to refill buffer
            buffer_size = match self.read_block() {
                Ok(len) => len,
                err => {
                    self.read_error = true;
                    return err;
                }
            };
            rest = buffer_size as isize;
        }

        if rest <= 0 {
            Ok(0)
        } else {
            let copy_len = if (buffer.len() as isize) < rest {
                buffer.len()
            } else {
                rest as usize
            };
            buffer[..copy_len].copy_from_slice(
                &self.buffer.payload[self.read_pos..(self.read_pos + copy_len)]);
            self.read_pos += copy_len;
            Ok(copy_len)
        }
    }
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod test {
|
||||
use std::io::Read;
|
||||
use anyhow::Error;
|
||||
use crate::tape::{
|
||||
TapeWrite,
|
||||
file_formats::PROXMOX_TAPE_BLOCK_SIZE,
|
||||
helpers::{
|
||||
BlockedReader,
|
||||
BlockedWriter,
|
||||
},
|
||||
};
|
||||
|
||||
fn write_and_verify(data: &[u8]) -> Result<(), Error> {
|
||||
|
||||
let mut tape_data = Vec::new();
|
||||
|
||||
let mut writer = BlockedWriter::new(&mut tape_data);
|
||||
|
||||
writer.write_all(data)?;
|
||||
|
||||
writer.finish(false)?;
|
||||
|
||||
assert_eq!(
|
||||
tape_data.len(),
|
||||
((data.len() + PROXMOX_TAPE_BLOCK_SIZE)/PROXMOX_TAPE_BLOCK_SIZE)
|
||||
*PROXMOX_TAPE_BLOCK_SIZE
|
||||
);
|
||||
|
||||
let reader = &mut &tape_data[..];
|
||||
let mut reader = BlockedReader::open(reader)?.unwrap();
|
||||
|
||||
let mut read_data = Vec::with_capacity(PROXMOX_TAPE_BLOCK_SIZE);
|
||||
reader.read_to_end(&mut read_data)?;
|
||||
|
||||
assert_eq!(data.len(), read_data.len());
|
||||
|
||||
assert_eq!(data, &read_data[..]);
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn empty_stream() -> Result<(), Error> {
|
||||
write_and_verify(b"")
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn small_data() -> Result<(), Error> {
|
||||
write_and_verify(b"ABC")
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn large_data() -> Result<(), Error> {
|
||||
let data = proxmox::sys::linux::random_data(1024*1024*5)?;
|
||||
write_and_verify(&data)
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn no_data() -> Result<(), Error> {
|
||||
let tape_data = Vec::new();
|
||||
let reader = &mut &tape_data[..];
|
||||
let reader = BlockedReader::open(reader)?;
|
||||
assert!(reader.is_none());
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn no_end_marker() -> Result<(), Error> {
|
||||
let mut tape_data = Vec::new();
|
||||
{
|
||||
let mut writer = BlockedWriter::new(&mut tape_data);
|
||||
// write at least one block
|
||||
let data = proxmox::sys::linux::random_data(PROXMOX_TAPE_BLOCK_SIZE)?;
|
||||
writer.write_all(&data)?;
|
||||
// but do not call finish here
|
||||
}
|
||||
let reader = &mut &tape_data[..];
|
||||
let mut reader = BlockedReader::open(reader)?.unwrap();
|
||||
|
||||
let mut data = Vec::with_capacity(PROXMOX_TAPE_BLOCK_SIZE);
|
||||
assert!(reader.read_to_end(&mut data).is_err());
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn small_read_buffer() -> Result<(), Error> {
|
||||
let mut tape_data = Vec::new();
|
||||
|
||||
let mut writer = BlockedWriter::new(&mut tape_data);
|
||||
|
||||
writer.write_all(b"ABC")?;
|
||||
|
||||
writer.finish(false)?;
|
||||
|
||||
let reader = &mut &tape_data[..];
|
||||
let mut reader = BlockedReader::open(reader)?.unwrap();
|
||||
|
||||
let mut buf = [0u8; 1];
|
||||
assert_eq!(reader.read(&mut buf)?, 1, "wrong byte count");
|
||||
assert_eq!(&buf, b"A");
|
||||
assert_eq!(reader.read(&mut buf)?, 1, "wrong byte count");
|
||||
assert_eq!(&buf, b"B");
|
||||
assert_eq!(reader.read(&mut buf)?, 1, "wrong byte count");
|
||||
assert_eq!(&buf, b"C");
|
||||
assert_eq!(reader.read(&mut buf)?, 0, "wrong byte count");
|
||||
assert_eq!(reader.read(&mut buf)?, 0, "wrong byte count");
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
src/tape/helpers/blocked_writer.rs (Normal file, 124 lines)
@@ -0,0 +1,124 @@
use std::io::Write;

use proxmox::tools::vec;

use crate::tape::{
    TapeWrite,
    tape_device_write_block,
    file_formats::{
        BlockHeader,
        BlockHeaderFlags,
    },
};

/// Assemble and write blocks of data
///
/// This type implements 'TapeWrite'. Data written is assembled into
/// equally sized blocks (see 'BlockHeader'), which are then written
/// to the underlying writer.
pub struct BlockedWriter<W> {
    writer: W,
    buffer: Box<BlockHeader>,
    buffer_pos: usize,
    seq_nr: u32,
    logical_end_of_media: bool,
    bytes_written: usize,
}

impl <W: Write> BlockedWriter<W> {

    /// Allow access to underlying writer
    pub fn writer_ref_mut(&mut self) -> &mut W {
        &mut self.writer
    }

    /// Creates a new instance.
    pub fn new(writer: W) -> Self {
        Self {
            writer,
            buffer: BlockHeader::new(),
            buffer_pos: 0,
            seq_nr: 0,
            logical_end_of_media: false,
            bytes_written: 0,
        }
    }

    fn write_block(buffer: &BlockHeader, writer: &mut W) -> Result<bool, std::io::Error> {

        let data = unsafe {
            std::slice::from_raw_parts(
                (buffer as *const BlockHeader) as *const u8,
                BlockHeader::SIZE,
            )
        };
        tape_device_write_block(writer, data)
    }

    fn write(&mut self, data: &[u8]) -> Result<usize, std::io::Error> {

        if data.is_empty() { return Ok(0); }

        let rest = self.buffer.payload.len() - self.buffer_pos;
        let bytes = if data.len() < rest { data.len() } else { rest };
        self.buffer.payload[self.buffer_pos..(self.buffer_pos+bytes)]
            .copy_from_slice(&data[..bytes]);

        let rest = rest - bytes;

        if rest == 0 {
            self.buffer.flags = BlockHeaderFlags::empty();
            self.buffer.set_size(self.buffer.payload.len());
            self.buffer.set_seq_nr(self.seq_nr);
            self.seq_nr += 1;
            let leom = Self::write_block(&self.buffer, &mut self.writer)?;
            if leom { self.logical_end_of_media = true; }
            self.buffer_pos = 0;
            self.bytes_written += BlockHeader::SIZE;
        } else {
            self.buffer_pos += bytes;
        }

        Ok(bytes)
    }

}

impl <W: Write> TapeWrite for BlockedWriter<W> {

    fn write_all(&mut self, mut data: &[u8]) -> Result<bool, std::io::Error> {
        while !data.is_empty() {
            match self.write(data) {
                Ok(n) => data = &data[n..],
                Err(e) => return Err(e),
            }
        }
        Ok(self.logical_end_of_media)
    }

    fn bytes_written(&self) -> usize {
        self.bytes_written
    }

    /// Flush the last block and set the END_OF_STREAM flag
    ///
    /// Note: This may write an empty block just including the
    /// END_OF_STREAM flag.
    fn finish(&mut self, incomplete: bool) -> Result<bool, std::io::Error> {
        vec::clear(&mut self.buffer.payload[self.buffer_pos..]);
        self.buffer.flags = BlockHeaderFlags::END_OF_STREAM;
        if incomplete { self.buffer.flags |= BlockHeaderFlags::INCOMPLETE; }
        self.buffer.set_size(self.buffer_pos);
        self.buffer.set_seq_nr(self.seq_nr);
        self.seq_nr += 1;
        self.bytes_written += BlockHeader::SIZE;
        Self::write_block(&self.buffer, &mut self.writer)
    }

    /// Returns whether the writer has already detected the logical end of media
    fn logical_end_of_media(&self) -> bool {
        self.logical_end_of_media
    }

}
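The framing idea behind `BlockedWriter` (fill equally sized blocks, then flag the final, possibly short block as end-of-stream in `finish`) can be sketched in plain Rust, independent of the proxmox-backup types. `Block`, `BLOCK_PAYLOAD`, and `split_into_blocks` below are illustrative names, not the crate's API, and the payload size is shrunk for readability:

```rust
// Minimal sketch of fixed-size block framing with an end-of-stream flag.
// A tiny payload size is used for illustration; the real writer uses
// PROXMOX_TAPE_BLOCK_SIZE and writes each block to the tape device.

const BLOCK_PAYLOAD: usize = 8;

#[allow(dead_code)]
struct Block {
    end_of_stream: bool,      // set on the final block (cf. END_OF_STREAM)
    size: usize,              // number of valid payload bytes
    payload: [u8; BLOCK_PAYLOAD],
}

fn split_into_blocks(data: &[u8]) -> Vec<Block> {
    let mut blocks: Vec<Block> = data
        .chunks(BLOCK_PAYLOAD)
        .map(|chunk| {
            let mut payload = [0u8; BLOCK_PAYLOAD];
            payload[..chunk.len()].copy_from_slice(chunk);
            Block { end_of_stream: false, size: chunk.len(), payload }
        })
        .collect();
    // finish(): flag the trailing partial block, or append an empty
    // end-of-stream block when the data ended exactly on a block boundary
    match blocks.last_mut() {
        Some(last) if last.size < BLOCK_PAYLOAD => last.end_of_stream = true,
        _ => blocks.push(Block {
            end_of_stream: true,
            size: 0,
            payload: [0u8; BLOCK_PAYLOAD],
        }),
    }
    blocks
}
```

This mirrors why the `write_and_verify` test above expects the output length to round up to the next whole block: even an empty stream produces one (empty) end-of-stream block.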
src/tape/helpers/emulate_tape_reader.rs (Normal file, 56 lines)
@@ -0,0 +1,56 @@
use std::io::{self, Read};

use crate::tape::file_formats::PROXMOX_TAPE_BLOCK_SIZE;

/// Emulate tape read behavior on a normal Reader
///
/// Tape reads always return one whole block of PROXMOX_TAPE_BLOCK_SIZE.
pub struct EmulateTapeReader<R> {
    reader: R,
}

impl <R: Read> EmulateTapeReader<R> {

    pub fn new(reader: R) -> Self {
        Self { reader }
    }
}

impl <R: Read> Read for EmulateTapeReader<R> {

    fn read(&mut self, mut buffer: &mut [u8]) -> Result<usize, io::Error> {

        let initial_buffer_len = buffer.len(); // store, check later

        let mut bytes = 0;

        while !buffer.is_empty() {
            match self.reader.read(buffer) {
                Ok(0) => break,
                Ok(n) => {
                    bytes += n;
                    let tmp = buffer;
                    buffer = &mut tmp[n..];
                }
                Err(ref e) if e.kind() == io::ErrorKind::Interrupted => {}
                Err(e) => return Err(e),
            }
        }

        if bytes == 0 {
            return Ok(0);
        }

        // test buffer len after EOF test (to allow EOF test with small buffers in BufferedReader)
        if initial_buffer_len != PROXMOX_TAPE_BLOCK_SIZE {
            proxmox::io_bail!("EmulateTapeReader: got read with wrong block size ({} != {})",
                              initial_buffer_len, PROXMOX_TAPE_BLOCK_SIZE);
        }

        if !buffer.is_empty() {
            Err(io::Error::new(io::ErrorKind::UnexpectedEof, "failed to fill whole buffer"))
        } else {
            Ok(bytes)
        }
    }
}
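The fill-the-whole-buffer loop in `EmulateTapeReader::read` follows the same pattern as `std::io::Read::read_exact`: keep reading until the buffer is full, stop at EOF, and retry on `Interrupted`. A self-contained sketch of that loop against an in-memory reader (`read_full` is a hypothetical helper name):

```rust
use std::io::{self, Read};

// Read until `buffer` is full, EOF is reached, or a real error occurs.
// Returns the number of bytes actually read.
fn read_full<R: Read>(reader: &mut R, mut buffer: &mut [u8]) -> io::Result<usize> {
    let mut bytes = 0;
    while !buffer.is_empty() {
        match reader.read(buffer) {
            Ok(0) => break, // EOF
            Ok(n) => {
                bytes += n;
                // advance the window past the bytes just read
                let tmp = buffer;
                buffer = &mut tmp[n..];
            }
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => {} // retry
            Err(e) => return Err(e),
        }
    }
    Ok(bytes)
}
```

The `let tmp = buffer; buffer = &mut tmp[n..];` dance (also used in the original) is needed to reborrow a `&mut [u8]` while shrinking it in place.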
src/tape/helpers/emulate_tape_writer.rs (Normal file, 68 lines)
@@ -0,0 +1,68 @@
use std::io::{self, Write};

use crate::tape::file_formats::PROXMOX_TAPE_BLOCK_SIZE;

/// Emulate tape write behavior on a normal Writer
///
/// Data needs to be written in blocks of size PROXMOX_TAPE_BLOCK_SIZE.
/// Before reaching the EOT, the writer returns ENOSPC (like a Linux
/// tape device).
pub struct EmulateTapeWriter<W> {
    block_nr: usize,
    max_blocks: usize,
    writer: W,
    leom_sent: bool,
}

impl <W: Write> EmulateTapeWriter<W> {

    /// Create a new instance allowing to write about max_size bytes
    pub fn new(writer: W, max_size: usize) -> Self {

        let mut max_blocks = max_size/PROXMOX_TAPE_BLOCK_SIZE;

        if max_blocks < 2 {
            max_blocks = 2; // at least 2 blocks
        }

        Self {
            block_nr: 0,
            leom_sent: false,
            writer,
            max_blocks,
        }
    }
}

impl <W: Write> Write for EmulateTapeWriter<W> {

    fn write(&mut self, buffer: &[u8]) -> Result<usize, io::Error> {

        if buffer.len() != PROXMOX_TAPE_BLOCK_SIZE {
            proxmox::io_bail!("EmulateTapeWriter: got write with wrong block size ({} != {})",
                              buffer.len(), PROXMOX_TAPE_BLOCK_SIZE);
        }

        if self.block_nr >= self.max_blocks + 2 {
            return Err(io::Error::from_raw_os_error(nix::errno::Errno::ENOSPC as i32));
        }

        if self.block_nr >= self.max_blocks {
            if !self.leom_sent {
                self.leom_sent = true;
                return Err(io::Error::from_raw_os_error(nix::errno::Errno::ENOSPC as i32));
            } else {
                self.leom_sent = false;
            }
        }

        self.writer.write_all(buffer)?;
        self.block_nr += 1;

        Ok(buffer.len())
    }

    fn flush(&mut self) -> Result<(), io::Error> {
        proxmox::io_bail!("EmulateTapeWriter does not support flush");
    }
}
src/tape/helpers/mod.rs (Normal file, 11 lines)
@@ -0,0 +1,11 @@
mod emulate_tape_writer;
pub use emulate_tape_writer::*;

mod emulate_tape_reader;
pub use emulate_tape_reader::*;

mod blocked_reader;
pub use blocked_reader::*;

mod blocked_writer;
pub use blocked_writer::*;
src/tape/inventory.rs (Normal file, 652 lines)
@@ -0,0 +1,652 @@
//! Backup media Inventory
//!
//! The Inventory persistently stores the list of known backup
//! media. A backup media is identified by its 'MediaId', which is the
//! DriveLabel/MediaSetLabel combination.

use std::collections::{HashMap, BTreeMap};
use std::path::{Path, PathBuf};

use anyhow::{bail, Error};
use serde::{Serialize, Deserialize};
use serde_json::json;

use proxmox::tools::{
    Uuid,
    fs::{
        open_file_locked,
        replace_file,
        file_get_json,
        CreateOptions,
    },
};

use crate::{
    tools::systemd::time::compute_next_event,
    api2::types::{
        MediaSetPolicy,
        RetentionPolicy,
    },
    tape::{
        TAPE_STATUS_DIR,
        MediaLabelInfo,
        file_formats::{
            DriveLabel,
            MediaSetLabel,
        },
    },
};

/// Unique Media Identifier
///
/// This combines the label and media set label.
#[derive(Debug,Serialize,Deserialize,Clone)]
pub struct MediaId {
    pub label: DriveLabel,
    #[serde(skip_serializing_if="Option::is_none")]
    pub media_set_label: Option<MediaSetLabel>,
}

impl From<MediaLabelInfo> for MediaId {
    fn from(info: MediaLabelInfo) -> Self {
        Self {
            label: info.label.clone(),
            media_set_label: info.media_set_label.map(|(l, _)| l),
        }
    }
}

/// Media Set
///
/// A list of backup media
#[derive(Debug, Serialize, Deserialize)]
pub struct MediaSet {
    /// Unique media set ID
    uuid: Uuid,
    /// List of BackupMedia
    media_list: Vec<Option<Uuid>>,
}

impl MediaSet {

    pub const MEDIA_SET_MAX_SEQ_NR: u64 = 100;

    pub fn new() -> Self {
        let uuid = Uuid::generate();
        Self {
            uuid,
            media_list: Vec::new(),
        }
    }

    pub fn with_data(uuid: Uuid, media_list: Vec<Option<Uuid>>) -> Self {
        Self { uuid, media_list }
    }

    pub fn uuid(&self) -> &Uuid {
        &self.uuid
    }

    pub fn media_list(&self) -> &[Option<Uuid>] {
        &self.media_list
    }

    pub fn add_media(&mut self, uuid: Uuid) {
        self.media_list.push(Some(uuid));
    }

    pub fn insert_media(&mut self, uuid: Uuid, seq_nr: u64) -> Result<(), Error> {
        if seq_nr > Self::MEDIA_SET_MAX_SEQ_NR {
            bail!("media set sequence number too large in media set {} ({} > {})",
                  self.uuid.to_string(), seq_nr, Self::MEDIA_SET_MAX_SEQ_NR);
        }
        let seq_nr = seq_nr as usize;
        if self.media_list.len() > seq_nr {
            if self.media_list[seq_nr].is_some() {
                bail!("found duplicate sequence number in media set '{}/{}'",
                      self.uuid.to_string(), seq_nr);
            }
        } else {
            self.media_list.resize(seq_nr + 1, None);
        }
        self.media_list[seq_nr] = Some(uuid);
        Ok(())
    }

    pub fn last_media_uuid(&self) -> Option<&Uuid> {
        match self.media_list.last() {
            None => None,
            Some(None) => None,
            Some(Some(ref last_uuid)) => Some(last_uuid),
        }
    }

    pub fn is_last_media(&self, uuid: &Uuid) -> bool {
        match self.media_list.last() {
            None => false,
            Some(None) => false,
            Some(Some(last_uuid)) => uuid == last_uuid,
        }
    }
}

/// Media Inventory
pub struct Inventory {
    map: BTreeMap<Uuid, MediaId>,

    inventory_path: PathBuf,
    lockfile_path: PathBuf,

    // helpers
    media_set_start_times: HashMap<Uuid, i64>
}

impl Inventory {

    pub const MEDIA_INVENTORY_FILENAME: &'static str = "inventory.json";
    pub const MEDIA_INVENTORY_LOCKFILE: &'static str = ".inventory.lck";

    fn new(base_path: &Path) -> Self {

        let mut inventory_path = base_path.to_owned();
        inventory_path.push(Self::MEDIA_INVENTORY_FILENAME);

        let mut lockfile_path = base_path.to_owned();
        lockfile_path.push(Self::MEDIA_INVENTORY_LOCKFILE);

        Self {
            map: BTreeMap::new(),
            media_set_start_times: HashMap::new(),
            inventory_path,
            lockfile_path,
        }
    }

    pub fn load(base_path: &Path) -> Result<Self, Error> {
        let mut me = Self::new(base_path);
        me.reload()?;
        Ok(me)
    }

    /// Reload the database
    pub fn reload(&mut self) -> Result<(), Error> {
        self.map = Self::load_media_db(&self.inventory_path)?;
        self.update_helpers();
        Ok(())
    }

    fn update_helpers(&mut self) {

        // recompute media_set_start_times

        let mut set_start_times = HashMap::new();

        for media in self.map.values() {
            let set = match &media.media_set_label {
                None => continue,
                Some(set) => set,
            };
            if set.seq_nr == 0 {
                set_start_times.insert(set.uuid.clone(), set.ctime);
            }
        }

        self.media_set_start_times = set_start_times;
    }

    /// Lock the database
    pub fn lock(&self) -> Result<std::fs::File, Error> {
        open_file_locked(&self.lockfile_path, std::time::Duration::new(10, 0), true)
    }

    fn load_media_db(path: &Path) -> Result<BTreeMap<Uuid, MediaId>, Error> {

        let data = file_get_json(path, Some(json!([])))?;
        let media_list: Vec<MediaId> = serde_json::from_value(data)?;

        let mut map = BTreeMap::new();
        for item in media_list.into_iter() {
            map.insert(item.label.uuid.clone(), item);
        }

        Ok(map)
    }

    fn replace_file(&self) -> Result<(), Error> {
        let list: Vec<&MediaId> = self.map.values().collect();
        let raw = serde_json::to_string_pretty(&serde_json::to_value(list)?)?;

        let backup_user = crate::backup::backup_user()?;
        let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
        let options = CreateOptions::new()
            .perm(mode)
            .owner(backup_user.uid)
            .group(backup_user.gid);

        replace_file(&self.inventory_path, raw.as_bytes(), options)?;

        Ok(())
    }

    /// Stores a single MediaId persistently
    pub fn store(&mut self, mut media_id: MediaId) -> Result<(), Error> {
        let _lock = self.lock()?;
        self.map = Self::load_media_db(&self.inventory_path)?;

        // do not overwrite unsaved pool assignments
        if media_id.media_set_label.is_none() {
            if let Some(previous) = self.map.get(&media_id.label.uuid) {
                if let Some(ref set) = previous.media_set_label {
                    if set.uuid.as_ref() == [0u8;16] {
                        media_id.media_set_label = Some(set.clone());
                    }
                }
            }
        }

        self.map.insert(media_id.label.uuid.clone(), media_id);
        self.update_helpers();
        self.replace_file()?;
        Ok(())
    }

    /// Lookup media
    pub fn lookup_media(&self, uuid: &Uuid) -> Option<&MediaId> {
        self.map.get(uuid)
    }

    /// Find media by changer_id
    pub fn find_media_by_changer_id(&self, changer_id: &str) -> Option<&MediaId> {
        for (_uuid, media_id) in &self.map {
            if media_id.label.changer_id == changer_id {
                return Some(media_id);
            }
        }
        None
    }

    /// Lookup media pool
    ///
    /// Returns (pool, is_empty)
    pub fn lookup_media_pool(&self, uuid: &Uuid) -> Option<(&str, bool)> {
        match self.map.get(uuid) {
            None => None,
            Some(media_id) => {
                match media_id.media_set_label {
                    None => None, // not assigned to any pool
                    Some(ref set) => {
                        let is_empty = set.uuid.as_ref() == [0u8;16];
                        Some((&set.pool, is_empty))
                    }
                }
            }
        }
    }

    /// List all media assigned to the pool
    pub fn list_pool_media(&self, pool: &str) -> Vec<MediaId> {
        let mut list = Vec::new();

        for (_uuid, media_id) in &self.map {
            match media_id.media_set_label {
                None => continue, // not assigned to any pool
                Some(ref set) => {
                    if set.pool != pool {
                        continue; // belongs to another pool
                    }

                    if set.uuid.as_ref() == [0u8;16] { // should we do this??
                        list.push(MediaId {
                            label: media_id.label.clone(),
                            media_set_label: None,
                        })
                    } else {
                        list.push(media_id.clone());
                    }
                }
            }
        }

        list
    }

    /// List all used media
    pub fn list_used_media(&self) -> Vec<MediaId> {
        let mut list = Vec::new();

        for (_uuid, media_id) in &self.map {
            match media_id.media_set_label {
                None => continue, // not assigned to any pool
                Some(ref set) => {
                    if set.uuid.as_ref() != [0u8;16] {
                        list.push(media_id.clone());
                    }
                }
            }
        }

        list
    }

    /// List media not assigned to any pool
    pub fn list_unassigned_media(&self) -> Vec<MediaId> {
        let mut list = Vec::new();

        for (_uuid, media_id) in &self.map {
            if media_id.media_set_label.is_none() {
                list.push(media_id.clone());
            }
        }

        list
    }

    pub fn media_set_start_time(&self, media_set_uuid: &Uuid) -> Option<i64> {
        self.media_set_start_times.get(media_set_uuid).map(|t| *t)
    }

    /// Compute the members of a single media set
    pub fn compute_media_set_members(&self, media_set_uuid: &Uuid) -> Result<MediaSet, Error> {

        let mut set = MediaSet::with_data(media_set_uuid.clone(), Vec::new());

        for media in self.map.values() {
            match media.media_set_label {
                None => continue,
                Some(MediaSetLabel { seq_nr, ref uuid, .. }) => {
                    if uuid != media_set_uuid {
                        continue;
                    }
                    set.insert_media(media.label.uuid.clone(), seq_nr)?;
                }
            }
        }

        Ok(set)
    }

    /// Compute all media sets
    pub fn compute_media_set_list(&self) -> Result<HashMap<Uuid, MediaSet>, Error> {

        let mut set_map: HashMap<Uuid, MediaSet> = HashMap::new();

        for media in self.map.values() {
            match media.media_set_label {
                None => continue,
                Some(MediaSetLabel { seq_nr, ref uuid, .. }) => {

                    let set = set_map.entry(uuid.clone()).or_insert_with(|| {
                        MediaSet::with_data(uuid.clone(), Vec::new())
                    });

                    set.insert_media(media.label.uuid.clone(), seq_nr)?;
                }
            }
        }

        Ok(set_map)
    }

    /// Returns the latest media set for a pool
    pub fn latest_media_set(&self, pool: &str) -> Option<Uuid> {

        let mut last_set: Option<(Uuid, i64)> = None;

        let set_list = self.map.values()
            .filter_map(|media| media.media_set_label.as_ref())
            .filter(|set| &set.pool == &pool && set.uuid.as_ref() != [0u8;16]);

        for set in set_list {
            match last_set {
                None => {
                    last_set = Some((set.uuid.clone(), set.ctime));
                }
                Some((_, last_ctime)) => {
                    if set.ctime > last_ctime {
                        last_set = Some((set.uuid.clone(), set.ctime));
                    }
                }
            }
        }

        let (uuid, ctime) = match last_set {
            None => return None,
            Some((uuid, ctime)) => (uuid, ctime),
        };

        // consistency check - must be the only set with that ctime
        let set_list = self.map.values()
            .filter_map(|media| media.media_set_label.as_ref())
            .filter(|set| &set.pool == &pool && set.uuid.as_ref() != [0u8;16]);

        for set in set_list {
            if set.uuid != uuid && set.ctime >= ctime { // should not happen
                eprintln!("latest_media_set: found set with equal ctime ({}, {})", set.uuid, uuid);
                return None;
            }
        }

        Some(uuid)
    }

    // Test if there is a media set (in the same pool) newer than this one.
    // Returns the ctime of the nearest media set
    fn media_set_next_start_time(&self, media_set_uuid: &Uuid) -> Option<i64> {

        let (pool, ctime) = match self.map.values()
            .filter_map(|media| media.media_set_label.as_ref())
            .find_map(|set| {
                if &set.uuid == media_set_uuid {
                    Some((set.pool.clone(), set.ctime))
                } else {
                    None
                }
            }) {
            Some((pool, ctime)) => (pool, ctime),
            None => return None,
        };

        let set_list = self.map.values()
            .filter_map(|media| media.media_set_label.as_ref())
            .filter(|set| (&set.uuid != media_set_uuid) && (&set.pool == &pool));

        let mut next_ctime = None;

        for set in set_list {
            if set.ctime > ctime {
                match next_ctime {
                    None => {
                        next_ctime = Some(set.ctime);
                    }
                    Some(last_next_ctime) => {
                        if set.ctime < last_next_ctime {
                            next_ctime = Some(set.ctime);
                        }
                    }
                }
            }
        }

        next_ctime
    }

    pub fn media_expire_time(
        &self,
        media: &MediaId,
        media_set_policy: &MediaSetPolicy,
        retention_policy: &RetentionPolicy,
    ) -> i64 {

        if let RetentionPolicy::KeepForever = retention_policy {
            return i64::MAX;
        }

        let set = match media.media_set_label {
            None => return i64::MAX,
            Some(ref set) => set,
        };

        let set_start_time = match self.media_set_start_time(&set.uuid) {
            None => {
                // missing information, use ctime from this
                // set (always greater than ctime from seq_nr 0)
                set.ctime
            }
            Some(time) => time,
        };

        let max_use_time = match media_set_policy {
            MediaSetPolicy::ContinueCurrent => {
                match self.media_set_next_start_time(&set.uuid) {
                    Some(next_start_time) => next_start_time,
                    None => return i64::MAX,
                }
            }
            MediaSetPolicy::AlwaysCreate => {
                set_start_time + 1
            }
            MediaSetPolicy::CreateAt(ref event) => {
                match compute_next_event(event, set_start_time, false) {
                    Ok(Some(next)) => next,
                    Ok(None) | Err(_) => return i64::MAX,
                }
            }
        };

        match retention_policy {
            RetentionPolicy::KeepForever => i64::MAX,
            RetentionPolicy::OverwriteAlways => max_use_time,
            RetentionPolicy::ProtectFor(time_span) => {
                let seconds = f64::from(time_span.clone()) as i64;
                max_use_time + seconds
            }
        }
    }

    /// Generate a human readable name for the media set
    ///
    /// The template can include strftime time format specifications.
    pub fn generate_media_set_name(
        &self,
        media_set_uuid: &Uuid,
        template: Option<String>,
    ) -> Result<String, Error> {

        if let Some(ctime) = self.media_set_start_time(media_set_uuid) {
            let mut template = template.unwrap_or(String::from("%id%"));
            template = template.replace("%id%", &media_set_uuid.to_string());
            proxmox::tools::time::strftime_local(&template, ctime)
        } else {
            // We don't know the set start time, so we cannot use the template
            Ok(media_set_uuid.to_string())
        }
    }

    // Helpers to simplify testing

    /// Generate and insert a new free tape (test helper)
    pub fn generate_free_tape(&mut self, changer_id: &str, ctime: i64) -> Uuid {

        let label = DriveLabel {
            changer_id: changer_id.to_string(),
            uuid: Uuid::generate(),
            ctime,
        };
        let uuid = label.uuid.clone();

        self.store(MediaId { label, media_set_label: None }).unwrap();

        uuid
    }

    /// Generate and insert a new tape assigned to a specific pool
    /// (test helper)
    pub fn generate_assigned_tape(
        &mut self,
        changer_id: &str,
        pool: &str,
        ctime: i64,
    ) -> Uuid {

        let label = DriveLabel {
            changer_id: changer_id.to_string(),
            uuid: Uuid::generate(),
            ctime,
        };

        let uuid = label.uuid.clone();

        let set = MediaSetLabel::with_data(pool, [0u8; 16].into(), 0, ctime);

        self.store(MediaId { label, media_set_label: Some(set) }).unwrap();

        uuid
    }

    /// Generate and insert a used tape (test helper)
    pub fn generate_used_tape(
        &mut self,
        changer_id: &str,
        set: MediaSetLabel,
        ctime: i64,
    ) -> Uuid {
        let label = DriveLabel {
            changer_id: changer_id.to_string(),
            uuid: Uuid::generate(),
            ctime,
        };
        let uuid = label.uuid.clone();

        self.store(MediaId { label, media_set_label: Some(set) }).unwrap();

        uuid
    }
}

// shell completion helper

/// List of known media uuids
pub fn complete_media_uuid(
    _arg: &str,
    _param: &HashMap<String, String>,
) -> Vec<String> {

    let inventory = match Inventory::load(Path::new(TAPE_STATUS_DIR)) {
        Ok(inventory) => inventory,
        Err(_) => return Vec::new(),
    };

    inventory.map.keys().map(|uuid| uuid.to_string()).collect()
}

/// List of known media sets
pub fn complete_media_set_uuid(
    _arg: &str,
    _param: &HashMap<String, String>,
) -> Vec<String> {

    let inventory = match Inventory::load(Path::new(TAPE_STATUS_DIR)) {
        Ok(inventory) => inventory,
        Err(_) => return Vec::new(),
    };

    inventory.map.values()
        .filter_map(|media| media.media_set_label.as_ref())
        .map(|set| set.uuid.to_string()).collect()
}

/// List of known media labels (barcodes)
pub fn complete_media_changer_id(
    _arg: &str,
    _param: &HashMap<String, String>,
) -> Vec<String> {

    let inventory = match Inventory::load(Path::new(TAPE_STATUS_DIR)) {
        Ok(inventory) => inventory,
        Err(_) => return Vec::new(),
    };

    inventory.map.values().map(|media| media.label.changer_id.clone()).collect()
}
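`MediaSet::insert_media` stores members in a sparse `Vec<Option<Uuid>>` indexed by sequence number, padding gaps with `None` and rejecting duplicates. The same pattern in a self-contained sketch (`insert_at_seq` is a hypothetical helper; `String` stands in for `Uuid`):

```rust
// Sparse insertion by sequence number: grow the vector with None
// placeholders as needed, and reject an already-occupied slot.
fn insert_at_seq(
    list: &mut Vec<Option<String>>,
    uuid: String,
    seq_nr: usize,
) -> Result<(), String> {
    if list.len() > seq_nr {
        if list[seq_nr].is_some() {
            return Err(format!("duplicate sequence number {}", seq_nr));
        }
    } else {
        list.resize(seq_nr + 1, None); // pad the gap with None placeholders
    }
    list[seq_nr] = Some(uuid);
    Ok(())
}
```

Keeping `None` holes lets `compute_media_set_members` reassemble a set from media discovered in any order, while still detecting when two tapes claim the same slot.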
src/tape/media_pool.rs (Normal file, 483 lines)
@@ -0,0 +1,483 @@
//! Media Pool
//!
//! A set of backup media.
//!
//! This struct manages backup media state during backup. The main
//! purpose is to allocate media sets and assign new tapes to them.
//!

use std::path::Path;
use anyhow::{bail, Error};
use ::serde::{Deserialize, Serialize};

use proxmox::tools::Uuid;

use crate::{
    api2::types::{
        MediaStatus,
        MediaSetPolicy,
        RetentionPolicy,
        MediaPoolConfig,
    },
    tools::systemd::time::compute_next_event,
    tape::{
        MediaId,
        MediaSet,
        MediaLocation,
        Inventory,
        MediaStateDatabase,
        file_formats::{
            DriveLabel,
            MediaSetLabel,
        },
    }
};

pub struct MediaPoolLockGuard(std::fs::File);

/// Media Pool
pub struct MediaPool {

    name: String,

    media_set_policy: MediaSetPolicy,
    retention: RetentionPolicy,

    inventory: Inventory,
    state_db: MediaStateDatabase,

    current_media_set: MediaSet,
}

impl MediaPool {

    /// Creates a new instance
    pub fn new(
        name: &str,
        state_path: &Path,
        media_set_policy: MediaSetPolicy,
        retention: RetentionPolicy,
    ) -> Result<Self, Error> {

        let inventory = Inventory::load(state_path)?;

        let current_media_set = match inventory.latest_media_set(name) {
            Some(set_uuid) => inventory.compute_media_set_members(&set_uuid)?,
            None => MediaSet::new(),
        };

        let state_db = MediaStateDatabase::load(state_path)?;

        Ok(MediaPool {
            name: String::from(name),
            media_set_policy,
            retention,
            inventory,
            state_db,
            current_media_set,
        })
    }

    /// Creates a new instance using the media pool configuration
    pub fn with_config(
        name: &str,
        state_path: &Path,
        config: &MediaPoolConfig,
    ) -> Result<Self, Error> {

        let allocation = config.allocation.clone().unwrap_or(String::from("continue")).parse()?;

        let retention = config.retention.clone().unwrap_or(String::from("keep")).parse()?;

        MediaPool::new(name, state_path, allocation, retention)
    }

    /// Returns the pool name
    pub fn name(&self) -> &str {
        &self.name
    }

    fn compute_media_state(&self, media_id: &MediaId) -> (MediaStatus, MediaLocation) {

        let (status, location) = self.state_db.status_and_location(&media_id.label.uuid);

        match status {
            MediaStatus::Full | MediaStatus::Damaged | MediaStatus::Retired => {
                return (status, location);
            }
            MediaStatus::Unknown | MediaStatus::Writable => {
                /* possibly writable - fall through to check */
            }
        }

        let set = match media_id.media_set_label {
            None => return (MediaStatus::Writable, location), // not assigned to any pool
            Some(ref set) => set,
        };

        if set.pool != self.name { // should never trigger
            return (MediaStatus::Unknown, location); // belongs to another pool
        }
        if set.uuid.as_ref() == [0u8;16] { // not assigned to any media set
            return (MediaStatus::Writable, location);
        }

        if &set.uuid != self.current_media_set.uuid() {
            return (MediaStatus::Full, location); // assume FULL
        }

        // media is a member of the current set
        if self.current_media_set.is_last_media(&media_id.label.uuid) {
            (MediaStatus::Writable, location) // last set member is writable
        } else {
            (MediaStatus::Full, location)
        }
    }

    /// Returns the 'MediaId' with associated state
    pub fn lookup_media(&self, uuid: &Uuid) -> Result<BackupMedia, Error> {
        let media_id = match self.inventory.lookup_media(uuid) {
            None => bail!("unable to lookup media {}", uuid),
            Some(media_id) => media_id.clone(),
        };

        if let Some(ref set) = media_id.media_set_label {
            if set.pool != self.name {
                bail!("media does not belong to pool ({} != {})", set.pool, self.name);
            }
        }

        let (status, location) = self.compute_media_state(&media_id);

        Ok(BackupMedia::with_media_id(
            media_id,
            location,
            status,
        ))
    }

    /// List all media associated with this pool
    pub fn list_media(&self) -> Vec<BackupMedia> {
        let media_id_list = self.inventory.list_pool_media(&self.name);

        media_id_list.into_iter()
            .map(|media_id| {
                let (status, location) = self.compute_media_state(&media_id);
                BackupMedia::with_media_id(
                    media_id,
                    location,
                    status,
                )
            })
            .collect()
    }

    /// Set media status to FULL.
    pub fn set_media_status_full(&mut self, uuid: &Uuid) -> Result<(), Error> {
        let media = self.lookup_media(uuid)?; // check if media belongs to this pool
        if media.status() != &MediaStatus::Full {
            self.state_db.set_media_status_full(uuid)?;
        }
Ok(())
|
||||
}
|
||||
|
||||
/// Make sure the current media set is usable for writing
|
||||
///
|
||||
/// If not, starts a new media set. Also creates a new
|
||||
/// set if media_set_policy implies it.
|
||||
pub fn start_write_session(&mut self, current_time: i64) -> Result<(), Error> {
|
||||
|
||||
let mut create_new_set = match self.current_set_usable() {
|
||||
Err(err) => {
|
||||
eprintln!("unable to use current media set - {}", err);
|
||||
true
|
||||
}
|
||||
Ok(usable) => !usable,
|
||||
};
|
||||
|
||||
if !create_new_set {
|
||||
|
||||
match &self.media_set_policy {
|
||||
MediaSetPolicy::AlwaysCreate => {
|
||||
create_new_set = true;
|
||||
}
|
||||
MediaSetPolicy::CreateAt(event) => {
|
||||
if let Some(set_start_time) = self.inventory.media_set_start_time(&self.current_media_set.uuid()) {
|
||||
if let Ok(Some(alloc_time)) = compute_next_event(event, set_start_time as i64, false) {
|
||||
if current_time >= alloc_time {
|
||||
create_new_set = true;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
MediaSetPolicy::ContinueCurrent => { /* do nothing here */ }
|
||||
}
|
||||
}
|
||||
|
||||
if create_new_set {
|
||||
let media_set = MediaSet::new();
|
||||
eprintln!("starting new media set {}", media_set.uuid());
|
||||
self.current_media_set = media_set;
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// List media in current media set
|
||||
pub fn current_media_list(&self) -> Result<Vec<&Uuid>, Error> {
|
||||
let mut list = Vec::new();
|
||||
for opt_uuid in self.current_media_set.media_list().iter() {
|
||||
match opt_uuid {
|
||||
Some(ref uuid) => list.push(uuid),
|
||||
None => bail!("current_media_list failed - media set is incomplete"),
|
||||
}
|
||||
}
|
||||
Ok(list)
|
||||
}
|
||||
|
||||
// tests if the media data is considered as expired at sepcified time
|
||||
pub fn media_is_expired(&self, media: &BackupMedia, current_time: i64) -> bool {
|
||||
if media.status() != &MediaStatus::Full {
|
||||
return false;
|
||||
}
|
||||
|
||||
let expire_time = self.inventory.media_expire_time(
|
||||
media.id(), &self.media_set_policy, &self.retention);
|
||||
|
||||
current_time > expire_time
|
||||
}
|
||||
|
||||
/// Allocates a writable media to the current media set
|
||||
pub fn alloc_writable_media(&mut self, current_time: i64) -> Result<Uuid, Error> {
|
||||
|
||||
let last_is_writable = self.current_set_usable()?;
|
||||
|
||||
let pool = self.name.clone();
|
||||
|
||||
if last_is_writable {
|
||||
let last_uuid = self.current_media_set.last_media_uuid().unwrap();
|
||||
let media = self.lookup_media(last_uuid)?;
|
||||
return Ok(media.uuid().clone());
|
||||
}
|
||||
|
||||
// try to find empty media in pool, add to media set
|
||||
|
||||
let mut media_list = self.list_media();
|
||||
|
||||
let mut empty_media = Vec::new();
|
||||
for media in media_list.iter_mut() {
|
||||
// already part of a media set?
|
||||
if media.media_set_label().is_some() { continue; }
|
||||
|
||||
// check if media is on site
|
||||
match media.location() {
|
||||
MediaLocation::Online(_) | MediaLocation::Offline => { /* OK */ },
|
||||
MediaLocation::Vault(_) => continue,
|
||||
}
|
||||
|
||||
// only consider writable media
|
||||
if media.status() != &MediaStatus::Writable { continue; }
|
||||
|
||||
empty_media.push(media);
|
||||
}
|
||||
|
||||
// sort empty_media, oldest media first
|
||||
empty_media.sort_unstable_by_key(|media| media.label().ctime);
|
||||
|
||||
if let Some(media) = empty_media.first_mut() {
|
||||
// found empty media, add to media set an use it
|
||||
let seq_nr = self.current_media_set.media_list().len() as u64;
|
||||
|
||||
let set = MediaSetLabel::with_data(&pool, self.current_media_set.uuid().clone(), seq_nr, current_time);
|
||||
|
||||
media.set_media_set_label(set);
|
||||
|
||||
self.inventory.store(media.id().clone())?; // store persistently
|
||||
|
||||
self.current_media_set.add_media(media.uuid().clone());
|
||||
|
||||
return Ok(media.uuid().clone());
|
||||
}
|
||||
|
||||
println!("no empty media in pool, try to reuse expired media");
|
||||
|
||||
let mut expired_media = Vec::new();
|
||||
|
||||
for media in media_list.into_iter() {
|
||||
if let Some(set) = media.media_set_label() {
|
||||
if &set.uuid == self.current_media_set.uuid() {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
if self.media_is_expired(&media, current_time) {
|
||||
println!("found expired media on media '{}'", media.changer_id());
|
||||
expired_media.push(media);
|
||||
}
|
||||
}
|
||||
|
||||
// sort, oldest media first
|
||||
expired_media.sort_unstable_by_key(|media| {
|
||||
match media.media_set_label() {
|
||||
None => 0, // should not happen here
|
||||
Some(set) => set.ctime,
|
||||
}
|
||||
});
|
||||
|
||||
match expired_media.first_mut() {
|
||||
None => {
|
||||
bail!("alloc writable media in pool '{}' failed: no usable media found", self.name());
|
||||
}
|
||||
Some(media) => {
|
||||
println!("reuse expired media '{}'", media.changer_id());
|
||||
|
||||
let seq_nr = self.current_media_set.media_list().len() as u64;
|
||||
let set = MediaSetLabel::with_data(&pool, self.current_media_set.uuid().clone(), seq_nr, current_time);
|
||||
|
||||
media.set_media_set_label(set);
|
||||
|
||||
self.inventory.store(media.id().clone())?; // store persistently
|
||||
self.state_db.clear_media_status(media.uuid())?; // remove Full status
|
||||
|
||||
self.current_media_set.add_media(media.uuid().clone());
|
||||
|
||||
return Ok(media.uuid().clone());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// check if the current media set is usable for writing
|
||||
///
|
||||
/// This does several consistency checks, and return if
|
||||
/// the last media in the current set is in writable state.
|
||||
///
|
||||
/// This return error when the media set must not be used any
|
||||
/// longer because of consistency errors.
|
||||
pub fn current_set_usable(&self) -> Result<bool, Error> {
|
||||
|
||||
let media_count = self.current_media_set.media_list().len();
|
||||
if media_count == 0 {
|
||||
return Ok(false);
|
||||
}
|
||||
|
||||
let set_uuid = self.current_media_set.uuid();
|
||||
let mut last_is_writable = false;
|
||||
|
||||
for (seq, opt_uuid) in self.current_media_set.media_list().iter().enumerate() {
|
||||
let uuid = match opt_uuid {
|
||||
None => bail!("media set is incomplete (missing media information)"),
|
||||
Some(uuid) => uuid,
|
||||
};
|
||||
let media = self.lookup_media(uuid)?;
|
||||
match media.media_set_label() {
|
||||
Some(MediaSetLabel { seq_nr, uuid, ..}) if *seq_nr == seq as u64 && uuid == set_uuid => { /* OK */ },
|
||||
Some(MediaSetLabel { seq_nr, uuid, ..}) if uuid == set_uuid => {
|
||||
bail!("media sequence error ({} != {})", *seq_nr, seq);
|
||||
},
|
||||
Some(MediaSetLabel { uuid, ..}) => bail!("media owner error ({} != {}", uuid, set_uuid),
|
||||
None => bail!("media owner error (no owner)"),
|
||||
}
|
||||
match media.status() {
|
||||
MediaStatus::Full => { /* OK */ },
|
||||
MediaStatus::Writable if (seq + 1) == media_count => {
|
||||
last_is_writable = true;
|
||||
match media.location() {
|
||||
MediaLocation::Online(_) | MediaLocation::Offline => { /* OK */ },
|
||||
MediaLocation::Vault(vault) => {
|
||||
bail!("writable media offsite in vault '{}'", vault);
|
||||
}
|
||||
}
|
||||
},
|
||||
_ => bail!("unable to use media set - wrong media status {:?}", media.status()),
|
||||
}
|
||||
}
|
||||
Ok(last_is_writable)
|
||||
}
|
||||
|
||||
/// Generate a human readable name for the media set
|
||||
pub fn generate_media_set_name(
|
||||
&self,
|
||||
media_set_uuid: &Uuid,
|
||||
template: Option<String>,
|
||||
) -> Result<String, Error> {
|
||||
self.inventory.generate_media_set_name(media_set_uuid, template)
|
||||
}
|
||||
|
||||
/// Lock the pool
|
||||
pub fn lock(base_path: &Path, name: &str) -> Result<MediaPoolLockGuard, Error> {
|
||||
let mut path = base_path.to_owned();
|
||||
path.push(format!(".{}", name));
|
||||
path.set_extension("lck");
|
||||
|
||||
let timeout = std::time::Duration::new(10, 0);
|
||||
let lock = proxmox::tools::fs::open_file_locked(&path, timeout, true)?;
|
||||
|
||||
Ok(MediaPoolLockGuard(lock))
|
||||
}
|
||||
}
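The `lock` helper above derives the lock-file path from the pool name. A minimal standalone sketch of just that path construction (base path and pool name here are only examples):

```rust
use std::path::{Path, PathBuf};

// Builds "<base>/.<name>.lck", mirroring the path logic in MediaPool::lock().
// Note that set_extension() appends here rather than replacing, because a
// leading-dot file name like ".pool1" has no extension of its own.
fn pool_lock_path(base_path: &Path, name: &str) -> PathBuf {
    let mut path = base_path.to_owned();
    path.push(format!(".{}", name));
    path.set_extension("lck");
    path
}

fn main() {
    let p = pool_lock_path(Path::new("/var/lib/proxmox-backup/tape"), "pool1");
    assert_eq!(p, PathBuf::from("/var/lib/proxmox-backup/tape/.pool1.lck"));
    println!("{}", p.display());
}
```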

/// Backup media
///
/// Combines 'MediaId' with 'MediaLocation' and 'MediaStatus'
/// information.
#[derive(Debug,Serialize,Deserialize,Clone)]
pub struct BackupMedia {
    /// Media ID
    id: MediaId,
    /// Media location
    location: MediaLocation,
    /// Media status
    status: MediaStatus,
}

impl BackupMedia {

    /// Creates a new instance
    pub fn with_media_id(
        id: MediaId,
        location: MediaLocation,
        status: MediaStatus,
    ) -> Self {
        Self { id, location, status }
    }

    /// Returns the media location
    pub fn location(&self) -> &MediaLocation {
        &self.location
    }

    /// Returns the media status
    pub fn status(&self) -> &MediaStatus {
        &self.status
    }

    /// Returns the media uuid
    pub fn uuid(&self) -> &Uuid {
        &self.id.label.uuid
    }

    /// Returns the media set label
    pub fn media_set_label(&self) -> &Option<MediaSetLabel> {
        &self.id.media_set_label
    }

    /// Updates the media set label
    pub fn set_media_set_label(&mut self, set_label: MediaSetLabel) {
        self.id.media_set_label = Some(set_label);
    }

    /// Returns the drive label
    pub fn label(&self) -> &DriveLabel {
        &self.id.label
    }

    /// Returns the media id (drive label + media set label)
    pub fn id(&self) -> &MediaId {
        &self.id
    }

    /// Returns the media label (barcode)
    pub fn changer_id(&self) -> &str {
        &self.id.label.changer_id
    }
}
src/tape/media_state_database.rs (new file, 224 lines)
@@ -0,0 +1,224 @@
use std::path::{Path, PathBuf};
use std::collections::BTreeMap;

use anyhow::Error;
use ::serde::{Deserialize, Serialize};
use serde_json::json;

use proxmox::tools::{
    Uuid,
    fs::{
        open_file_locked,
        replace_file,
        file_get_json,
        CreateOptions,
    },
};

use crate::{
    tape::{
        OnlineStatusMap,
    },
    api2::types::{
        MediaStatus,
    },
};

#[derive(Debug, PartialEq, Clone, Serialize, Deserialize)]
/// Media location
pub enum MediaLocation {
    /// Ready for use (inside tape library)
    Online(String),
    /// Locally available, but needs to be mounted (insert into tape
    /// drive)
    Offline,
    /// Media is inside a Vault
    Vault(String),
}

#[derive(Serialize,Deserialize)]
struct MediaStateEntry {
    u: Uuid,
    #[serde(skip_serializing_if="Option::is_none")]
    l: Option<MediaLocation>,
    #[serde(skip_serializing_if="Option::is_none")]
    s: Option<MediaStatus>,
}

impl MediaStateEntry {
    fn new(uuid: Uuid) -> Self {
        MediaStateEntry { u: uuid, l: None, s: None }
    }
}

/// Stores MediaLocation and MediaStatus persistently
pub struct MediaStateDatabase {

    map: BTreeMap<Uuid, MediaStateEntry>,

    database_path: PathBuf,
    lockfile_path: PathBuf,
}

impl MediaStateDatabase {

    pub const MEDIA_STATUS_DATABASE_FILENAME: &'static str = "media-status-db.json";
    pub const MEDIA_STATUS_DATABASE_LOCKFILE: &'static str = ".media-status-db.lck";

    /// Lock the database
    pub fn lock(&self) -> Result<std::fs::File, Error> {
        open_file_locked(&self.lockfile_path, std::time::Duration::new(10, 0), true)
    }

    /// Returns status and location with reasonable defaults.
    ///
    /// Default status is 'MediaStatus::Unknown'.
    /// Default location is 'MediaLocation::Offline'.
    pub fn status_and_location(&self, uuid: &Uuid) -> (MediaStatus, MediaLocation) {

        match self.map.get(uuid) {
            None => {
                // no info stored - assume media is writable/offline
                (MediaStatus::Unknown, MediaLocation::Offline)
            }
            Some(entry) => {
                let location = entry.l.clone().unwrap_or(MediaLocation::Offline);
                let status = entry.s.unwrap_or(MediaStatus::Unknown);
                (status, location)
            }
        }
    }
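The defaulting in `status_and_location` above can be reduced to a tiny self-contained sketch (simplified stand-in types, hypothetical names), showing that a missing entry and missing fields both degrade to the Unknown/Offline defaults:

```rust
use std::collections::BTreeMap;

#[derive(Debug, Clone, PartialEq)]
enum Location { Online(String), Offline }

#[derive(Debug, Clone, Copy, PartialEq)]
enum Status { Unknown, Full }

struct Entry { l: Option<Location>, s: Option<Status> }

// Mirrors status_and_location(): unknown media are assumed offline with
// unknown status; partially known entries fall back to the same defaults.
fn status_and_location(map: &BTreeMap<String, Entry>, uuid: &str) -> (Status, Location) {
    match map.get(uuid) {
        None => (Status::Unknown, Location::Offline),
        Some(e) => (
            e.s.unwrap_or(Status::Unknown),
            e.l.clone().unwrap_or(Location::Offline),
        ),
    }
}

fn main() {
    let mut map = BTreeMap::new();
    map.insert("uuid-1".to_string(),
               Entry { l: Some(Location::Online("sl0".into())), s: None });

    assert_eq!(status_and_location(&map, "uuid-1"),
               (Status::Unknown, Location::Online("sl0".into())));
    assert_eq!(status_and_location(&map, "uuid-2"),
               (Status::Unknown, Location::Offline));
}
```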

    fn load_media_db(path: &Path) -> Result<BTreeMap<Uuid, MediaStateEntry>, Error> {

        let data = file_get_json(path, Some(json!([])))?;
        let list: Vec<MediaStateEntry> = serde_json::from_value(data)?;

        let mut map = BTreeMap::new();
        for entry in list.into_iter() {
            map.insert(entry.u.clone(), entry);
        }

        Ok(map)
    }

    /// Load the database into memory
    pub fn load(base_path: &Path) -> Result<MediaStateDatabase, Error> {

        let mut database_path = base_path.to_owned();
        database_path.push(Self::MEDIA_STATUS_DATABASE_FILENAME);

        let mut lockfile_path = base_path.to_owned();
        lockfile_path.push(Self::MEDIA_STATUS_DATABASE_LOCKFILE);

        Ok(MediaStateDatabase {
            map: Self::load_media_db(&database_path)?,
            database_path,
            lockfile_path,
        })
    }

    /// Lock database, reload database, set status to Full, store database
    pub fn set_media_status_full(&mut self, uuid: &Uuid) -> Result<(), Error> {
        let _lock = self.lock()?;
        self.map = Self::load_media_db(&self.database_path)?;
        let entry = self.map.entry(uuid.clone()).or_insert(MediaStateEntry::new(uuid.clone()));
        entry.s = Some(MediaStatus::Full);
        self.store()
    }

    /// Update online status
    pub fn update_online_status(&mut self, online_map: &OnlineStatusMap) -> Result<(), Error> {
        let _lock = self.lock()?;
        self.map = Self::load_media_db(&self.database_path)?;

        for (_uuid, entry) in self.map.iter_mut() {
            if let Some(changer_name) = online_map.lookup_changer(&entry.u) {
                entry.l = Some(MediaLocation::Online(changer_name.to_string()));
            } else {
                if let Some(MediaLocation::Online(ref changer_name)) = entry.l {
                    match online_map.online_map(changer_name) {
                        None => {
                            // no such changer device
                            entry.l = Some(MediaLocation::Offline);
                        }
                        Some(None) => {
                            // got no info - do nothing
                        }
                        Some(Some(_)) => {
                            // media changer changed
                            entry.l = Some(MediaLocation::Offline);
                        }
                    }
                }
            }
        }

        for (uuid, changer_name) in online_map.changer_map() {
            if self.map.contains_key(uuid) { continue; }
            let mut entry = MediaStateEntry::new(uuid.clone());
            entry.l = Some(MediaLocation::Online(changer_name.to_string()));
            self.map.insert(uuid.clone(), entry);
        }

        self.store()
    }

    /// Lock database, reload database, set status to Damaged, store database
    pub fn set_media_status_damaged(&mut self, uuid: &Uuid) -> Result<(), Error> {
        let _lock = self.lock()?;
        self.map = Self::load_media_db(&self.database_path)?;
        let entry = self.map.entry(uuid.clone()).or_insert(MediaStateEntry::new(uuid.clone()));
        entry.s = Some(MediaStatus::Damaged);
        self.store()
    }

    /// Lock database, reload database, clear the status, store database
    pub fn clear_media_status(&mut self, uuid: &Uuid) -> Result<(), Error> {
        let _lock = self.lock()?;
        self.map = Self::load_media_db(&self.database_path)?;
        let entry = self.map.entry(uuid.clone()).or_insert(MediaStateEntry::new(uuid.clone()));
        entry.s = None;
        self.store()
    }

    /// Lock database, reload database, set location to vault, store database
    pub fn set_media_location_vault(&mut self, uuid: &Uuid, vault: &str) -> Result<(), Error> {
        let _lock = self.lock()?;
        self.map = Self::load_media_db(&self.database_path)?;
        let entry = self.map.entry(uuid.clone()).or_insert(MediaStateEntry::new(uuid.clone()));
        entry.l = Some(MediaLocation::Vault(vault.to_string()));
        self.store()
    }

    /// Lock database, reload database, set location to offline, store database
    pub fn set_media_location_offline(&mut self, uuid: &Uuid) -> Result<(), Error> {
        let _lock = self.lock()?;
        self.map = Self::load_media_db(&self.database_path)?;
        let entry = self.map.entry(uuid.clone()).or_insert(MediaStateEntry::new(uuid.clone()));
        entry.l = Some(MediaLocation::Offline);
        self.store()
    }

    fn store(&self) -> Result<(), Error> {

        let mut list = Vec::new();
        for entry in self.map.values() {
            list.push(entry);
        }

        let raw = serde_json::to_string_pretty(&serde_json::to_value(list)?)?;

        let backup_user = crate::backup::backup_user()?;
        let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
        let options = CreateOptions::new()
            .perm(mode)
            .owner(backup_user.uid)
            .group(backup_user.gid);

        replace_file(&self.database_path, raw.as_bytes(), options)?;

        Ok(())
    }
}
src/tape/mod.rs (new file, 62 lines)
@@ -0,0 +1,62 @@
use anyhow::{format_err, Error};

use proxmox::tools::fs::{
    create_path,
    CreateOptions,
};

pub mod file_formats;

mod tape_write;
pub use tape_write::*;

mod tape_read;
pub use tape_read::*;

mod helpers;
pub use helpers::*;

mod inventory;
pub use inventory::*;

mod changer;
pub use changer::*;

mod drive;
pub use drive::*;

mod media_state_database;
pub use media_state_database::*;

mod online_status_map;
pub use online_status_map::*;

mod media_pool;
pub use media_pool::*;

/// Directory path where we store all tape status information
pub const TAPE_STATUS_DIR: &str = "/var/lib/proxmox-backup/tape";

/// We limit the chunk archive size so that we can restore a specific
/// chunk faster (the catalog only stores file numbers, so we need to
/// read the whole archive to restore a single chunk)
pub const MAX_CHUNK_ARCHIVE_SIZE: usize = 4*1024*1024*1024; // 4 GiB for now

/// To improve performance, we need to avoid tape drive buffer flushes.
pub const COMMIT_BLOCK_SIZE: usize = 128*1024*1024*1024; // 128 GiB
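As a quick sanity check on how the two limits above relate, a standalone sketch (values copied from this file; assumes a 64-bit target, since 128 GiB does not fit in a 32-bit `usize`):

```rust
// Values copied from the constants above.
const MAX_CHUNK_ARCHIVE_SIZE: usize = 4 * 1024 * 1024 * 1024; // 4 GiB
const COMMIT_BLOCK_SIZE: usize = 128 * 1024 * 1024 * 1024; // 128 GiB

fn main() {
    // At most 32 full-size chunk archives fit between two commits
    // (a commit writes an EOF marker, which flushes the drive buffer).
    assert_eq!(COMMIT_BLOCK_SIZE / MAX_CHUNK_ARCHIVE_SIZE, 32);
}
```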

/// Create the tape status dir with correct permissions
pub fn create_tape_status_dir() -> Result<(), Error> {
    let backup_user = crate::backup::backup_user()?;
    let mode = nix::sys::stat::Mode::from_bits_truncate(0o0640);
    let opts = CreateOptions::new()
        .perm(mode)
        .owner(backup_user.uid)
        .group(backup_user.gid);

    create_path(TAPE_STATUS_DIR, None, Some(opts))
        .map_err(|err: Error| format_err!("unable to create tape status dir - {}", err))?;

    Ok(())
}
src/tape/online_status_map.rs (new file, 164 lines)
@@ -0,0 +1,164 @@
use std::path::Path;
use std::collections::{HashMap, HashSet};

use anyhow::{bail, Error};

use proxmox::tools::Uuid;
use proxmox::api::section_config::SectionConfigData;

use crate::{
    api2::types::{
        VirtualTapeDrive,
        ScsiTapeChanger,
    },
    tape::{
        MediaChange,
        Inventory,
        MediaStateDatabase,
        mtx_status,
        mtx_status_to_online_set,
    },
};

/// Helper to update media online status
///
/// A tape media is considered online if it is accessible by a changer
/// device. This class can store the list of available changers,
/// together with the accessible media ids.
pub struct OnlineStatusMap {
    map: HashMap<String, Option<HashSet<Uuid>>>,
    changer_map: HashMap<Uuid, String>,
}

impl OnlineStatusMap {

    /// Creates a new instance with one map entry for each configured
    /// changer (or 'VirtualTapeDrive', which has an internal
    /// changer). The map entry is set to 'None' to indicate that we
    /// do not have information about the online status.
    pub fn new(config: &SectionConfigData) -> Result<Self, Error> {

        let mut map = HashMap::new();

        let changers: Vec<ScsiTapeChanger> = config.convert_to_typed_array("changer")?;
        for changer in changers {
            map.insert(changer.name.clone(), None);
        }

        let vtapes: Vec<VirtualTapeDrive> = config.convert_to_typed_array("virtual")?;
        for vtape in vtapes {
            map.insert(vtape.name.clone(), None);
        }

        Ok(Self { map, changer_map: HashMap::new() })
    }

    /// Returns the associated changer name for a media.
    pub fn lookup_changer(&self, uuid: &Uuid) -> Option<&String> {
        self.changer_map.get(uuid)
    }

    /// Returns the map which associates media uuids with changer names.
    pub fn changer_map(&self) -> &HashMap<Uuid, String> {
        &self.changer_map
    }

    /// Returns the set of online media for the specified changer.
    pub fn online_map(&self, changer_name: &str) -> Option<&Option<HashSet<Uuid>>> {
        self.map.get(changer_name)
    }

    /// Update the online set for the specified changer
    pub fn update_online_status(&mut self, changer_name: &str, online_set: HashSet<Uuid>) -> Result<(), Error> {

        match self.map.get(changer_name) {
            None => bail!("no such changer device '{}'", changer_name),
            Some(None) => { /* Ok */ },
            Some(Some(_)) => {
                // do not allow updates, to keep self.changer_map consistent
                bail!("update_online_status '{}' called twice", changer_name);
            }
        }

        for uuid in online_set.iter() {
            self.changer_map.insert(uuid.clone(), changer_name.to_string());
        }

        self.map.insert(changer_name.to_string(), Some(online_set));

        Ok(())
    }
}
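The struct above keeps two maps consistent: changer name to online set, and a reverse index from media uuid to changer name. A standalone sketch of that invariant (hypothetical names, `String` in place of `Uuid`):

```rust
use std::collections::{HashMap, HashSet};

// map: changer name -> Some(online set) once known, None before the update.
// changer_map: reverse index, media uuid -> changer name.
struct OnlineMap {
    map: HashMap<String, Option<HashSet<String>>>,
    changer_map: HashMap<String, String>,
}

impl OnlineMap {
    fn new(changers: &[&str]) -> Self {
        let map = changers.iter().map(|c| (c.to_string(), None)).collect();
        OnlineMap { map, changer_map: HashMap::new() }
    }

    // Rejects unknown changers and double updates, precisely so the
    // reverse index never holds stale entries.
    fn update(&mut self, changer: &str, online: HashSet<String>) -> Result<(), String> {
        match self.map.get(changer) {
            None => return Err(format!("no such changer '{}'", changer)),
            Some(Some(_)) => return Err(format!("'{}' updated twice", changer)),
            Some(None) => {}
        }
        for uuid in &online {
            self.changer_map.insert(uuid.clone(), changer.to_string());
        }
        self.map.insert(changer.to_string(), Some(online));
        Ok(())
    }
}

fn main() {
    let mut m = OnlineMap::new(&["sl0"]);
    let online: HashSet<String> = ["uuid-1".to_string()].into_iter().collect();
    m.update("sl0", online).unwrap();
    assert_eq!(m.changer_map.get("uuid-1").map(|s| s.as_str()), Some("sl0"));
    assert!(m.update("sl0", HashSet::new()).is_err()); // second update refused
}
```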

/// Update online media status
///
/// Simply ask all changer devices.
pub fn update_online_status(state_path: &Path) -> Result<OnlineStatusMap, Error> {

    let (config, _digest) = crate::config::drive::config()?;

    let inventory = Inventory::load(state_path)?;

    let changers: Vec<ScsiTapeChanger> = config.convert_to_typed_array("changer")?;

    let mut map = OnlineStatusMap::new(&config)?;

    for changer in changers {
        let status = match mtx_status(&changer.path) {
            Ok(status) => status,
            Err(err) => {
                eprintln!("unable to get changer '{}' status - {}", changer.name, err);
                continue;
            }
        };

        let online_set = mtx_status_to_online_set(&status, &inventory);
        map.update_online_status(&changer.name, online_set)?;
    }

    let vtapes: Vec<VirtualTapeDrive> = config.convert_to_typed_array("virtual")?;
    for vtape in vtapes {
        let media_list = match vtape.list_media_changer_ids() {
            Ok(media_list) => media_list,
            Err(err) => {
                eprintln!("unable to get changer '{}' status - {}", vtape.name, err);
                continue;
            }
        };

        let mut online_set = HashSet::new();
        for changer_id in media_list {
            if let Some(media_id) = inventory.find_media_by_changer_id(&changer_id) {
                online_set.insert(media_id.label.uuid.clone());
            }
        }
        map.update_online_status(&vtape.name, online_set)?;
    }

    let mut state_db = MediaStateDatabase::load(state_path)?;
    state_db.update_online_status(&map)?;

    Ok(map)
}

/// Update online media status with data from a single changer device
pub fn update_changer_online_status(
    drive_config: &SectionConfigData,
    inventory: &mut Inventory,
    state_db: &mut MediaStateDatabase,
    changer_name: &str,
    changer_id_list: &Vec<String>,
) -> Result<(), Error> {

    let mut online_map = OnlineStatusMap::new(drive_config)?;
    let mut online_set = HashSet::new();
    for changer_id in changer_id_list.iter() {
        if let Some(media_id) = inventory.find_media_by_changer_id(&changer_id) {
            online_set.insert(media_id.label.uuid.clone());
        }
    }
    online_map.update_online_status(&changer_name, online_set)?;
    state_db.update_online_status(&online_map)?;

    Ok(())
}
src/tape/tape_read.rs (new file, 45 lines)
@@ -0,0 +1,45 @@
use std::io::Read;

/// Read trait for tape devices
///
/// Normal Read, but allows querying additional status flags.
pub trait TapeRead: Read {
    /// Returns true if there is an "INCOMPLETE" mark at EOF
    ///
    /// Raises an error if you query this flag before reaching EOF.
    fn is_incomplete(&self) -> Result<bool, std::io::Error>;

    /// Returns true if there is a file end marker before EOF
    ///
    /// Raises an error if you query this flag before reaching EOF.
    fn has_end_marker(&self) -> Result<bool, std::io::Error>;
}

/// Read a single block from a tape device
///
/// Assumes that 'reader' is a linux tape device.
///
/// Returns true on success, false on EOD.
pub fn tape_device_read_block<R: Read>(
    reader: &mut R,
    buffer: &mut [u8],
) -> Result<bool, std::io::Error> {

    loop {
        match reader.read(buffer) {
            Ok(0) => { return Ok(false); /* EOD */ }
            Ok(count) => {
                if count == buffer.len() {
                    return Ok(true);
                }
                proxmox::io_bail!("short block read ({} < {}). Tape drive uses wrong block size.",
                                  count, buffer.len());
            }
            // handle interrupted system call
            Err(err) if err.kind() == std::io::ErrorKind::Interrupted => {
                continue;
            }
            Err(err) => return Err(err),
        }
    }
}
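The retry-on-EINTR loop above can be exercised without a real tape device. A hypothetical mock reader (names invented here) that fails once with `Interrupted` shows the read being transparently retried:

```rust
use std::io::{self, Read};

// Fails the first read with EINTR, then delivers one full block.
struct FlakyReader { interrupted_once: bool, block: Vec<u8> }

impl Read for FlakyReader {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if !self.interrupted_once {
            self.interrupted_once = true;
            return Err(io::Error::new(io::ErrorKind::Interrupted, "EINTR"));
        }
        let n = self.block.len().min(buf.len());
        buf[..n].copy_from_slice(&self.block[..n]);
        Ok(n)
    }
}

// Same control flow as tape_device_read_block(), with plain io errors
// instead of the proxmox::io_bail! macro.
fn read_block<R: Read>(reader: &mut R, buffer: &mut [u8]) -> io::Result<bool> {
    loop {
        match reader.read(buffer) {
            Ok(0) => return Ok(false), // EOD
            Ok(n) if n == buffer.len() => return Ok(true),
            Ok(n) => return Err(io::Error::new(
                io::ErrorKind::Other,
                format!("short block read ({} < {})", n, buffer.len()))),
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    let mut r = FlakyReader { interrupted_once: false, block: vec![7u8; 512] };
    let mut buf = [0u8; 512];
    assert_eq!(read_block(&mut r, &mut buf).unwrap(), true);
    println!("block read after EINTR retry");
}
```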
src/tape/tape_write.rs (new file, 105 lines)
@@ -0,0 +1,105 @@
|
||||
use std::io::Write;
|
||||
|
||||
use endian_trait::Endian;
|
||||
|
||||
use proxmox::sys::error::SysError;
|
||||
|
||||
use crate::tape::file_formats::MediaContentHeader;
|
||||
|
||||
/// Write trait for tape devices
|
||||
///
|
||||
/// The 'write_all' function returns if the drive reached the Logical
|
||||
/// End Of Media (early warning).
|
||||
///
|
||||
/// It is mandatory to call 'finish' before closing the stream to mark it
|
||||
/// as correctly written.
|
||||
///
|
||||
/// Please note that there is no flush method. Tapes flush there internal
|
||||
/// buffer when they write an EOF marker.
|
||||
pub trait TapeWrite {
|
||||
/// writes all data, returns true on LEOM
|
||||
fn write_all(&mut self, data: &[u8]) -> Result<bool, std::io::Error>;
|
||||
|
||||
/// Returns how many bytes (raw data on tape) have been written
|
||||
fn bytes_written(&self) -> usize;
|
||||
|
||||
/// flush last block, write file end mark
|
||||
///
|
||||
/// The incomplete flag is used to mark multivolume stream.
|
||||
fn finish(&mut self, incomplete: bool) -> Result<bool, std::io::Error>;
    /// Returns true if the writer already detected the logical end of media
    fn logical_end_of_media(&self) -> bool;

    /// Writes header and data; returns true on LEOM
    fn write_header(
        &mut self,
        header: &MediaContentHeader,
        data: &[u8],
    ) -> Result<bool, std::io::Error> {
        if header.size as usize != data.len() {
            proxmox::io_bail!("write_header with wrong size - internal error");
        }
        let header = header.to_le();

        let res = self.write_all(unsafe { std::slice::from_raw_parts(
            &header as *const MediaContentHeader as *const u8,
            std::mem::size_of::<MediaContentHeader>(),
        )})?;

        if data.is_empty() { return Ok(res); }

        self.write_all(data)
    }
}

/// Write a single block to a tape device
///
/// Assumes that 'writer' is a Linux tape device.
///
/// EOM behaviour on Linux: when the end-of-medium early warning is
/// encountered, the current write is finished and the number of bytes
/// is returned. The next write returns -1 and errno is set to
/// ENOSPC. To enable writing a trailer, the next write is allowed to
/// proceed and, if successful, the number of bytes is returned. After
/// this, -1 and the number of bytes are alternately returned until
/// the physical end of medium (or some other error) is encountered.
///
/// See: https://github.com/torvalds/linux/blob/master/Documentation/scsi/st.rst
///
/// On success, this returns whether we encountered an EOM condition.
pub fn tape_device_write_block<W: Write>(
    writer: &mut W,
    data: &[u8],
) -> Result<bool, std::io::Error> {

    let mut leof = false;

    loop {
        match writer.write(data) {
            Ok(count) if count == data.len() => return Ok(leof),
            Ok(count) if count > 0 => {
                proxmox::io_bail!(
                    "short block write ({} < {}). Tape drive uses wrong block size.",
                    count, data.len());
            }
            Ok(_) => { // count is 0 here, assume EOT
                return Err(std::io::Error::from_raw_os_error(nix::errno::Errno::ENOSPC as i32));
            }
            // handle interrupted system call
            Err(err) if err.kind() == std::io::ErrorKind::Interrupted => {
                continue;
            }
            // detect and handle LEOM (early warning)
            Err(err) if err.is_errno(nix::errno::Errno::ENOSPC) => {
                if leof {
                    return Err(err);
                } else {
                    leof = true;
                    continue; // next write will succeed
                }
            }
            Err(err) => return Err(err),
        }
    }
}
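The retry protocol above can be exercised against an in-memory writer. A minimal sketch, assuming a hypothetical `FakeTape` type and a simplified `write_block` without the `proxmox` error helpers (ENOSPC hard-coded to its Linux errno value, 28):

```rust
use std::io::{self, Write};

const ENOSPC: i32 = 28; // errno value on Linux

/// Simplified version of tape_device_write_block: retry on EINTR,
/// treat the first ENOSPC as the logical-end-of-media early warning.
fn write_block<W: Write>(writer: &mut W, data: &[u8]) -> io::Result<bool> {
    let mut leom = false;
    loop {
        match writer.write(data) {
            Ok(n) if n == data.len() => return Ok(leom),
            Ok(0) => return Err(io::Error::from_raw_os_error(ENOSPC)), // assume EOT
            Ok(_) => return Err(io::Error::new(io::ErrorKind::Other, "short write")),
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) if e.raw_os_error() == Some(ENOSPC) => {
                if leom { return Err(e); } // second ENOSPC: medium really full
                leom = true; // early warning; the next write may proceed
            }
            Err(e) => return Err(e),
        }
    }
}

/// Hypothetical stand-in for a Linux tape device.
struct FakeTape {
    enospc_armed: bool, // next write fails with ENOSPC exactly once
    blocks: usize,
}

impl Write for FakeTape {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        if self.enospc_armed {
            self.enospc_armed = false;
            return Err(io::Error::from_raw_os_error(ENOSPC));
        }
        self.blocks += 1;
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> { Ok(()) }
}

fn main() {
    let mut tape = FakeTape { enospc_armed: false, blocks: 0 };
    assert_eq!(write_block(&mut tape, &[0u8; 512]).unwrap(), false);

    tape.enospc_armed = true;
    // The early warning surfaces as Ok(true): the block was written,
    // but the caller should now write its trailer and stop.
    assert_eq!(write_block(&mut tape, &[0u8; 512]).unwrap(), true);
    assert_eq!(tape.blocks, 2);
    println!("ok");
}
```

The key design point mirrored here is that the first ENOSPC is consumed silently and the write retried, so the caller only sees the condition via the boolean return value.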
src/tools.rs
@@ -2,6 +2,7 @@
 //!
 //! This is a collection of small and useful tools.
 use std::any::Any;
+use std::borrow::Borrow;
 use std::collections::HashMap;
 use std::hash::BuildHasher;
 use std::fs::File;
@@ -128,35 +129,38 @@ pub fn required_string_property<'a>(param: &'a Value, name: &str) -> Result<&'a
     }
 }

-pub fn required_integer_param<'a>(param: &'a Value, name: &str) -> Result<i64, Error> {
+pub fn required_integer_param(param: &Value, name: &str) -> Result<i64, Error> {
     match param[name].as_i64() {
         Some(s) => Ok(s),
         None => bail!("missing parameter '{}'", name),
     }
 }

-pub fn required_integer_property<'a>(param: &'a Value, name: &str) -> Result<i64, Error> {
+pub fn required_integer_property(param: &Value, name: &str) -> Result<i64, Error> {
     match param[name].as_i64() {
         Some(s) => Ok(s),
         None => bail!("missing property '{}'", name),
     }
 }

-pub fn required_array_param<'a>(param: &'a Value, name: &str) -> Result<Vec<Value>, Error> {
+pub fn required_array_param<'a>(param: &'a Value, name: &str) -> Result<&'a [Value], Error> {
     match param[name].as_array() {
-        Some(s) => Ok(s.to_vec()),
+        Some(s) => Ok(&s),
         None => bail!("missing parameter '{}'", name),
     }
 }

-pub fn required_array_property<'a>(param: &'a Value, name: &str) -> Result<Vec<Value>, Error> {
+pub fn required_array_property<'a>(param: &'a Value, name: &str) -> Result<&'a [Value], Error> {
     match param[name].as_array() {
-        Some(s) => Ok(s.to_vec()),
+        Some(s) => Ok(&s),
         None => bail!("missing property '{}'", name),
     }
 }

-pub fn complete_file_name<S: BuildHasher>(arg: &str, _param: &HashMap<String, String, S>) -> Vec<String> {
+pub fn complete_file_name<S>(arg: &str, _param: &HashMap<String, String, S>) -> Vec<String>
+where
+    S: BuildHasher,
+{
     let mut result = vec![];

     use nix::fcntl::AtFlags;
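The switch from `Vec<Value>` to `&'a [Value]` means callers borrow the array instead of receiving a deep clone of it. A dependency-free sketch of the same pattern, using a hypothetical `Value` enum in place of `serde_json::Value` and a plain `String` error in place of `anyhow::Error`:

```rust
/// Hypothetical stand-in for serde_json::Value, just enough to
/// illustrate the borrowed return type.
#[derive(Debug, PartialEq)]
enum Value {
    Null,
    Int(i64),
    Array(Vec<Value>),
}

impl Value {
    fn as_array(&self) -> Option<&Vec<Value>> {
        match self {
            Value::Array(v) => Some(v),
            _ => None,
        }
    }
}

/// Returning `&'a [Value]` (as the patched function does) hands the
/// caller a borrow; the old `s.to_vec()` cloned every element.
fn required_array_param<'a>(param: &'a Value, name: &str) -> Result<&'a [Value], String> {
    match param.as_array() {
        Some(s) => Ok(s),
        None => Err(format!("missing parameter '{}'", name)),
    }
}

fn main() {
    let v = Value::Array(vec![Value::Int(1), Value::Int(2)]);
    let arr = required_array_param(&v, "list").unwrap();
    assert_eq!(arr.len(), 2);
    assert_eq!(arr[0], Value::Int(1));
    assert!(required_array_param(&Value::Null, "list").is_err());
    println!("ok");
}
```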
@@ -294,14 +298,14 @@ pub fn percent_encode_component(comp: &str) -> String {
     utf8_percent_encode(comp, percent_encoding::NON_ALPHANUMERIC).to_string()
 }

-pub fn join(data: &Vec<String>, sep: char) -> String {
+pub fn join<S: Borrow<str>>(data: &[S], sep: char) -> String {
     let mut list = String::new();

     for item in data {
         if !list.is_empty() {
             list.push(sep);
         }
-        list.push_str(item);
+        list.push_str(item.borrow());
     }

     list
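Generalizing `join` over `S: Borrow<str>` lets the same function accept both owned and borrowed strings, instead of only `&Vec<String>`. A small sketch of the new signature in isolation:

```rust
use std::borrow::Borrow;

/// Same shape as the patched tools::join: any slice whose items
/// can be borrowed as &str works.
fn join<S: Borrow<str>>(data: &[S], sep: char) -> String {
    let mut list = String::new();
    for item in data {
        if !list.is_empty() {
            list.push(sep);
        }
        list.push_str(item.borrow());
    }
    list
}

fn main() {
    // &[String] worked with the old `&Vec<String>` signature...
    assert_eq!(join(&["a".to_string(), "b".to_string()], ','), "a,b");
    // ...but &[&str] only works with the Borrow<str> version.
    assert_eq!(join(&["x", "y", "z"], '/'), "x/y/z");
    assert_eq!(join::<&str>(&[], ','), "");
    println!("ok");
}
```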
@@ -312,7 +316,7 @@ pub fn join(data: &Vec<String>, sep: char) -> String {
 /// This function fails with a reasonable error message if checksums do not match.
 pub fn detect_modified_configuration_file(digest1: &[u8;32], digest2: &[u8;32]) -> Result<(), Error> {
     if digest1 != digest2 {
-	bail!("detected modified configuration - file changed by other user? Try again.");
+        bail!("detected modified configuration - file changed by other user? Try again.");
     }
     Ok(())
 }
@@ -534,15 +538,13 @@ pub fn compute_file_csum(file: &mut File) -> Result<([u8; 32], u64), Error> {

     loop {
         let count = match file.read(&mut buffer) {
+            Ok(0) => break,
             Ok(count) => count,
             Err(ref err) if err.kind() == std::io::ErrorKind::Interrupted => {
                 continue;
             }
             Err(err) => return Err(err.into()),
         };
-        if count == 0 {
-            break;
-        }
         size += count as u64;
         hasher.update(&buffer[..count]);
     }
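Folding the EOF check into the match (the `Ok(0) => break` arm) keeps the loop's control flow in one place. A sketch of the same read loop against a hypothetical reader that reports EINTR once before serving data:

```rust
use std::io::{self, Read};

/// Hypothetical reader that fails with EINTR on the first call,
/// then serves its buffer normally.
struct FlakyReader {
    data: Vec<u8>,
    pos: usize,
    interrupted_once: bool,
}

impl Read for FlakyReader {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if !self.interrupted_once {
            self.interrupted_once = true;
            return Err(io::Error::new(io::ErrorKind::Interrupted, "EINTR"));
        }
        let n = buf.len().min(self.data.len() - self.pos);
        buf[..n].copy_from_slice(&self.data[self.pos..self.pos + n]);
        self.pos += n;
        Ok(n)
    }
}

/// Same loop shape as the patched compute_file_csum: EOF and EINTR
/// are both handled inside the match on read().
fn read_all<R: Read>(reader: &mut R) -> io::Result<(Vec<u8>, u64)> {
    let mut buffer = [0u8; 4]; // tiny buffer to force several iterations
    let mut out = Vec::new();
    let mut size = 0u64;
    loop {
        let count = match reader.read(&mut buffer) {
            Ok(0) => break, // EOF
            Ok(count) => count,
            Err(ref err) if err.kind() == io::ErrorKind::Interrupted => continue,
            Err(err) => return Err(err),
        };
        size += count as u64;
        out.extend_from_slice(&buffer[..count]);
    }
    Ok((out, size))
}

fn main() {
    let mut r = FlakyReader { data: (1u8..=10).collect(), pos: 0, interrupted_once: false };
    let (bytes, size) = read_all(&mut r).unwrap();
    assert_eq!(size, 10);
    assert_eq!(bytes, (1u8..=10).collect::<Vec<u8>>());
    println!("ok");
}
```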
@@ -20,6 +20,7 @@ use std::io::Write;
 /// };
 /// let mut log = FileLogger::new("test.log", options).unwrap();
 /// flog!(log, "A simple log: {}", "Hello!");
+/// # std::fs::remove_file("test.log");
 /// ```

 #[derive(Debug, Default)]
@@ -51,7 +52,7 @@ pub struct FileLogger {
     options: FileLogOptions,
 }

-/// Log messages to [FileLogger](tools/struct.FileLogger.html)
+/// Log messages to [`FileLogger`](tools/struct.FileLogger.html)
 #[macro_export]
 macro_rules! flog {
     ($log:expr, $($arg:tt)*) => ({
@@ -1,6 +1,6 @@
 use std::path::{Path, PathBuf};
 use std::fs::{File, rename};
-use std::os::unix::io::FromRawFd;
+use std::os::unix::io::{FromRawFd, IntoRawFd};
 use std::io::Read;

 use anyhow::{bail, Error};
@@ -49,7 +49,7 @@ impl LogRotate {
     fn compress(source_path: &PathBuf, target_path: &PathBuf, options: &CreateOptions) -> Result<(), Error> {
         let mut source = File::open(source_path)?;
         let (fd, tmp_path) = make_tmp_file(target_path, options.clone())?;
-        let target = unsafe { File::from_raw_fd(fd) };
+        let target = unsafe { File::from_raw_fd(fd.into_raw_fd()) };
         let mut encoder = match zstd::stream::write::Encoder::new(target, 0) {
             Ok(encoder) => encoder,
             Err(err) => {
@@ -395,7 +395,7 @@ Ext.define('PBS.dashboard.SubscriptionInfo', {
 		break;
 	    case 0:
 		icon = 'times-circle critical';
-		message = gettext('<h1>No valid subscription</h1>' + PBS.Utils.noSubKeyHtml);
+		message = `<h1>${gettext('No valid subscription')}</h1>${PBS.Utils.noSubKeyHtml}`;
 		break;
 	    default:
 		throw 'invalid subscription status';
www/Makefile
@@ -1,9 +1,17 @@
 include ../defines.mk

 IMAGES := \
+	images/icon-tape.svg \
 	images/logo-128.png \
 	images/proxmox_logo.png

+TAPE_UI_FILES=
+
+ifdef TEST_TAPE_GUI
+TAPE_UI_FILES= \
+	TapeManagement.js
+endif
+
 JSSRC= \
 	Utils.js \
 	form/UserSelector.js \
@@ -63,6 +71,7 @@ JSSRC= \
 	ServerStatus.js \
 	ServerAdministration.js \
 	Dashboard.js \
+	${TAPE_UI_FILES} \
 	NavigationTree.js \
 	Application.js \
 	MainView.js
@@ -116,6 +116,19 @@ Ext.define('PBS.view.main.NavigationTree', {

	    let root = view.getStore().getRoot();

+	    if (PBS.TapeManagement !== undefined) {
+		if (!root.findChild('id', 'tape_management', false)) {
+		    root.insertChild(3, {
+			text: "Tape Backup",
+			iconCls: 'pbs-icon-tape',
+			id: 'tape_management',
+			path: 'pbsTapeManagement',
+			expanded: true,
+			children: [],
+		    });
+		}
+	    }
+
	    records.sort((a, b) => a.id.localeCompare(b.id));

	    var list = root.findChild('id', 'datastores', false);
www/TapeManagement.js (new file)
@@ -0,0 +1,11 @@
+Ext.define('PBS.TapeManagement', {
+    extend: 'Ext.tab.Panel',
+    alias: 'widget.pbsTapeManagement',
+
+    title: gettext('Tape Backup'),
+
+    border: true,
+    defaults: { border: false },
+
+    html: "Experimental tape backup GUI.",
+});
@@ -274,12 +274,17 @@ Ext.define('PBS.Utils', {
 	// do whatever you want here
 	Proxmox.Utils.override_task_descriptions({
 	    backup: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Backup')),
+	    "barcode-label-media": [gettext('Drive'), gettext('Barcode label media')],
 	    dircreate: [gettext('Directory Storage'), gettext('Create')],
 	    dirremove: [gettext('Directory'), gettext('Remove')],
+	    "erase-media": [gettext('Drive'), gettext('Erase media')],
 	    garbage_collection: ['Datastore', gettext('Garbage collect')],
+	    "inventory-update": [gettext('Drive'), gettext('Inventory update')],
+	    "label-media": [gettext('Drive'), gettext('Label media')],
 	    logrotate: [null, gettext('Log Rotation')],
 	    prune: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Prune')),
 	    reader: (type, id) => PBS.Utils.render_datastore_worker_id(id, gettext('Read objects')),
+	    "rewind-media": [gettext('Drive'), gettext('Rewind media')],
 	    sync: ['Datastore', gettext('Remote Sync')],
 	    syncjob: [gettext('Sync Job'), gettext('Remote Sync')],
 	    verify: ['Datastore', gettext('Verification')],
@@ -253,3 +253,20 @@ span.snapshot-comment-column {
     text-shadow: 1px 1px 1px #AAA;
     font-weight: 800;
 }
+
+/* PBS specific icons */
+
+.pbs-icon-tape
+{
+    background-repeat: no-repeat;
+    background-position: bottom;
+    vertical-align: bottom;
+    padding: 0;
+}
+
+.pbs-icon-tape
+{
+    background-size: 16px;
+    height: 20px;
+    background-image:url(../images/icon-tape.svg);
+}
@@ -266,7 +266,8 @@ Ext.define('PBS.DataStoreSummary', {
 	},
 	failure: function(response) {
 	    // fallback if e.g. we have no permissions to the config
-	    let rec = Ext.getStore('pbs-datastore-list').findRecord('store', me.datastore);
+	    let rec = Ext.getStore('pbs-datastore-list')
+		.findRecord('store', me.datastore, 0, false, true, true);
 	    if (rec) {
 		me.down('pbsDataStoreNotes').setNotes(rec.data.comment || "");
 	    }
www/images/icon-tape.svg (new file)
@@ -0,0 +1,18 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   width="100"
+   height="100"
+   version="1.1">
+
+  <rect x="10" y="20" rx="5" ry="5" width="80" height="60" stroke="black" stroke-width="4" fill="none"/>
+  <line x1="10" y1="35" x2="90" y2="35" stroke="black" stroke-width="4"/>
+  <line x1="10" y1="65" x2="90" y2="65" stroke="black" stroke-width="4"/>
+
+  <circle cx="31" cy="50" r="15" stroke="black" stroke-width="4" fill="none"/>
+  <circle cx="31" cy="50" r="5" stroke="black" stroke-width="4" fill="black"/>
+  <circle cx="69" cy="50" r="15" stroke="black" stroke-width="4" fill="none"/>
+  <circle cx="69" cy="50" r="5" stroke="black" stroke-width="4" fill="black"/>
+</svg>