Compare commits

...

45 Commits

Author SHA1 Message Date
cf063c1973 bump version to 0.8.6-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:35:04 +02:00
f58233a73a src/backup/data_blob_reader.rs: avoid unwrap() - return error instead 2020-07-10 11:28:19 +02:00
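
The pattern behind that commit is worth spelling out once in isolation. A minimal, self-contained sketch (not the actual data_blob_reader.rs code; `need_config` is a made-up helper, and the `anyhow` crate is assumed) of turning an `Option` into a proper error with `ok_or_else` instead of calling `unwrap()`:

```rust
use anyhow::{format_err, Error};

// Hypothetical helper: fail with a descriptive error when no key was supplied,
// instead of panicking via unwrap() as the old code did.
fn need_config(config: Option<&str>) -> Result<&str, Error> {
    config.ok_or_else(|| format_err!("unable to read encrypted blob without key"))
}

fn main() {
    assert!(need_config(None).is_err());
    assert_eq!(need_config(Some("crypt-config")).unwrap(), "crypt-config");
}
```
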
d257c2ecbd ui: fingerprint: add icon to copy button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:17:20 +02:00
e4ee7b7ac8 ui: fingerprint: add copy button
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 11:13:54 +02:00
1f0d23f792 ui: add show fingerprint button to dashboard
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
bfcef26a99 api2/node/status: add fingerprint
and rename get_usage to get_status (since it's not usage-only anymore)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
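
For orientation, here is a minimal sketch (invented placeholder values, plain `serde_json`, not the server implementation) of what the extended status payload described above looks like, and how a consumer such as the dashboard can pick out the new `info.fingerprint` field:

```rust
use serde_json::json;

fn main() {
    // Shape of the node status response after this change: the usage data is
    // unchanged, an "info" object carrying the SSL fingerprint is added.
    let status = json!({
        "memory": { "total": 16, "used": 4, "free": 12 },   // placeholder numbers
        "root":   { "total": 100, "used": 20, "free": 80 }, // placeholder numbers
        "info":   { "fingerprint": "64:d3:ff:3a:50:38:53:5a:9b:f7:50" },
    });

    // The dashboard reads the fingerprint from the same status call it already
    // uses for the usage gauges.
    let fingerprint = status["info"]["fingerprint"].as_str().unwrap_or("unknown");
    println!("server fingerprint: {}", fingerprint);
}
```
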
ec01eeadc6 refactor CertInfo to tools
we want to reuse some of the functionality elsewhere

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-10 11:08:53 +02:00
660a34892d update proxmox crate to 0.2.0
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-10 11:08:27 +02:00
d86034afec src/bin/proxmox_backup_client/catalog.rs: fix keyfile handling 2020-07-10 10:36:45 +02:00
62593aba1e src/backup/manifest.rs: fix signature (exclude 'signature' property) 2020-07-10 10:36:45 +02:00
0eaef8eb84 client: show key path when creating/changing default key
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-10 09:58:24 +02:00
e39974afbf client: add simple version command
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-10 09:34:07 +02:00
dde18bbb85 proxmox-backup-client benchmark: improve output format 2020-07-10 09:13:52 +02:00
a40e1b0e8b src/server/rest.rs: avoid compiler warning 2020-07-10 09:13:52 +02:00
a0eb0cd372 ui: running task: increase active limit we show in badge to 99
Two digits fit nicely, and the extra plus for the >99 case doesn't
take that much space either. That, and the fact that 9 is just
really low, makes me bump this to 99 as the cut-off value.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:56:46 +02:00
41067870c6 ui: tune badge styling a bit
the idea is to blend in when no task is running, thus no
background-color there. When tasks are running, use the dark grey from
the Proxmox branding guidelines; it isn't used that often, so it should
catch the eye when it changes, but it has some use, so it doesn't
seem out of place.

Reduce the border radius by a lot, so that it is similar to the
one our ExtJS theme uses for the surrounding buttons - the original
border radius seems to come from the time when this was
intended to be a floating badge; there it would make sense, but as an
integrated button this fits the style much better.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:51:25 +02:00
33a87bc39a docs: reference PDF variant in HTML output
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:31:38 +02:00
bed3e15f16 debian/proxmox-backup-docs.links: fix name and target
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 21:23:41 +02:00
c687da9e8e datastore: chown base dir on creation
When creating a new datastore, the base dir only ends up owned by the
backup user if it did not exist beforehand (create_path chowns only the
directories it creates, and returns false if it did not create the
directory).

This improves the experience when adding a new datastore on a fresh
disk or an existing directory (not owned by the backup user) - backups/pulls
can be run instead of terminating with EPERM.

Tested on my local test install with a new disk and an existing directory.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 18:20:16 +02:00
be30e7d269 ui: dashboard/TaskSummary: fade icons if count is zero
so that users can see the relevant counts faster

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 17:10:47 +02:00
106603c58f ui: fix crypt mode calculation
also include 'mixed' in the calculation of the overall mode of a
snapshot and group

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 17:09:56 +02:00
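
For reference, the decision logic added as PBS.Utils.calculateCryptMode in www/Utils.js (shown further down) can be summarized in a small Rust sketch. This is an illustration only, slightly simplified (it assumes every counted file falls into one of the four buckets):

```rust
#[derive(Debug, PartialEq)]
enum CryptMode { None, Mixed, SignOnly, Encrypt }

// A single 'mixed' file, or any disagreement between the files,
// makes the whole snapshot/group 'mixed'.
fn overall_mode(none: u32, mixed: u32, sign_only: u32, encrypt: u32) -> CryptMode {
    let count = none + mixed + sign_only + encrypt;
    if mixed > 0 {
        CryptMode::Mixed
    } else if count == encrypt {
        CryptMode::Encrypt
    } else if count == sign_only {
        CryptMode::SignOnly
    } else if sign_only + encrypt == 0 {
        CryptMode::None
    } else {
        CryptMode::Mixed
    }
}

fn main() {
    assert_eq!(overall_mode(1, 0, 0, 2), CryptMode::Mixed);   // disagreement
    assert_eq!(overall_mode(0, 0, 0, 3), CryptMode::Encrypt); // all encrypted
}
```
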
7ba2c1c386 docs: add initial basic software stack definition
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 17:09:05 +02:00
4327a8462a proxmox-backup-client benchmark: add more speed tests 2020-07-09 17:07:22 +02:00
e193544b8e src/server/rest.rs: disable debug logs 2020-07-09 16:18:14 +02:00
323b2f3dd6 proxmox-backup-client benchmark: add --verbose flag 2020-07-09 16:16:39 +02:00
7884e7ef4f bump version to 0.8.5-1 2020-07-09 15:35:07 +02:00
fae11693f0 fix cross process task listing
it does not make sense to check if the worker is running if we already
have an endtime and state

our 'worker_is_active_local' heuristic returns true for non
process-local tasks, so we got 'running' for all tasks that were not
started by 'our' pid and were still running

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 15:30:52 +02:00
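
The gist of the fix is easiest to see stripped of the real TaskListInfo/UPID types. A simplified, hypothetical sketch: a recorded end state always wins, and the process-local heuristic is only consulted when no state was recorded (the full change is in src/server/worker_task.rs below).

```rust
/// Hypothetical, simplified stand-in for one entry of the active-worker file.
struct TaskEntry {
    upid: String,
    /// Some((endtime, status)) once the task has finished.
    state: Option<(i64, String)>,
}

fn classify(entry: &TaskEntry, worker_is_active_local: impl Fn(&str) -> bool) -> &'static str {
    match &entry.state {
        // A recorded end state means the task is finished, no matter what the
        // process-local heuristic would claim.
        Some(_) => "finished",
        // Only without a recorded state do we ask whether it runs in our process.
        None if worker_is_active_local(&entry.upid) => "running",
        None => "stopped, state unknown",
    }
}

fn main() {
    let finished = TaskEntry {
        upid: "UPID:other-process".to_string(),
        state: Some((1_594_300_000, "OK".to_string())),
    };
    // Before the fix, a finished task from another process could show as running.
    assert_eq!(classify(&finished, |_| true), "finished");
}
```
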
22231524e2 docs: expand datastore documentation
document retention settings and schedules per datastore with
some minimal examples.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 15:04:26 +02:00
9634ca07db docs: add remotes and sync-jobs and schedules
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2020-07-09 15:04:26 +02:00
62f6a7e3d9 bump pathpatterns to 0.1.2
Fixes `**/foo` not matching "foo" without slashes.
(`**/lost+found` now matches the `lost+found` dir at the
root of our tree properly).

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 14:34:10 +02:00
86443141b5 ui: align version and user-menu spacing with pve/pmg
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 14:31:21 +02:00
f6e964b96e ui: make username a menu-button
like we did in PVE and PMG

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-09 14:31:21 +02:00
c8bed1b4d7 bump version to 0.8.4-1 2020-07-09 14:28:44 +02:00
a3970d6c1e ui: add TaskButton in header
opens a grid with the running tasks and a shortcut to the node tasks

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 14:26:57 +02:00
cc83c13660 ui: add RunningTasksStore
so that we have a global store for running tasks

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 14:26:57 +02:00
bf7e2a4648 simpler lost+found pattern
the **/ is not required and currently also mistakenly
doesn't match /lost+found, which is probably a bug on the
pathpatterns crate side and needs fixing there

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 14:06:42 +02:00
e284073e4a bump version to 0.8.3-1 2020-07-09 13:55:15 +02:00
3ec99affc8 get_disks: don't fail on zfs_devices
zfs does not have to be installed, so simply log an error and
continue; users still get an error when clicking directly on
ZFS

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:47:31 +02:00
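
A minimal, generic sketch of the "log and continue" shape used for that fix (a standalone stub, not the real zfs_devices()/get_disks() code): the error is reported on stderr and an empty set is used, so the rest of the disk listing still works.

```rust
use std::collections::HashSet;
use std::io;

// Stub standing in for the real zfs_devices() helper, which can fail when the
// zfs tooling is not installed.
fn zfs_devices_stub() -> io::Result<HashSet<u64>> {
    Err(io::Error::new(io::ErrorKind::NotFound, "zfs not installed"))
}

fn main() {
    // Log the error and fall back to "no zfs devices" instead of failing
    // the whole disk listing.
    let zfs_devices = zfs_devices_stub().unwrap_or_else(|err| {
        eprintln!("error getting zfs devices: {}", err);
        HashSet::new()
    });
    assert!(zfs_devices.is_empty());
}
```
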
a9649ddc44 disks/zpool_status: add test for pool with special character
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
4f9096a211 disks/zpool_list: allow some more characters for pool list
not exhaustive of what zfs allows (space is missing), but this
can be done easily without problems

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
c3a4b5e2e1 zpool_list: add tests for special pool names
those names are allowed for zpools

these will fail for now, but it will be fixed in the next commit

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
7957fabff2 api: add ZPOOL_NAME_SCHEMA and regex
pool names can contain spaces and some other special characters

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:37:31 +02:00
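
To make the accepted character set concrete, here is a small standalone check of the new ZPOOL_NAME_REGEX from this series (an illustration using the `regex` crate, not project code). As the zpool_list commit notes, spaces are still not covered:

```rust
use regex::Regex;

fn main() {
    // Same pattern as the new ZPOOL_NAME_REGEX in src/api2/types.rs.
    let zpool_name = Regex::new(r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$").unwrap();

    for name in ["rpool", "b-test", "b.test", "bt:est", "b test"] {
        // "b test" (with a space) is the one that does not match.
        println!("{:8} -> {}", name, zpool_name.is_match(name));
    }
}
```
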
20a4e4e252 minor optimization to 'to_canonical_json'
* don't clone hash keys, just use references
* we don't need a String, stick to Vec<u8> and use
  serde_json::to_writer to avoid temporary strings
  altogether

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-09 13:32:11 +02:00
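
A tiny illustration of the second point (plain `serde_json`, not the manifest code): writing straight into a `Vec<u8>` with `serde_json::to_writer` produces the same bytes as going through `to_string`, without allocating a temporary `String` per value.

```rust
use serde_json::json;

fn main() -> Result<(), serde_json::Error> {
    let value = json!({ "b": 1, "a": [true, "x"] });

    // Old style: build a String for the value, then append it.
    let mut as_string = String::new();
    as_string.push_str(&serde_json::to_string(&value)?);

    // New style: serialize directly into the byte buffer.
    let mut as_bytes: Vec<u8> = Vec::new();
    serde_json::to_writer(&mut as_bytes, &value)?;

    assert_eq!(as_string.as_bytes(), &as_bytes[..]);
    Ok(())
}
```
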
2774566b03 ui: adapt for new sign-only crypt mode
we can now show 'none', 'encrypted', 'signed' or 'mixed' for
the crypt mode

also adds a different icon for signed files, and adds a hint that
signatures cannot be verified on the server

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-09 13:28:55 +02:00
4459ffe30e src/backup/manifest.rs: add default to make it compatible with older backups 2020-07-09 13:25:38 +02:00
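
The compatibility mechanism behind that last commit (and the manifest changes further down) is serde's field defaults. A hedged, simplified sketch (using `String` instead of the real CryptMode enum; assumes `serde` with the derive feature and `serde_json`) showing how a manifest entry written before 0.8.0, which lacks the new field, still deserializes:

```rust
use serde::Deserialize;

fn crypt_mode_none() -> String {
    "none".to_string()
}

#[derive(Debug, Deserialize)]
#[serde(rename_all = "kebab-case")]
struct FileInfo {
    filename: String,
    // Older manifests have no "crypt-mode" entry; fall back to "none".
    #[serde(default = "crypt_mode_none")]
    crypt_mode: String,
    size: u64,
}

fn main() {
    let old_entry = r#"{ "filename": "catalog.pcat1.didx", "size": 42 }"#;
    let info: FileInfo = serde_json::from_str(old_entry).unwrap();
    assert_eq!(info.crypt_mode, "none");
    println!("{:?}", info);
}
```
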
40 changed files with 1111 additions and 206 deletions

View File

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "0.8.2"
version = "0.8.6"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"
@ -37,8 +37,8 @@ pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pathpatterns = "0.1.1"
proxmox = { version = "0.1.42", features = [ "sortable-macro", "api-macro" ] }
pathpatterns = "0.1.2"
proxmox = { version = "0.2.0", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro" ] }
proxmox-fuse = "0.1.0"

debian/changelog vendored
View File

@ -1,3 +1,52 @@
rust-proxmox-backup (0.8.6-1) unstable; urgency=medium
* ui: add button for easily showing the server fingerprint in the dashboard
* proxmox-backup-client benchmark: add --verbose flag and improve output
format
* docs: reference PDF variant in HTML output
* proxmox-backup-client: add simple version command
* improve keyfile and signature handling in catalog and manifest
-- Proxmox Support Team <support@proxmox.com> Fri, 10 Jul 2020 11:34:14 +0200
rust-proxmox-backup (0.8.5-1) unstable; urgency=medium
* fix cross process task listing
* docs: expand datastore documentation
* docs: add remotes and sync-jobs and schedules
* bump pathpatterns to 0.1.2
* ui: align version and user-menu spacing with pve/pmg
* ui: make username a menu-button
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 15:32:39 +0200
rust-proxmox-backup (0.8.4-1) unstable; urgency=medium
* add TaskButton in header
* simpler lost+found pattern
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 14:28:24 +0200
rust-proxmox-backup (0.8.3-1) unstable; urgency=medium
* get_disks: don't fail on zfs_devices
* allow some more characters for zpool list
* ui: adapt for new sign-only crypt mode
-- Proxmox Support Team <support@proxmox.com> Thu, 09 Jul 2020 13:55:06 +0200
rust-proxmox-backup (0.8.2-1) unstable; urgency=medium
* buildsys: also upload debug packages

View File

@ -1 +1 @@
/usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/docs/proxmox-backup.pdf
/usr/share/doc/proxmox-backup/proxmox-backup.pdf /usr/share/doc/proxmox-backup/html/proxmox-backup.pdf

View File

@ -146,7 +146,12 @@ Datastore Configuration
You can configure multiple datastores. At least one datastore needs to be
configured. The datastore is identified by a simple `name` and points to a
directory on the filesystem.
directory on the filesystem. Each datastore also has associated retention
settings of how many backup snapshots for each interval of ``hourly``,
``daily``, ``weekly``, ``monthly``, ``yearly``, as well as a time-independent
number of backups to keep in that store. :ref:`Pruning <pruning>` and
:ref:`garbage collection <garbage-collection>` can also be configured to run
periodically based on a configured :term:`schedule` per datastore.
The following command creates a new datastore called ``store1`` on :file:`/backup/disk1/store1`
@ -165,6 +170,30 @@ To list existing datastores run:
│ store1 │ /backup/disk1/store1 │ This is my default storage. │
└────────┴──────────────────────┴─────────────────────────────┘
You can change the settings of a datastore, for example to set a prune and garbage
collection schedule or retention settings, using the ``update`` subcommand, and view
a datastore with the ``show`` subcommand:
.. code-block:: console
# proxmox-backup-manager datastore update store1 --keep-last 7 --prune-schedule daily --gc-schedule 'Tue 04:27'
# proxmox-backup-manager datastore show store1
┌────────────────┬─────────────────────────────┐
│ Name │ Value │
╞════════════════╪═════════════════════════════╡
│ name │ store1 │
├────────────────┼─────────────────────────────┤
│ path │ /backup/disk1/store1 │
├────────────────┼─────────────────────────────┤
│ comment │ This is my default storage. │
├────────────────┼─────────────────────────────┤
│ gc-schedule │ Tue 04:27 │
├────────────────┼─────────────────────────────┤
│ keep-last │ 7 │
├────────────────┼─────────────────────────────┤
│ prune-schedule │ daily │
└────────────────┴─────────────────────────────┘
Finally, it is possible to remove the datastore configuration:
.. code-block:: console
@ -340,6 +369,64 @@ following roles exist:
Is allowed to read data from a remote.
:term:`Remote`
~~~~~~~~~~~~~~
A remote is a different Proxmox Backup Server installation and a user on that
installation, from which you can `sync` datastores to a local datastore with a
`Sync Job`.
To add a remote, you need its hostname or IP, a userid and password on the
remote, and its certificate fingerprint. To get the fingerprint, use
the ``proxmox-backup-manager cert info`` command on the remote.
.. code-block:: console
# proxmox-backup-manager cert info |grep Fingerprint
Fingerprint (sha256): 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
With the needed information add the remote with:
.. code-block:: console
# proxmox-backup-manager remote create pbs2 --host pbs2.mydomain.example --userid sync@pam --password 'SECRET' --fingerprint 64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe
Use the ``list``, ``show``, ``update``, ``remove`` subcommands of
``proxmox-backup-manager remote`` to manage your remotes:
.. code-block:: console
# proxmox-backup-manager remote update pbs2 --host pbs2.example
# proxmox-backup-manager remote list
┌──────┬──────────────┬──────────┬───────────────────────────────────────────┬─────────┐
│ name │ host │ userid │ fingerprint │ comment │
╞══════╪══════════════╪══════════╪═══════════════════════════════════════════╪═════════╡
│ pbs2 │ pbs2.example │ sync@pam │64:d3:ff:3a:50:38:53:5a:9b:f7:50:...:ab:fe │ │
└──────┴──────────────┴──────────┴───────────────────────────────────────────┴─────────┘
# proxmox-backup-manager remote remove pbs2
Sync Jobs
~~~~~~~~~
Sync jobs are configured to pull the contents of a datastore on a `Remote` to a
local datastore. You can either start the sync job manually on the GUI or
provide it with a :term:`schedule` to run regularly. The
``proxmox-backup-manager sync-job`` command is used to manage sync jobs:
.. code-block:: console
# proxmox-backup-manager sync-job create pbs2-local --remote pbs2 --remote-store local --store local --schedule 'Wed 02:30'
# proxmox-backup-manager sync-job update pbs2-local --comment 'offsite'
# proxmox-backup-manager sync-job list
┌────────────┬───────┬────────┬──────────────┬───────────┬─────────┐
│ id │ store │ remote │ remote-store │ schedule │ comment │
╞════════════╪═══════╪════════╪══════════════╪═══════════╪═════════╡
│ pbs2-local │ local │ pbs2 │ local │ Wed 02:30 │ offsite │
└────────────┴───────┴────────┴──────────────┴───────────┴─────────┘
# proxmox-backup-manager sync-job remove pbs2-local
Backup Client usage
-------------------
@ -764,6 +851,8 @@ To remove the ticket, issue a logout:
# proxmox-backup-client logout
.. _pruning:
Pruning and Removing Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -46,3 +46,19 @@ Glossary
kernel driver handles filesystem requests and sends them to a
userspace application.
Remote
A remote Proxmox Backup Server installation and credentials for a user on it.
You can pull datastores from a remote to a local datastore in order to
have redundant backups.
Schedule
Certain tasks, for example pruning and garbage collection, need to be
performed on a regular basis. Proxmox Backup Server uses a subset of the
`systemd Time and Date Specification
<https://www.freedesktop.org/software/systemd/man/systemd.time.html#>`_.
The subset currently supports time of day specifications and weekdays, in
addition to the shorthand expressions 'minutely', 'hourly', 'daily'.
There is no support for specifying timezones; the tasks are run in the
timezone configured on the server.

View File

@ -12,6 +12,10 @@ Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included
in the section entitled "GNU Free Documentation License".
.. only:: html
A `PDF` version of the documentation is `also available here <./proxmox-backup.pdf>`_
.. toctree::
:maxdepth: 3
:caption: Table of Contents

View File

@ -102,8 +102,30 @@ Therefore, ensure that you perform regular backups and run restore tests.
Software Stack
--------------
.. todo:: Explain why we use Rust (and Flutter)
Proxmox Backup Server consists of multiple components:
* server daemon providing, among others, a RESTful API, super-fast
asynchronous tasks, lightweight usage statistic collection, scheduling
events, strict separation of privileged and unprivileged execution
environments, ...
* JavaScript management webinterface
* management CLI tool for the server (`proxmox-backup-manager`)
* client CLI tool (`proxmox-backup-client`) to access the server easily from
any `Linux amd64` environment.
Everything besides the web interface is written in the Rust programming
language.
"The Rust programming language helps you write faster, more reliable software.
High-level ergonomics and low-level control are often at odds in programming
language design; Rust challenges that conflict. Through balancing powerful
technical capacity and a great developer experience, Rust gives you the option
to control low-level details (such as memory usage) without all the hassle
traditionally associated with such control."
-- `The Rust Programming Language <https://doc.rust-lang.org/book/ch00-00-introduction.html>`_
.. todo:: further explain the software stack
Getting Help
------------

View File

@ -535,7 +535,7 @@ macro_rules! add_common_prune_prameters {
pub const API_RETURN_SCHEMA_PRUNE: Schema = ArraySchema::new(
"Returns the list of snapshots and a flag indicating if there are kept or removed.",
PruneListItem::API_SCHEMA
&PruneListItem::API_SCHEMA
).schema();
const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(

View File

@ -41,6 +41,9 @@ pub const ZFS_ASHIFT_SCHEMA: Schema = IntegerSchema::new(
.default(12)
.schema();
pub const ZPOOL_NAME_SCHEMA: Schema = StringSchema::new("ZFS Pool Name")
.format(&ApiStringFormat::Pattern(&ZPOOL_NAME_REGEX))
.schema();
#[api(
default: "On",
@ -157,7 +160,7 @@ pub fn list_zpools() -> Result<Vec<ZpoolListItem>, Error> {
schema: NODE_SCHEMA,
},
name: {
schema: DATASTORE_SCHEMA,
schema: ZPOOL_NAME_SCHEMA,
},
},
},

View File

@ -10,6 +10,7 @@ use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
use crate::api2::types::*;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_POWER_MANAGEMENT};
use crate::tools::cert::CertInfo;
#[api(
input: {
@ -46,14 +47,24 @@ use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_POWER_MANAGEMENT};
description: "Total CPU usage since last query.",
optional: true,
},
}
info: {
type: Object,
description: "contains node information",
properties: {
fingerprint: {
description: "The SSL Fingerprint",
type: String,
},
},
},
},
},
access: {
permission: &Permission::Privilege(&["system", "status"], PRIV_SYS_AUDIT, false),
},
)]
/// Read node memory, CPU and (root) disk usage
fn get_usage(
fn get_status(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
@ -63,6 +74,10 @@ fn get_usage(
let kstat: procfs::ProcFsStat = procfs::read_proc_stat()?;
let disk_usage = crate::tools::disks::disk_usage(Path::new("/"))?;
// get fingerprint
let cert = CertInfo::new()?;
let fp = cert.fingerprint()?;
Ok(json!({
"memory": {
"total": meminfo.memtotal,
@ -74,7 +89,10 @@ fn get_usage(
"total": disk_usage.total,
"used": disk_usage.used,
"free": disk_usage.avail,
}
},
"info": {
"fingerprint": fp,
},
}))
}
@ -122,5 +140,5 @@ fn reboot_or_shutdown(command: NodePowerCommand) -> Result<(), Error> {
}
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_GET_USAGE)
.get(&API_METHOD_GET_STATUS)
.post(&API_METHOD_REBOOT_OR_SHUTDOWN);

View File

@ -78,6 +78,8 @@ const_regex!{
pub ACL_PATH_REGEX = concat!(r"^(?:/|", r"(?:/", PROXMOX_SAFE_ID_REGEX_STR!(), ")+", r")$");
pub BLOCKDEVICE_NAME_REGEX = r"^(:?(:?h|s|x?v)d[a-z]+)|(:?nvme\d+n\d+)$";
pub ZPOOL_NAME_REGEX = r"^[a-zA-Z][a-z0-9A-Z\-_.:]+$";
}
pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =

View File

@ -80,8 +80,9 @@ impl ChunkStore {
let default_options = CreateOptions::new();
if let Err(err) = create_path(&base, Some(default_options.clone()), Some(options.clone())) {
bail!("unable to create chunk store '{}' at {:?} - {}", name, base, err);
match create_path(&base, Some(default_options.clone()), Some(options.clone())) {
Err(err) => bail!("unable to create chunk store '{}' at {:?} - {}", name, base, err),
Ok(res) => if ! res { nix::unistd::chown(&base, Some(uid), Some(gid))? },
}
if let Err(err) = create_dir(&chunk_dir, options.clone()) {

View File

@ -1,4 +1,4 @@
use anyhow::{bail, Error};
use anyhow::{bail, format_err, Error};
use std::sync::Arc;
use std::io::{Read, BufReader};
use proxmox::tools::io::ReadExt;
@ -40,23 +40,25 @@ impl <R: Read> DataBlobReader<R> {
Ok(Self { state: BlobReaderState::Compressed { expected_crc, decompr }})
}
ENCRYPTED_BLOB_MAGIC_1_0 => {
let config = config.ok_or_else(|| format_err!("unable to read encrypted blob without key"))?;
let expected_crc = u32::from_le_bytes(head.crc);
let mut iv = [0u8; 16];
let mut expected_tag = [0u8; 16];
reader.read_exact(&mut iv)?;
reader.read_exact(&mut expected_tag)?;
let csum_reader = ChecksumReader::new(reader, None);
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config.unwrap())?;
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config)?;
Ok(Self { state: BlobReaderState::Encrypted { expected_crc, decrypt_reader }})
}
ENCR_COMPR_BLOB_MAGIC_1_0 => {
let config = config.ok_or_else(|| format_err!("unable to read encrypted blob without key"))?;
let expected_crc = u32::from_le_bytes(head.crc);
let mut iv = [0u8; 16];
let mut expected_tag = [0u8; 16];
reader.read_exact(&mut iv)?;
reader.read_exact(&mut expected_tag)?;
let csum_reader = ChecksumReader::new(reader, None);
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config.unwrap())?;
let decrypt_reader = CryptReader::new(BufReader::with_capacity(64*1024, csum_reader), iv, expected_tag, config)?;
let decompr = zstd::stream::read::Decoder::new(decrypt_reader)?;
Ok(Self { state: BlobReaderState::EncryptedCompressed { expected_crc, decompr }})
}

View File

@ -35,10 +35,14 @@ mod hex_csum {
}
}
fn crypt_mode_none() -> CryptMode { CryptMode::None }
fn empty_value() -> Value { json!({}) }
#[derive(Serialize, Deserialize)]
#[serde(rename_all="kebab-case")]
pub struct FileInfo {
pub filename: String,
#[serde(default="crypt_mode_none")] // to be compatible with < 0.8.0 backups
pub crypt_mode: CryptMode,
pub size: u64,
#[serde(with = "hex_csum")]
@ -52,6 +56,7 @@ pub struct BackupManifest {
backup_id: String,
backup_time: i64,
files: Vec<FileInfo>,
#[serde(default="empty_value")] // to be compatible with < 0.8.0 backups
pub unprotected: Value,
}
@ -125,39 +130,47 @@ impl BackupManifest {
}
// Generate canonical json
fn to_canonical_json(value: &Value, output: &mut String) -> Result<(), Error> {
fn to_canonical_json(value: &Value) -> Result<Vec<u8>, Error> {
let mut data = Vec::new();
Self::write_canonical_json(value, &mut data)?;
Ok(data)
}
fn write_canonical_json(value: &Value, output: &mut Vec<u8>) -> Result<(), Error> {
match value {
Value::Null => bail!("got unexpected null value"),
Value::String(_) => {
output.push_str(&serde_json::to_string(value)?);
},
Value::Number(_) => {
output.push_str(&serde_json::to_string(value)?);
Value::String(_) | Value::Number(_) | Value::Bool(_) => {
serde_json::to_writer(output, &value)?;
}
Value::Bool(_) => {
output.push_str(&serde_json::to_string(value)?);
},
Value::Array(list) => {
output.push('[');
for (i, item) in list.iter().enumerate() {
if i != 0 { output.push(','); }
Self::to_canonical_json(item, output)?;
output.push(b'[');
let mut iter = list.iter();
if let Some(item) = iter.next() {
Self::write_canonical_json(item, output)?;
for item in iter {
output.push(b',');
Self::write_canonical_json(item, output)?;
}
}
output.push(']');
output.push(b']');
}
Value::Object(map) => {
output.push('{');
let mut keys: Vec<String> = map.keys().map(|s| s.clone()).collect();
output.push(b'{');
let mut keys: Vec<&str> = map.keys().map(String::as_str).collect();
keys.sort();
for (i, key) in keys.iter().enumerate() {
let item = map.get(key).unwrap();
if i != 0 { output.push(','); }
output.push_str(&serde_json::to_string(&Value::String(key.clone()))?);
output.push(':');
Self::to_canonical_json(item, output)?;
let mut iter = keys.into_iter();
if let Some(key) = iter.next() {
output.extend(key.as_bytes());
output.push(b':');
Self::write_canonical_json(&map[key], output)?;
for key in iter {
output.push(b',');
output.extend(key.as_bytes());
output.push(b':');
Self::write_canonical_json(&map[key], output)?;
}
}
output.push('}');
output.push(b'}');
}
}
Ok(())
@ -176,11 +189,11 @@ impl BackupManifest {
let mut signed_data = data.clone();
signed_data.as_object_mut().unwrap().remove("unprotected"); // exclude
signed_data.as_object_mut().unwrap().remove("signature"); // exclude
let mut canonical = String::new();
Self::to_canonical_json(&signed_data, &mut canonical)?;
let canonical = Self::to_canonical_json(&signed_data)?;
let sig = crypt_config.compute_auth_tag(canonical.as_bytes());
let sig = crypt_config.compute_auth_tag(&canonical);
Ok(sig)
}

View File

@ -25,6 +25,7 @@ use pxar::accessor::{MaybeReady, ReadAt, ReadAtOperation};
use proxmox_backup::tools;
use proxmox_backup::api2::types::*;
use proxmox_backup::api2::version;
use proxmox_backup::client::*;
use proxmox_backup::pxar::catalog::*;
use proxmox_backup::backup::{
@ -552,6 +553,56 @@ fn api_logout(param: Value) -> Result<Value, Error> {
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
/// Show client and optional server version
async fn api_version(param: Value) -> Result<(), Error> {
let output_format = get_output_format(&param);
let mut version_info = json!({
"client": {
"version": version::PROXMOX_PKG_VERSION,
"release": version::PROXMOX_PKG_RELEASE,
"repoid": version::PROXMOX_PKG_REPOID,
}
});
let repo = extract_repository_from_value(&param);
if let Ok(repo) = repo {
let client = connect(repo.host(), repo.user())?;
match client.get("api2/json/version", None).await {
Ok(mut result) => version_info["server"] = result["data"].take(),
Err(e) => eprintln!("could not connect to server - {}", e),
}
}
if output_format == "text" {
println!("client version: {}.{}", version::PROXMOX_PKG_VERSION, version::PROXMOX_PKG_RELEASE);
if let Some(server) = version_info["server"].as_object() {
let server_version = server["version"].as_str().unwrap();
let server_release = server["release"].as_str().unwrap();
println!("server version: {}.{}", server_version, server_release);
}
} else {
format_and_print_result(&version_info, &output_format);
}
Ok(())
}
#[api(
input: {
@ -1878,6 +1929,9 @@ fn main() {
let logout_cmd_def = CliCommand::new(&API_METHOD_API_LOGOUT)
.completion_cb("repository", complete_repository);
let version_cmd_def = CliCommand::new(&API_METHOD_API_VERSION)
.completion_cb("repository", complete_repository);
let cmd_def = CliCommandMap::new()
.insert("backup", backup_cmd_def)
.insert("upload-log", upload_log_cmd_def)
@ -1895,6 +1949,7 @@ fn main() {
.insert("mount", mount_cmd_def())
.insert("catalog", catalog_mgmt_cli())
.insert("task", task_mgmt_cli())
.insert("version", version_cmd_def)
.insert("benchmark", benchmark_cmd_def);
let rpcenv = CliEnvironment::new();

View File

@ -127,7 +127,7 @@ async fn garbage_collection_status(param: Value) -> Result<Value, Error> {
let mut result = client.get(&path, None).await?;
let mut data = result["data"].take();
let schema = api2::admin::datastore::API_RETURN_SCHEMA_GARBAGE_COLLECTION_STATUS;
let schema = &api2::admin::datastore::API_RETURN_SCHEMA_GARBAGE_COLLECTION_STATUS;
let options = default_table_format_options();
@ -193,7 +193,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let mut result = client.get("api2/json/nodes/localhost/tasks", Some(args)).await?;
let mut data = result["data"].take();
let schema = api2::node::tasks::API_RETURN_SCHEMA_LIST_TASKS;
let schema = &api2::node::tasks::API_RETURN_SCHEMA_LIST_TASKS;
let options = default_table_format_options()
.column(ColumnConfig::new("starttime").right_align(false).renderer(tools::format::render_epoch))

View File

@ -4,14 +4,24 @@ use std::sync::Arc;
use anyhow::{Error};
use serde_json::Value;
use chrono::{TimeZone, Utc};
use serde::Serialize;
use proxmox::api::{ApiMethod, RpcEnvironment};
use proxmox::api::api;
use proxmox::api::{
api,
cli::{
OUTPUT_FORMAT,
ColumnConfig,
get_output_format,
format_and_print_result_full,
default_table_format_options,
},
};
use proxmox_backup::backup::{
load_and_decrypt_key,
CryptConfig,
load_and_decrypt_key,
CryptConfig,
KeyDerivationConfig,
};
use proxmox_backup::client::*;
@ -23,6 +33,75 @@ use crate::{
connect,
};
#[api()]
#[derive(Copy, Clone, Serialize)]
/// Speed test result
struct Speed {
/// The measured speed in Bytes/second
#[serde(skip_serializing_if="Option::is_none")]
speed: Option<f64>,
/// Top result we want to compare with
top: f64,
}
#[api(
properties: {
"tls": {
type: Speed,
},
"sha256": {
type: Speed,
},
"compress": {
type: Speed,
},
"decompress": {
type: Speed,
},
"aes256_gcm": {
type: Speed,
},
},
)]
#[derive(Copy, Clone, Serialize)]
/// Benchmark Results
struct BenchmarkResult {
/// TLS upload speed
tls: Speed,
/// SHA256 checksum computation speed
sha256: Speed,
/// ZStd level 1 compression speed
compress: Speed,
/// ZStd level 1 decompression speed
decompress: Speed,
/// AES256 GCM encryption speed
aes256_gcm: Speed,
}
static BENCHMARK_RESULT_2020_TOP: BenchmarkResult = BenchmarkResult {
tls: Speed {
speed: None,
top: 1_000_000.0 * 590.0, // TLS to localhost, AMD Ryzen 7 2700X
},
sha256: Speed {
speed: None,
top: 1_000_000.0 * 2120.0, // AMD Ryzen 7 2700X
},
compress: Speed {
speed: None,
top: 1_000_000.0 * 2158.0, // AMD Ryzen 7 2700X
},
decompress: Speed {
speed: None,
top: 1_000_000.0 * 8062.0, // AMD Ryzen 7 2700X
},
aes256_gcm: Speed {
speed: None,
top: 1_000_000.0 * 3803.0, // AMD Ryzen 7 2700X
},
};
#[api(
input: {
properties: {
@ -30,10 +109,19 @@ use crate::{
schema: REPO_URL_SCHEMA,
optional: true,
},
verbose: {
description: "Verbose output.",
type: bool,
optional: true,
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
}
}
)]
@ -44,10 +132,14 @@ pub async fn benchmark(
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let repo = extract_repository_from_value(&param).ok();
let keyfile = param["keyfile"].as_str().map(PathBuf::from);
let verbose = param["verbose"].as_bool().unwrap_or(false);
let output_format = get_output_format(&param);
let crypt_config = match keyfile {
None => None,
Some(path) => {
@ -57,25 +149,178 @@ pub async fn benchmark(
}
};
let mut benchmark_result = BENCHMARK_RESULT_2020_TOP;
// do repo tests first, because this may prompt for a password
if let Some(repo) = repo {
test_upload_speed(&mut benchmark_result, repo, crypt_config.clone(), verbose).await?;
}
test_crypt_speed(&mut benchmark_result, verbose)?;
render_result(&output_format, &benchmark_result)?;
Ok(())
}
// print comparison table
fn render_result(
output_format: &str,
benchmark_result: &BenchmarkResult,
) -> Result<(), Error> {
let mut data = serde_json::to_value(benchmark_result)?;
let schema = &BenchmarkResult::API_SCHEMA;
let render_speed = |value: &Value, _record: &Value| -> Result<String, Error> {
match value["speed"].as_f64() {
None => Ok(String::from("not tested")),
Some(speed) => {
let top = value["top"].as_f64().unwrap();
Ok(format!("{:.2} MB/s ({:.0}%)", speed/1_000_000.0, (speed*100.0)/top))
}
}
};
let options = default_table_format_options()
.column(ColumnConfig::new("tls")
.header("TLS (maximal backup upload speed)")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("sha256")
.header("SHA256 checksum comptation speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("compress")
.header("ZStd level 1 compression speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("decompress")
.header("ZStd level 1 decompression speed")
.right_align(false).renderer(render_speed))
.column(ColumnConfig::new("aes256_gcm")
.header("AES256 GCM encryption speed")
.right_align(false).renderer(render_speed));
format_and_print_result_full(&mut data, schema, output_format, &options);
Ok(())
}
async fn test_upload_speed(
benchmark_result: &mut BenchmarkResult,
repo: BackupRepository,
crypt_config: Option<Arc<CryptConfig>>,
verbose: bool,
) -> Result<(), Error> {
let backup_time = Utc.timestamp(Utc::now().timestamp(), 0);
let client = connect(repo.host(), repo.user())?;
record_repository(&repo);
if verbose { eprintln!("Connecting to backup server"); }
let client = BackupWriter::start(
client,
crypt_config.clone(),
repo.store(),
"host",
"benshmark",
"benchmark",
backup_time,
false,
).await?;
println!("Start upload speed test");
let speed = client.upload_speedtest().await?;
if verbose { eprintln!("Start TLS speed test"); }
let speed = client.upload_speedtest(verbose).await?;
println!("Upload speed: {} MiB/s", speed);
eprintln!("TLS speed: {:.2} MB/s", speed/1_000_000.0);
benchmark_result.tls.speed = Some(speed);
Ok(())
}
// test hash/crypt/compress speed
fn test_crypt_speed(
benchmark_result: &mut BenchmarkResult,
_verbose: bool,
) -> Result<(), Error> {
let pw = b"test";
let kdf = KeyDerivationConfig::Scrypt {
n: 65536,
r: 8,
p: 1,
salt: Vec::new(),
};
let testkey = kdf.derive_key(pw)?;
let crypt_config = CryptConfig::new(testkey)?;
let random_data = proxmox::sys::linux::random_data(1024*1024)?;
let start_time = std::time::Instant::now();
let mut bytes = 0;
loop {
openssl::sha::sha256(&random_data);
bytes += random_data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.sha256.speed = Some(speed);
eprintln!("SHA256 speed: {:.2} MB/s", speed/1_000_000_.0);
let start_time = std::time::Instant::now();
let mut bytes = 0;
loop {
let mut reader = &random_data[..];
zstd::stream::encode_all(&mut reader, 1)?;
bytes += random_data.len();
if start_time.elapsed().as_micros() > 3_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.compress.speed = Some(speed);
eprintln!("Compression speed: {:.2} MB/s", speed/1_000_000_.0);
let start_time = std::time::Instant::now();
let compressed_data = {
let mut reader = &random_data[..];
zstd::stream::encode_all(&mut reader, 1)?
};
let mut bytes = 0;
loop {
let mut reader = &compressed_data[..];
let data = zstd::stream::decode_all(&mut reader)?;
bytes += data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.decompress.speed = Some(speed);
eprintln!("Decompress speed: {:.2} MB/s", speed/1_000_000_.0);
let start_time = std::time::Instant::now();
let mut bytes = 0;
loop {
let mut out = Vec::new();
crypt_config.encrypt_to(&random_data, &mut out)?;
bytes += random_data.len();
if start_time.elapsed().as_micros() > 1_000_000 { break; }
}
let speed = (bytes as f64)/start_time.elapsed().as_secs_f64();
benchmark_result.aes256_gcm.speed = Some(speed);
eprintln!("AES256/GCM speed: {:.2} MB/s", speed/1_000_000_.0);
Ok(())
}

View File

@ -1,6 +1,5 @@
use std::os::unix::fs::OpenOptionsExt;
use std::io::{Seek, SeekFrom};
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::{bail, format_err, Error};
@ -14,8 +13,12 @@ use proxmox_backup::client::*;
use crate::{
REPO_URL_SCHEMA,
KEYFD_SCHEMA,
extract_repository_from_value,
record_repository,
keyfile_parameters,
key::get_encryption_key_password,
decrypt_key,
api_datastore_latest_snapshot,
complete_repository,
complete_backup_snapshot,
@ -34,10 +37,6 @@ use crate::{
Shell,
};
use proxmox_backup::backup::load_and_decrypt_key;
use crate::key::get_encryption_key_password;
#[api(
input: {
properties: {
@ -49,6 +48,15 @@ use crate::key::get_encryption_key_password;
type: String,
description: "Snapshot path.",
},
"keyfile": {
optional: true,
type: String,
description: "Path to encryption key.",
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
}
}
)]
@ -60,13 +68,14 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let keyfile = param["keyfile"].as_str().map(PathBuf::from);
let (keydata, _) = keyfile_parameters(&param)?;
let crypt_config = match keyfile {
let crypt_config = match keydata {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
Some(Arc::new(CryptConfig::new(key)?))
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
@ -132,7 +141,11 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
type: String,
description: "Path to encryption key.",
},
},
"keyfd": {
schema: KEYFD_SCHEMA,
optional: true,
},
},
},
)]
/// Shell to interactively inspect and restore snapshots.
@ -150,12 +163,14 @@ async fn catalog_shell(param: Value) -> Result<(), Error> {
(snapshot.group().backup_type().to_owned(), snapshot.group().backup_id().to_owned(), snapshot.backup_time())
};
let keyfile = param["keyfile"].as_str().map(|p| PathBuf::from(p));
let crypt_config = match keyfile {
let (keydata, _) = keyfile_parameters(&param)?;
let crypt_config = match keydata {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
Some(Arc::new(CryptConfig::new(key)?))
Some(key) => {
let (key, _created) = decrypt_key(&key, &get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};

View File

@ -99,7 +99,11 @@ impl Default for Kdf {
fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => place_default_encryption_key()?,
None => {
let path = place_default_encryption_key()?;
println!("creating default key at: {:?}", path);
path
}
};
let kdf = kdf.unwrap_or_default();
@ -156,8 +160,14 @@ fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => find_default_encryption_key()?
.ok_or_else(|| format_err!("no encryption file provided and no default file found"))?,
None => {
let path = find_default_encryption_key()?
.ok_or_else(|| {
format_err!("no encryption file provided and no default file found")
})?;
println!("updating default key at: {:?}", path);
path
}
};
let kdf = kdf.unwrap_or_default();

View File

@ -1,32 +1,18 @@
use std::path::PathBuf;
use anyhow::{bail, Error};
use proxmox::api::{api, cli::*};
use proxmox_backup::config;
use proxmox_backup::configdir;
use proxmox_backup::auth_helpers::*;
fn x509name_to_string(name: &openssl::x509::X509NameRef) -> Result<String, Error> {
let mut parts = Vec::new();
for entry in name.entries() {
parts.push(format!("{} = {}", entry.object().nid().short_name()?, entry.data().as_utf8()?));
}
Ok(parts.join(", "))
}
use proxmox_backup::tools::cert::CertInfo;
#[api]
/// Display node certificate information.
fn cert_info() -> Result<(), Error> {
let cert_path = PathBuf::from(configdir!("/proxy.pem"));
let cert = CertInfo::new()?;
let cert_pem = proxmox::tools::fs::file_get_contents(&cert_path)?;
let cert = openssl::x509::X509::from_pem(&cert_pem)?;
println!("Subject: {}", x509name_to_string(cert.subject_name())?);
println!("Subject: {}", cert.subject_name()?);
if let Some(san) = cert.subject_alt_names() {
for name in san.iter() {
@ -42,17 +28,12 @@ fn cert_info() -> Result<(), Error> {
}
}
println!("Issuer: {}", x509name_to_string(cert.issuer_name())?);
println!("Issuer: {}", cert.issuer_name()?);
println!("Validity:");
println!(" Not Before: {}", cert.not_before());
println!(" Not After : {}", cert.not_after());
let fp = cert.digest(openssl::hash::MessageDigest::sha256())?;
let fp_string = proxmox::tools::digest_to_hex(&fp);
let fp_string = fp_string.as_bytes().chunks(2).map(|v| std::str::from_utf8(v).unwrap())
.collect::<Vec<&str>>().join(":");
println!("Fingerprint (sha256): {}", fp_string);
println!("Fingerprint (sha256): {}", cert.fingerprint()?);
let pubkey = cert.public_key()?;
println!("Public key type: {}", openssl::nid::Nid::from_raw(pubkey.id().as_raw()).long_name()?);

View File

@ -274,7 +274,7 @@ impl BackupWriter {
})
}
fn response_queue() -> (
fn response_queue(verbose: bool) -> (
mpsc::Sender<h2::client::ResponseFuture>,
oneshot::Receiver<Result<(), Error>>
) {
@ -298,11 +298,11 @@ impl BackupWriter {
tokio::spawn(
verify_queue_rx
.map(Ok::<_, Error>)
.try_for_each(|response: h2::client::ResponseFuture| {
.try_for_each(move |response: h2::client::ResponseFuture| {
response
.map_err(Error::from)
.and_then(H2Client::h2api_response)
.map_ok(|result| println!("RESPONSE: {:?}", result))
.map_ok(move |result| if verbose { println!("RESPONSE: {:?}", result) })
.map_err(|err| format_err!("pipelined request failed: {}", err))
})
.map(|result| {
@ -600,7 +600,8 @@ impl BackupWriter {
})
}
pub async fn upload_speedtest(&self) -> Result<usize, Error> {
/// Upload speed test - prints result to stderr
pub async fn upload_speedtest(&self, verbose: bool) -> Result<f64, Error> {
let mut data = vec![];
// generate pseudo random byte sequence
@ -615,7 +616,7 @@ impl BackupWriter {
let mut repeat = 0;
let (upload_queue, upload_result) = Self::response_queue();
let (upload_queue, upload_result) = Self::response_queue(verbose);
let start_time = std::time::Instant::now();
@ -627,7 +628,7 @@ impl BackupWriter {
let mut upload_queue = upload_queue.clone();
println!("send test data ({} bytes)", data.len());
if verbose { eprintln!("send test data ({} bytes)", data.len()); }
let request = H2Client::request_builder("localhost", "POST", "speedtest", None, None).unwrap();
let request_future = self.h2.send_request(request, Some(bytes::Bytes::from(data.clone()))).await?;
@ -638,9 +639,9 @@ impl BackupWriter {
let _ = upload_result.await?;
println!("Uploaded {} chunks in {} seconds.", repeat, start_time.elapsed().as_secs());
let speed = ((item_len*1_000_000*(repeat as usize))/(1024*1024))/(start_time.elapsed().as_micros() as usize);
println!("Time per request: {} microseconds.", (start_time.elapsed().as_micros())/(repeat as u128));
eprintln!("Uploaded {} chunks in {} seconds.", repeat, start_time.elapsed().as_secs());
let speed = ((item_len*(repeat as usize)) as f64)/start_time.elapsed().as_secs_f64();
eprintln!("Time per request: {} microseconds.", (start_time.elapsed().as_micros())/(repeat as u128));
Ok(speed)
}

View File

@ -161,7 +161,7 @@ where
if skip_lost_and_found {
patterns.push(MatchEntry::parse_pattern(
"**/lost+found",
"lost+found",
PatternFlag::PATH_NAME,
MatchType::Exclude,
)?);

View File

@ -493,12 +493,12 @@ pub async fn handle_request(api: Arc<ApiConfig>, req: Request<Body>) -> Result<R
let (parts, body) = req.into_parts();
let method = parts.method.clone();
let (path, components) = tools::normalize_uri_path(parts.uri.path())?;
let (_path, components) = tools::normalize_uri_path(parts.uri.path())?;
let comp_len = components.len();
println!("REQUEST {} {}", method, path);
println!("COMPO {:?}", components);
//println!("REQUEST {} {}", method, path);
//println!("COMPO {:?}", components);
let env_type = api.env_type();
let mut rpcenv = RestEnvironment::new(env_type);

View File

@ -270,28 +270,22 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
let line = line?;
match parse_worker_status_line(&line) {
Err(err) => bail!("unable to parse active worker status '{}' - {}", line, err),
Ok((upid_str, upid, state)) => {
let running = worker_is_active_local(&upid);
if running {
Ok((upid_str, upid, state)) => match state {
None if worker_is_active_local(&upid) => {
active_list.push(TaskListInfo { upid, upid_str, state: None });
} else {
match state {
None => {
println!("Detected stopped UPID {}", upid_str);
let status = upid_read_status(&upid)
.unwrap_or_else(|_| String::from("unknown"));
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((Local::now().timestamp(), status))
});
}
Some((endtime, status)) => {
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((endtime, status))
})
}
}
},
None => {
println!("Detected stopped UPID {}", upid_str);
let status = upid_read_status(&upid)
.unwrap_or_else(|_| String::from("unknown"));
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((Local::now().timestamp(), status))
});
},
Some((endtime, status)) => {
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((endtime, status))
})
}
}
}

View File

@ -23,6 +23,7 @@ pub use proxmox::tools::fd::Fd;
pub mod acl;
pub mod async_io;
pub mod borrow;
pub mod cert;
pub mod daemon;
pub mod disks;
pub mod fs;

src/tools/cert.rs Normal file
View File

@ -0,0 +1,67 @@
use std::path::PathBuf;
use anyhow::Error;
use openssl::x509::{X509, GeneralName};
use openssl::stack::Stack;
use openssl::pkey::{Public, PKey};
use crate::configdir;
pub struct CertInfo {
x509: X509,
}
fn x509name_to_string(name: &openssl::x509::X509NameRef) -> Result<String, Error> {
let mut parts = Vec::new();
for entry in name.entries() {
parts.push(format!("{} = {}", entry.object().nid().short_name()?, entry.data().as_utf8()?));
}
Ok(parts.join(", "))
}
impl CertInfo {
pub fn new() -> Result<Self, Error> {
Self::from_path(PathBuf::from(configdir!("/proxy.pem")))
}
pub fn from_path(path: PathBuf) -> Result<Self, Error> {
let cert_pem = proxmox::tools::fs::file_get_contents(&path)?;
let x509 = openssl::x509::X509::from_pem(&cert_pem)?;
Ok(Self{
x509
})
}
pub fn subject_alt_names(&self) -> Option<Stack<GeneralName>> {
self.x509.subject_alt_names()
}
pub fn subject_name(&self) -> Result<String, Error> {
Ok(x509name_to_string(self.x509.subject_name())?)
}
pub fn issuer_name(&self) -> Result<String, Error> {
Ok(x509name_to_string(self.x509.issuer_name())?)
}
pub fn fingerprint(&self) -> Result<String, Error> {
let fp = self.x509.digest(openssl::hash::MessageDigest::sha256())?;
let fp_string = proxmox::tools::digest_to_hex(&fp);
let fp_string = fp_string.as_bytes().chunks(2).map(|v| std::str::from_utf8(v).unwrap())
.collect::<Vec<&str>>().join(":");
Ok(fp_string)
}
pub fn public_key(&self) -> Result<PKey<Public>, Error> {
let pubkey = self.x509.public_key()?;
Ok(pubkey)
}
pub fn not_before(&self) -> &openssl::asn1::Asn1TimeRef {
self.x509.not_before()
}
pub fn not_after(&self) -> &openssl::asn1::Asn1TimeRef {
self.x509.not_after()
}
}

View File

@ -743,7 +743,10 @@ pub fn get_disks(
let partition_type_map = get_partition_type_info()?;
let zfs_devices = zfs_devices(&partition_type_map, None)?;
let zfs_devices = zfs_devices(&partition_type_map, None).or_else(|err| -> Result<HashSet<u64>, Error> {
eprintln!("error getting zfs devices: {}", err);
Ok(HashSet::new())
})?;
let lvm_devices = get_lvm_devices(&partition_type_map)?;

View File

@ -64,7 +64,7 @@ fn parse_zpool_list_header(i: &str) -> IResult<&str, ZFSPoolInfo> {
let (i, (text, size, alloc, free, _, _,
frag, _, dedup, health,
_altroot, _eol)) = tuple((
take_while1(|c| char::is_alphanumeric(c)), // name
take_while1(|c| char::is_alphanumeric(c) || c == '-' || c == ':' || c == '_' || c == '.'), // name
preceded(multispace1, parse_optional_u64), // size
preceded(multispace1, parse_optional_u64), // allocated
preceded(multispace1, parse_optional_u64), // free
@ -221,7 +221,7 @@ logs
assert_eq!(data, expect);
let output = "\
btest 427349245952 761856 427348484096 - - 0 0 1.00 ONLINE -
b-test 427349245952 761856 427348484096 - - 0 0 1.00 ONLINE -
mirror 213674622976 438272 213674184704 - - 0 0 - ONLINE
/dev/sda1 - - - - - - - - ONLINE
/dev/sda2 - - - - - - - - ONLINE
@ -235,7 +235,7 @@ logs - - - - - - - - -
let data = parse_zpool_list(&output)?;
let expect = vec![
ZFSPoolInfo {
name: String::from("btest"),
name: String::from("b-test"),
health: String::from("ONLINE"),
usage: Some(ZFSPoolUsage {
size: 427349245952,
@ -261,5 +261,31 @@ logs - - - - - - - - -
assert_eq!(data, expect);
let output = "\
b.test 427349245952 761856 427348484096 - - 0 0 1.00 ONLINE -
mirror 213674622976 438272 213674184704 - - 0 0 - ONLINE
/dev/sda1 - - - - - - - - ONLINE
";
let data = parse_zpool_list(&output)?;
let expect = vec![
ZFSPoolInfo {
name: String::from("b.test"),
health: String::from("ONLINE"),
usage: Some(ZFSPoolUsage {
size: 427349245952,
alloc: 761856,
free: 427348484096,
dedup: 1.0,
frag: 0,
}),
devices: vec![
String::from("/dev/sda1"),
]
},
];
assert_eq!(data, expect);
Ok(())
}

View File

@ -430,3 +430,38 @@ errors: No known data errors
Ok(())
}
#[test]
fn test_zpool_status_parser3() -> Result<(), Error> {
let output = r###" pool: bt-est
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
bt-est ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
/dev/sda1 ONLINE 0 0 0
/dev/sda2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
/dev/sda3 ONLINE 0 0 0
/dev/sda4 ONLINE 0 0 0
logs
/dev/sda5 ONLINE 0 0 0
errors: No known data errors
"###;
let key_value_list = parse_zpool_status(&output)?;
for (k, v) in key_value_list {
println!("{} => {}", k,v);
if k == "config" {
let vdev_list = parse_zpool_status_config_tree(&v)?;
let _tree = vdev_list_to_tree(&vdev_list);
//println!("TREE1 {}", serde_json::to_string_pretty(&tree)?);
}
}
Ok(())
}

View File

@ -76,6 +76,7 @@ Ext.define('PBS.Dashboard', {
let viewmodel = me.getViewModel();
let res = records[0].data;
viewmodel.set('fingerprint', res.info.fingerprint || Proxmox.Utils.unknownText);
let cpu = res.cpu,
mem = res.memory,
@ -91,6 +92,45 @@ Ext.define('PBS.Dashboard', {
hdPanel.updateValue(root.used / root.total);
},
showFingerPrint: function() {
let me = this;
let vm = me.getViewModel();
let fingerprint = vm.get('fingerprint');
Ext.create('Ext.window.Window', {
modal: true,
width: 600,
title: gettext('Fingerprint'),
layout: 'form',
bodyPadding: '10 0',
items: [
{
xtype: 'textfield',
inputId: 'fingerprintField',
value: fingerprint,
editable: false,
},
],
buttons: [
{
xtype: 'button',
iconCls: 'fa fa-clipboard',
handler: function(b) {
var el = document.getElementById('fingerprintField');
el.select();
document.execCommand("copy");
},
text: gettext('Copy')
},
{
text: gettext('Ok'),
handler: function() {
this.up('window').close();
},
},
],
}).show();
},
updateTasks: function(store, records, success) {
if (!success) return;
let me = this;
@ -134,11 +174,16 @@ Ext.define('PBS.Dashboard', {
timespan: 300, // in seconds
hours: 12, // in hours
error_shown: false,
fingerprint: "",
'bytes_in': 0,
'bytes_out': 0,
'avg_ptime': 0.0
},
formulas: {
disableFPButton: (get) => get('fingerprint') === "",
},
stores: {
usage: {
storeid: 'dash-usage',
@ -211,6 +256,16 @@ Ext.define('PBS.Dashboard', {
iconCls: 'fa fa-tasks',
title: gettext('Server Resources'),
bodyPadding: '0 20 0 20',
tools: [
{
xtype: 'button',
text: gettext('Show Fingerprint'),
handler: 'showFingerPrint',
bind: {
disabled: '{disableFPButton}',
},
},
],
layout: {
type: 'hbox',
align: 'center'

View File

@ -12,26 +12,28 @@ Ext.define('pbs-data-store-snapshots', {
'owner',
{ name: 'size', type: 'int', allowNull: true, },
{
name: 'encrypted',
name: 'crypt-mode',
type: 'boolean',
calculate: function(data) {
let encrypted = 0;
let files = 0;
let crypt = {
none: 0,
mixed: 0,
'sign-only': 0,
encrypt: 0,
count: 0,
};
let signed = 0;
data.files.forEach(file => {
if (file.filename === 'index.json.blob') return; // is never encrypted
if (file.encrypted) {
encrypted++;
let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
if (mode !== -1) {
crypt[file['crypt-mode']]++;
}
files++;
crypt.count++;
});
if (encrypted === 0) {
return 0;
} else if (encrypted < files) {
return 1;
} else {
return 2;
}
return PBS.Utils.calculateCryptMode(crypt);
}
}
]
@ -149,11 +151,14 @@ Ext.define('PBS.DataStoreContent', {
let children = [];
for (const [_key, group] of Object.entries(groups)) {
let last_backup = 0;
let encrypted = 0;
let crypt = {
none: 0,
mixed: 0,
'sign-only': 0,
encrypt: 0,
};
for (const item of group.children) {
if (item.encrypted > 0) {
encrypted++;
}
crypt[PBS.Utils.cryptmap[item['crypt-mode']]]++;
if (item["backup-time"] > last_backup && item.size !== null) {
last_backup = item["backup-time"];
group["backup-time"] = last_backup;
@ -163,14 +168,9 @@ Ext.define('PBS.DataStoreContent', {
}
}
if (encrypted === 0) {
group.encrypted = 0;
} else if (encrypted < group.children.length) {
group.encrypted = 1;
} else {
group.encrypted = 2;
}
group.count = group.children.length;
crypt.count = group.count;
group['crypt-mode'] = PBS.Utils.calculateCryptMode(crypt);
children.push(group);
}
@ -296,7 +296,7 @@ Ext.define('PBS.DataStoreContent', {
let encrypted = false;
data.files.forEach(file => {
if (file.filename === 'catalog.pcat1.didx' && file.encrypted) {
if (file.filename === 'catalog.pcat1.didx' && file['crypt-mode'] === 'encrypt') {
encrypted = true;
}
});
@ -365,15 +365,8 @@ Ext.define('PBS.DataStoreContent', {
},
{
header: gettext('Encrypted'),
dataIndex: 'encrypted',
renderer: function(value) {
switch (value) {
case 0: return Proxmox.Utils.noText;
case 1: return gettext('Mixed');
case 2: return Proxmox.Utils.yesText;
default: Proxmox.Utils.unknownText;
}
}
dataIndex: 'crypt-mode',
renderer: value => PBS.Utils.cryptText[value] || Proxmox.Utils.unknownText,
},
{
header: gettext("Files"),
@ -383,8 +376,10 @@ Ext.define('PBS.DataStoreContent', {
return files.map((file) => {
let icon = '';
let size = '';
if (file.encrypted) {
icon = '<i class="fa fa-lock"></i> ';
let mode = PBS.Utils.cryptmap.indexOf(file['crypt-mode']);
let iconCls = PBS.Utils.cryptIconCls[mode] || '';
if (iconCls !== '') {
icon = `<i class="fa fa-${iconCls}"></i> `;
}
if (file.size) {
size = ` (${Proxmox.Utils.format_size(file.size)})`;

View File

@ -125,7 +125,7 @@ Ext.define('PBS.MainView', {
},
control: {
'button[reference=logoutButton]': {
'[reference=logoutButton]': {
click: 'logout'
}
},
@ -133,7 +133,8 @@ Ext.define('PBS.MainView', {
init: function(view) {
var me = this;
me.lookupReference('usernameinfo').update({username:Proxmox.UserName});
PBS.data.RunningTasksStore.startUpdate();
me.lookupReference('usernameinfo').setText(Proxmox.UserName);
// show login on requestexception
// fixme: what about other errors
@ -189,7 +190,7 @@ Ext.define('PBS.MainView', {
type: 'hbox',
align: 'middle'
},
margin: '2 5 2 5',
margin: '2 0 2 5',
height: 38,
items: [
{
@ -197,7 +198,8 @@ Ext.define('PBS.MainView', {
prefix: '',
},
{
xtype: 'versioninfo'
padding: '0 0 0 5',
xtype: 'versioninfo',
},
{
padding: 5,
@ -208,12 +210,6 @@ Ext.define('PBS.MainView', {
flex: 1,
baseCls: 'x-plain',
},
{
baseCls: 'x-plain',
reference: 'usernameinfo',
padding: '0 5',
tpl: Ext.String.format(gettext("You are logged in as {0}"), "'{username}'")
},
{
xtype: 'button',
baseCls: 'x-btn',
@ -224,11 +220,27 @@ Ext.define('PBS.MainView', {
margin: '0 5 0 0',
},
{
reference: 'logoutButton',
xtype: 'pbsTaskButton',
margin: '0 5 0 0',
},
{
xtype: 'button',
iconCls: 'fa fa-sign-out',
text: gettext('Logout')
}
reference: 'usernameinfo',
style: {
// proxmox dark grey + light grey as border
backgroundColor: '#464d4d',
borderColor: '#ABBABA'
},
margin: '0 5 0 0',
iconCls: 'fa fa-user',
menu: [
{
reference: 'logoutButton',
iconCls: 'fa fa-sign-out',
text: gettext('Logout'),
},
],
},
]
},
{

View File

@ -8,6 +8,8 @@ JSSRC= \
form/UserSelector.js \
form/RemoteSelector.js \
form/DataStoreSelector.js \
data/RunningTasksStore.js \
button/TaskButton.js \
config/UserView.js \
config/RemoteView.js \
config/ACLView.js \

View File

@ -13,6 +13,45 @@ Ext.define('PBS.Utils', {
dataStorePrefix: 'DataStore-',
cryptmap: [
'none',
'mixed',
'sign-only',
'encrypt',
],
cryptText: [
Proxmox.Utils.noText,
gettext('Mixed'),
gettext('Signed'),
gettext('Encrypted'),
],
cryptIconCls: [
'',
'',
'certificate',
'lock',
],
calculateCryptMode: function(data) {
let mixed = data.mixed;
let encrypted = data.encrypt;
let signed = data['sign-only'];
let files = data.count;
if (mixed > 0) {
return PBS.Utils.cryptmap.indexOf('mixed');
} else if (files === encrypted) {
return PBS.Utils.cryptmap.indexOf('encrypt');
} else if (files === signed) {
return PBS.Utils.cryptmap.indexOf('sign-only');
} else if ((signed+encrypted) === 0) {
return PBS.Utils.cryptmap.indexOf('none');
} else {
return PBS.Utils.cryptmap.indexOf('mixed');
}
},
getDataStoreFromPath: function(path) {
return path.slice(PBS.Utils.dataStorePrefix.length);
},

www/button/TaskButton.js Normal file
View File

@ -0,0 +1,92 @@
Ext.define('PBS.TaskButton', {
extend: 'Ext.button.Button',
alias: 'widget.pbsTaskButton',
config: {
badgeText: '0',
badgeCls: '',
},
iconCls: 'fa fa-list',
userCls: 'pmx-has-badge',
text: gettext('Tasks'),
setText: function(value) {
let me = this;
me.realText = value;
let badgeText = me.getBadgeText();
let badgeCls = me.getBadgeCls();
let text = `${value} <span class="pmx-button-badge ${badgeCls}">${badgeText}</span>`;
return me.callParent([text]);
},
getText: function() {
let me = this;
return me.realText;
},
setBadgeText: function(value) {
let me = this;
me.badgeText = value.toString();
return me.setText(me.getText());
},
setBadgeCls: function(value) {
let me = this;
let res = me.callParent([value]);
let badgeText = me.getBadgeText();
me.setBadgeText(badgeText);
return res;
},
handler: function() {
let me = this;
if (me.grid.isVisible()) {
me.grid.setVisible(false);
} else {
me.grid.showBy(me, 'tr-br');
}
},
initComponent: function() {
let me = this;
me.grid = Ext.create({
xtype: 'pbsRunningTasks',
title: '',
hideHeaders: false,
floating: true,
width: 600,
bbar: [
'->',
{
xtype: 'button',
text: gettext('Show All Tasks'),
handler: function() {
var mainview = me.up('mainview');
mainview.getController().redirectTo('pbsServerAdministration:tasks');
me.grid.hide();
},
},
],
listeners: {
'taskopened': function() {
me.grid.hide();
},
},
});
me.callParent();
me.mon(me.grid.getStore().rstore, 'load', function(store, records, success) {
if (!success) return;
let count = records.length;
let text = count > 99 ? '99+' : count.toString();
let cls = count > 0 ? 'active': '';
me.setBadgeText(text);
me.setBadgeCls(cls);
});
},
});

View File

@ -190,3 +190,21 @@ p.logs {
visibility: hidden;
width: 5px;
}
.pmx-has-badge .x-btn-inner {
padding: 0 0 0 5px;
min-width: 24px;
}
.pmx-button-badge {
display: inline-block;
font-weight: bold;
border-radius: 4px;
padding: 2px 3px;
min-width: 24px;
line-height: 1em;
}
.pmx-button-badge.active {
background-color: #464d4d;
}

View File

@ -18,6 +18,8 @@ Ext.define('PBS.RunningTasks', {
upid: record.data.upid,
endtime: record.data.endtime,
}).show();
view.fireEvent('taskopened', view, record.data.upid);
},
openTaskItemDblClick: function(grid, record) {
@ -54,20 +56,8 @@ Ext.define('PBS.RunningTasks', {
store: {
type: 'diff',
autoDestroy: true,
autoDestroyRstore: true,
sorters: 'starttime',
rstore: {
type: 'update',
autoStart: true,
interval: 3000,
storeid: 'pbs-running-tasks-dash',
model: 'proxmox-tasks',
proxy: {
type: 'proxmox',
// maybe separate api call?
url: '/api2/json/nodes/localhost/tasks?running=1'
},
},
rstore: PBS.data.RunningTasksStore,
},
columns: [

View File

@ -9,12 +9,27 @@ Ext.define('PBS.TaskSummary', {
render_count: function(value, md, record, rowindex, colindex) {
let cls = 'question';
let color = 'faded';
switch (colindex) {
case 1: cls = "times-circle critical"; break;
case 2: cls = "exclamation-circle warning"; break;
case 3: cls = "check-circle good"; break;
case 1:
cls = "times-circle";
color = "critical";
break;
case 2:
cls = "exclamation-circle";
color = "warning";
break;
case 3:
cls = "check-circle";
color = "good";
break;
default: break;
}
if (value < 1) {
color = "faded";
}
cls += " " + color;
return `<i class="fa fa-${cls}"></i> ${value}`;
},
},

View File

@ -0,0 +1,21 @@
Ext.define('PBS.data.RunningTasksStore', {
extend: 'Proxmox.data.UpdateStore',
singleton: true,
constructor: function(config) {
let me = this;
config = config || {};
Ext.apply(config, {
interval: 3000,
storeid: 'pbs-running-tasks-dash',
model: 'proxmox-tasks',
proxy: {
type: 'proxmox',
// maybe separate api call?
url: '/api2/json/nodes/localhost/tasks?running=1',
},
});
me.callParent([config]);
},
});

View File

@ -46,8 +46,9 @@ Ext.define('PBS.window.BackupFileDownloader', {
let me = this;
let combo = me.lookup('file');
let rec = combo.getStore().findRecord('filename', value, 0, false, true, true);
let canDownload = !rec.data.encrypted;
let canDownload = rec.data['crypt-mode'] !== 'encrypt';
me.lookup('encryptedHint').setVisible(!canDownload);
me.lookup('signedHint').setVisible(rec.data['crypt-mode'] === 'sign-only');
me.lookup('downloadBtn').setDisabled(!canDownload);
},
@ -88,7 +89,7 @@ Ext.define('PBS.window.BackupFileDownloader', {
emptyText: gettext('No file selected'),
fieldLabel: gettext('File'),
store: {
fields: ['filename', 'size', 'encrypted',],
fields: ['filename', 'size', 'crypt-mode',],
idProperty: ['filename'],
},
listConfig: {
@ -107,12 +108,25 @@ Ext.define('PBS.window.BackupFileDownloader', {
},
{
text: gettext('Encrypted'),
dataIndex: 'encrypted',
renderer: Proxmox.Utils.format_boolean,
dataIndex: 'crypt-mode',
renderer: function(value) {
let mode = -1;
if (value !== undefined) {
mode = PBS.Utils.cryptmap.indexOf(value);
}
return PBS.Utils.cryptText[mode] || Proxmox.Utils.unknownText;
}
},
],
},
},
{
xtype: 'displayfield',
userCls: 'pmx-hint',
reference: 'signedHint',
hidden: true,
value: gettext('Note: Signatures of signed files will not be verified on the server. Please use the client to do this.'),
},
{
xtype: 'displayfield',
userCls: 'pmx-hint',