Compare commits


40 Commits

Author SHA1 Message Date
1ff840ffad bump version to 0.7.0-1 2020-07-07 07:40:22 +02:00
7443a6e092 src/client/remote_chunk_reader.rs: implement clone for RemoteChunkReader 2020-07-07 07:34:58 +02:00
3a9988638b docs: move todolist to own document, don't link in release build
It is always built for HTML, but not linked if the devbuild tag isn't
set. This tag is set in the Makefile if the $(BUILD_MODE) variable
isn't "release".

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-06 14:44:53 +02:00
96ee857752 client: add --encryption boolean parameter
This can be used to explicitly disable encryption even if a
default key file exists in ~/.config.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
887018bb79 client: use default encryption key if it is available
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
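Taken together, the two key-handling commits above amount to a simple selection rule. The following is a hedged sketch of that logic, not code from this changeset: resolve_keyfile is an illustrative name, while optional_default_key_path refers to the helper visible in the key module diff further down.

// Illustrative sketch only - not the actual CLI code from this changeset.
use std::path::PathBuf;
use anyhow::{bail, Error};

fn resolve_keyfile(
    keyfile: Option<PathBuf>,      // --keyfile parameter
    encryption: Option<bool>,      // the new --encryption boolean parameter
    default_key: Option<PathBuf>,  // e.g. from optional_default_key_path()
) -> Result<Option<PathBuf>, Error> {
    match (keyfile, encryption) {
        // --encryption false: no encryption, even if a default key exists
        (None, Some(false)) => Ok(None),
        (Some(_), Some(false)) => bail!("--keyfile and --encryption false conflict"),
        // an explicit key file always wins
        (Some(path), _) => Ok(Some(path)),
        // otherwise fall back to ~/.config/proxmox-backup/encryption-key.json
        (None, _) => Ok(default_key),
    }
}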
9696f5193b client: move key management into separate module
and use api macro for methods and Kdf type

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 14:36:04 +02:00
e13c4f66bb minor style & whitespace fixups
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-07-06 10:55:25 +02:00
8a25809573 docs: sync up copyright years
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:57:47 +02:00
d87b193b0b docs: todo: avoid leaking build details, link only
One can just search for them... If really wanted, we could set it to
true for dev builds (i.e., no DEB_VERSION defined)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:54:00 +02:00
ea5289e869 d/rules: do not compress .pdf files
as otherwise the docs .pdf is a PITA to use for some end users..

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:53:04 +02:00
1f6a4f587a docs: do not hardcode version
use the Debian package ones; if not defined, we're doing a dev build

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-04 17:51:58 +02:00
705b2293ec d/control: add missing dependencies for lvm, smartmontools and ZFS
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 19:37:43 +02:00
d2c7ef09ba docs: rework and add a bit to introduction
Contributed-by: Daniela Häsler <daniela@proxmox.com>
[ discussed and edited some parts live with me, Thomas ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:58:17 +02:00
27f86f997e docs: fix index title
Contributed-by: Daniela Häsler <daniela@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:57:04 +02:00
fc93d38076 ui: ZFS create: set name-field minLength to 3 to match backend
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:03:51 +02:00
a5a85d41ff ui: ZFS create: use correct typeParameter name for disk selector
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 18:00:12 +02:00
08cb2038bd api: disks: indentation fixup
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:59:30 +02:00
6f711c1737 ui: ZFS list: fix details top-bar button handler
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:20:33 +02:00
42ec9f577f ui: buildsys: actually include PBS.window.ZFSCreate component in source
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-03 17:19:59 +02:00
9de69cdb1a src/bin/proxmox_backup_client/catalog.rs: split out catalog code 2020-07-03 16:45:47 +02:00
bd260569d3 ui: fix glitch on some zoom steps
if the baseCls is not 'x-plain', the background of the flex
element is white, and on some zoom steps it gets taller
than one pixel and appears as a white line

giving it the plain baseCls means it does not get any
background color and is always invisible

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 13:05:19 +02:00
36cb4b30ef add beta text with link to bugtracker
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 13:05:08 +02:00
4e717240bf bump version to 0.6.0-1 2020-07-03 09:46:19 +02:00
e9764238df make ReadChunk not require mutable self.
That way we can reduce lock contentions because we lock for much shorter
times.
2020-07-03 07:37:29 +02:00
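A hedged sketch of the pattern this commit describes (simplified names, not the actual trait): with &self receivers the cache moves behind a Mutex, and the lock is held only around the map access rather than for the whole chunk read. Compare the RemoteChunkReader diff further down.

// Illustrative only.
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

struct CachedReader {
    cache: Arc<Mutex<HashMap<[u8; 32], Vec<u8>>>>,
}

impl CachedReader {
    // Note: &self, not &mut self - many callers can share the reader.
    fn read_cached(&self, digest: &[u8; 32], fetch: impl Fn() -> Vec<u8>) -> Vec<u8> {
        if let Some(data) = self.cache.lock().unwrap().get(digest) {
            return data.clone(); // lock released as soon as the lookup ends
        }
        let data = fetch(); // the expensive read happens without holding the lock
        self.cache.lock().unwrap().insert(*digest, data.clone());
        data
    }
}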
26f499b17b ui: increase timeout for snapshot listing
the API call can take a very long time (for now); until we can
improve that, increase the timeout from the default of 30s

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-07-03 06:14:21 +02:00
cc7995ac40 src/bin/proxmox_backup_client/task.rs: split out task command 2020-07-02 18:04:29 +02:00
43abba4b4f src/bin/proxmox_backup_client/mount.rs: split out mount code 2020-07-02 17:49:59 +02:00
58f950c546 ui: consistently spell Datastore without space between words
No hard feelings on 'Datastore' vs. 'Data Store', but consistency
is desired in such names.
Talked briefly with Dominik, who also slightly favored the one
without a space - so just go for that one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-02 17:20:41 +02:00
c426e65893 ui: disk create: sync and improve 'add-datastore' checkbox label
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-07-02 17:06:37 +02:00
caea8d611f proxmox-backup-client: add benchmark command
This is just a start; we need to add more useful things here...
2020-07-02 14:01:57 +02:00
7d0754a6d2 pxar: fixup 'vanished-file' logic a bit
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-30 14:41:42 +02:00
5afa0755ea pxar: fix missing newlines in warnings
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-30 14:37:20 +02:00
40b63186a6 DataStoreConfig.js: add verify button 2020-06-30 13:28:42 +02:00
8f6088c130 DataStoreContent.js: add verify button 2020-06-30 13:22:02 +02:00
2162e2c15d src/api2/admin/datastore.rs: avoid slash in UPID strings 2020-06-30 13:11:22 +02:00
0d5ab04a90 bump version to 0.5.0-1 2020-06-29 13:01:11 +02:00
4059285649 fix typo 2020-06-29 12:59:25 +02:00
2e079b8bf2 partially revert commit 1f82f9b7b5
Do it in a backwards-compatible way. Also, the code was wrong because
FixedIndexWriter still computed old-style csums...
2020-06-29 12:44:45 +02:00
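For reference, the two checksum styles involved, distilled from the compute_csum() implementations visible later in this diff (a sketch using the openssl crate as elsewhere in this codebase, not the actual trait methods): the dynamic index hashes each chunk's end offset together with its digest, while the fixed index is restored to hashing digests only.

// Sketch based on the DynamicIndexReader/FixedIndexReader diffs below.
fn dynamic_style_csum(chunks: &[(u64, [u8; 32])]) -> ([u8; 32], u64) {
    let mut csum = openssl::sha::Sha256::new();
    let mut chunk_end = 0u64;
    for (end, digest) in chunks {
        chunk_end = *end;
        csum.update(&chunk_end.to_le_bytes()); // offset is part of the hash
        csum.update(digest);
    }
    (csum.finish(), chunk_end)
}

fn fixed_style_csum(chunks: &[(u64, [u8; 32])]) -> ([u8; 32], u64) {
    let mut csum = openssl::sha::Sha256::new();
    let mut chunk_end = 0u64;
    for (end, digest) in chunks {
        chunk_end = *end;
        csum.update(digest); // old style: digests only, for compatibility
    }
    (csum.finish(), chunk_end)
}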
4ff2c9b832 ui: allow to Forget (delete) backup snapshots. 2020-06-26 15:58:06 +02:00
a8e2940ff3 pxar: deal with files changing size during archiving
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-26 11:49:51 +02:00
37 changed files with 1462 additions and 924 deletions

View File

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "0.4.0"
version = "0.7.0"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"
@ -38,7 +38,7 @@ pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pathpatterns = "0.1.1"
proxmox = { version = "0.1.41", features = [ "sortable-macro", "api-macro" ] }
proxmox = { version = "0.1.42", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro" ] }
proxmox-fuse = "0.1.0"

View File

@ -37,6 +37,8 @@ CARGO ?= cargo
COMPILED_BINS := \
$(addprefix $(COMPILEDIR)/,$(USR_BIN) $(USR_SBIN) $(SERVICE_BIN))
export DEB_VERSION DEB_VERSION_UPSTREAM
SERVER_DEB=${PACKAGE}-server_${DEB_VERSION}_${ARCH}.deb
CLIENT_DEB=${PACKAGE}-client_${DEB_VERSION}_${ARCH}.deb
DOC_DEB=${PACKAGE}-docs_${DEB_VERSION}_all.deb

46
debian/changelog vendored
View File

@ -1,3 +1,49 @@
rust-proxmox-backup (0.7.0-1) unstable; urgency=medium
* implement clone for RemoteChunkReader
* improve docs
* client: add --encryption boolean parameter
* client: use default encryption key if it is available
* d/rules: do not compress .pdf files
* ui: various fixes
* add beta text with link to bugtracker
-- Proxmox Support Team <support@proxmox.com> Tue, 07 Jul 2020 07:40:05 +0200
rust-proxmox-backup (0.6.0-1) unstable; urgency=medium
* make ReadChunk not require mutable self.
* ui: increase timeout for snapshot listing
* ui: consistently spell Datastore without space between words
* ui: disk create: sync and improve 'add-datastore' checkbox label
* proxmox-backup-client: add benchmark command
* pxar: fixup 'vanished-file' logic a bit
* ui: add verify button
-- Proxmox Support Team <support@proxmox.com> Fri, 03 Jul 2020 09:45:52 +0200
rust-proxmox-backup (0.5.0-1) unstable; urgency=medium
* partially revert commit 1f82f9b7b5d231da22a541432d5617cb303c0000
* ui: allow to Forget (delete) backup snapshots
* pxar: deal with files changing size during archiving
-- Proxmox Support Team <support@proxmox.com> Mon, 29 Jun 2020 13:00:54 +0200
rust-proxmox-backup (0.4.0-1) unstable; urgency=medium
* change api for incremental backups mode

3
debian/control.in vendored
View File

@ -3,11 +3,14 @@ Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
lvm2,
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.2-4),
smartmontools,
${misc:Depends},
${shlibs:Depends},
Recommends: zfsutils-linux,
Description: Proxmox Backup Server daemon with tools and GUI
This package contains the Proxmox Backup Server daemons and related
tools. This includes a web-based graphical user interface.

3
debian/rules vendored
View File

@ -45,3 +45,6 @@ override_dh_installsystemd:
# TODO: remove once available (Debian 11 ?)
override_dh_dwz:
dh_dwz --no-dwz-multifile
override_dh_compress:
dh_compress -X.pdf

View File

@ -1,11 +1,5 @@
include ../defines.mk
ifeq ($(BUILD_MODE), release)
COMPILEDIR := ../target/release
else
COMPILEDIR := ../target/debug
endif
GENERATED_SYNOPSIS := \
proxmox-backup-client/synopsis.rst \
proxmox-backup-client/catalog-shell-synopsis.rst \
@ -26,6 +20,15 @@ SPHINXOPTS =
SPHINXBUILD = sphinx-build
BUILDDIR = output
ifeq ($(BUILD_MODE), release)
COMPILEDIR := ../target/release
SPHINXOPTS += -t release
else
COMPILEDIR := ../target/debug
SPHINXOPTS += -t devbuild
endif
# Sphinx internal variables.
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) .

View File

@ -17,7 +17,7 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
@ -45,8 +45,11 @@ PygmentsBridge.latex_formatter = CustomLatexFormatter
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["sphinx.ext.graphviz", "sphinx.ext.todo"]
todo_link_only = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@ -76,9 +79,11 @@ author = 'Proxmox Support Team'
# built documents.
#
# The short X.Y version.
version = '0.2'
vstr = lambda s: '<devbuild>' if s is None else str(s)
version = vstr(os.getenv('DEB_VERSION_UPSTREAM'))
# The full version, including alpha/beta/rc tags.
release = '0.2-1'
release = vstr(os.getenv('DEB_VERSION'))
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.

View File

@ -1,18 +1,15 @@
.. Proxmox Backup documentation master file
Welcome to Proxmox Backup's documentation!
==========================================
Welcome to the Proxmox Backup documentation!
============================================
Copyright (C) 2019 Proxmox Server Solutions GmbH
Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with no
Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A
copy of the license is included in the section entitled "GNU Free
Documentation License".
.. todolist::
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no Invariant Sections, no
Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included
in the section entitled "GNU Free Documentation License".
.. toctree::
@ -37,5 +34,14 @@ Documentation License".
glossary.rst
GFDL.rst
.. only:: html and devbuild
.. toctree::
:maxdepth: 2
:caption: Developer Appendix
todos.rst
* :ref:`genindex`

View File

@ -1,57 +1,61 @@
Introduction
============
This documentation is written in :term:`reStructuredText` and formatted with :term:`Sphinx`.
This documentation is written in :term:`reStructuredText` and formatted with
:term:`Sphinx`.
What is Proxmox Backup
----------------------
What is Proxmox Backup Server
-----------------------------
Proxmox Backup is an enterprise class client-server backup software,
specially optimized for the `Proxmox Virtual Environment`_ to backup
:term:`virtual machine`\ s and :term:`container`\ s. It is also
possible to backup physical hosts.
Proxmox Backup Server is enterprise-class client-server backup software that
backs up :term:`virtual machine`\ s, :term:`container`\ s, and physical hosts.
It is specially optimized for the `Proxmox Virtual Environment`_ platform and
allows you to back up your data securely, even between remote sites, providing
easy management with a web-based user interface.
It supports deduplication, compression and authenticated encryption
(AE_). Using :term:`Rust` as implementation language guarantees high
Proxmox Backup Server supports deduplication, compression, and authenticated
encryption (AE_). Using :term:`Rust` as implementation language guarantees high
performance, low resource usage, and a safe, high quality code base.
Encryption is done at the client side. This makes backups to not fully
trusted targets possible.
It features strong encryption done on the client side. Thus, it's possible to
back up data to not fully trusted targets.
Architecture
------------
Proxmox Backup uses a `Client-server model`_. The server is
responsible to store the backup data and provides an API to create
backups and restore data. It is possible to manage disks and
other server side resources using this API.
Proxmox Backup Server uses a `client-server model`_. The server stores the
backup data and provides an API to create backups and restore data. With the
API it's also possible to manage disks and other server side resources.
A backup client uses this API to access the backed up data,
i.e. ``proxmox-backup-client`` is a command line tool to create
backups and restore data. We deliver an integrated client for
QEMU_ with `Proxmox Virtual Environment`_.
The backup client uses this API to access the backed up data. With the command
line tool ``proxmox-backup-client`` you can create backups and restore data.
For QEMU_ with `Proxmox Virtual Environment`_ we deliver an integrated client.
A single backup is allowed to contain several archives. For example,
when you backup a :term:`virtual machine`, each disk is stored as a
separate archive inside that backup. The VM configuration also gets an
extra file. This way, it is easy to access and restore important parts
of the backup without having to scan the whole backup.
A single backup is allowed to contain several archives. For example, when you
backup a :term:`virtual machine`, each disk is stored as a separate archive
inside that backup. The VM configuration itself is stored as an extra file.
This way, it is easy to access and restore only important parts of the backup
without the need to scan the whole backup.
Main Features
-------------
:Proxmox VE: The `Proxmox Virtual Environment`_ is fully
supported. You can backup :term:`virtual machine`\ s and
:Support for Proxmox VE: The `Proxmox Virtual Environment`_ is fully
supported and you can easily backup :term:`virtual machine`\ s and
:term:`container`\ s.
:GUI: We provide a graphical, web based user interface.
:Performance: The whole software stack is written in :term:`Rust`,
to provide high speed and memory efficiency.
:Deduplication: Incremental backups produce large amounts of duplicate
data. The deduplication layer removes that redundancy and makes
incremental backups small and space efficient.
:Deduplication: Periodic backups produce large amounts of duplicate
data. The deduplication layer avoids redundancy and minimizes the used
storage space.
:Incremental backups: Changes between backups are typically low. Reading and
sending only the delta reduces storage and network impact of backups.
:Data Integrity: The built in `SHA-256`_ checksum algorithm assures the
accuracy and consistency of your backups.
@ -59,43 +63,43 @@ Main Features
:Remote Sync: It is possible to efficiently synchronize data to remote
sites. Only deltas containing new data are transferred.
:Performance: The whole software stack is written in :term:`Rust`,
to provide high speed and memory efficiency.
:Compression: Ultra fast Zstandard_ compression is able to compress
:Compression: The ultra fast Zstandard_ compression is able to compress
several gigabytes of data per second.
:Encryption: Backups can be encrypted client-side using AES-256 in
:Encryption: Backups can be encrypted on the client-side using AES-256 in
GCM_ mode. This authenticated encryption mode (AE_) provides very
high performance on modern hardware.
:Open Source: No secrets. You have access to all the source code.
:Web interface: Manage Proxmox backups with the integrated web-based user
interface.
:Support: Commercial support options are available from `Proxmox`_.
:Open Source: No secrets. Proxmox Backup Server is free and open-source
software. The source code is licensed under AGPL, v3.
:Support: Enterprise support is available from `Proxmox`_.
Why Backup?
-----------
Reasons for Data Backup?
------------------------
The primary purpose of a backup is to protect against data loss. Data
loss can be caused by faulty hardware, but also by human error.
The main purpose of a backup is to protect against data loss. Data loss can be
caused by faulty hardware but also by human error.
A common mistake is to delete a file or folder which is still
required. Virtualization can amplify this problem. It is now
easy to delete a whole virtual machine by pressing a single button.
A common mistake is to accidentally delete a file or folder which is still
required. Virtualization can even amplify this problem; it easily happens that
a whole virtual machine is deleted by just pressing a single button.
Backups can serve as a toolkit for administrators to temporarily
store data. For example, it is common practice to perform full backups
before installing major software updates. If something goes wrong, you
can restore the previous state.
For administrators, backups can serve as a useful toolkit for temporarily
storing data. For example, it is common practice to perform full backups before
installing major software updates. If something goes wrong, you can easily
restore the previous state.
Another reason for backups are legal requirements. Some data must be
kept in a safe place for several years by law, so that it can be accessed if
required.
Another reason for backups is legal requirements. Some data, especially
business records, must be kept in a safe place for several years by law, so
that they can be accessed if required.
Data loss can be very costly as it can severely restrict your
business. Therefore, make sure that you perform a backup regularly
and run restore tests.
In general, data loss is very costly as it can severely damage your business.
Therefore, ensure that you perform regular backups and run restore tests.
Software Stack
@ -107,14 +111,14 @@ Software Stack
License
-------
Copyright (C) 2019 Proxmox Server Solutions GmbH
Copyright (C) 2019-2020 Proxmox Server Solutions GmbH
This software is written by Proxmox Server Solutions GmbH <support@proxmox.com>
Proxmox Backup is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
Proxmox Backup Server is free and open source software: you can use it,
redistribute it, and/or modify it under the terms of the GNU Affero General
Public License as published by the Free Software Foundation, either version 3
of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but
``WITHOUT ANY WARRANTY``; without even the implied warranty of

View File

@ -441,13 +441,13 @@ pub fn verify(
match (backup_type, backup_id, backup_time) {
(Some(backup_type), Some(backup_id), Some(backup_time)) => {
worker_id = format!("{}_{}_{}_{:08X}", store, backup_type, backup_id, backup_time);
let dir = BackupDir::new(backup_type, backup_id, backup_time);
worker_id = format!("{}_{}", store, dir);
backup_dir = Some(dir);
}
(Some(backup_type), Some(backup_id), None) => {
worker_id = format!("{}_{}_{}", store, backup_type, backup_id);
let group = BackupGroup::new(backup_type, backup_id);
worker_id = format!("{}_{}", store, group);
backup_group = Some(group);
}
(None, None, None) => {

View File

@ -26,10 +26,10 @@ pub mod zfs;
schema: NODE_SCHEMA,
},
skipsmart: {
description: "Skip smart checks.",
type: bool,
optional: true,
default: false,
description: "Skip smart checks.",
type: bool,
optional: true,
default: false,
},
"usage-type": {
type: DiskUsageType,

View File

@ -36,7 +36,7 @@ impl<S: AsyncReadChunk, I: IndexFile> AsyncIndexReader<S, I> {
Self {
store: Some(store),
index,
read_buffer: Vec::with_capacity(1024*1024),
read_buffer: Vec::with_capacity(1024 * 1024),
current_chunk_idx: 0,
current_chunk_digest: [0u8; 32],
state: AsyncIndexReaderState::NoData,
@ -44,9 +44,10 @@ impl<S: AsyncReadChunk, I: IndexFile> AsyncIndexReader<S, I> {
}
}
impl<S, I> AsyncRead for AsyncIndexReader<S, I> where
S: AsyncReadChunk + Unpin + 'static,
I: IndexFile + Unpin
impl<S, I> AsyncRead for AsyncIndexReader<S, I>
where
S: AsyncReadChunk + Unpin + Sync + 'static,
I: IndexFile + Unpin,
{
fn poll_read(
self: Pin<&mut Self>,
@ -57,7 +58,7 @@ I: IndexFile + Unpin
loop {
match &mut this.state {
AsyncIndexReaderState::NoData => {
if this.current_chunk_idx >= this.index.index_count() {
if this.current_chunk_idx >= this.index.index_count() {
return Poll::Ready(Ok(0));
}
@ -67,18 +68,18 @@ I: IndexFile + Unpin
.ok_or(io_format_err!("could not get digest"))?
.clone();
if digest == this.current_chunk_digest {
if digest == this.current_chunk_digest {
this.state = AsyncIndexReaderState::HaveData(0);
continue;
}
this.current_chunk_digest = digest;
let mut store = match this.store.take() {
let store = match this.store.take() {
Some(store) => store,
None => {
return Poll::Ready(Err(io_format_err!("could not find store")));
},
}
};
let future = async move {
@ -88,7 +89,7 @@ I: IndexFile + Unpin
};
this.state = AsyncIndexReaderState::WaitForData(future.boxed());
},
}
AsyncIndexReaderState::WaitForData(ref mut future) => {
match ready!(future.as_mut().poll(cx)) {
Ok((store, mut chunk_data)) => {
@ -96,12 +97,12 @@ I: IndexFile + Unpin
this.read_buffer.append(&mut chunk_data);
this.state = AsyncIndexReaderState::HaveData(0);
this.store = Some(store);
},
}
Err(err) => {
return Poll::Ready(Err(io_err_other(err)));
},
}
};
},
}
AsyncIndexReaderState::HaveData(offset) => {
let offset = *offset;
let len = this.read_buffer.len();
@ -111,7 +112,7 @@ I: IndexFile + Unpin
buf.len()
};
buf[0..n].copy_from_slice(&this.read_buffer[offset..offset+n]);
buf[0..n].copy_from_slice(&this.read_buffer[offset..(offset + n)]);
if offset + n == len {
this.state = AsyncIndexReaderState::NoData;
this.current_chunk_idx += 1;
@ -120,7 +121,7 @@ I: IndexFile + Unpin
}
return Poll::Ready(Ok(n));
},
}
}
}
}
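For orientation when reading the reformatted poll_read() above: the reader cycles through three states. A condensed model with illustrative names follows (the real enum also carries the boxed fetch future in its WaitForData variant).

// NoData: pick the next chunk; reuse the buffer if the digest is unchanged,
//         otherwise start an async chunk fetch and switch to WaitForData.
// WaitForData: poll the fetch; on success fill read_buffer, go to HaveData(0).
// HaveData(offset): copy from read_buffer into the caller's buf; once drained,
//         advance current_chunk_idx and fall back to NoData.
enum ReaderState {
    NoData,
    WaitForData,
    HaveData(usize),
}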

View File

@ -189,6 +189,19 @@ impl IndexFile for DynamicIndexReader {
}
}
fn compute_csum(&self) -> ([u8; 32], u64) {
let mut csum = openssl::sha::Sha256::new();
let mut chunk_end = 0;
for pos in 0..self.index_count() {
let info = self.chunk_info(pos).unwrap();
chunk_end = info.range.end;
csum.update(&chunk_end.to_le_bytes());
csum.update(&info.digest);
}
let csum = csum.finish();
(csum, chunk_end)
}
#[allow(clippy::cast_ptr_alignment)]
fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo> {
if pos >= self.index.len() {

View File

@ -206,6 +206,19 @@ impl IndexFile for FixedIndexReader {
digest: *digest,
})
}
fn compute_csum(&self) -> ([u8; 32], u64) {
let mut csum = openssl::sha::Sha256::new();
let mut chunk_end = 0;
for pos in 0..self.index_count() {
let info = self.chunk_info(pos).unwrap();
chunk_end = info.range.end;
csum.update(&info.digest);
}
let csum = csum.finish();
(csum, chunk_end)
}
}
pub struct FixedIndexWriter {

View File

@ -23,19 +23,7 @@ pub trait IndexFile {
fn chunk_info(&self, pos: usize) -> Option<ChunkReadInfo>;
/// Compute index checksum and size
fn compute_csum(&self) -> ([u8; 32], u64) {
let mut csum = openssl::sha::Sha256::new();
let mut chunk_end = 0;
for pos in 0..self.index_count() {
let info = self.chunk_info(pos).unwrap();
chunk_end = info.range.end;
csum.update(&chunk_end.to_le_bytes());
csum.update(&info.digest);
}
let csum = csum.finish();
(csum, chunk_end)
}
fn compute_csum(&self) -> ([u8; 32], u64);
/// Returns most often used chunks
fn find_most_used_chunks(&self, max: usize) -> HashMap<[u8; 32], usize> {

View File

@ -11,10 +11,10 @@ use super::datastore::DataStore;
/// The ReadChunk trait allows reading backup data chunks (local or remote)
pub trait ReadChunk {
/// Returns the encoded chunk data
fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error>;
fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error>;
/// Returns the decoded chunk data
fn read_chunk(&mut self, digest: &[u8; 32]) -> Result<Vec<u8>, Error>;
fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error>;
}
#[derive(Clone)]
@ -33,7 +33,7 @@ impl LocalChunkReader {
}
impl ReadChunk for LocalChunkReader {
fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let (path, _) = self.store.chunk_path(digest);
let raw_data = proxmox::tools::fs::file_get_contents(&path)?;
let chunk = DataBlob::from_raw(raw_data)?;
@ -42,7 +42,7 @@ impl ReadChunk for LocalChunkReader {
Ok(chunk)
}
fn read_chunk(&mut self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
let chunk = ReadChunk::read_raw_chunk(self, digest)?;
let raw_data = chunk.decode(self.crypt_config.as_ref().map(Arc::as_ref))?;
@ -56,20 +56,20 @@ impl ReadChunk for LocalChunkReader {
pub trait AsyncReadChunk: Send {
/// Returns the encoded chunk data
fn read_raw_chunk<'a>(
&'a mut self,
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>>;
/// Returns the decoded chunk data
fn read_chunk<'a>(
&'a mut self,
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>>;
}
impl AsyncReadChunk for LocalChunkReader {
fn read_raw_chunk<'a>(
&'a mut self,
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>> {
Box::pin(async move{
@ -84,7 +84,7 @@ impl AsyncReadChunk for LocalChunkReader {
}
fn read_chunk<'a>(
&'a mut self,
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>> {
Box::pin(async move {

File diff suppressed because it is too large.

View File

@ -0,0 +1,81 @@
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::{Error};
use serde_json::Value;
use chrono::{TimeZone, Utc};
use proxmox::api::{ApiMethod, RpcEnvironment};
use proxmox::api::api;
use proxmox_backup::backup::{
load_and_decrypt_key,
CryptConfig,
};
use proxmox_backup::client::*;
use crate::{
KEYFILE_SCHEMA, REPO_URL_SCHEMA,
extract_repository_from_value,
record_repository,
connect,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
keyfile: {
schema: KEYFILE_SCHEMA,
optional: true,
},
}
}
)]
/// Run benchmark tests
pub async fn benchmark(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let keyfile = param["keyfile"].as_str().map(PathBuf::from);
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
let crypt_config = CryptConfig::new(key)?;
Some(Arc::new(crypt_config))
}
};
let backup_time = Utc.timestamp(Utc::now().timestamp(), 0);
let client = connect(repo.host(), repo.user())?;
record_repository(&repo);
let client = BackupWriter::start(
client,
crypt_config.clone(),
repo.store(),
"host",
"benshmark",
backup_time,
false,
).await?;
println!("Start upload speed test");
let speed = client.upload_speedtest().await?;
println!("Upload speed: {} MiB/s", speed);
Ok(())
}

View File

@ -0,0 +1,245 @@
use std::os::unix::fs::OpenOptionsExt;
use std::io::{Seek, SeekFrom};
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use proxmox::api::{api, cli::*};
use proxmox_backup::tools;
use proxmox_backup::client::*;
use crate::{
REPO_URL_SCHEMA,
extract_repository_from_value,
record_repository,
load_and_decrypt_key,
api_datastore_latest_snapshot,
complete_repository,
complete_backup_snapshot,
complete_group_or_snapshot,
complete_pxar_archive_name,
connect,
BackupDir,
BackupGroup,
BufferedDynamicReader,
BufferedDynamicReadAt,
CatalogReader,
CATALOG_NAME,
CryptConfig,
DynamicIndexReader,
IndexFile,
Shell,
};
use crate::key::get_encryption_key_password;
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
snapshot: {
type: String,
description: "Snapshot path.",
},
}
}
)]
/// Dump catalog.
async fn dump_catalog(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
let keyfile = param["keyfile"].as_str().map(PathBuf::from);
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
Some(Arc::new(CryptConfig::new(key)?))
}
};
let client = connect(repo.host(), repo.user())?;
let client = BackupReader::start(
client,
crypt_config.clone(),
repo.store(),
&snapshot.group().backup_type(),
&snapshot.group().backup_id(),
snapshot.backup_time(),
true,
).await?;
let manifest = client.download_manifest().await?;
let index = client.download_dynamic_index(&manifest, CATALOG_NAME).await?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let mut reader = BufferedDynamicReader::new(index, chunk_reader);
let mut catalogfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
std::io::copy(&mut reader, &mut catalogfile)
.map_err(|err| format_err!("unable to download catalog - {}", err))?;
catalogfile.seek(SeekFrom::Start(0))?;
let mut catalog_reader = CatalogReader::new(catalogfile);
catalog_reader.dump()?;
record_repository(&repo);
Ok(Value::Null)
}
#[api(
input: {
properties: {
"snapshot": {
type: String,
description: "Group/Snapshot path.",
},
"archive-name": {
type: String,
description: "Backup archive name.",
},
"repository": {
optional: true,
schema: REPO_URL_SCHEMA,
},
"keyfile": {
optional: true,
type: String,
description: "Path to encryption key.",
},
},
},
)]
/// Shell to interactively inspect and restore snapshots.
async fn catalog_shell(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?;
let path = tools::required_string_param(&param, "snapshot")?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let (backup_type, backup_id, backup_time) = if path.matches('/').count() == 1 {
let group: BackupGroup = path.parse()?;
api_datastore_latest_snapshot(&client, repo.store(), group).await?
} else {
let snapshot: BackupDir = path.parse()?;
(snapshot.group().backup_type().to_owned(), snapshot.group().backup_id().to_owned(), snapshot.backup_time())
};
let keyfile = param["keyfile"].as_str().map(|p| PathBuf::from(p));
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
Some(Arc::new(CryptConfig::new(key)?))
}
};
let server_archive_name = if archive_name.ends_with(".pxar") {
format!("{}.didx", archive_name)
} else {
bail!("Can only mount pxar archives.");
};
let client = BackupReader::start(
client,
crypt_config.clone(),
repo.store(),
&backup_type,
&backup_id,
backup_time,
true,
).await?;
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
let manifest = client.download_manifest().await?;
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config.clone(), most_used);
let reader = BufferedDynamicReader::new(index, chunk_reader);
let archive_size = reader.archive_size();
let reader: proxmox_backup::pxar::fuse::Reader =
Arc::new(BufferedDynamicReadAt::new(reader));
let decoder = proxmox_backup::pxar::fuse::Accessor::new(reader, archive_size).await?;
client.download(CATALOG_NAME, &mut tmpfile).await?;
let index = DynamicIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read catalog index - {}", err))?;
// Note: do not use values stored in index (not trusted) - instead, compute them again
let (csum, size) = index.compute_csum();
manifest.verify_file(CATALOG_NAME, &csum, size)?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let mut reader = BufferedDynamicReader::new(index, chunk_reader);
let mut catalogfile = std::fs::OpenOptions::new()
.write(true)
.read(true)
.custom_flags(libc::O_TMPFILE)
.open("/tmp")?;
std::io::copy(&mut reader, &mut catalogfile)
.map_err(|err| format_err!("unable to download catalog - {}", err))?;
catalogfile.seek(SeekFrom::Start(0))?;
let catalog_reader = CatalogReader::new(catalogfile);
let state = Shell::new(
catalog_reader,
&server_archive_name,
decoder,
).await?;
println!("Starting interactive shell");
state.shell().await?;
record_repository(&repo);
Ok(())
}
pub fn catalog_mgmt_cli() -> CliCommandMap {
let catalog_shell_cmd_def = CliCommand::new(&API_METHOD_CATALOG_SHELL)
.arg_param(&["snapshot", "archive-name"])
.completion_cb("repository", complete_repository)
.completion_cb("archive-name", complete_pxar_archive_name)
.completion_cb("snapshot", complete_group_or_snapshot);
let catalog_dump_cmd_def = CliCommand::new(&API_METHOD_DUMP_CATALOG)
.arg_param(&["snapshot"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_backup_snapshot);
CliCommandMap::new()
.insert("dump", catalog_dump_cmd_def)
.insert("shell", catalog_shell_cmd_def)
}

View File

@ -0,0 +1,277 @@
use std::path::PathBuf;
use anyhow::{bail, Error};
use chrono::{Local, TimeZone};
use serde::{Deserialize, Serialize};
use xdg::BaseDirectories;
use proxmox::api::api;
use proxmox::api::cli::{CliCommand, CliCommandMap};
use proxmox::sys::linux::tty;
use proxmox::tools::fs::{file_get_contents, replace_file, CreateOptions};
use proxmox_backup::backup::{
encrypt_key_with_passphrase, load_and_decrypt_key, store_key_config, KeyConfig,
};
use proxmox_backup::tools;
pub fn master_pubkey_path() -> Result<PathBuf, Error> {
let base = BaseDirectories::with_prefix("proxmox-backup")?;
// usually $HOME/.config/proxmox-backup/master-public.pem
let path = base.place_config_file("master-public.pem")?;
Ok(path)
}
pub fn default_encryption_key_path() -> Result<PathBuf, Error> {
let base = BaseDirectories::with_prefix("proxmox-backup")?;
// usually $HOME/.config/proxmox-backup/encryption-key.json
let path = base.place_config_file("encryption-key.json")?;
Ok(path)
}
pub fn get_encryption_key_password() -> Result<Vec<u8>, Error> {
// fixme: implement other input methods
use std::env::VarError::*;
match std::env::var("PBS_ENCRYPTION_PASSWORD") {
Ok(p) => return Ok(p.as_bytes().to_vec()),
Err(NotUnicode(_)) => bail!("PBS_ENCRYPTION_PASSWORD contains bad characters"),
Err(NotPresent) => {
// Try another method
}
}
// If we're on a TTY, query the user for a password
if tty::stdin_isatty() {
return Ok(tty::read_password("Encryption Key Password: ")?);
}
bail!("no password input mechanism available");
}
/// Convenience helper to get the default key file path only if it exists.
pub fn optional_default_key_path() -> Result<Option<PathBuf>, Error> {
let path = default_encryption_key_path()?;
Ok(if path.exists() {
Some(path)
} else {
None
})
}
#[api(
default: "scrypt",
)]
#[derive(Clone, Copy, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
/// Key derivation function for password protected encryption keys.
pub enum Kdf {
/// Do not encrypt the key.
None,
/// Encrypt the key with a password using SCrypt.
Scrypt,
}
impl Default for Kdf {
#[inline]
fn default() -> Self {
Kdf::Scrypt
}
}
#[api(
input: {
properties: {
kdf: {
type: Kdf,
optional: true,
},
path: {
description:
"Output file. Without this the key will become the new default encryption key.",
optional: true,
}
},
},
)]
/// Create a new encryption key.
fn create(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => default_encryption_key_path()?,
};
let kdf = kdf.unwrap_or_default();
let key = proxmox::sys::linux::random_data(32)?;
match kdf {
Kdf::None => {
let created = Local.timestamp(Local::now().timestamp(), 0);
store_key_config(
&path,
false,
KeyConfig {
kdf: None,
created,
modified: created,
data: key,
},
)?;
}
Kdf::Scrypt => {
// always read passphrase from tty
if !tty::stdin_isatty() {
bail!("unable to read passphrase - no tty");
}
let password = tty::read_and_verify_password("Encryption Key Password: ")?;
let key_config = encrypt_key_with_passphrase(&key, &password)?;
store_key_config(&path, false, key_config)?;
}
}
Ok(())
}
#[api(
input: {
properties: {
kdf: {
type: Kdf,
optional: true,
},
path: {
description: "Key file. Without this the default key's password will be changed.",
optional: true,
}
},
},
)]
/// Change the encryption key's password.
fn change_passphrase(kdf: Option<Kdf>, path: Option<String>) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
None => default_encryption_key_path()?,
};
let kdf = kdf.unwrap_or_default();
if !tty::stdin_isatty() {
bail!("unable to change passphrase - no tty");
}
let (key, created) = load_and_decrypt_key(&path, &get_encryption_key_password)?;
match kdf {
Kdf::None => {
let modified = Local.timestamp(Local::now().timestamp(), 0);
store_key_config(
&path,
true,
KeyConfig {
kdf: None,
created, // keep original value
modified,
data: key.to_vec(),
},
)?;
}
Kdf::Scrypt => {
let password = tty::read_and_verify_password("New Password: ")?;
let mut new_key_config = encrypt_key_with_passphrase(&key, &password)?;
new_key_config.created = created; // keep original value
store_key_config(&path, true, new_key_config)?;
}
}
Ok(())
}
#[api(
input: {
properties: {
path: {
description: "Path to the PEM formatted RSA public key.",
},
},
},
)]
/// Import an RSA public key used to put an encrypted version of the symmetric backup encryption
/// key onto the backup server along with each backup.
fn import_master_pubkey(path: String) -> Result<(), Error> {
let pem_data = file_get_contents(&path)?;
if let Err(err) = openssl::pkey::PKey::public_key_from_pem(&pem_data) {
bail!("Unable to decode PEM data - {}", err);
}
let target_path = master_pubkey_path()?;
replace_file(&target_path, &pem_data, CreateOptions::new())?;
println!("Imported public master key to {:?}", target_path);
Ok(())
}
#[api]
/// Create an RSA public/private key pair used to put an encrypted version of the symmetric backup
/// encryption key onto the backup server along with each backup.
fn create_master_key() -> Result<(), Error> {
// we need a TTY to query the new password
if !tty::stdin_isatty() {
bail!("unable to create master key - no tty");
}
let rsa = openssl::rsa::Rsa::generate(4096)?;
let pkey = openssl::pkey::PKey::from_rsa(rsa)?;
let password = String::from_utf8(tty::read_and_verify_password("Master Key Password: ")?)?;
let pub_key: Vec<u8> = pkey.public_key_to_pem()?;
let filename_pub = "master-public.pem";
println!("Writing public master key to {}", filename_pub);
replace_file(filename_pub, pub_key.as_slice(), CreateOptions::new())?;
let cipher = openssl::symm::Cipher::aes_256_cbc();
let priv_key: Vec<u8> = pkey.private_key_to_pem_pkcs8_passphrase(cipher, password.as_bytes())?;
let filename_priv = "master-private.pem";
println!("Writing private master key to {}", filename_priv);
replace_file(filename_priv, priv_key.as_slice(), CreateOptions::new())?;
Ok(())
}
pub fn cli() -> CliCommandMap {
let key_create_cmd_def = CliCommand::new(&API_METHOD_CREATE)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let key_change_passphrase_cmd_def = CliCommand::new(&API_METHOD_CHANGE_PASSPHRASE)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
let key_create_master_key_cmd_def = CliCommand::new(&API_METHOD_CREATE_MASTER_KEY);
let key_import_master_pubkey_cmd_def = CliCommand::new(&API_METHOD_IMPORT_MASTER_PUBKEY)
.arg_param(&["path"])
.completion_cb("path", tools::complete_file_name);
CliCommandMap::new()
.insert("create", key_create_cmd_def)
.insert("create-master-key", key_create_master_key_cmd_def)
.insert("import-master-pubkey", key_import_master_pubkey_cmd_def)
.insert("change-passphrase", key_change_passphrase_cmd_def)
}

View File

@ -0,0 +1,10 @@
mod benchmark;
pub use benchmark::*;
mod mount;
pub use mount::*;
mod task;
pub use task::*;
mod catalog;
pub use catalog::*;
pub mod key;

View File

@ -0,0 +1,195 @@
use std::path::PathBuf;
use std::sync::Arc;
use std::os::unix::io::RawFd;
use std::path::Path;
use std::ffi::OsStr;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
use tokio::signal::unix::{signal, SignalKind};
use nix::unistd::{fork, ForkResult, pipe};
use futures::select;
use futures::future::FutureExt;
use proxmox::{sortable, identity};
use proxmox::api::{ApiHandler, ApiMethod, RpcEnvironment, schema::*, cli::*};
use proxmox_backup::tools;
use proxmox_backup::backup::{
load_and_decrypt_key,
CryptConfig,
IndexFile,
BackupDir,
BackupGroup,
BufferedDynamicReader,
};
use proxmox_backup::client::*;
use crate::{
REPO_URL_SCHEMA,
extract_repository_from_value,
complete_pxar_archive_name,
complete_group_or_snapshot,
complete_repository,
record_repository,
connect,
api_datastore_latest_snapshot,
BufferedDynamicReadAt,
};
#[sortable]
const API_METHOD_MOUNT: ApiMethod = ApiMethod::new(
&ApiHandler::Sync(&mount),
&ObjectSchema::new(
"Mount pxar archive.",
&sorted!([
("snapshot", false, &StringSchema::new("Group/Snapshot path.").schema()),
("archive-name", false, &StringSchema::new("Backup archive name.").schema()),
("target", false, &StringSchema::new("Target directory path.").schema()),
("repository", true, &REPO_URL_SCHEMA),
("keyfile", true, &StringSchema::new("Path to encryption key.").schema()),
("verbose", true, &BooleanSchema::new("Verbose output.").default(false).schema()),
]),
)
);
pub fn mount_cmd_def() -> CliCommand {
CliCommand::new(&API_METHOD_MOUNT)
.arg_param(&["snapshot", "archive-name", "target"])
.completion_cb("repository", complete_repository)
.completion_cb("snapshot", complete_group_or_snapshot)
.completion_cb("archive-name", complete_pxar_archive_name)
.completion_cb("target", tools::complete_file_name)
}
fn mount(
param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let verbose = param["verbose"].as_bool().unwrap_or(false);
if verbose {
// This will stay in foreground with debug output enabled as None is
// passed for the RawFd.
return proxmox_backup::tools::runtime::main(mount_do(param, None));
}
// Process should be daemonized.
// Make sure to fork before the async runtime is instantiated to avoid troubles.
let pipe = pipe()?;
match fork() {
Ok(ForkResult::Parent { .. }) => {
nix::unistd::close(pipe.1).unwrap();
// Blocks the parent process until we are ready to go in the child
let _res = nix::unistd::read(pipe.0, &mut [0]).unwrap();
Ok(Value::Null)
}
Ok(ForkResult::Child) => {
nix::unistd::close(pipe.0).unwrap();
nix::unistd::setsid().unwrap();
proxmox_backup::tools::runtime::main(mount_do(param, Some(pipe.1)))
}
Err(_) => bail!("failed to daemonize process"),
}
}
async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
let target = tools::required_string_param(&param, "target")?;
let client = connect(repo.host(), repo.user())?;
record_repository(&repo);
let path = tools::required_string_param(&param, "snapshot")?;
let (backup_type, backup_id, backup_time) = if path.matches('/').count() == 1 {
let group: BackupGroup = path.parse()?;
api_datastore_latest_snapshot(&client, repo.store(), group).await?
} else {
let snapshot: BackupDir = path.parse()?;
(snapshot.group().backup_type().to_owned(), snapshot.group().backup_id().to_owned(), snapshot.backup_time())
};
let keyfile = param["keyfile"].as_str().map(PathBuf::from);
let crypt_config = match keyfile {
None => None,
Some(path) => {
let (key, _) = load_and_decrypt_key(&path, &crate::key::get_encryption_key_password)?;
Some(Arc::new(CryptConfig::new(key)?))
}
};
let server_archive_name = if archive_name.ends_with(".pxar") {
format!("{}.didx", archive_name)
} else {
bail!("Can only mount pxar archives.");
};
let client = BackupReader::start(
client,
crypt_config.clone(),
repo.store(),
&backup_type,
&backup_id,
backup_time,
true,
).await?;
let manifest = client.download_manifest().await?;
if server_archive_name.ends_with(".didx") {
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let most_used = index.find_most_used_chunks(8);
let chunk_reader = RemoteChunkReader::new(client.clone(), crypt_config, most_used);
let reader = BufferedDynamicReader::new(index, chunk_reader);
let archive_size = reader.archive_size();
let reader: proxmox_backup::pxar::fuse::Reader =
Arc::new(BufferedDynamicReadAt::new(reader));
let decoder = proxmox_backup::pxar::fuse::Accessor::new(reader, archive_size).await?;
let options = OsStr::new("ro,default_permissions");
let session = proxmox_backup::pxar::fuse::Session::mount(
decoder,
&options,
false,
Path::new(target),
)
.map_err(|err| format_err!("pxar mount failed: {}", err))?;
if let Some(pipe) = pipe {
nix::unistd::chdir(Path::new("/")).unwrap();
// Finish creation of daemon by redirecting filedescriptors.
let nullfd = nix::fcntl::open(
"/dev/null",
nix::fcntl::OFlag::O_RDWR,
nix::sys::stat::Mode::empty(),
).unwrap();
nix::unistd::dup2(nullfd, 0).unwrap();
nix::unistd::dup2(nullfd, 1).unwrap();
nix::unistd::dup2(nullfd, 2).unwrap();
if nullfd > 2 {
nix::unistd::close(nullfd).unwrap();
}
// Signal the parent process that we are done with the setup and it can
// terminate.
nix::unistd::write(pipe, &[0u8])?;
nix::unistd::close(pipe).unwrap();
}
let mut interrupt = signal(SignalKind::interrupt())?;
select! {
res = session.fuse() => res?,
_ = interrupt.recv().fuse() => {
// exit on interrupted
}
}
} else {
bail!("unknown archive file extension (expected .pxar)");
}
Ok(Value::Null)
}

View File

@ -0,0 +1,148 @@
use anyhow::{Error};
use serde_json::{json, Value};
use proxmox::api::{api, cli::*};
use proxmox_backup::tools;
use proxmox_backup::client::*;
use proxmox_backup::api2::types::UPID_SCHEMA;
use crate::{
REPO_URL_SCHEMA,
extract_repository_from_value,
complete_repository,
connect,
};
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
limit: {
description: "The maximal number of tasks to list.",
type: Integer,
optional: true,
minimum: 1,
maximum: 1000,
default: 50,
},
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
all: {
type: Boolean,
description: "Also list stopped tasks.",
optional: true,
},
}
}
)]
/// List running server tasks for this repo user
async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let repo = extract_repository_from_value(&param)?;
let client = connect(repo.host(), repo.user())?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false);
let args = json!({
"running": running,
"start": 0,
"limit": limit,
"userfilter": repo.user(),
"store": repo.store(),
});
let mut result = client.get("api2/json/nodes/localhost/tasks", Some(args)).await?;
let mut data = result["data"].take();
let schema = &proxmox_backup::api2::node::tasks::API_RETURN_SCHEMA_LIST_TASKS;
let options = default_table_format_options()
.column(ColumnConfig::new("starttime").right_align(false).renderer(tools::format::render_epoch))
.column(ColumnConfig::new("endtime").right_align(false).renderer(tools::format::render_epoch))
.column(ColumnConfig::new("upid"))
.column(ColumnConfig::new("status").renderer(tools::format::render_task_status));
format_and_print_result_full(&mut data, schema, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
upid: {
schema: UPID_SCHEMA,
},
}
}
)]
/// Display the task log.
async fn task_log(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid = tools::required_string_param(&param, "upid")?;
let client = connect(repo.host(), repo.user())?;
display_task_log(client, upid, true).await?;
Ok(Value::Null)
}
#[api(
input: {
properties: {
repository: {
schema: REPO_URL_SCHEMA,
optional: true,
},
upid: {
schema: UPID_SCHEMA,
},
}
}
)]
/// Try to stop a specific task.
async fn task_stop(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid_str = tools::required_string_param(&param, "upid")?;
let mut client = connect(repo.host(), repo.user())?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let _ = client.delete(&path, None).await?;
Ok(Value::Null)
}
pub fn task_mgmt_cli() -> CliCommandMap {
let task_list_cmd_def = CliCommand::new(&API_METHOD_TASK_LIST)
.completion_cb("repository", complete_repository);
let task_log_cmd_def = CliCommand::new(&API_METHOD_TASK_LOG)
.arg_param(&["upid"]);
let task_stop_cmd_def = CliCommand::new(&API_METHOD_TASK_STOP)
.arg_param(&["upid"]);
CliCommandMap::new()
.insert("log", task_log_cmd_def)
.insert("list", task_list_cmd_def)
.insert("stop", task_stop_cmd_def)
}

View File

@ -434,7 +434,7 @@ impl BackupWriter {
self.h2.download("previous", Some(param), &mut tmpfile).await?;
let index = DynamicIndexReader::new(tmpfile)
.map_err(|err| format_err!("unable to read fixed index '{}' - {}", archive_name, err))?;
.map_err(|err| format_err!("unable to read dynmamic index '{}' - {}", archive_name, err))?;
// Note: do not use values stored in index (not trusted) - instead, compute them again
let (csum, size) = index.compute_csum();
manifest.verify_file(archive_name, &csum, size)?;

View File

@ -1,7 +1,7 @@
use std::future::Future;
use std::collections::HashMap;
use std::pin::Pin;
use std::sync::Arc;
use std::sync::{Arc, Mutex};
use anyhow::Error;
@ -10,11 +10,12 @@ use crate::backup::{AsyncReadChunk, CryptConfig, DataBlob, ReadChunk};
use crate::tools::runtime::block_on;
/// Read chunks from remote host using ``BackupReader``
#[derive(Clone)]
pub struct RemoteChunkReader {
client: Arc<BackupReader>,
crypt_config: Option<Arc<CryptConfig>>,
cache_hint: HashMap<[u8; 32], usize>,
cache: HashMap<[u8; 32], Vec<u8>>,
cache: Arc<Mutex<HashMap<[u8; 32], Vec<u8>>>>,
}
impl RemoteChunkReader {
@ -30,11 +31,11 @@ impl RemoteChunkReader {
client,
crypt_config,
cache_hint,
cache: HashMap::new(),
cache: Arc::new(Mutex::new(HashMap::new())),
}
}
pub async fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
pub async fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
let mut chunk_data = Vec::with_capacity(4 * 1024 * 1024);
self.client
@ -49,12 +50,12 @@ impl RemoteChunkReader {
}
impl ReadChunk for RemoteChunkReader {
fn read_raw_chunk(&mut self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
fn read_raw_chunk(&self, digest: &[u8; 32]) -> Result<DataBlob, Error> {
block_on(Self::read_raw_chunk(self, digest))
}
fn read_chunk(&mut self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
if let Some(raw_data) = self.cache.get(digest) {
fn read_chunk(&self, digest: &[u8; 32]) -> Result<Vec<u8>, Error> {
if let Some(raw_data) = (*self.cache.lock().unwrap()).get(digest) {
return Ok(raw_data.to_vec());
}
@ -66,7 +67,7 @@ impl ReadChunk for RemoteChunkReader {
let use_cache = self.cache_hint.contains_key(digest);
if use_cache {
self.cache.insert(*digest, raw_data.to_vec());
(*self.cache.lock().unwrap()).insert(*digest, raw_data.to_vec());
}
Ok(raw_data)
@ -75,18 +76,18 @@ impl ReadChunk for RemoteChunkReader {
impl AsyncReadChunk for RemoteChunkReader {
fn read_raw_chunk<'a>(
&'a mut self,
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<DataBlob, Error>> + Send + 'a>> {
Box::pin(Self::read_raw_chunk(self, digest))
}
fn read_chunk<'a>(
&'a mut self,
&'a self,
digest: &'a [u8; 32],
) -> Pin<Box<dyn Future<Output = Result<Vec<u8>, Error>> + Send + 'a>> {
Box::pin(async move {
if let Some(raw_data) = self.cache.get(digest) {
if let Some(raw_data) = (*self.cache.lock().unwrap()).get(digest) {
return Ok(raw_data.to_vec());
}
@ -98,7 +99,7 @@ impl AsyncReadChunk for RemoteChunkReader {
let use_cache = self.cache_hint.contains_key(digest);
if use_cache {
self.cache.insert(*digest, raw_data.to_vec());
(*self.cache.lock().unwrap()).insert(*digest, raw_data.to_vec());
}
Ok(raw_data)
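With every field now either an Arc or a small map, the #[derive(Clone)] added above (commit 7443a6e092) makes clones cheap, and all clones share the same Mutex-protected cache. A hedged usage sketch, assuming the types from this diff are in scope:

// Illustrative only.
use proxmox_backup::backup::ReadChunk; // trait providing read_chunk(&self, ...)
use proxmox_backup::client::RemoteChunkReader;

fn read_twice(reader: &RemoteChunkReader, digest: &[u8; 32]) -> Result<(), anyhow::Error> {
    let a = reader.clone(); // copies Arc pointers, not the cached data
    let b = reader.clone();
    let _x = a.read_chunk(digest)?; // may insert into the shared cache
    let _y = b.read_chunk(digest)?; // can be served from that same cache
    Ok(())
}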

View File

@ -2,7 +2,7 @@ use std::collections::{HashSet, HashMap};
use std::convert::TryFrom;
use std::ffi::{CStr, CString, OsStr};
use std::fmt;
use std::io::{self, Write};
use std::io::{self, Read, Write};
use std::os::unix::ffi::OsStrExt;
use std::os::unix::io::{AsRawFd, FromRawFd, IntoRawFd, RawFd};
use std::path::{Path, PathBuf};
@ -20,6 +20,7 @@ use pxar::encoder::LinkOffset;
use proxmox::c_str;
use proxmox::sys::error::SysError;
use proxmox::tools::fd::RawFdNum;
use proxmox::tools::vec;
use crate::pxar::catalog::BackupCatalogWriter;
use crate::pxar::Flags;
@ -35,6 +36,7 @@ fn detect_fs_type(fd: RawFd) -> Result<i64, Error> {
Ok(fs_stat.f_type)
}
#[rustfmt::skip]
pub fn is_virtual_file_system(magic: i64) -> bool {
use proxmox::sys::linux::magic::*;
@ -114,6 +116,7 @@ struct Archiver<'a, 'b> {
device_set: Option<HashSet<u64>>,
hardlinks: HashMap<HardLinkInfo, (PathBuf, LinkOffset)>,
errors: ErrorReporter,
file_copy_buffer: Vec<u8>,
}
type Encoder<'a, 'b> = pxar::encoder::Encoder<'a, &'b mut dyn pxar::encoder::SeqWrite>;
@ -178,6 +181,7 @@ where
device_set,
hardlinks: HashMap::new(),
errors: ErrorReporter,
file_copy_buffer: vec::undefined(4 * 1024 * 1024),
};
archiver.archive_dir_contents(&mut encoder, source_dir, true)?;
@ -244,11 +248,15 @@ impl<'a, 'b> Archiver<'a, 'b> {
}
/// openat() wrapper which allows but logs `EACCES` and turns `ENOENT` into `None`.
///
/// The `existed` flag is set when iterating through a directory to note that we know the file
is supposed to exist and we should warn if it doesn't.
fn open_file(
&mut self,
parent: RawFd,
file_name: &CStr,
oflags: OFlag,
existed: bool,
) -> Result<Option<Fd>, Error> {
match Fd::openat(
&unsafe { RawFdNum::from_raw_fd(parent) },
@ -257,9 +265,14 @@ impl<'a, 'b> Archiver<'a, 'b> {
Mode::empty(),
) {
Ok(fd) => Ok(Some(fd)),
Err(nix::Error::Sys(Errno::ENOENT)) => Ok(None),
Err(nix::Error::Sys(Errno::ENOENT)) => {
if existed {
self.report_vanished_file()?;
}
Ok(None)
}
Err(nix::Error::Sys(Errno::EACCES)) => {
write!(self.errors, "failed to open file: {:?}: access denied", file_name)?;
writeln!(self.errors, "failed to open file: {:?}: access denied", file_name)?;
Ok(None)
}
Err(other) => Err(Error::from(other)),
@ -271,6 +284,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
parent,
c_str!(".pxarexclude"),
OFlag::O_RDONLY | OFlag::O_CLOEXEC | OFlag::O_NOCTTY,
false,
)?;
let old_pattern_count = self.patterns.len();
@ -283,7 +297,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
let line = match line {
Ok(line) => line,
Err(err) => {
let _ = write!(
let _ = writeln!(
self.errors,
"ignoring .pxarexclude after read error in {:?}: {}",
self.path,
@@ -303,7 +317,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
match MatchEntry::parse_pattern(line, PatternFlag::PATH_NAME, MatchType::Exclude) {
Ok(pattern) => self.patterns.push(pattern),
Err(err) => {
let _ = write!(self.errors, "bad pattern in {:?}: {}", self.path, err);
let _ = writeln!(self.errors, "bad pattern in {:?}: {}", self.path, err);
}
}
}
@@ -406,7 +420,25 @@ impl<'a, 'b> Archiver<'a, 'b> {
}
fn report_vanished_file(&mut self) -> Result<(), Error> {
write!(self.errors, "warning: file vanished while reading: {:?}", self.path)?;
writeln!(self.errors, "warning: file vanished while reading: {:?}", self.path)?;
Ok(())
}
fn report_file_shrunk_while_reading(&mut self) -> Result<(), Error> {
writeln!(
self.errors,
"warning: file size shrunk while reading: {:?}, file will be padded with zeros!",
self.path,
)?;
Ok(())
}
fn report_file_grew_while_reading(&mut self) -> Result<(), Error> {
writeln!(
self.errors,
"warning: file size increased while reading: {:?}, file will be truncated!",
self.path,
)?;
Ok(())
}
@@ -430,14 +462,12 @@ impl<'a, 'b> Archiver<'a, 'b> {
parent,
c_file_name,
open_mode | OFlag::O_RDONLY | OFlag::O_NOFOLLOW | OFlag::O_CLOEXEC | OFlag::O_NOCTTY,
true,
)?;
let fd = match fd {
Some(fd) => fd,
None => {
self.report_vanished_file()?;
return Ok(());
}
None => return Ok(()),
};
let metadata = get_metadata(fd.as_raw_fd(), &stat, self.flags(), self.fs_magic)?;
@@ -591,8 +621,29 @@ impl<'a, 'b> Archiver<'a, 'b> {
file_size: u64,
) -> Result<LinkOffset, Error> {
let mut file = unsafe { std::fs::File::from_raw_fd(fd.into_raw_fd()) };
let offset = encoder.add_file(metadata, file_name, file_size, &mut file)?;
Ok(offset)
let mut remaining = file_size;
let mut out = encoder.create_file(metadata, file_name, file_size)?;
while remaining != 0 {
let mut got = file.read(&mut self.file_copy_buffer[..])?;
if got as u64 > remaining {
self.report_file_grew_while_reading()?;
got = remaining as usize;
}
out.write_all(&self.file_copy_buffer[..got])?;
remaining -= got as u64;
}
if remaining > 0 {
self.report_file_shrunk_while_reading()?;
let to_zero = remaining.min(self.file_copy_buffer.len() as u64) as usize;
vec::clear(&mut self.file_copy_buffer[..to_zero]);
while remaining != 0 {
let fill = remaining.min(self.file_copy_buffer.len() as u64) as usize;
out.write_all(&self.file_copy_buffer[..fill])?;
remaining -= fill as u64;
}
}
Ok(out.file_offset())
}
fn add_symlink(
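
The rewritten add_regular_file streams file contents through the Archiver's reusable 4 MiB file_copy_buffer and always emits exactly the file_size that was announced to the encoder: a file that grows mid-read is truncated with a warning, one that shrinks is zero-padded. Since the removed/added markers are stripped in this rendering, the exit path that makes the padding branch reachable is not visible here; the sketch below assumes a zero-length read() breaks out of the copy loop. A self-contained version of the technique, under that assumption (copy_exact is a made-up helper, not the PBS function):

use std::io::{Read, Write};

// Writes exactly `file_size` bytes to `dst`, no matter how the source
// file changes underneath us while we read it.
fn copy_exact(
    mut src: impl Read,
    mut dst: impl Write,
    file_size: u64,
    buf: &mut [u8],
) -> std::io::Result<()> {
    let mut remaining = file_size;
    while remaining != 0 {
        let mut got = src.read(buf)?;
        if got == 0 {
            break; // early EOF: the file shrank while being read
        }
        if got as u64 > remaining {
            got = remaining as usize; // file grew: truncate to the announced size
        }
        dst.write_all(&buf[..got])?;
        remaining -= got as u64;
    }
    if remaining > 0 {
        // file shrank: pad with zeros so the entry keeps its announced size
        let zeroes = vec![0u8; buf.len()];
        while remaining != 0 {
            let fill = remaining.min(zeroes.len() as u64) as usize;
            dst.write_all(&zeroes[..fill])?;
            remaining -= fill as u64;
        }
    }
    Ok(())
}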


@@ -79,6 +79,7 @@ Ext.define('PBS.DataStoreContent', {
let url = `/api2/json/admin/datastore/${view.datastore}/snapshots`;
this.store.setProxy({
type: 'proxmox',
timeout: 300*1000, // 5 minutes, we should make that api call faster
url: url
});
@@ -199,6 +200,73 @@ Ext.define('PBS.DataStoreContent', {
win.show();
},
onVerify: function() {
var view = this.getView();
if (!view.datastore) return;
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return;
let data = rec.data;
let params;
if (data.leaf) {
params = {
"backup-type": data["backup-type"],
"backup-id": data["backup-id"],
"backup-time": (data['backup-time'].getTime()/1000).toFixed(0),
};
} else {
params = {
"backup-type": data.backup_type,
"backup-id": data.backup_id,
};
}
Proxmox.Utils.API2Request({
params: params,
url: `/admin/datastore/${view.datastore}/verify`,
method: 'POST',
failure: function(response) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
success: function(response, options) {
Ext.create('Proxmox.window.TaskViewer', {
upid: response.result.data,
}).show();
},
});
},
onForget: function() {
var view = this.getView();
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return;
let data = rec.data;
if (!data.leaf) return;
if (!view.datastore) return;
Proxmox.Utils.API2Request({
params: {
"backup-type": data["backup-type"],
"backup-id": data["backup-id"],
"backup-time": (data['backup-time'].getTime()/1000).toFixed(0),
},
url: `/admin/datastore/${view.datastore}/snapshots`,
method: 'DELETE',
waitMsgTarget: view,
failure: function(response, opts) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
callback: this.reload.bind(this),
});
},
openBackupFileDownloader: function() {
let me = this;
let view = me.getView();
@@ -328,6 +396,14 @@ Ext.define('PBS.DataStoreContent', {
iconCls: 'fa fa-refresh',
handler: 'reload',
},
{
xtype: 'proxmoxButton',
text: gettext('Verify'),
disabled: true,
parentXType: 'pbsDataStoreContent',
enableFn: function(record) { return !!record.data; },
handler: 'onVerify',
},
{
xtype: 'proxmoxButton',
text: gettext('Prune'),
@@ -336,6 +412,21 @@ Ext.define('PBS.DataStoreContent', {
enableFn: function(record) { return !record.data.leaf; },
handler: 'onPrune',
},
{
xtype: 'proxmoxButton',
text: gettext('Forget'),
disabled: true,
parentXType: 'pbsDataStoreContent',
handler: 'onForget',
confirmMsg: function(record) {
let name = record.data.text;
return Ext.String.format(gettext('Are you sure you want to remove snapshot {0}'), `'${name}'`);
},
enableFn: function(record) {
return !!record.data.leaf;
},
},
{
xtype: 'proxmoxButton',
text: gettext('Download Files'),
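
The new onVerify and onForget handlers above drive two datastore API endpoints: verification is a POST to .../admin/datastore/{store}/verify, which returns a task UPID that the UI feeds into a TaskViewer, while forgetting a snapshot is a DELETE on .../admin/datastore/{store}/snapshots with the snapshot's backup-type, backup-id and backup-time. For illustration only, a hypothetical Rust sketch of the same two calls (verify_snapshot/forget_snapshot are made-up helpers using the reqwest crate; the base URL, TLS and authentication handling are all assumptions, and the real client uses its own HTTP stack):

use std::collections::HashMap;

// `base` would be something like "https://pbs.example:8007/api2/json".
fn verify_snapshot(
    client: &reqwest::blocking::Client,
    base: &str,
    store: &str,
    params: &HashMap<&str, String>, // backup-type, backup-id, backup-time
) -> reqwest::Result<String> {
    // POST .../admin/datastore/{store}/verify -> task UPID in the response
    client
        .post(format!("{}/admin/datastore/{}/verify", base, store))
        .form(params)
        .send()?
        .text()
}

fn forget_snapshot(
    client: &reqwest::blocking::Client,
    base: &str,
    store: &str,
    params: &HashMap<&str, String>,
) -> reqwest::Result<String> {
    // DELETE .../admin/datastore/{store}/snapshots removes the snapshot
    client
        .delete(format!("{}/admin/datastore/{}/snapshots", base, store))
        .query(params)
        .send()?
        .text()
}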


@@ -40,7 +40,7 @@ Ext.define('PBS.DataStorePanel', {
initComponent: function() {
let me = this;
me.title = `${gettext("Data Store")}: ${me.datastore}`;
me.title = `${gettext("Datastore")}: ${me.datastore}`;
me.callParent();
},
});


@@ -200,7 +200,13 @@ Ext.define('PBS.MainView', {
xtype: 'versioninfo'
},
{
flex: 1
padding: 5,
html: '<a href="https://bugzilla.proxmox.com" target="_blank">BETA</a>',
baseCls: 'x-plain',
},
{
flex: 1,
baseCls: 'x-plain',
},
{
baseCls: 'x-plain',


@@ -19,6 +19,7 @@ JSSRC= \
window/ACLEdit.js \
window/DataStoreEdit.js \
window/CreateDirectory.js \
window/ZFSCreate.js \
window/FileBrowser.js \
window/BackupFileDownloader.js \
dashboard/DataStoreStatistics.js \


@@ -80,7 +80,7 @@ Ext.define('PBS.store.NavigationStore', {
]
},
{
text: gettext('Data Store'),
text: gettext('Datastore'),
iconCls: 'fa fa-archive',
path: 'pbsDataStoreConfig',
expanded: true,


@@ -33,23 +33,18 @@ Ext.define('PBS.Utils', {
},
render_datastore_worker_id: function(id, what) {
const result = id.match(/^(\S+)_([^_\s]+)_([^_\s]+)$/);
if (result) {
let datastore = result[1], type = result[2], id = result[3];
return `Datastore ${datastore} - ${what} ${type}/${id}`;
}
return `Datastore ${id} - ${what}`;
},
render_datastore_time_worker_id: function(id, what) {
const res = id.match(/^(\S+)_([^_\s]+)_([^_\s]+)_([^_\s]+)$/);
const res = id.match(/^(\S+?)_(\S+?)_(\S+?)(_(.+))?$/);
if (res) {
let datastore = res[1], type = res[2], id = res[3];
let datetime = Ext.Date.parse(parseInt(res[4], 16), 'U');
let utctime = PBS.Utils.render_datetime_utc(datetime);
return `Datastore ${datastore} - ${what} ${type}/${id}/${utctime}`;
if (res[4] !== undefined) {
let datetime = Ext.Date.parse(parseInt(res[5], 16), 'U');
let utctime = PBS.Utils.render_datetime_utc(datetime);
return `Datastore ${datastore} ${what} ${type}/${id}/${utctime}`;
} else {
return `Datastore ${datastore} ${what} ${type}/${id}`;
}
}
return what;
return `Datastore ${what} ${id}`;
},
constructor: function() {
@@ -70,7 +65,7 @@ Ext.define('PBS.Utils', {
return PBS.Utils.render_datastore_worker_id(id, gettext('Backup'));
},
reader: (type, id) => {
return PBS.Utils.render_datastore_time_worker_id(id, gettext('Read objects'));
return PBS.Utils.render_datastore_worker_id(id, gettext('Read objects'));
},
});
}
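
The relaxed pattern ^(\S+?)_(\S+?)_(\S+?)(_(.+))?$ makes the trailing timestamp segment optional, so a single renderer now handles both store_type_id and store_type_id_hextime worker IDs; the timestamp, when present, is a Unix epoch encoded in hexadecimal (hence parseInt(..., 16)). A rough Rust mirror of that parse, as a hypothetical helper using the regex crate (it prints the raw epoch rather than a formatted UTC time):

use regex::Regex;

// Accepts "store_type_id" with an optional "_hextime" suffix; the time
// group is non-capturing here, so the hex epoch lands in group 4.
fn render_worker_id(id: &str, what: &str) -> String {
    let re = Regex::new(r"^(\S+?)_(\S+?)_(\S+?)(?:_(.+))?$").unwrap();
    match re.captures(id) {
        Some(caps) => {
            let (store, ty, id) = (&caps[1], &caps[2], &caps[3]);
            match caps.get(4) {
                Some(hex) => {
                    // backup time: Unix epoch in hexadecimal
                    let epoch = i64::from_str_radix(hex.as_str(), 16).unwrap_or(0);
                    format!("Datastore {} {} {}/{}/@{}", store, what, ty, id, epoch)
                }
                None => format!("Datastore {} {} {}/{}", store, what, ty, id),
            }
        }
        None => format!("Datastore {} {}", what, id),
    }
}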


@@ -117,8 +117,7 @@ Ext.define('PBS.admin.ZFSList', {
text: gettext('Detail'),
xtype: 'proxmoxButton',
disabled: true,
handler: function() {
}
handler: 'openDetailWindow',
}
],


@@ -25,7 +25,7 @@ Ext.define('PBS.DataStoreConfig', {
extend: 'Ext.grid.GridPanel',
alias: 'widget.pbsDataStoreConfig',
title: gettext('Data Store Configuration'),
title: gettext('Datastore Configuration'),
controller: {
xclass: 'Ext.app.ViewController',
@@ -58,6 +58,27 @@ Ext.define('PBS.DataStoreConfig', {
}).show();
},
onVerify: function() {
var view = this.getView();
let rec = view.selModel.getSelection()[0];
if (!(rec && rec.data)) return;
let data = rec.data;
Proxmox.Utils.API2Request({
url: `/admin/datastore/${data.name}/verify`,
method: 'POST',
failure: function(response) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
success: function(response, options) {
Ext.create('Proxmox.window.TaskViewer', {
upid: response.result.data,
}).show();
},
});
},
garbageCollect: function() {
let me = this;
let view = me.getView();
@@ -115,6 +136,12 @@ Ext.define('PBS.DataStoreConfig', {
},
// remove_btn
'-',
{
xtype: 'proxmoxButton',
text: gettext('Verify'),
disabled: true,
handler: 'onVerify',
},
{
xtype: 'proxmoxButton',
text: gettext('Start GC'),


@@ -16,7 +16,7 @@ Ext.define('PBS.form.DataStoreSelector', {
listConfig: {
columns: [
{
header: gettext('DataStore'),
header: gettext('Datastore'),
sortable: true,
dataIndex: 'store',
renderer: Ext.String.htmlEncode,


@@ -39,7 +39,7 @@ Ext.define('PBS.window.CreateDirectory', {
{
xtype: 'proxmoxcheckbox',
name: 'add-datastore',
fieldLabel: gettext('Add Data Store'),
fieldLabel: gettext('Add as Datastore'),
value: '1',
},
],


@@ -23,12 +23,13 @@ Ext.define('PBS.window.CreateZFS', {
xtype: 'proxmoxtextfield',
name: 'name',
fieldLabel: gettext('Name'),
allowBlank: false
minLength: 3,
allowBlank: false,
},
{
xtype: 'proxmoxcheckbox',
name: 'add-datastore',
fieldLabel: gettext('Add Datastore'),
fieldLabel: gettext('Add as Datastore'),
value: '1'
}
],
@@ -75,7 +76,7 @@ Ext.define('PBS.window.CreateZFS', {
xtype: 'pmxMultiDiskSelector',
name: 'devices',
nodename: 'localhost',
typeParam: 'usage-type',
typeParameter: 'usage-type',
valueField: 'name',
height: 200,
emptyText: gettext('No Disks unused'),