Compare commits

...

85 Commits

Author SHA1 Message Date
43ba913977 bump version to 0.2.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-04 10:39:15 +02:00
a720894ff0 rrd: fix off-by-one in save interval calculation
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-04 10:30:47 +02:00
a95a3fb893 fix csum calculation for images not aligned to 'chunk_size'
the last chunk does not have to be as large as 'chunk_size', so just
use the already available 'chunk_end' function, which does the
correct thing

this also fixes restoration of images whose size is not a multiple of
'chunk_size'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-04 10:18:30 +02:00
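
A minimal, hypothetical sketch of the idea (the actual change is visible in the
FixedIndexReader hunk further below, where '(pos + 1) * self.chunk_size' is
replaced by 'self.chunk_end(pos)'): the end of the last chunk is clamped to the
real image size instead of being assumed to be a full 'chunk_size'.

// sketch only, with made-up names; not the real FixedIndexReader code
fn chunk_end(image_size: u64, chunk_size: u64, pos: u64) -> u64 {
    let end = (pos + 1) * chunk_size;
    if end > image_size { image_size } else { end }
}

fn main() {
    let image_size = 10_500; // hypothetical image, not 'chunk_size' aligned
    let chunk_size = 4_096;
    // the naive calculation overshoots for the last chunk ...
    assert_eq!((2 + 1) * chunk_size, 12_288);
    // ... while the clamped calculation used for the csum stops at the image end
    assert_eq!(chunk_end(image_size, chunk_size, 2), 10_500);
}
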
620911b426 src/tools/disks/lvm.rs: implement get_lvm_devices() 2020-06-04 09:12:19 +02:00
5c264c8d80 src/tools/disks.rs: add/use get_partition_type_info 2020-06-04 07:48:22 +02:00
8d78589969 improve display of 'next run' for sync jobs
if the last sync job is too far in the past (or there was none at all
so far), we run it at the next iteration, so we want to show that

we now calculate next_run by using either the real last end time
or 0 as the base

in the frontend, we then check whether next_run is < now and show 'pending'
(we do it this way for replication on pve as well)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-04 07:03:54 +02:00
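
The real computation uses parse_calendar_event/compute_next_event (see the new
admin/sync API file further below); a stripped-down sketch of the scheduling
idea with hypothetical names:

// if the job never ran, base the calculation on 0 so next_run lands in the past
fn next_run(last_endtime: Option<i64>, interval: i64) -> i64 {
    last_endtime.unwrap_or(0) + interval
}

// the frontend renders a next_run in the past as 'pending'
fn display_state(next_run: i64, now: i64) -> &'static str {
    if next_run < now { "pending" } else { "scheduled" }
}

fn main() {
    let now = 1_591_250_000;
    assert_eq!(display_state(next_run(None, 3600), now), "pending");
}
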
eed8a5ad79 tools/systemd/time: fix compute_next_event for weekdays
two things were wrong here:
* the range (x..y) does not include y, so the range
  (day_num+1..6) goes from (day_num+1) to 5 (but sunday is 6)

* WeekDays.bits() does not return the 'day_num' of that day, but
  the bit value (e.g. 64 for SUNDAY), which was treated as the index
  of the day of the week

to fix this, we drop the map to WeekDays and use the 'indices'
directly

this patch makes the test work again

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-04 07:02:33 +02:00
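
A standalone illustration of the two bugs described above (hypothetical values,
not the actual compute_next_event code):

fn main() {
    let day_num = 4; // Friday, with Monday = 0 ... Sunday = 6

    // bug 1: an exclusive range never reaches Sunday (index 6)
    let exclusive: Vec<u32> = (day_num + 1..6).collect();
    assert_eq!(exclusive, vec![5]);
    let inclusive: Vec<u32> = (day_num + 1..=6).collect();
    assert_eq!(inclusive, vec![5, 6]); // Saturday and Sunday

    // bug 2: a bitflag value is not a weekday index
    const SUNDAY_BIT: u32 = 64; // e.g. 1 << 6 in a WeekDays-style bitflag
    assert_eq!(SUNDAY_BIT.trailing_zeros(), 6); // the index has to be derived
}
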
538b9c1c27 systemd/time: add tests for all weekdays
this fails for now

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-06-04 07:02:23 +02:00
55919bf141 verify_file: add missing closing parenthesis in error message
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-03 19:10:01 +02:00
456ad0c478 src/tools/disks/zfs.rs: add parser for zpool list output 2020-06-03 12:16:08 +02:00
c76c7f8303 bump version to 0.2.2-1 2020-06-03 10:37:46 +02:00
c48aa39f3b src/bin/proxmox-backup-client.rs: implement quiet flag 2020-06-03 10:11:37 +02:00
2d32fe2c04 client restore: don't add server file ending if already specified
If one executes a client command like
 # proxmox-backup-client files <snapshot> --repository ...
the files shown already have the '.fidx' or '.blob' file ending, so
if a user just copy-pastes one of them, the client would always add
.blob, and the server would not find that file.

So avoid adding a file ending if the name already carries a known one.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-03 07:03:55 +02:00
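
The resulting mapping is implemented by the new parse_archive_type helper in
the proxmox-backup-client.rs hunk further below; a small sketch of its
behaviour (simplified, return type reduced to just the name):

fn server_name(name: &str) -> String {
    if name.ends_with(".didx") || name.ends_with(".fidx") || name.ends_with(".blob") {
        name.to_string()              // already a server-side name, keep it
    } else if name.ends_with(".pxar") {
        format!("{}.didx", name)      // dynamic index
    } else if name.ends_with(".img") {
        format!("{}.fidx", name)      // fixed index
    } else {
        format!("{}.blob", name)      // fallback: plain blob
    }
}

fn main() {
    assert_eq!(server_name("root.pxar"), "root.pxar.didx");
    assert_eq!(server_name("catalog.blob"), "catalog.blob"); // left unchanged
}
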
dc155e9bd7 client restore: factor out archive/type parsing
will be extended in a later patch.

Also drop a dead else branch; it can never be hit, as we always add
.blob as a fallback

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-06-03 07:03:12 +02:00
4e14781aec fix typo 2020-06-03 06:59:43 +02:00
a595f0fee0 client: improve connection/new fingerprint query
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-06-02 10:40:31 +02:00
add5861e8d typo fixes all over the place
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-30 16:39:08 +02:00
1610c45a86 src/client/pull.rs: also download client.log.blob 2020-05-30 14:51:33 +02:00
b2387eaa45 avoid compiler warnings 2020-05-30 14:05:33 +02:00
96d65fbcd0 cleanup: define/use const for predefined blob file names. 2020-05-30 14:04:15 +02:00
7cc3473a4e src/client/backup_specification.rs: split code into extra file 2020-05-30 10:54:38 +02:00
4856a21836 src/client/pull.rs: more verbose logging 2020-05-30 08:12:43 +02:00
a0153b02c9 ui: use Proxmox.Utils.setAuthData
this uses different parameters which we want to be the same for
all products (e.g. secure cookie)

leave the PBS.Utils.updateLoginData for the case that we want to do
something more here (as in pve for example)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-30 07:24:27 +02:00
04b0ca8b59 add owner to group and snapshot listings
while touching it, make columns and tbar in DataStoreContent.js
declarative members and remove the (now) unnecessary initComponent

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-30 07:24:12 +02:00
86e432b0b8 ui: add SyncView
shows a nice overview of sync jobs (including status of the last run,
estimated next run, etc.) with options to add/edit/remove jobs, show the
log of the last run, and run a job manually right away

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:32:40 +02:00
f0ed6a218c ui: add SyncJobEdit window
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:32:13 +02:00
709584719d ui: add RemoteSelector and DataStoreSelector
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:31:54 +02:00
d43f86f3f3 api2: add admin/sync endpoint
this returns the list of syncjobs with status, as opposed to
config/sync (which is just the config)

also adds an api call where users can run the job manually under
/admin/sync/$ID/run

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:31:32 +02:00
997d7e19fc config/sync: add SyncJobStatus Struct/Schema
contains the config + status

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:29:39 +02:00
c67b1fa72f syncjob: change worker type for sync jobs
'sync' is used for manually pulling a remote datastore;
change it to 'syncjob' for scheduled syncs so that we can
differentiate between the two types of sync

this also adds a separate task description for it

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:28:04 +02:00
268687ddf0 api2/pull: refactor priv checking and creating pull parameters
we want to reuse those in the api call for manually running a sync job

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:27:43 +02:00
426c1e353b api2/config/sync: fix id parameter
'name' is not the correct parameter for get/post

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:24:54 +02:00
2888b27f4c create SYNC_SCHEDULE_SCHEMA to adapt description for sync jobs
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:24:25 +02:00
f5d00373f3 ui: add missing comment field to remote model
when using a diffstore, we have to add all used columns to the model,
else they will not refresh on a load

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 11:17:15 +02:00
934f5bb8ac src/bin/proxmox-backup-proxy.rs: cleanup, move code to src/tools/disks.rs
And simplify find_mounted_device by using stat.st_dev
2020-05-29 11:13:36 +02:00
9857472211 fix removing of remotes
we have to save the remote config after removing the section

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-29 10:48:26 +02:00
013fa7bbcb rrd: reduce io by saving data only once a minute 2020-05-29 09:16:13 +02:00
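
The corresponding proxy hunk further below shows the mechanism: the stat loop
still runs every 10 seconds, but only every sixth iteration passes save = true,
so RRD data is flushed roughly once a minute. A self-contained sketch of that
pattern (stand-in function, not the real generate_host_stats):

fn gather_stats(save: bool) {
    // update the in-memory RRD state here; write it out only when `save` is set
    if save {
        println!("flushing RRD data to disk");
    }
}

fn main() {
    let mut count = 0;
    for _tick in 0..12 {            // each tick is 10 seconds in the real loop
        count += 1;
        let save = if count >= 6 { count = 0; true } else { false };
        gather_stats(save);
    }
}
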
a8d7033cb2 src/bin/proxmox-backup-proxy.rs: add test if last prune job is still running 2020-05-29 08:06:48 +02:00
04ad7bc436 src/bin/proxmox-backup-proxy.rs: test if last sync job is still running 2020-05-29 08:06:48 +02:00
77ebbefc1a src/server/worker_task.rs: make worker_is_active_local pub 2020-05-29 08:06:48 +02:00
750252ba2f src/tools/systemd/time.rs: add test for "daily" schedule 2020-05-29 07:52:09 +02:00
dc58194ebe src/bin/proxmox-backup-proxy.rs: use correct id to lookup sync jobs 2020-05-29 07:50:59 +02:00
c6887a8a4d remote config gui: add comment field 2020-05-29 06:46:56 +02:00
090decbe76 BACKUP_REPO_URL_REGEX: move to api2::types and allow all valid data store names
The repo URL consists of
* optional userid
* optional host
* datastore name

All three have a defined regex or format, but none of that was used, so,
for example, not all valid datastore names were accepted.

Move definition of the regex over to api2::types where we can access
all required regexes easily.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-29 06:29:23 +02:00
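
The accepted repository string therefore has the shape [[userid@]host:]datastore,
where the userid itself contains a realm. A few examples with hypothetical
host/store names:

  backup                                   just a datastore name
  192.168.1.10:backup                      host + datastore
  admin@pbs@pbs.example.org:store-a_01     userid (user@realm) + host + datastore
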
c32186595e api2::types: factor out USER_ID regex
allows for better reuse in a later patch

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-29 06:27:38 +02:00
947f45252d www/ServerStatus.js: use term "IO wait" for CPU iowait
Because we already use "IO delay" for the storage statistics.
2020-05-29 06:12:49 +02:00
c94e1f655e rrd stats: improve io delay stats 2020-05-28 19:12:13 +02:00
d80d1f9a2b bump version to 0.2.1-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-28 17:39:41 +02:00
6161ac18a4 ui: remotes: fix remote remove buttons base url
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-28 17:29:54 +02:00
6bba120d14 ui: fix RemoteEdit password change
we have to remove the password from the submitvalues if it did not
change

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-28 17:24:06 +02:00
91e5bb49f5 src/bin/proxmox-backup-proxy.rs: simplify code
and gather all stats for the root disk
2020-05-28 12:30:54 +02:00
547e3c2f6c src/tools/disks/zfs.rs: use wtime + rtime (wait + run time) 2020-05-28 11:45:34 +02:00
4bf26be3bb www/DataStoreStatistic.js: add transfer rate 2020-05-28 10:20:29 +02:00
25c550bc28 src/bin/proxmox-backup-proxy.rs: gather zpool io stats 2020-05-28 10:09:13 +02:00
0146133b4b src/tools/disks/zfs.rs: helper to read zfs pool io stats 2020-05-28 10:07:52 +02:00
3eeba68785 depend on proxmox 0.1.38, use new fs helper functions 2020-05-28 10:06:44 +02:00
f5056656b2 use the sync id for the scheduled sync worker task
this way, multiple sync jobs with the same local store can get scheduled

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-28 06:26:03 +02:00
8c87743642 fix 'remove_vanished' cli arg again
since the target side wants this to be a boolean and
serde serializes a None value as 'null', we only
add this parameter when it is actually set via the cli

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-28 06:25:30 +02:00
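
A sketch of the fix (the real code is in the pull_datastore hunk below): only
insert "remove-vanished" into the request when the flag was actually given, so
that no JSON null ever reaches the boolean schema on the server side.

use serde_json::{json, Value};

fn build_args(remove_vanished: Option<bool>) -> Value {
    let mut args = json!({
        "remote": "my-remote",        // hypothetical values
        "remote-store": "store1",
    });
    if let Some(remove_vanished) = remove_vanished {
        args["remove-vanished"] = Value::from(remove_vanished);
    }
    args
}

fn main() {
    // flag not given: the key is absent and the server-side default applies
    assert!(build_args(None).get("remove-vanished").is_none());
    assert_eq!(build_args(Some(false))["remove-vanished"], Value::from(false));
}
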
05d755b282 fix inserting of worker tasks
when starting a new task, we do two things to keep track of tasks
(in that order):
* updating the 'active' file with a list of tasks with
  'update_active_workers'
* updating the WORKER_TASK_LIST

the second also updates the status of running tasks in the file, by
consulting the WORKER_TASK_LIST to check whether each task is still running

since those two steps are not locked, it can happen that
we update the file and, before updating the WORKER_TASK_LIST,
another thread calls update_active_workers and tries to
get the status from the task log, which won't have any data yet,
so the status ends up as 'unknown'

(we never update that status again, likely for performance reasons,
so we have to fix this here)

by switching the order of the two operations, we make sure that only
tasks which are already inserted in the WORKER_TASK_LIST reach the 'active' file

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-28 06:24:42 +02:00
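
A conceptual sketch of the reordering (hypothetical types, not the actual
WorkerTask code): register the task in the in-memory list before writing the
'active' file, so a concurrent reader that sees the file entry can always
resolve the task's status from the list.

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

fn insert_task(task_list: &Arc<Mutex<HashMap<String, &'static str>>>, upid: &str) {
    // 1) make the task resolvable in memory first
    task_list.lock().unwrap().insert(upid.to_string(), "running");
    // 2) only then persist the task to the 'active' file (stubbed out here)
    update_active_file(upid);
}

fn update_active_file(upid: &str) {
    println!("writing {} to the active file", upid);
}

fn main() {
    let task_list = Arc::new(Mutex::new(HashMap::new()));
    insert_task(&task_list, "UPID:example:0:0:task::root@pam:");
    assert!(task_list.lock().unwrap().contains_key("UPID:example:0:0:task::root@pam:"));
}
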
143b654550 src/tools.rs - command_output: add parameter to check exit code 2020-05-27 07:25:39 +02:00
97fab7aa11 src/tools.rs: new helper to handle command_output (std::process::Output) 2020-05-27 06:53:25 +02:00
ed216fd773 ui: acl view: only update if component is activated
Avoid triggering unnecessary background updates while browsing a
datastore's content or statistics panels. They're not expensive, but I
do not like such behavior at all (having traveled on trains with
spotty network too often)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 18:58:21 +02:00
0f13623443 ui: tasks: add sync description+
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 18:36:58 +02:00
dbd959d43f ui: tasks: render reader with full info
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 18:36:58 +02:00
f68ae22cc0 ui: factor out render_datetime_utc
will be reused in the next patch

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 18:36:48 +02:00
06c3dc8a8e ui: task: improve rendering of backup/prune worker entries
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 13:37:57 +02:00
a6fbbd03c8 depend on proxmox 0.1.37 2020-05-26 13:00:34 +02:00
26956d73a2 ui: datastore prune: remove debug logging
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 12:50:06 +02:00
3f98b34705 ui: rework datastore content panel controller
Mostly refactoring, but it actually fixes an issue where one could
occasionally run into an undefined dereference, due to the store onLoad
callback getting triggered after part of the component was destroyed -
when quickly switching through the datastores.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 12:46:48 +02:00
40dc103103 fix cli pull api call
there is no 'delete' parameter, only 'remove-vanished', so fix that

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 12:39:19 +02:00
12710fd3c3 ui: add missing monStoreErrors
to actually show api errors on the list call

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 12:38:57 +02:00
9e2a4653b4 ui: add crud for remotes
listing/adding/editing/removing

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 12:38:39 +02:00
de4db62c57 remotes: save passwords as base64
to avoid having arbitrary characters in the config (e.g. newlines);
note that this breaks existing configs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 12:38:06 +02:00
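
A minimal sketch of the storage scheme, using the base64 crate the way the
create_remote hunk below does (hypothetical password value): encode before the
password hits the section config, decode again when the remote is used.

fn main() {
    let password = "s3cret\nwith-newline";
    let stored = base64::encode(password.as_bytes()); // single-line, config-safe
    assert!(!stored.contains('\n'));

    let decoded = base64::decode(&stored).unwrap();
    assert_eq!(String::from_utf8(decoded).unwrap(), password);
}
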
1a0d3d11d2 src/api2/admin/datastore.rs: add rrd api 2020-05-26 12:26:14 +02:00
8c03041a2c src/bin/proxmox-backup-proxy.rs: gather block device stats on datastore 2020-05-26 11:20:59 +02:00
3fcc4b4e5c src/tools/disks.rs: add helper to read block device stats 2020-05-26 11:20:22 +02:00
3ed07ed2cd src/tools/disks.rs: export read_sys 2020-05-26 09:49:13 +02:00
75410d65ef d/control: proxmox-backup-server: depend on proxmox-backup-docs
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-05-26 09:37:03 +02:00
83fd4b3b1b remote: try to use Struct for api
with a catch: the password is in the struct, but we do not want to return
it via the api, so we only 'serialize' it when the string is not empty
(this can only happen when the format is not checked by us, i.e.
when it is returned from the api), and we set it manually to ""
when we return remotes from the api

this way we can still use the type but do not return the password

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:55:07 +02:00
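
One way to get the "only serialize when not empty" behaviour is serde's
skip_serializing_if attribute; a hypothetical sketch of the pattern (the real
Remote struct may be wired up differently):

use serde::Serialize;

#[derive(Serialize)]
struct Remote {
    name: String,
    host: String,
    #[serde(skip_serializing_if = "String::is_empty")]
    password: String,
}

fn main() {
    let mut remote = Remote {
        name: "my-remote".into(),
        host: "pbs.example.org".into(),
        password: "c2VjcmV0".into(),
    };
    remote.password = String::new(); // blanked before returning via the API
    let json = serde_json::to_string(&remote).unwrap();
    assert!(!json.contains("password")); // the password never leaves the server
}
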
bfa0146c00 ui: acls: include roleid into id and sort by it
this fixes missing acls on the gui

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:49:59 +02:00
5dcdcea293 api2/config/remote: remove password from read_remote
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:49:12 +02:00
99f443c6ae api2/config/remote: lock and use digest for removal
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:48:45 +02:00
4f966d0592 api2/config/remote: use rpcenv for digest for read_remote
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:48:28 +02:00
db0c228719 config/remote: add 'name' to Remote struct
and use it as section id, like with User

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-05-26 08:48:05 +02:00
880fa939d1 gui: move system stat RRDs to ServerStatus panel. 2020-05-26 07:33:00 +02:00
73 changed files with 2316 additions and 622 deletions

View File

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
version = "0.2.0"
version = "0.2.3"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"
@ -36,7 +36,7 @@ pam = "0.7"
pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
proxmox = { version = "0.1.36", features = [ "sortable-macro", "api-macro" ] }
proxmox = { version = "0.1.38", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { git = "ssh://gitolite3@proxdev.maurer-it.com/rust/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro" ] }
regex = "1.2"

45
debian/changelog vendored
View File

@ -1,3 +1,48 @@
rust-proxmox-backup (0.2.3-1) unstable; urgency=medium

  * tools/systemd/time: fix compute_next_event for weekdays

  * improve display of 'next run' for sync jobs

  * fix csum calculation for images which do not have a 'chunk_size' aligned
    size

  * add parser for zpool list output

 -- Proxmox Support Team <support@proxmox.com>  Thu, 04 Jun 2020 10:39:06 +0200

rust-proxmox-backup (0.2.2-1) unstable; urgency=medium

  * proxmox-backup-client.rs: implement quiet flag

  * client restore: don't add server file ending if already specified

  * src/client/pull.rs: also download client.log.blob

  * src/client/pull.rs: more verbose logging

  * gui improvements

 -- Proxmox Support Team <support@proxmox.com>  Wed, 03 Jun 2020 10:37:12 +0200

rust-proxmox-backup (0.2.1-1) unstable; urgency=medium

  * ui: move server RRD statistics to 'Server Status' panel

  * ui/api: add more server statistics

  * ui/api: add per-datastore usage and performance statistics over time

  * ui: add initial remote config management panel

  * remotes: save passwords as base64

  * gather zpool io stats

  * various fixes/improvements

 -- Proxmox Support Team <support@proxmox.com>  Thu, 28 May 2020 17:39:33 +0200

rust-proxmox-backup (0.2.0-1) unstable; urgency=medium

  * see git changelog (too many changes)

1
debian/control.in vendored
View File

@ -3,6 +3,7 @@ Architecture: any
Depends: fonts-font-awesome,
libjs-extjs (>= 6.0.1),
libzstd1 (>= 1.3.8),
proxmox-backup-docs,
proxmox-mini-journalreader,
proxmox-widget-toolkit (>= 2.2-4),
${misc:Depends},

View File

@ -2,9 +2,11 @@ use proxmox::api::router::{Router, SubdirMap};
use proxmox::list_subdirs_api_method;
pub mod datastore;
pub mod sync;
const SUBDIRS: SubdirMap = &[
("datastore", &datastore::ROUTER)
("datastore", &datastore::ROUTER),
("sync", &sync::ROUTER)
];
pub const ROUTER: Router = Router::new()

View File

@ -44,7 +44,7 @@ fn read_backup_index(store: &DataStore, backup_dir: &BackupDir) -> Result<Vec<Ba
let mut path = store.base_path();
path.push(backup_dir.relative_path());
path.push("index.json.blob");
path.push(MANIFEST_BLOB_NAME);
let raw_data = file_get_contents(&path)?;
let index_size = raw_data.len() as u64;
@ -61,7 +61,7 @@ fn read_backup_index(store: &DataStore, backup_dir: &BackupDir) -> Result<Vec<Ba
}
result.push(BackupContent {
filename: "index.json.blob".to_string(),
filename: MANIFEST_BLOB_NAME.to_string(),
size: Some(index_size),
});
@ -130,8 +130,8 @@ fn list_groups(
let group = info.backup_dir.group();
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;
if !list_all {
let owner = datastore.get_owner(group)?;
if owner != username { continue; }
}
@ -141,6 +141,7 @@ fn list_groups(
last_backup: info.backup_dir.backup_time().timestamp(),
backup_count: list.len() as u64,
files: info.files.clone(),
owner: Some(owner),
};
groups.push(result_item);
}
@ -329,8 +330,9 @@ pub fn list_snapshots (
}
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;
if !list_all {
let owner = datastore.get_owner(group)?;
if owner != username { continue; }
}
@ -340,6 +342,7 @@ pub fn list_snapshots (
backup_time: info.backup_dir.backup_time().timestamp(),
files: info.files,
size: None,
owner: Some(owner),
};
if let Ok(index) = read_backup_index(&datastore, &info.backup_dir) {
@ -802,7 +805,7 @@ fn upload_backup_log(
let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(store)?;
let file_name = "client.log.blob";
let file_name = CLIENT_LOG_BLOB_NAME;
let backup_type = tools::required_string_param(&param, "backup-type")?;
let backup_id = tools::required_string_param(&param, "backup-id")?;
@ -843,6 +846,47 @@ fn upload_backup_log(
}.boxed()
}
#[api(
input: {
properties: {
store: {
schema: DATASTORE_SCHEMA,
},
timeframe: {
type: RRDTimeFrameResolution,
},
cf: {
type: RRDMode,
},
},
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP, true),
},
)]
/// Read datastore stats
fn get_rrd_stats(
store: String,
timeframe: RRDTimeFrameResolution,
cf: RRDMode,
_param: Value,
) -> Result<Value, Error> {
let rrd_dir = format!("datastore/{}", store);
crate::rrd::extract_data(
&rrd_dir,
&[
"total", "used",
"read_ios", "read_bytes",
"write_ios", "write_bytes",
"io_ticks",
],
timeframe,
cf,
)
}
#[sortable]
const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
(
@ -871,6 +915,11 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[
&Router::new()
.post(&API_METHOD_PRUNE)
),
(
"rrd",
&Router::new()
.get(&API_METHOD_GET_RRD_STATS)
),
(
"snapshots",
&Router::new()

130
src/api2/admin/sync.rs Normal file
View File

@ -0,0 +1,130 @@
use anyhow::{Error};
use serde_json::Value;
use std::collections::HashMap;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
use proxmox::api::router::SubdirMap;
use proxmox::{list_subdirs_api_method, sortable};
use crate::api2::types::*;
use crate::api2::pull::{get_pull_parameters};
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
use crate::server::{self, TaskListInfo, WorkerTask};
use crate::tools::systemd::time::{
parse_calendar_event, compute_next_event};
#[api(
input: {
properties: {},
},
returns: {
description: "List configured jobs and their status.",
type: Array,
items: { type: sync::SyncJobStatus },
},
)]
/// List all sync jobs
pub fn list_sync_jobs(
_param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SyncJobStatus>, Error> {
let (config, digest) = sync::config()?;
let mut list: Vec<SyncJobStatus> = config.convert_to_typed_array("sync")?;
let mut last_tasks: HashMap<String, &TaskListInfo> = HashMap::new();
let tasks = server::read_task_list()?;
for info in tasks.iter() {
let worker_id = match &info.upid.worker_id {
Some(id) => id,
_ => { continue; },
};
if let Some(last) = last_tasks.get(worker_id) {
if last.upid.starttime < info.upid.starttime {
last_tasks.insert(worker_id.to_string(), &info);
}
} else {
last_tasks.insert(worker_id.to_string(), &info);
}
}
for job in &mut list {
let mut last = 0;
if let Some(task) = last_tasks.get(&job.id) {
job.last_run_upid = Some(task.upid_str.clone());
if let Some((endtime, status)) = &task.state {
job.last_run_state = Some(String::from(status));
job.last_run_endtime = Some(*endtime);
last = *endtime;
}
}
job.next_run = (|| -> Option<i64> {
let schedule = job.schedule.as_ref()?;
let event = parse_calendar_event(&schedule).ok()?;
compute_next_event(&event, last, false).ok()
})();
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
input: {
properties: {
id: {
schema: JOB_ID_SCHEMA,
}
}
}
)]
/// Runs the sync jobs manually.
async fn run_sync_job(
id: String,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let (config, _digest) = sync::config()?;
let sync_job: SyncJobConfig = config.lookup("sync", &id)?;
let username = rpcenv.get_user().unwrap();
let delete = sync_job.remove_vanished.unwrap_or(true);
let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
let upid_str = WorkerTask::spawn("syncjob", Some(id.clone()), &username.clone(), false, move |worker| async move {
worker.log(format!("sync job '{}' start", &id));
crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, String::from("backup@pam")).await?;
worker.log(format!("sync job '{}' end", &id));
Ok(())
})?;
Ok(upid_str)
}
#[sortable]
const SYNC_INFO_SUBDIRS: SubdirMap = &[
(
"run",
&Router::new()
.post(&API_METHOD_RUN_SYNC_JOB)
),
];
const SYNC_INFO_ROUTER: Router = Router::new()
.get(&list_subdirs_api_method!(SYNC_INFO_SUBDIRS))
.subdirs(SYNC_INFO_SUBDIRS);
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_SYNC_JOBS)
.match_all("id", &SYNC_INFO_ROUTER);

View File

@ -107,7 +107,7 @@ async move {
}
let (path, is_new) = datastore.create_backup_dir(&backup_dir)?;
if !is_new { bail!("backup directorty already exists."); }
if !is_new { bail!("backup directory already exists."); }
WorkerTask::spawn("backup", Some(worker_id), &username.clone(), true, move |worker| {
let mut env = BackupEnvironment::new(
@ -151,7 +151,7 @@ async move {
match (res, env.ensure_finished()) {
(Ok(_), Ok(())) => {
env.log("backup finished sucessfully");
env.log("backup finished successfully");
Ok(())
},
(Err(err), Ok(())) => {
@ -378,7 +378,7 @@ fn dynamic_append (
env.dynamic_writer_append_chunk(wid, offset, size, &digest)?;
env.debug(format!("sucessfully added chunk {} to dynamic index {} (offset {}, size {})", digest_str, wid, offset, size));
env.debug(format!("successfully added chunk {} to dynamic index {} (offset {}, size {})", digest_str, wid, offset, size));
}
Ok(Value::Null)
@ -443,7 +443,7 @@ fn fixed_append (
env.fixed_writer_append_chunk(wid, offset, size, &digest)?;
env.debug(format!("sucessfully added chunk {} to fixed index {} (offset {}, size {})", digest_str, wid, offset, size));
env.debug(format!("successfully added chunk {} to fixed index {} (offset {}, size {})", digest_str, wid, offset, size));
}
Ok(Value::Null)
@ -498,7 +498,7 @@ fn close_dynamic_index (
env.dynamic_writer_close(wid, chunk_count, size, csum)?;
env.log(format!("sucessfully closed dynamic index {}", wid));
env.log(format!("successfully closed dynamic index {}", wid));
Ok(Value::Null)
}
@ -552,7 +552,7 @@ fn close_fixed_index (
env.fixed_writer_close(wid, chunk_count, size, csum)?;
env.log(format!("sucessfully closed fixed index {}", wid));
env.log(format!("successfully closed fixed index {}", wid));
Ok(Value::Null)
}
@ -566,7 +566,7 @@ fn finish_backup (
let env: &BackupEnvironment = rpcenv.as_ref();
env.finish_backup()?;
env.log("sucessfully finished backup");
env.log("successfully finished backup");
Ok(Value::Null)
}

View File

@ -52,7 +52,7 @@ struct FixedWriterState {
struct SharedBackupState {
finished: bool,
uid_counter: usize,
file_counter: usize, // sucessfully uploaded files
file_counter: usize, // successfully uploaded files
dynamic_writers: HashMap<usize, DynamicWriterState>,
fixed_writers: HashMap<usize, FixedWriterState>,
known_chunks: HashMap<[u8;32], u32>,

View File

@ -1,6 +1,7 @@
use anyhow::{bail, Error};
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
use base64;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
@ -16,27 +17,8 @@ use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
description: "The list of configured remotes (with config digest).",
type: Array,
items: {
type: Object,
type: remote::Remote,
description: "Remote configuration (without password).",
properties: {
name: {
schema: REMOTE_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
host: {
schema: DNS_NAME_OR_IP_SCHEMA,
},
userid: {
schema: PROXMOX_USER_ID_SCHEMA,
},
fingerprint: {
optional: true,
schema: CERT_FINGERPRINT_SHA256_SCHEMA,
},
},
},
},
access: {
@ -47,14 +29,20 @@ use crate::config::acl::{PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY};
pub fn list_remotes(
_param: Value,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<remote::Remote>, Error> {
let (config, digest) = remote::config()?;
let value = config.convert_to_array("name", Some(&digest), &["password"]);
let mut list: Vec<remote::Remote> = config.convert_to_typed_array("remote")?;
Ok(value.into())
// don't return password in api
for remote in &mut list {
remote.password = "".to_string();
}
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(list)
}
#[api(
@ -88,19 +76,21 @@ pub fn list_remotes(
},
)]
/// Create new remote.
pub fn create_remote(name: String, param: Value) -> Result<(), Error> {
pub fn create_remote(password: String, param: Value) -> Result<(), Error> {
let _lock = crate::tools::open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let remote: remote::Remote = serde_json::from_value(param.clone())?;
let mut data = param.clone();
data["password"] = Value::from(base64::encode(password.as_bytes()));
let remote: remote::Remote = serde_json::from_value(data)?;
let (mut config, _digest) = remote::config()?;
if let Some(_) = config.sections.get(&name) {
bail!("remote '{}' already exists.", name);
if let Some(_) = config.sections.get(&remote.name) {
bail!("remote '{}' already exists.", remote.name);
}
config.set_data(&name, "remote", &remote)?;
config.set_data(&remote.name, "remote", &remote)?;
remote::save_config(&config)?;
@ -124,11 +114,15 @@ pub fn create_remote(name: String, param: Value) -> Result<(), Error> {
}
)]
/// Read remote configuration data.
pub fn read_remote(name: String) -> Result<Value, Error> {
pub fn read_remote(
name: String,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<remote::Remote, Error> {
let (config, digest) = remote::config()?;
let mut data = config.lookup_json("remote", &name)?;
data.as_object_mut().unwrap()
.insert("digest".into(), proxmox::tools::digest_to_hex(&digest).into());
let mut data: remote::Remote = config.lookup("remote", &name)?;
data.password = "".to_string(); // do not return password in api
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
Ok(data)
}
@ -248,6 +242,10 @@ pub fn update_remote(
name: {
schema: REMOTE_ID_SCHEMA,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
@ -255,18 +253,24 @@ pub fn update_remote(
},
)]
/// Remove a remote from the configuration file.
pub fn delete_remote(name: String) -> Result<(), Error> {
pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error> {
// fixme: locking ?
// fixme: check digest ?
let _lock = crate::tools::open_file_locked(remote::REMOTE_CFG_LOCKFILE, std::time::Duration::new(10, 0))?;
let (mut config, _digest) = remote::config()?;
let (mut config, expected_digest) = remote::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
match config.sections.get(&name) {
Some(_) => { config.sections.remove(&name); },
None => bail!("remote '{}' does not exist.", name),
}
remote::save_config(&config)?;
Ok(())
}

View File

@ -60,7 +60,7 @@ pub fn list_sync_jobs(
},
schedule: {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
schema: SYNC_SCHEDULE_SCHEMA,
},
},
},
@ -154,7 +154,7 @@ pub enum DeletableProperty {
},
schedule: {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
schema: SYNC_SCHEDULE_SCHEMA,
},
delete: {
description: "List of properties to delete.",
@ -274,4 +274,4 @@ const ITEM_ROUTER: Router = Router::new()
pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_SYNC_JOBS)
.post(&API_METHOD_CREATE_SYNC_JOB)
.match_all("name", &ITEM_ROUTER);
.match_all("id", &ITEM_ROUTER);

View File

@ -338,7 +338,7 @@ pub enum DeletableProperty {
autostart,
/// Delete bridge ports (set to 'none')
bridge_ports,
/// Delet bridge-vlan-aware flag
/// Delete bridge-vlan-aware flag
bridge_vlan_aware,
/// Delete bond-slaves (set to 'none')
slaves,

View File

@ -34,9 +34,12 @@ fn get_node_stats(
"memtotal", "memused",
"swaptotal", "swapused",
"netin", "netout",
"roottotal", "rootused",
"loadavg",
],
"total", "used",
"read_ios", "read_bytes",
"write_ios", "write_bytes",
"io_ticks",
],
timeframe,
cf,
)

View File

@ -256,7 +256,7 @@ fn stop_service(
_param: Value,
) -> Result<Value, Error> {
log::info!("stoping service {}", service);
log::info!("stopping service {}", service);
run_service_command(&service, "stop")
}

View File

@ -1,4 +1,5 @@
//! Sync datastore from remote server
use std::sync::{Arc};
use anyhow::{format_err, Error};
@ -15,6 +16,52 @@ use crate::config::{
cached_user_info::CachedUserInfo,
};
pub fn check_pull_privs(
username: &str,
store: &str,
remote: &str,
remote_store: &str,
delete: bool,
) -> Result<(), Error> {
let user_info = CachedUserInfo::new()?;
user_info.check_privs(username, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(username, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?;
if delete {
user_info.check_privs(username, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?;
}
Ok(())
}
pub async fn get_pull_parameters(
store: &str,
remote: &str,
remote_store: &str,
) -> Result<(HttpClient, BackupRepository, Arc<DataStore>), Error> {
let tgt_store = DataStore::lookup_datastore(store)?;
let (remote_config, _digest) = remote::config()?;
let remote: remote::Remote = remote_config.lookup("remote", remote)?;
let options = HttpClientOptions::new()
.password(Some(remote.password.clone()))
.fingerprint(remote.fingerprint.clone());
let client = HttpClient::new(&remote.host, &remote.userid, options)?;
let _auth_info = client.login() // make sure we can auth
.await
.map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), remote_store.to_string());
Ok((client, src_repo, tgt_store))
}
#[api(
input: {
properties: {
@ -52,33 +99,12 @@ async fn pull (
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
let user_info = CachedUserInfo::new()?;
let username = rpcenv.get_user().unwrap();
user_info.check_privs(&username, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;
user_info.check_privs(&username, &["remote", &remote, &remote_store], PRIV_REMOTE_READ, false)?;
let delete = remove_vanished.unwrap_or(true);
if delete {
user_info.check_privs(&username, &["datastore", &store], PRIV_DATASTORE_PRUNE, false)?;
}
check_pull_privs(&username, &store, &remote, &remote_store, delete)?;
let tgt_store = DataStore::lookup_datastore(&store)?;
let (remote_config, _digest) = remote::config()?;
let remote: remote::Remote = remote_config.lookup("remote", &remote)?;
let options = HttpClientOptions::new()
.password(Some(remote.password.clone()))
.fingerprint(remote.fingerprint.clone());
let client = HttpClient::new(&remote.host, &remote.userid, options)?;
let _auth_info = client.login() // make sure we can auth
.await
.map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), remote_store);
let (client, src_repo, tgt_store) = get_pull_parameters(&store, &remote, &remote_store).await?;
// fixme: set to_stdout to false?
let upid_str = WorkerTask::spawn("sync", Some(store.clone()), &username.clone(), true, move |worker| async move {

View File

@ -131,7 +131,7 @@ fn upgrade_to_backup_reader_protocol(
Either::Right((Ok(res), _)) => Ok(res),
Either::Right((Err(err), _)) => Err(err),
})
.map_ok(move |_| env.log("reader finished sucessfully"))
.map_ok(move |_| env.log("reader finished successfully"))
})?;
let response = Response::builder()

View File

@ -27,6 +27,8 @@ macro_rules! DNS_NAME { () => (concat!(r"(?:", DNS_LABEL!() , r"\.)*", DNS_LABEL
macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
macro_rules! GROUP_NAME_REGEX_STR { () => (USER_NAME_REGEX_STR!()) }
macro_rules! USER_ID_REGEX_STR { () => (concat!(USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!())) }
#[macro_export]
macro_rules! PROXMOX_SAFE_ID_REGEX_STR { () => (r"(?:[A-Za-z0-9_][A-Za-z0-9._\-]*)") }
@ -63,7 +65,9 @@ const_regex!{
pub DNS_NAME_OR_IP_REGEX = concat!(r"^", DNS_NAME!(), "|", IPRE!(), r"$");
pub PROXMOX_USER_ID_REGEX = concat!(r"^", USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!(), r"$");
pub PROXMOX_USER_ID_REGEX = concat!(r"^", USER_ID_REGEX_STR!(), r"$");
pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE!() ,"):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
pub PROXMOX_GROUP_ID_REGEX = concat!(r"^", GROUP_NAME_REGEX_STR!(), r"$");
@ -287,6 +291,11 @@ pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.max_length(32)
.schema();
pub const SYNC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run sync job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
.schema();
pub const GC_SCHEDULE_SCHEMA: Schema = StringSchema::new(
"Run garbage collection job at specified schedule.")
.format(&ApiStringFormat::VerifyFn(crate::tools::systemd::time::verify_calendar_event))
@ -379,6 +388,9 @@ pub struct GroupListItem {
pub backup_count: u64,
/// List of contained archive files.
pub files: Vec<String>,
/// The owner of group
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<String>,
}
#[api(
@ -411,6 +423,9 @@ pub struct SnapshotListItem {
/// Overall snapshot size (sum of all archive sizes).
#[serde(skip_serializing_if="Option::is_none")]
pub size: Option<u64>,
/// The owner of the snapshots group
#[serde(skip_serializing_if="Option::is_none")]
pub owner: Option<String>,
}
#[api(
@ -807,7 +822,7 @@ fn test_cert_fingerprint_schema() -> Result<(), anyhow::Error> {
for fingerprint in invalid_fingerprints.iter() {
if let Ok(_) = parse_simple_value(fingerprint, &schema) {
bail!("test fingerprint '{}' failed - got Ok() while expection an error.", fingerprint);
bail!("test fingerprint '{}' failed - got Ok() while exception an error.", fingerprint);
}
}
@ -851,7 +866,7 @@ fn test_proxmox_user_id_schema() -> Result<(), anyhow::Error> {
for name in invalid_user_ids.iter() {
if let Ok(_) = parse_simple_value(name, &schema) {
bail!("test userid '{}' failed - got Ok() while expection an error.", name);
bail!("test userid '{}' failed - got Ok() while exception an error.", name);
}
}

View File

@ -311,7 +311,7 @@ impl DataBlob {
/// Verify digest and data length for unencrypted chunks.
///
/// To do that, we need to decompress data first. Please note that
/// this is noth possible for encrypted chunks.
/// this is not possible for encrypted chunks.
pub fn verify_unencrypted(
&self,
expected_chunk_size: usize,

View File

@ -11,7 +11,7 @@ use super::backup_info::{BackupGroup, BackupDir};
use super::chunk_store::ChunkStore;
use super::dynamic_index::{DynamicIndexReader, DynamicIndexWriter};
use super::fixed_index::{FixedIndexReader, FixedIndexWriter};
use super::manifest::{MANIFEST_BLOB_NAME, BackupManifest};
use super::manifest::{MANIFEST_BLOB_NAME, CLIENT_LOG_BLOB_NAME, BackupManifest};
use super::index::*;
use super::{DataBlob, ArchiveType, archive_type};
use crate::config::datastore;
@ -149,6 +149,7 @@ impl DataStore {
let mut wanted_files = HashSet::new();
wanted_files.insert(MANIFEST_BLOB_NAME.to_string());
wanted_files.insert(CLIENT_LOG_BLOB_NAME.to_string());
manifest.files().iter().for_each(|item| { wanted_files.insert(item.filename.clone()); });
for item in tools::fs::read_subdir(libc::AT_FDCWD, &full_path)? {

View File

@ -198,7 +198,7 @@ impl FixedIndexReader {
let mut csum = openssl::sha::Sha256::new();
let mut chunk_end = 0;
for pos in 0..self.index_length {
chunk_end = ((pos + 1) * self.chunk_size) as u64;
chunk_end = self.chunk_end(pos);
let digest = self.chunk_digest(pos);
csum.update(digest);
}

View File

@ -7,6 +7,7 @@ use serde_json::{json, Value};
use crate::backup::BackupDir;
pub const MANIFEST_BLOB_NAME: &str = "index.json.blob";
pub const CLIENT_LOG_BLOB_NAME: &str = "client.log.blob";
pub struct FileInfo {
pub filename: String,
@ -72,7 +73,7 @@ impl BackupManifest {
let info = self.lookup_file_info(name)?;
if size != info.size {
bail!("wrong size for file '{}' ({} != {}", name, info.size, size);
bail!("wrong size for file '{}' ({} != {})", name, info.size, size);
}
if csum != &info.csum {

View File

@ -49,7 +49,7 @@ fn hello_command(
}
#[api(input: { properties: {} })]
/// Quit command. Exit the programm.
/// Quit command. Exit the program.
///
/// Returns: nothing
fn quit_command() -> Result<(), Error> {

View File

@ -16,7 +16,7 @@ use std::io::Write;
// tar: dyntest1/testfile7.dat: File shrank by 2833252864 bytes; padding with zeros
// # pxar create test.pxar ./dyntest1/
// Error: detected shrinked file "./dyntest1/testfile0.dat" (22020096 < 12679380992)
// Error: detected shrunk file "./dyntest1/testfile0.dat" (22020096 < 12679380992)
fn create_large_file(path: PathBuf) {

View File

@ -22,11 +22,6 @@ use proxmox_backup::client::*;
use proxmox_backup::backup::*;
use proxmox_backup::pxar::{ self, catalog::* };
//use proxmox_backup::backup::image_index::*;
//use proxmox_backup::config::datastore;
//use proxmox_backup::pxar::encoder::*;
//use proxmox_backup::backup::datastore::*;
use serde_json::{json, Value};
//use hyper::Body;
use std::sync::{Arc, Mutex};
@ -39,20 +34,12 @@ use tokio::sync::mpsc;
const ENV_VAR_PBS_FINGERPRINT: &str = "PBS_FINGERPRINT";
const ENV_VAR_PBS_PASSWORD: &str = "PBS_PASSWORD";
proxmox::const_regex! {
BACKUPSPEC_REGEX = r"^([a-zA-Z0-9_-]+\.(?:pxar|img|conf|log)):(.+)$";
}
const REPO_URL_SCHEMA: Schema = StringSchema::new("Repository URL.")
.format(&BACKUP_REPO_URL)
.max_length(256)
.schema();
const BACKUP_SOURCE_SCHEMA: Schema = StringSchema::new(
"Backup source specification ([<label>:<path>]).")
.format(&ApiStringFormat::Pattern(&BACKUPSPEC_REGEX))
.schema();
const KEYFILE_SCHEMA: Schema = StringSchema::new(
"Path to encryption key. All data will be encrypted using this key.")
.schema();
@ -688,14 +675,6 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
Ok(Value::Null)
}
fn parse_backupspec(value: &str) -> Result<(&str, &str), Error> {
if let Some(caps) = (BACKUPSPEC_REGEX.regex_obj)().captures(value) {
return Ok((caps.get(1).unwrap().as_str(), caps.get(2).unwrap().as_str()));
}
bail!("unable to parse directory specification '{}'", value);
}
fn spawn_catalog_upload(
client: Arc<BackupWriter>,
crypt_config: Option<Arc<CryptConfig>>,
@ -865,12 +844,12 @@ async fn create_backup(
let mut upload_list = vec![];
enum BackupType { PXAR, IMAGE, CONFIG, LOGFILE };
let mut upload_catalog = false;
for backupspec in backupspec_list {
let (target, filename) = parse_backupspec(backupspec.as_str().unwrap())?;
let spec = parse_backup_specification(backupspec.as_str().unwrap())?;
let filename = &spec.config_string;
let target = &spec.archive_name;
use std::os::unix::fs::FileTypeExt;
@ -878,19 +857,15 @@ async fn create_backup(
.map_err(|err| format_err!("unable to access '{}' - {}", filename, err))?;
let file_type = metadata.file_type();
let extension = target.rsplit('.').next()
.ok_or_else(|| format_err!("missing target file extenion '{}'", target))?;
match extension {
"pxar" => {
match spec.spec_type {
BackupSpecificationType::PXAR => {
if !file_type.is_dir() {
bail!("got unexpected file type (expected directory)");
}
upload_list.push((BackupType::PXAR, filename.to_owned(), format!("{}.didx", target), 0));
upload_list.push((BackupSpecificationType::PXAR, filename.to_owned(), format!("{}.didx", target), 0));
upload_catalog = true;
}
"img" => {
BackupSpecificationType::IMAGE => {
if !(file_type.is_file() || file_type.is_block_device()) {
bail!("got unexpected file type (expected file or block device)");
}
@ -899,22 +874,19 @@ async fn create_backup(
if size == 0 { bail!("got zero-sized file '{}'", filename); }
upload_list.push((BackupType::IMAGE, filename.to_owned(), format!("{}.fidx", target), size));
upload_list.push((BackupSpecificationType::IMAGE, filename.to_owned(), format!("{}.fidx", target), size));
}
"conf" => {
BackupSpecificationType::CONFIG => {
if !file_type.is_file() {
bail!("got unexpected file type (expected regular file)");
}
upload_list.push((BackupType::CONFIG, filename.to_owned(), format!("{}.blob", target), metadata.len()));
upload_list.push((BackupSpecificationType::CONFIG, filename.to_owned(), format!("{}.blob", target), metadata.len()));
}
"log" => {
BackupSpecificationType::LOGFILE => {
if !file_type.is_file() {
bail!("got unexpected file type (expected regular file)");
}
upload_list.push((BackupType::LOGFILE, filename.to_owned(), format!("{}.blob", target), metadata.len()));
}
_ => {
bail!("got unknown archive extension '{}'", extension);
upload_list.push((BackupSpecificationType::LOGFILE, filename.to_owned(), format!("{}.blob", target), metadata.len()));
}
}
}
@ -967,21 +939,21 @@ async fn create_backup(
for (backup_type, filename, target, size) in upload_list {
match backup_type {
BackupType::CONFIG => {
BackupSpecificationType::CONFIG => {
println!("Upload config file '{}' to '{:?}' as {}", filename, repo, target);
let stats = client
.upload_blob_from_file(&filename, &target, crypt_config.clone(), true)
.await?;
manifest.add_file(target, stats.size, stats.csum)?;
}
BackupType::LOGFILE => { // fixme: remove - not needed anymore ?
BackupSpecificationType::LOGFILE => { // fixme: remove - not needed anymore ?
println!("Upload log file '{}' to '{:?}' as {}", filename, repo, target);
let stats = client
.upload_blob_from_file(&filename, &target, crypt_config.clone(), true)
.await?;
manifest.add_file(target, stats.size, stats.csum)?;
}
BackupType::PXAR => {
BackupSpecificationType::PXAR => {
println!("Upload directory '{}' to '{:?}' as {}", filename, repo, target);
catalog.lock().unwrap().start_directory(std::ffi::CString::new(target.as_str())?.as_c_str())?;
let stats = backup_directory(
@ -1000,7 +972,7 @@ async fn create_backup(
manifest.add_file(target, stats.size, stats.csum)?;
catalog.lock().unwrap().end_directory()?;
}
BackupType::IMAGE => {
BackupSpecificationType::IMAGE => {
println!("Upload image '{}' to '{:?}' as {}", filename, repo, target);
let stats = backup_image(
&client,
@ -1135,6 +1107,18 @@ fn dump_image<W: Write>(
Ok(())
}
fn parse_archive_type(name: &str) -> (String, ArchiveType) {
if name.ends_with(".didx") || name.ends_with(".fidx") || name.ends_with(".blob") {
(name.into(), archive_type(name).unwrap())
} else if name.ends_with(".pxar") {
(format!("{}.didx", name), ArchiveType::DynamicIndex)
} else if name.ends_with(".img") {
(format!("{}.fidx", name), ArchiveType::FixedIndex)
} else {
(format!("{}.blob", name), ArchiveType::Blob)
}
}
#[api(
input: {
properties: {
@ -1207,14 +1191,6 @@ async fn restore(param: Value) -> Result<Value, Error> {
}
};
let server_archive_name = if archive_name.ends_with(".pxar") {
format!("{}.didx", archive_name)
} else if archive_name.ends_with(".img") {
format!("{}.fidx", archive_name)
} else {
format!("{}.blob", archive_name)
};
let client = BackupReader::start(
client,
crypt_config.clone(),
@ -1227,7 +1203,9 @@ async fn restore(param: Value) -> Result<Value, Error> {
let manifest = client.download_manifest().await?;
if server_archive_name == MANIFEST_BLOB_NAME {
let (archive_name, archive_type) = parse_archive_type(archive_name);
if archive_name == MANIFEST_BLOB_NAME {
let backup_index_data = manifest.into_json().to_string();
if let Some(target) = target {
replace_file(target, backup_index_data.as_bytes(), CreateOptions::new())?;
@ -1238,9 +1216,9 @@ async fn restore(param: Value) -> Result<Value, Error> {
.map_err(|err| format_err!("unable to pipe data - {}", err))?;
}
} else if server_archive_name.ends_with(".blob") {
} else if archive_type == ArchiveType::Blob {
let mut reader = client.download_blob(&manifest, &server_archive_name).await?;
let mut reader = client.download_blob(&manifest, &archive_name).await?;
if let Some(target) = target {
let mut writer = std::fs::OpenOptions::new()
@ -1257,9 +1235,9 @@ async fn restore(param: Value) -> Result<Value, Error> {
.map_err(|err| format_err!("unable to pipe data - {}", err))?;
}
} else if server_archive_name.ends_with(".didx") {
} else if archive_type == ArchiveType::DynamicIndex {
let index = client.download_dynamic_index(&manifest, &server_archive_name).await?;
let index = client.download_dynamic_index(&manifest, &archive_name).await?;
let most_used = index.find_most_used_chunks(8);
@ -1289,9 +1267,9 @@ async fn restore(param: Value) -> Result<Value, Error> {
std::io::copy(&mut reader, &mut writer)
.map_err(|err| format_err!("unable to pipe data - {}", err))?;
}
} else if server_archive_name.ends_with(".fidx") {
} else if archive_type == ArchiveType::FixedIndex {
let index = client.download_fixed_index(&manifest, &server_archive_name).await?;
let index = client.download_fixed_index(&manifest, &archive_name).await?;
let mut writer = if let Some(target) = target {
std::fs::OpenOptions::new()
@ -1308,9 +1286,6 @@ async fn restore(param: Value) -> Result<Value, Error> {
};
dump_image(client.clone(), crypt_config.clone(), index, &mut writer, verbose)?;
} else {
bail!("unknown archive file extension (expected .pxar of .img)");
}
Ok(Value::Null)
@ -1390,6 +1365,12 @@ const API_METHOD_PRUNE: ApiMethod = ApiMethod::new(
("group", false, &StringSchema::new("Backup group.").schema()),
], [
("output-format", true, &OUTPUT_FORMAT),
(
"quiet",
true,
&BooleanSchema::new("Minimal output - only show removals.")
.schema()
),
("repository", true, &REPO_URL_SCHEMA),
])
)
@ -1417,9 +1398,12 @@ async fn prune_async(mut param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let quiet = param["quiet"].as_bool().unwrap_or(false);
param.as_object_mut().unwrap().remove("repository");
param.as_object_mut().unwrap().remove("group");
param.as_object_mut().unwrap().remove("output-format");
param.as_object_mut().unwrap().remove("quiet");
param["backup-type"] = group.backup_type().into();
param["backup-id"] = group.backup_id().into();
@ -1434,19 +1418,34 @@ async fn prune_async(mut param: Value) -> Result<Value, Error> {
Ok(snapshot.relative_path().to_str().unwrap().to_owned())
};
let render_prune_action = |v: &Value, _record: &Value| -> Result<String, Error> {
Ok(match v.as_bool() {
Some(true) => "keep",
Some(false) => "remove",
None => "unknown",
}.to_string())
};
let options = default_table_format_options()
.sortby("backup-type", false)
.sortby("backup-id", false)
.sortby("backup-time", false)
.column(ColumnConfig::new("backup-id").renderer(render_snapshot_path).header("snapshot"))
.column(ColumnConfig::new("backup-time").renderer(tools::format::render_epoch).header("date"))
.column(ColumnConfig::new("keep"))
.column(ColumnConfig::new("keep").renderer(render_prune_action).header("action"))
;
let info = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_PRUNE;
let mut data = result["data"].take();
if quiet {
let list: Vec<Value> = data.as_array().unwrap().iter().filter(|item| {
item["keep"].as_bool() == Some(false)
}).map(|v| v.clone()).collect();
data = list.into();
}
format_and_print_result_full(&mut data, info, &output_format, &options);
Ok(Value::Null)
@ -2028,7 +2027,7 @@ async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
if let Some(pipe) = pipe {
nix::unistd::chdir(Path::new("/")).unwrap();
// Finish creation of deamon by redirecting filedescriptors.
// Finish creation of daemon by redirecting filedescriptors.
let nullfd = nix::fcntl::open(
"/dev/null",
nix::fcntl::OFlag::O_RDWR,

View File

@ -260,11 +260,9 @@ fn task_mgmt_cli() -> CommandLineInterface {
"remote-store": {
schema: DATASTORE_SCHEMA,
},
delete: {
description: "Delete vanished backups. This remove the local copy if the remote backup was deleted.",
type: Boolean,
"remove-vanished": {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
default: true,
},
"output-format": {
schema: OUTPUT_FORMAT,
@ -278,7 +276,7 @@ async fn pull_datastore(
remote: String,
remote_store: String,
local_store: String,
delete: Option<bool>,
remove_vanished: Option<bool>,
param: Value,
) -> Result<Value, Error> {
@ -292,8 +290,8 @@ async fn pull_datastore(
"remote-store": remote_store,
});
if let Some(delete) = delete {
args["delete"] = delete.into();
if let Some(remove_vanished) = remove_vanished {
args["remove-vanished"] = Value::from(remove_vanished);
}
let result = client.post("api2/json/pull", Some(args)).await?;

View File

@ -1,4 +1,5 @@
use std::sync::Arc;
use std::path::Path;
use anyhow::{bail, format_err, Error};
use futures::*;
@ -14,6 +15,7 @@ use proxmox_backup::server;
use proxmox_backup::tools::daemon;
use proxmox_backup::server::{ApiConfig, rest::*};
use proxmox_backup::auth_helpers::*;
use proxmox_backup::tools::disks::{ DiskManage, zfs_pool_stats };
fn main() {
if let Err(err) = proxmox_backup::tools::runtime::main(run()) {
@ -381,12 +383,15 @@ async fn schedule_datastore_prune() {
}
};
//fixme: if last_prune_job_stzill_running { continue; }
let worker_type = "prune";
let last = match lookup_last_worker(worker_type, &store) {
Ok(Some(upid)) => upid.starttime,
Ok(Some(upid)) => {
if proxmox_backup::server::worker_is_active_local(&upid) {
continue;
}
upid.starttime
}
Ok(None) => 0,
Err(err) => {
eprintln!("lookup_last_job_start failed: {}", err);
@ -503,12 +508,15 @@ async fn schedule_datastore_sync_jobs() {
}
};
//fixme: if last_sync_job_still_running { continue; }
let worker_type = "syncjob";
let worker_type = "sync";
let last = match lookup_last_worker(worker_type, &job_config.store) {
Ok(Some(upid)) => upid.starttime,
let last = match lookup_last_worker(worker_type, &job_id) {
Ok(Some(upid)) => {
if proxmox_backup::server::worker_is_active_local(&upid) {
continue;
}
upid.starttime
},
Ok(None) => 0,
Err(err) => {
eprintln!("lookup_last_job_start failed: {}", err);
@ -558,7 +566,7 @@ async fn schedule_datastore_sync_jobs() {
if let Err(err) = WorkerTask::spawn(
worker_type,
Some(job_config.store.clone()),
Some(job_id.clone()),
&username.clone(),
false,
move |worker| async move {
@ -590,41 +598,47 @@ async fn schedule_datastore_sync_jobs() {
async fn run_stat_generator() {
let mut count = 0;
loop {
count += 1;
let save = if count >= 6 { count = 0; true } else { false };
let delay_target = Instant::now() + Duration::from_secs(10);
generate_host_stats().await;
generate_host_stats(save).await;
tokio::time::delay_until(tokio::time::Instant::from_std(delay_target)).await;
}
}
}
fn rrd_update_gauge(name: &str, value: f64) {
fn rrd_update_gauge(name: &str, value: f64, save: bool) {
use proxmox_backup::rrd;
if let Err(err) = rrd::update_value(name, value, rrd::DST::Gauge) {
if let Err(err) = rrd::update_value(name, value, rrd::DST::Gauge, save) {
eprintln!("rrd::update_value '{}' failed - {}", name, err);
}
}
fn rrd_update_derive(name: &str, value: f64) {
fn rrd_update_derive(name: &str, value: f64, save: bool) {
use proxmox_backup::rrd;
if let Err(err) = rrd::update_value(name, value, rrd::DST::Derive) {
if let Err(err) = rrd::update_value(name, value, rrd::DST::Derive, save) {
eprintln!("rrd::update_value '{}' failed - {}", name, err);
}
}
async fn generate_host_stats() {
async fn generate_host_stats(save: bool) {
use proxmox::sys::linux::procfs::{
read_meminfo, read_proc_stat, read_proc_net_dev, read_loadavg};
use proxmox_backup::config::datastore;
proxmox_backup::tools::runtime::block_in_place(move || {
match read_proc_stat() {
Ok(stat) => {
rrd_update_gauge("host/cpu", stat.cpu);
rrd_update_gauge("host/iowait", stat.iowait_percent);
rrd_update_gauge("host/cpu", stat.cpu, save);
rrd_update_gauge("host/iowait", stat.iowait_percent, save);
}
Err(err) => {
eprintln!("read_proc_stat failed - {}", err);
@ -633,10 +647,10 @@ async fn generate_host_stats() {
match read_meminfo() {
Ok(meminfo) => {
rrd_update_gauge("host/memtotal", meminfo.memtotal as f64);
rrd_update_gauge("host/memused", meminfo.memused as f64);
rrd_update_gauge("host/swaptotal", meminfo.swaptotal as f64);
rrd_update_gauge("host/swapused", meminfo.swapused as f64);
rrd_update_gauge("host/memtotal", meminfo.memtotal as f64, save);
rrd_update_gauge("host/memused", meminfo.memused as f64, save);
rrd_update_gauge("host/swaptotal", meminfo.swaptotal as f64, save);
rrd_update_gauge("host/swapused", meminfo.swapused as f64, save);
}
Err(err) => {
eprintln!("read_meminfo failed - {}", err);
@ -653,8 +667,8 @@ async fn generate_host_stats() {
netin += item.receive;
netout += item.send;
}
rrd_update_derive("host/netin", netin as f64);
rrd_update_derive("host/netout", netout as f64);
rrd_update_derive("host/netin", netin as f64, save);
rrd_update_derive("host/netout", netout as f64, save);
}
Err(err) => {
eprintln!("read_prox_net_dev failed - {}", err);
@ -663,22 +677,16 @@ async fn generate_host_stats() {
match read_loadavg() {
Ok(loadavg) => {
rrd_update_gauge("host/loadavg", loadavg.0 as f64);
rrd_update_gauge("host/loadavg", loadavg.0 as f64, save);
}
Err(err) => {
eprintln!("read_loadavg failed - {}", err);
}
}
match disk_usage(std::path::Path::new("/")) {
Ok((total, used, _avail)) => {
rrd_update_gauge("host/roottotal", total as f64);
rrd_update_gauge("host/rootused", used as f64);
}
Err(err) => {
eprintln!("read root disk_usage failed - {}", err);
}
}
let disk_manager = DiskManage::new();
gather_disk_stats(disk_manager.clone(), Path::new("/"), "host", save);
match datastore::config() {
Ok((config, _)) => {
@ -686,16 +694,10 @@ async fn generate_host_stats() {
config.convert_to_typed_array("datastore").unwrap_or(Vec::new());
for config in datastore_list {
match disk_usage(std::path::Path::new(&config.path)) {
Ok((total, used, _avail)) => {
let rrd_key = format!("datastore/{}", config.name);
rrd_update_gauge(&rrd_key, total as f64);
rrd_update_gauge(&rrd_key, used as f64);
}
Err(err) => {
eprintln!("read disk_usage on {:?} failed - {}", config.path, err);
}
}
let rrd_prefix = format!("datastore/{}", config.name);
let path = std::path::Path::new(&config.path);
gather_disk_stats(disk_manager.clone(), path, &rrd_prefix, save);
}
}
Err(err) => {
@ -706,17 +708,59 @@ async fn generate_host_stats() {
});
}
// Returns (total, used, avail)
fn disk_usage(path: &std::path::Path) -> Result<(u64, u64, u64), Error> {
fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, rrd_prefix: &str, save: bool) {
let mut stat: libc::statfs64 = unsafe { std::mem::zeroed() };
match proxmox_backup::tools::disks::disk_usage(path) {
Ok((total, used, _avail)) => {
let rrd_key = format!("{}/total", rrd_prefix);
rrd_update_gauge(&rrd_key, total as f64, save);
let rrd_key = format!("{}/used", rrd_prefix);
rrd_update_gauge(&rrd_key, used as f64, save);
}
Err(err) => {
eprintln!("read disk_usage on {:?} failed - {}", path, err);
}
}
use nix::NixPath;
match disk_manager.find_mounted_device(path) {
Ok(None) => {},
Ok(Some((fs_type, device, source))) => {
let mut device_stat = None;
match fs_type.as_str() {
"zfs" => {
if let Some(pool) = source {
match zfs_pool_stats(&pool) {
Ok(stat) => device_stat = stat,
Err(err) => eprintln!("zfs_pool_stats({:?}) failed - {}", pool, err),
}
}
}
_ => {
if let Ok(disk) = disk_manager.clone().disk_by_dev_num(device.into_dev_t()) {
match disk.read_stat() {
Ok(stat) => device_stat = stat,
Err(err) => eprintln!("disk.read_stat {:?} failed - {}", path, err),
}
}
}
}
if let Some(stat) = device_stat {
let rrd_key = format!("{}/read_ios", rrd_prefix);
rrd_update_derive(&rrd_key, stat.read_ios as f64, save);
let rrd_key = format!("{}/read_bytes", rrd_prefix);
rrd_update_derive(&rrd_key, (stat.read_sectors*512) as f64, save);
let res = path.with_nix_path(|cstr| unsafe { libc::statfs64(cstr.as_ptr(), &mut stat) })?;
nix::errno::Errno::result(res)?;
let rrd_key = format!("{}/write_ios", rrd_prefix);
rrd_update_derive(&rrd_key, stat.write_ios as f64, save);
let rrd_key = format!("{}/write_bytes", rrd_prefix);
rrd_update_derive(&rrd_key, (stat.write_sectors*512) as f64, save);
let bsize = stat.f_bsize as u64;
Ok((stat.f_blocks*bsize, (stat.f_blocks-stat.f_bfree)*bsize, stat.f_bavail*bsize))
let rrd_key = format!("{}/io_ticks", rrd_prefix);
rrd_update_derive(&rrd_key, (stat.io_ticks as f64)/1000.0, save);
}
}
Err(err) => {
eprintln!("find_mounted_device failed - {}", err);
}
}
}

View File

@ -17,7 +17,7 @@ fn x509name_to_string(name: &openssl::x509::X509NameRef) -> Result<String, Error
}
#[api]
/// Diplay node certificate information.
/// Display node certificate information.
fn cert_info() -> Result<(), Error> {
let cert_path = PathBuf::from(configdir!("/proxy.pem"));

View File

@ -30,4 +30,7 @@ pub use pxar_decode_writer::*;
mod backup_repo;
pub use backup_repo::*;
mod backup_specification;
pub use backup_specification::*;
pub mod pull;

View File

@ -138,7 +138,7 @@ impl BackupReader {
/// Download a .blob file
///
/// This creates a temorary file in /tmp (using O_TMPFILE). The data is verified using
/// This creates a temporary file in /tmp (using O_TMPFILE). The data is verified using
/// the provided manifest.
pub async fn download_blob(
&self,
@ -164,7 +164,7 @@ impl BackupReader {
/// Download dynamic index file
///
/// This creates a temorary file in /tmp (using O_TMPFILE). The index is verified using
/// This creates a temporary file in /tmp (using O_TMPFILE). The index is verified using
/// the provided manifest.
pub async fn download_dynamic_index(
&self,
@ -192,7 +192,7 @@ impl BackupReader {
/// Download fixed index file
///
/// This creates a temorary file in /tmp (using O_TMPFILE). The index is verified using
/// This creates a temporary file in /tmp (using O_TMPFILE). The index is verified using
/// the provided manifest.
pub async fn download_fixed_index(
&self,

View File

@ -3,12 +3,8 @@ use std::fmt;
use anyhow::{format_err, Error};
use proxmox::api::schema::*;
use proxmox::const_regex;
const_regex! {
/// Regular expression to parse repository URLs
pub BACKUP_REPO_URL_REGEX = r"^(?:(?:([\w@]+)@)?([\w\-_.]+):)?(\w+)$";
}
use crate::api2::types::*;
/// API schema format definition for repository URLs
pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_REPO_URL_REGEX);

View File

@ -0,0 +1,39 @@
use anyhow::{bail, Error};
use proxmox::api::schema::*;
proxmox::const_regex! {
BACKUPSPEC_REGEX = r"^([a-zA-Z0-9_-]+\.(pxar|img|conf|log)):(.+)$";
}
pub const BACKUP_SOURCE_SCHEMA: Schema = StringSchema::new(
"Backup source specification ([<label>:<path>]).")
.format(&ApiStringFormat::Pattern(&BACKUPSPEC_REGEX))
.schema();
pub enum BackupSpecificationType { PXAR, IMAGE, CONFIG, LOGFILE }
pub struct BackupSpecification {
pub archive_name: String, // left part
pub config_string: String, // right part
pub spec_type: BackupSpecificationType,
}
pub fn parse_backup_specification(value: &str) -> Result<BackupSpecification, Error> {
if let Some(caps) = (BACKUPSPEC_REGEX.regex_obj)().captures(value) {
let archive_name = caps.get(1).unwrap().as_str().into();
let extension = caps.get(2).unwrap().as_str();
let config_string = caps.get(3).unwrap().as_str().into();
let spec_type = match extension {
"pxar" => BackupSpecificationType::PXAR,
"img" => BackupSpecificationType::IMAGE,
"conf" => BackupSpecificationType::CONFIG,
"log" => BackupSpecificationType::LOGFILE,
_ => bail!("unknown backup source type '{}'", extension),
};
return Ok(BackupSpecification { archive_name, config_string, spec_type });
}
bail!("unable to parse backup source specification '{}'", value);
}

View File

@ -343,7 +343,7 @@ impl HttpClient {
/// Login
///
/// Login is done on demand, so this is onyl required if you need
/// Login is done on demand, so this is only required if you need
/// access to authentication data in 'AuthInfo'.
pub async fn login(&self) -> Result<AuthInfo, Error> {
self.auth.listen().await
@ -400,21 +400,22 @@ impl HttpClient {
if interactive && tty::stdin_isatty() {
println!("fingerprint: {}", fp_string);
loop {
print!("Want to trust? (y/n): ");
print!("Are you sure you want to continue connecting? (y/n): ");
let _ = std::io::stdout().flush();
let mut buf = [0u8; 1];
use std::io::Read;
match std::io::stdin().read_exact(&mut buf) {
Ok(()) => {
if buf[0] == b'y' || buf[0] == b'Y' {
use std::io::{BufRead, BufReader};
let mut line = String::new();
match BufReader::new(std::io::stdin()).read_line(&mut line) {
Ok(_) => {
let trimmed = line.trim();
if trimmed == "y" || trimmed == "Y" {
return (true, Some(fp_string));
} else if buf[0] == b'n' || buf[0] == b'N' {
} else if trimmed == "n" || trimmed == "N" {
return (false, None);
} else {
continue;
}
}
Err(_) => {
return (false, None);
}
Err(_) => return (false, None),
}
}
}

View File

@ -106,6 +106,34 @@ async fn pull_single_archive(
Ok(())
}
// Note: The client.log.blob is uploaded after the backup, so it is
// not mentioned in the manifest.
async fn try_client_log_download(
worker: &WorkerTask,
reader: Arc<BackupReader>,
path: &std::path::Path,
) -> Result<(), Error> {
let mut tmp_path = path.to_owned();
tmp_path.set_extension("tmp");
let tmpfile = std::fs::OpenOptions::new()
.write(true)
.create(true)
.read(true)
.open(&tmp_path)?;
// Note: be silent if there is no log - only log successful download
if let Ok(_) = reader.download(CLIENT_LOG_BLOB_NAME, tmpfile).await {
if let Err(err) = std::fs::rename(&tmp_path, &path) {
bail!("Atomic rename file {:?} failed - {}", path, err);
}
worker.log(format!("got backup log file {:?}", CLIENT_LOG_BLOB_NAME));
}
Ok(())
}
async fn pull_snapshot(
worker: &WorkerTask,
reader: Arc<BackupReader>,
@ -117,6 +145,10 @@ async fn pull_snapshot(
manifest_name.push(snapshot.relative_path());
manifest_name.push(MANIFEST_BLOB_NAME);
let mut client_log_name = tgt_store.base_path();
client_log_name.push(snapshot.relative_path());
client_log_name.push(CLIENT_LOG_BLOB_NAME);
let mut tmp_manifest_name = manifest_name.clone();
tmp_manifest_name.set_extension("tmp");
@ -137,6 +169,10 @@ async fn pull_snapshot(
})?;
if manifest_blob.raw_data() == tmp_manifest_blob.raw_data() {
if !client_log_name.exists() {
try_client_log_download(worker, reader, &client_log_name).await?;
}
worker.log("no data changes");
return Ok(()); // nothing changed
}
}
@ -199,6 +235,10 @@ async fn pull_snapshot(
bail!("Atomic rename file {:?} failed - {}", manifest_name, err);
}
if !client_log_name.exists() {
try_client_log_download(worker, reader, &client_log_name).await?;
}
// cleanup - remove stale files
tgt_store.cleanup_backup_dir(snapshot, &manifest)?;
@ -223,9 +263,11 @@ pub async fn pull_snapshot_from(
}
return Err(err);
}
worker.log(format!("sync snapshot {:?} done", snapshot.relative_path()));
} else {
worker.log(format!("re-sync snapshot {:?}", snapshot.relative_path()));
pull_snapshot(worker, reader, tgt_store.clone(), &snapshot).await?
pull_snapshot(worker, reader, tgt_store.clone(), &snapshot).await?;
worker.log(format!("re-sync snapshot {:?} done", snapshot.relative_path()));
}
Ok(())

View File

@ -1,4 +1,4 @@
use anyhow::{bail, Error};
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use serde::{Serialize, Deserialize};
@ -113,16 +113,9 @@ pub const DATASTORE_CFG_FILENAME: &str = "/etc/proxmox-backup/datastore.cfg";
pub const DATASTORE_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.datastore.lck";
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = match std::fs::read_to_string(DATASTORE_CFG_FILENAME) {
Ok(c) => c,
Err(err) => {
if err.kind() == std::io::ErrorKind::NotFound {
String::from("")
} else {
bail!("unable to read '{}' - {}", DATASTORE_CFG_FILENAME, err);
}
}
};
let content = proxmox::tools::fs::file_read_optional_string(DATASTORE_CFG_FILENAME)?;
let content = content.unwrap_or(String::from(""));
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(DATASTORE_CFG_FILENAME, &content)?;

View File

@ -149,7 +149,7 @@ impl Interface {
Ok(())
}
/// Write attributes not dependening on address family
/// Write attributes not depending on address family
fn write_iface_attributes(&self, w: &mut dyn Write) -> Result<(), Error> {
static EMPTY_LIST: Vec<String> = Vec::new();
@ -187,7 +187,7 @@ impl Interface {
Ok(())
}
/// Write attributes dependening on address family inet (IPv4)
/// Write attributes depending on address family inet (IPv4)
fn write_iface_attributes_v4(&self, w: &mut dyn Write, method: NetworkConfigMethod) -> Result<(), Error> {
if method == NetworkConfigMethod::Static {
if let Some(address) = &self.cidr {
@ -211,7 +211,7 @@ impl Interface {
Ok(())
}
/// Write attributes dependening on address family inet6 (IPv6)
/// Write attributes depending on address family inet6 (IPv6)
fn write_iface_attributes_v6(&self, w: &mut dyn Write, method: NetworkConfigMethod) -> Result<(), Error> {
if method == NetworkConfigMethod::Static {
if let Some(address) = &self.cidr6 {
@ -477,24 +477,15 @@ pub const NETWORK_INTERFACES_FILENAME: &str = "/etc/network/interfaces";
pub const NETWORK_INTERFACES_NEW_FILENAME: &str = "/etc/network/interfaces.new";
pub const NETWORK_LOCKFILE: &str = "/var/lock/pve-network.lck";
pub fn config() -> Result<(NetworkConfig, [u8;32]), Error> {
let content = std::fs::read(NETWORK_INTERFACES_NEW_FILENAME)
.or_else(|err| {
if err.kind() == std::io::ErrorKind::NotFound {
std::fs::read(NETWORK_INTERFACES_FILENAME)
.or_else(|err| {
if err.kind() == std::io::ErrorKind::NotFound {
Ok(Vec::new())
} else {
bail!("unable to read '{}' - {}", NETWORK_INTERFACES_FILENAME, err);
}
})
} else {
bail!("unable to read '{}' - {}", NETWORK_INTERFACES_NEW_FILENAME, err);
}
})?;
let content = match proxmox::tools::fs::file_get_optional_contents(NETWORK_INTERFACES_NEW_FILENAME)? {
Some(content) => content,
None => {
let content = proxmox::tools::fs::file_get_optional_contents(NETWORK_INTERFACES_FILENAME)?;
content.unwrap_or(Vec::new())
}
};
let digest = openssl::sha::sha256(&content);

View File

@ -149,23 +149,8 @@ pub fn compute_file_diff(filename: &str, shadow: &str) -> Result<String, Error>
.output()
.map_err(|err| format_err!("failed to execute diff - {}", err))?;
if !output.status.success() {
match output.status.code() {
Some(code) => {
if code == 0 { return Ok(String::new()); }
if code != 1 {
let msg = String::from_utf8(output.stderr)
.map(|m| if m.is_empty() { String::from("no error message") } else { m })
.unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
bail!("diff failed with status code: {} - {}", code, msg);
}
}
None => bail!("diff terminated by signal"),
}
}
let diff = String::from_utf8(output.stdout)?;
let diff = crate::tools::command_output(output, Some(|c| c == 0 || c == 1))
.map_err(|err| format_err!("diff failed: {}", err))?;
Ok(diff)
}
@ -180,17 +165,14 @@ pub fn assert_ifupdown2_installed() -> Result<(), Error> {
pub fn network_reload() -> Result<(), Error> {
let status = Command::new("/sbin/ifreload")
let output = Command::new("/sbin/ifreload")
.arg("-a")
.status()
.map_err(|err| format_err!("failed to execute ifreload: - {}", err))?;
.output()
.map_err(|err| format_err!("failed to execute '/sbin/ifreload' - {}", err))?;
crate::tools::command_output(output, None)
.map_err(|err| format_err!("ifreload failed: {}", err))?;
if !status.success() {
match status.code() {
Some(code) => bail!("ifreload failed with status code: {}", code),
None => bail!("ifreload terminated by signal")
}
}
Ok(())
}

View File

@ -1,4 +1,4 @@
use anyhow::{bail, Error};
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use serde::{Serialize, Deserialize};
@ -29,6 +29,9 @@ pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth t
#[api(
properties: {
name: {
schema: REMOTE_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
@ -51,10 +54,13 @@ pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth t
#[derive(Serialize,Deserialize)]
/// Remote properties.
pub struct Remote {
pub name: String,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
pub host: String,
pub userid: String,
#[serde(skip_serializing_if="String::is_empty")]
#[serde(with = "proxmox::tools::serde::string_as_base64")]
pub password: String,
#[serde(skip_serializing_if="Option::is_none")]
pub fingerprint: Option<String>,
@ -66,7 +72,7 @@ fn init() -> SectionConfig {
_ => unreachable!(),
};
let plugin = SectionConfigPlugin::new("remote".to_string(), None, obj_schema);
let plugin = SectionConfigPlugin::new("remote".to_string(), Some("name".to_string()), obj_schema);
let mut config = SectionConfig::new(&REMOTE_ID_SCHEMA);
config.register_plugin(plugin);
@ -77,16 +83,9 @@ pub const REMOTE_CFG_FILENAME: &str = "/etc/proxmox-backup/remote.cfg";
pub const REMOTE_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.remote.lck";
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = match std::fs::read_to_string(REMOTE_CFG_FILENAME) {
Ok(c) => c,
Err(err) => {
if err.kind() == std::io::ErrorKind::NotFound {
String::from("")
} else {
bail!("unable to read '{}' - {}", REMOTE_CFG_FILENAME, err);
}
}
};
let content = proxmox::tools::fs::file_read_optional_string(REMOTE_CFG_FILENAME)?;
let content = content.unwrap_or(String::from(""));
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(REMOTE_CFG_FILENAME, &content)?;

View File

@ -1,4 +1,4 @@
use anyhow::{bail, Error};
use anyhow::{Error};
use lazy_static::lazy_static;
use std::collections::HashMap;
use serde::{Serialize, Deserialize};
@ -46,7 +46,7 @@ lazy_static! {
},
schedule: {
optional: true,
schema: GC_SCHEDULE_SCHEMA,
schema: SYNC_SCHEDULE_SCHEMA,
},
}
)]
@ -66,6 +66,79 @@ pub struct SyncJobConfig {
pub schedule: Option<String>,
}
// FIXME: generate duplicate schemas/structs from one listing?
#[api(
properties: {
id: {
schema: JOB_ID_SCHEMA,
},
store: {
schema: DATASTORE_SCHEMA,
},
remote: {
schema: REMOTE_ID_SCHEMA,
},
"remote-store": {
schema: DATASTORE_SCHEMA,
},
"remove-vanished": {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
schedule: {
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
"next-run": {
description: "Estimated time of the next run (UNIX epoch).",
optional: true,
type: Integer,
},
"last-run-state": {
description: "Result of the last run.",
optional: true,
type: String,
},
"last-run-upid": {
description: "Task UPID of the last run.",
optional: true,
type: String,
},
"last-run-endtime": {
description: "Endtime of the last run.",
optional: true,
type: Integer,
},
}
)]
#[serde(rename_all="kebab-case")]
#[derive(Serialize,Deserialize)]
/// Status of Sync Job
pub struct SyncJobStatus {
pub id: String,
pub store: String,
pub remote: String,
pub remote_store: String,
#[serde(skip_serializing_if="Option::is_none")]
pub remove_vanished: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub schedule: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub next_run: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_state: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_upid: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub last_run_endtime: Option<i64>,
}
fn init() -> SectionConfig {
let obj_schema = match SyncJobConfig::API_SCHEMA {
Schema::Object(ref obj_schema) => obj_schema,
@ -83,16 +156,9 @@ pub const SYNC_CFG_FILENAME: &str = "/etc/proxmox-backup/sync.cfg";
pub const SYNC_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.sync.lck";
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = match std::fs::read_to_string(SYNC_CFG_FILENAME) {
Ok(c) => c,
Err(err) => {
if err.kind() == std::io::ErrorKind::NotFound {
String::from("")
} else {
bail!("unable to read '{}' - {}", SYNC_CFG_FILENAME, err);
}
}
};
let content = proxmox::tools::fs::file_read_optional_string(SYNC_CFG_FILENAME)?;
let content = content.unwrap_or(String::from(""));
let digest = openssl::sha::sha256(content.as_bytes());
let data = CONFIG.parse(SYNC_CFG_FILENAME, &content)?;

View File

@ -120,16 +120,9 @@ pub const USER_CFG_FILENAME: &str = "/etc/proxmox-backup/user.cfg";
pub const USER_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.user.lck";
pub fn config() -> Result<(SectionConfigData, [u8;32]), Error> {
let content = match std::fs::read_to_string(USER_CFG_FILENAME) {
Ok(c) => c,
Err(err) => {
if err.kind() == std::io::ErrorKind::NotFound {
String::from("")
} else {
bail!("unable to read '{}' - {}", USER_CFG_FILENAME, err);
}
}
};
let content = proxmox::tools::fs::file_read_optional_string(USER_CFG_FILENAME)?;
let content = content.unwrap_or(String::from(""));
let digest = openssl::sha::sha256(content.as_bytes());
let mut data = CONFIG.parse(USER_CFG_FILENAME, &content)?;

View File

@ -4,7 +4,7 @@
//! format used in the [casync](https://github.com/systemd/casync)
//! toolkit (we are not 100\% binary compatible). It is a file archive
//! format defined by 'Lennart Poettering', specially defined for
//! efficent deduplication.
//! efficient deduplication.
//! Every archive contains items in the following order:
//! * `ENTRY` -- containing general stat() data and related bits

View File

@ -61,7 +61,7 @@ fn copy_binary_search_tree_inner<F: FnMut(usize, usize)>(
}
}
/// This function calls the provided `copy_func()` with the permutaion
/// This function calls the provided `copy_func()` with the permutation
/// info.
///
/// ```
@ -71,7 +71,7 @@ fn copy_binary_search_tree_inner<F: FnMut(usize, usize)>(
/// });
/// ```
///
/// This will produce the folowing output:
/// This will produce the following output:
///
/// ```no-compile
/// Copy 3 to 0
@ -81,7 +81,7 @@ fn copy_binary_search_tree_inner<F: FnMut(usize, usize)>(
/// Copy 4 to 2
/// ```
///
/// So this generates the following permuation: `[3,1,4,0,2]`.
/// So this generates the following permutation: `[3,1,4,0,2]`.
pub fn copy_binary_search_tree<F: FnMut(usize, usize)>(
n: usize,

View File

@ -1117,7 +1117,7 @@ impl<'a, W: Write, C: BackupCatalogWriter> Encoder<'a, W, C> {
if pos != size {
// Note: casync format cannot handle that
bail!(
"detected shrinked file {:?} ({} < {})",
"detected shrunk file {:?} ({} < {})",
self.full_path(),
pos,
size

View File

@ -29,7 +29,7 @@ pub const PXAR_QUOTA_PROJID: u64 = 0x161baf2d8772a72b;
/// Marks item as hardlink
/// compute_goodbye_hash(b"__PROXMOX_FORMAT_HARDLINK__");
pub const PXAR_FORMAT_HARDLINK: u64 = 0x2c5e06f634f65b86;
/// Marks the beginnig of the payload (actual content) of regular files
/// Marks the beginning of the payload (actual content) of regular files
pub const PXAR_PAYLOAD: u64 = 0x8b9e1d93d6dcffc9;
/// Marks item as entry of goodbye table
pub const PXAR_GOODBYE: u64 = 0xdfd35c5e8327c403;

View File

@ -124,7 +124,7 @@ impl MatchPattern {
Ok(Some((match_pattern, content_buffer, stat)))
}
/// Interprete a byte buffer as a sinlge line containing a valid
/// Interpret a byte buffer as a single line containing a valid
/// `MatchPattern`.
/// Patterns starting with `#` are interpreted as comments, returning `Ok(None)`.
/// Patterns starting with '!' are interpreted as negative match patterns.

View File

@ -84,7 +84,7 @@ impl<R: Read> SequentialDecoder<R> {
pub(crate) fn read_link(&mut self, size: u64) -> Result<PathBuf, Error> {
if size < (HEADER_SIZE + 2) {
bail!("dectected short link target.");
bail!("detected short link target.");
}
let target_len = size - HEADER_SIZE;
@ -104,7 +104,7 @@ impl<R: Read> SequentialDecoder<R> {
pub(crate) fn read_hardlink(&mut self, size: u64) -> Result<(PathBuf, u64), Error> {
if size < (HEADER_SIZE + 8 + 2) {
bail!("dectected short hardlink header.");
bail!("detected short hardlink header.");
}
let offset: u64 = self.read_item()?;
let target = self.read_link(size - 8)?;
@ -121,7 +121,7 @@ impl<R: Read> SequentialDecoder<R> {
pub(crate) fn read_filename(&mut self, size: u64) -> Result<OsString, Error> {
if size < (HEADER_SIZE + 2) {
bail!("dectected short filename");
bail!("detected short filename");
}
let name_len = size - HEADER_SIZE;

View File

@ -40,7 +40,7 @@ fn now() -> Result<f64, Error> {
Ok(time.as_secs_f64())
}
pub fn update_value(rel_path: &str, value: f64, dst: DST) -> Result<(), Error> {
pub fn update_value(rel_path: &str, value: f64, dst: DST, save: bool) -> Result<(), Error> {
let mut path = PathBuf::from(PBS_RRD_BASEDIR);
path.push(rel_path);
@ -52,7 +52,7 @@ pub fn update_value(rel_path: &str, value: f64, dst: DST) -> Result<(), Error> {
if let Some(rrd) = map.get_mut(rel_path) {
rrd.update(now, value);
rrd.save(&path)?;
if save { rrd.save(&path)?; }
} else {
let mut rrd = match RRD::load(&path) {
Ok(rrd) => rrd,
@ -64,7 +64,7 @@ pub fn update_value(rel_path: &str, value: f64, dst: DST) -> Result<(), Error> {
},
};
rrd.update(now, value);
rrd.save(&path)?;
if save { rrd.save(&path)?; }
map.insert(rel_path.into(), rrd);
}

View File

@ -72,7 +72,7 @@ pub async fn worker_is_active(upid: &UPID) -> Result<bool, Error> {
/// If the task is spawned from a different process, we simply return whether
/// that process is still running. This information is good enough to detect
/// stale tasks...
fn worker_is_active_local(upid: &UPID) -> bool {
pub fn worker_is_active_local(upid: &UPID) -> bool {
if (upid.pid == *MY_PID) && (upid.pstart == *MY_PID_PSTART) {
WORKER_TASK_LIST.lock().unwrap().contains_key(&upid.task_id)
} else {
@ -277,7 +277,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
} else {
match state {
None => {
println!("Detected stoped UPID {}", upid_str);
println!("Detected stopped UPID {}", upid_str);
let status = upid_read_status(&upid)
.unwrap_or_else(|_| String::from("unknown"));
finish_list.push(TaskListInfo {
@ -418,10 +418,8 @@ impl WorkerTask {
let logger = FileLogger::new(&path, to_stdout)?;
nix::unistd::chown(&path, Some(backup_user.uid), Some(backup_user.gid))?;
update_active_workers(Some(&upid))?;
let worker = Arc::new(Self {
upid,
upid: upid.clone(),
abort_requested: AtomicBool::new(false),
data: Mutex::new(WorkerTaskData {
logger,
@ -430,10 +428,14 @@ impl WorkerTask {
}),
});
let mut hash = WORKER_TASK_LIST.lock().unwrap();
// scope to drop the lock again after inserting
{
let mut hash = WORKER_TASK_LIST.lock().unwrap();
hash.insert(task_id, worker.clone());
super::set_worker_count(hash.len());
}
hash.insert(task_id, worker.clone());
super::set_worker_count(hash.len());
update_active_workers(Some(&upid))?;
Ok(worker)
}

View File

@ -127,7 +127,7 @@ pub fn lock_file<F: AsRawFd>(
}
/// Open or create a lock file (append mode). Then try to
/// aquire a lock using `lock_file()`.
/// acquire a lock using `lock_file()`.
pub fn open_file_locked<P: AsRef<Path>>(path: P, timeout: Duration) -> Result<File, Error> {
let path = path.as_ref();
let mut file = match OpenOptions::new().create(true).append(true).open(path) {
@ -136,7 +136,7 @@ pub fn open_file_locked<P: AsRef<Path>>(path: P, timeout: Duration) -> Result<Fi
};
match lock_file(&mut file, true, Some(timeout)) {
Ok(_) => Ok(file),
Err(err) => bail!("Unable to aquire lock {:?} - {}", path, err),
Err(err) => bail!("Unable to acquire lock {:?} - {}", path, err),
}
}
@ -441,7 +441,7 @@ pub fn join(data: &Vec<String>, sep: char) -> String {
/// Detect modified configuration files
///
/// This function fails with a resonable error message if checksums do not match.
/// This function fails with a reasonable error message if checksums do not match.
pub fn detect_modified_configuration_file(digest1: &[u8;32], digest2: &[u8;32]) -> Result<(), Error> {
if digest1 != digest2 {
bail!("detected modified configuration - file changed by other user? Try again.");
@ -474,6 +474,40 @@ pub fn normalize_uri_path(path: &str) -> Result<(String, Vec<&str>), Error> {
Ok((path, components))
}
/// Helper to check result from std::process::Command output
///
/// The exit_code_check() function should return true if the exit code
/// is considered successful.
pub fn command_output(
output: std::process::Output,
exit_code_check: Option<fn(i32) -> bool>
) -> Result<String, Error> {
if !output.status.success() {
match output.status.code() {
Some(code) => {
let is_ok = match exit_code_check {
Some(check_fn) => check_fn(code),
None => code == 0,
};
if !is_ok {
let msg = String::from_utf8(output.stderr)
.map(|m| if m.is_empty() { String::from("no error message") } else { m })
.unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
bail!("status code: {} - {}", code, msg);
}
}
None => bail!("terminated by signal"),
}
}
let output = String::from_utf8(output.stdout)?;
Ok(output)
}
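As a usage sketch for the new helper (the command, its arguments, and the accepted exit codes below are illustrative and not part of this patch; assume a caller that returns Result<_, anyhow::Error> with format_err in scope):
// Hypothetical caller: with `exit_code_check` set to None, only exit code 0 is accepted.
let output = std::process::Command::new("/bin/lsblk")
    .arg("--json")
    .output()
    .map_err(|err| format_err!("failed to execute lsblk - {}", err))?;
let text = crate::tools::command_output(output, None)?;
// To tolerate an additional exit code (as compute_file_diff() does for `diff`):
// let text = crate::tools::command_output(output, Some(|c| c == 0 || c == 1))?;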
pub fn fd_change_cloexec(fd: RawFd, on: bool) -> Result<(), Error> {
use nix::fcntl::{fcntl, FdFlag, F_GETFD, F_SETFD};
let mut flags = FdFlag::from_bits(fcntl(fd, F_GETFD)?)

View File

@ -149,14 +149,14 @@ fn test_broadcast_future() {
.map_ok(|res| {
CHECKSUM.fetch_add(res, Ordering::SeqCst);
})
.map_err(|err| { panic!("got errror {}", err); })
.map_err(|err| { panic!("got error {}", err); })
.map(|_| ());
let receiver2 = sender.listen()
.map_ok(|res| {
CHECKSUM.fetch_add(res*2, Ordering::SeqCst);
})
.map_err(|err| { panic!("got errror {}", err); })
.map_err(|err| { panic!("got error {}", err); })
.map(|_| ());
let mut rt = tokio::runtime::Runtime::new().unwrap();

View File

@ -1,6 +1,6 @@
//! Disk query/management utilities.
use std::collections::HashSet;
use std::collections::{HashMap, HashSet};
use std::ffi::{OsStr, OsString};
use std::io;
use std::os::unix::ffi::{OsStrExt, OsStringExt};
@ -13,9 +13,14 @@ use libc::dev_t;
use once_cell::sync::OnceCell;
use proxmox::sys::error::io_err_other;
use proxmox::sys::linux::procfs::MountInfo;
use proxmox::sys::linux::procfs::{MountInfo, mountinfo::Device};
use proxmox::{io_bail, io_format_err};
mod zfs;
pub use zfs::*;
mod lvm;
pub use lvm::*;
bitflags! {
/// Ways a device is being used.
pub struct DiskUse: u32 {
@ -133,6 +138,28 @@ impl DiskManage {
})
}
/// Information about file system type and used device for a path
///
/// Returns tuple (fs_type, device, mount_source)
pub fn find_mounted_device(
&self,
path: &std::path::Path,
) -> Result<Option<(String, Device, Option<OsString>)>, Error> {
let stat = nix::sys::stat::stat(path)?;
let device = Device::from_dev_t(stat.st_dev);
let root_path = std::path::Path::new("/");
for (_id, entry) in self.mount_info()? {
if entry.root == root_path && entry.device == device {
return Ok(Some((entry.fs_type.clone(), entry.device, entry.mount_source.clone())));
}
}
Ok(None)
}
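A minimal sketch of how this helper might be queried (the path is illustrative and printing stands in for real error handling):
// Hypothetical usage: report which device and file system back a given path.
let manager = DiskManage::new();
match manager.find_mounted_device(std::path::Path::new("/datastore")) {
    Ok(Some((fs_type, device, source))) => {
        println!("fs_type={} devnum={} source={:?}", fs_type, device.into_dev_t(), source);
    }
    Ok(None) => println!("no matching mount point found"),
    Err(err) => eprintln!("find_mounted_device failed - {}", err),
}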
/// Check whether a specific device node is mounted.
///
/// Note that this tries to `stat` the sources of all mount points without caching the result
@ -222,7 +249,7 @@ impl Disk {
/// Read from a file in this device's sys path.
///
/// Note: path must be a relative path!
fn read_sys(&self, path: &Path) -> io::Result<Option<Vec<u8>>> {
pub fn read_sys(&self, path: &Path) -> io::Result<Option<Vec<u8>>> {
assert!(path.is_relative());
std::fs::read(self.syspath().join(path))
@ -423,6 +450,44 @@ impl Disk {
.is_mounted
.get_or_try_init(|| self.manager.is_devnum_mounted(self.devnum()?))?)
}
/// Read block device stats
///
/// see https://www.kernel.org/doc/Documentation/block/stat.txt
pub fn read_stat(&self) -> std::io::Result<Option<BlockDevStat>> {
if let Some(stat) = self.read_sys(Path::new("stat"))? {
let stat = unsafe { std::str::from_utf8_unchecked(&stat) };
let stat: Vec<u64> = stat.split_ascii_whitespace().map(|s| {
u64::from_str_radix(s, 10).unwrap_or(0)
}).collect();
if stat.len() < 15 { return Ok(None); }
return Ok(Some(BlockDevStat {
read_ios: stat[0],
read_sectors: stat[2],
write_ios: stat[4] + stat[11], // write + discard
write_sectors: stat[6] + stat[13], // write + discard
io_ticks: stat[10],
}));
}
Ok(None)
}
}
/// Returns disk usage information (total, used, avail)
pub fn disk_usage(path: &std::path::Path) -> Result<(u64, u64, u64), Error> {
let mut stat: libc::statfs64 = unsafe { std::mem::zeroed() };
use nix::NixPath;
let res = path.with_nix_path(|cstr| unsafe { libc::statfs64(cstr.as_ptr(), &mut stat) })?;
nix::errno::Errno::result(res)?;
let bsize = stat.f_bsize as u64;
Ok((stat.f_blocks*bsize, (stat.f_blocks-stat.f_bfree)*bsize, stat.f_bavail*bsize))
}
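For illustration only, the returned triple can be reduced to a usage percentage like this (the path is an example):
// Hypothetical: print how full the root file system is.
let (total, used, _avail) = proxmox_backup::tools::disks::disk_usage(std::path::Path::new("/"))?;
if total > 0 {
    println!("/ is {:.1}% full ({} of {} bytes)", (used as f64 / total as f64) * 100.0, used, total);
}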
/// This is just a rough estimate for a "type" of disk.
@ -439,3 +504,52 @@ pub enum DiskType {
/// Some kind of USB disk, but we don't know more than that.
Usb,
}
#[derive(Debug)]
/// Represents the contents of the /sys/block/<dev>/stat file.
pub struct BlockDevStat {
pub read_ios: u64,
pub read_sectors: u64,
pub write_ios: u64,
pub write_sectors: u64,
pub io_ticks: u64, // milliseconds
}
/// Use lsblk to read partition type uuids.
pub fn get_partition_type_info() -> Result<HashMap<String, Vec<String>>, Error> {
const LSBLK_BIN_PATH: &str = "/usr/bin/lsblk";
let mut command = std::process::Command::new(LSBLK_BIN_PATH);
command.args(&["--json", "-o", "path,parttype"]);
let output = command.output()
.map_err(|err| format_err!("failed to execute '{}' - {}", LSBLK_BIN_PATH, err))?;
let output = crate::tools::command_output(output, None)
.map_err(|err| format_err!("lsblk command failed: {}", err))?;
let mut res: HashMap<String, Vec<String>> = HashMap::new();
let output: serde_json::Value = output.parse()?;
match output["blockdevices"].as_array() {
Some(list) => {
for info in list {
let path = match info["path"].as_str() {
Some(p) => p,
None => continue,
};
let partition_type = match info["parttype"].as_str() {
Some(t) => t.to_owned(),
None => continue,
};
let devices = res.entry(partition_type).or_insert(Vec::new());
devices.push(path.to_string());
}
}
None => {
}
}
Ok(res)
}
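The lsblk JSON this parser relies on is roughly shaped as follows; the device paths and partition-type GUID are made-up examples:
// Illustrative `lsblk --json -o path,parttype` output:
// { "blockdevices": [
//     { "path": "/dev/sda1", "parttype": "21686148-6449-6e6f-744e-656564454649" },
//     { "path": "/dev/sda2", "parttype": null }
// ] }
// Entries without a parttype are skipped; the result maps each partition-type GUID
// to the list of device paths that carry it.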

src/tools/disks/lvm.rs (new file, 55 lines)
View File

@ -0,0 +1,55 @@
use std::collections::{HashSet, HashMap};
use anyhow::{format_err, Error};
use serde_json::Value;
use lazy_static::lazy_static;
lazy_static!{
static ref LVM_UUIDS: HashSet<&'static str> = {
let mut set = HashSet::new();
set.insert("e6d6d379-f507-44c2-a23c-238f2a3df928");
set
};
}
/// Get list of devices used by LVM (pvs).
pub fn get_lvm_devices(
partition_type_map: &HashMap<String, Vec<String>>,
) -> Result<HashSet<String>, Error> {
const PVS_BIN_PATH: &str = "/sbin/pvs";
let mut command = std::process::Command::new(PVS_BIN_PATH);
command.args(&["--reportformat", "json", "--noheadings", "--readonly", "-o", "pv_name"]);
let output = command.output()
.map_err(|err| format_err!("failed to execute '{}' - {}", PVS_BIN_PATH, err))?;
let output = crate::tools::command_output(output, None)
.map_err(|err| format_err!("pvs command failed: {}", err))?;
let mut device_set: HashSet<String> = HashSet::new();
for device_list in partition_type_map.iter()
.filter_map(|(uuid, list)| if LVM_UUIDS.contains(uuid.as_str()) { Some(list) } else { None })
{
for device in device_list {
device_set.insert(device.clone());
}
}
let output: Value = output.parse()?;
match output["report"][0]["pv"].as_array() {
Some(list) => {
for info in list {
if let Some(pv_name) = info["pv_name"].as_str() {
device_set.insert(pv_name.to_string());
}
}
}
None => return Ok(device_set),
}
Ok(device_set)
}
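The pvs JSON this expects looks roughly like the sketch below (device names invented for illustration):
// Illustrative output of `pvs --reportformat json --noheadings --readonly -o pv_name`:
// { "report": [ { "pv": [ { "pv_name": "/dev/sdb" }, { "pv_name": "/dev/sdc1" } ] } ] }
// Every pv_name is added to the returned set, together with any devices whose
// partition-type GUID marks them as LVM members.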

src/tools/disks/zfs.rs (new file, 206 lines)
View File

@ -0,0 +1,206 @@
use std::path::PathBuf;
use std::collections::{HashMap, HashSet};
use anyhow::{bail, Error};
use lazy_static::lazy_static;
use nom::{
error::VerboseError,
bytes::complete::{take_while, take_while1, take_till, take_till1},
combinator::{map_res, all_consuming, recognize},
sequence::{preceded, tuple},
character::complete::{space1, digit1, char, line_ending},
multi::{many0, many1},
};
use super::*;
lazy_static!{
static ref ZFS_UUIDS: HashSet<&'static str> = {
let mut set = HashSet::new();
set.insert("6a898cc3-1dd2-11b2-99a6-080020736631"); // apple
set.insert("516e7cba-6ecf-11d6-8ff8-00022d09712b"); // bsd
set
};
}
type IResult<I, O, E = VerboseError<I>> = Result<(I, O), nom::Err<E>>;
#[derive(Debug)]
pub struct ZFSPoolUsage {
total: u64,
used: u64,
free: u64,
}
#[derive(Debug)]
pub struct ZFSPoolStatus {
name: String,
usage: Option<ZFSPoolUsage>,
devices: Vec<String>,
}
/// Returns kernel IO-stats for zfs pools
pub fn zfs_pool_stats(pool: &OsStr) -> Result<Option<BlockDevStat>, Error> {
let mut path = PathBuf::from("/proc/spl/kstat/zfs");
path.push(pool);
path.push("io");
let text = match proxmox::tools::fs::file_read_optional_string(&path)? {
Some(text) => text,
None => { return Ok(None); }
};
let lines: Vec<&str> = text.lines().collect();
if lines.len() < 3 {
bail!("unable to parse {:?} - got less than 3 lines", path);
}
// https://github.com/openzfs/zfs/blob/master/lib/libspl/include/sys/kstat.h#L578
// nread nwritten reads writes wtime wlentime wupdate rtime rlentime rupdate wcnt rcnt
// Note: w -> wait (wtime -> wait time)
// Note: r -> run (rtime -> run time)
// All times are nanoseconds
let stat: Vec<u64> = lines[2].split_ascii_whitespace().map(|s| {
u64::from_str_radix(s, 10).unwrap_or(0)
}).collect();
let ticks = (stat[4] + stat[7])/1_000_000; // convert to milliseconds
let stat = BlockDevStat {
read_sectors: stat[0]>>9,
write_sectors: stat[1]>>9,
read_ios: stat[2],
write_ios: stat[3],
io_ticks: ticks,
};
Ok(Some(stat))
}
/// Recognizes zero or more spaces and tabs (but not carriage returns or line feeds)
fn multispace0(i: &str) -> IResult<&str, &str> {
take_while(|c| c == ' ' || c == '\t')(i)
}
/// Recognizes one or more spaces and tabs (but not carriage returns or line feeds)
fn multispace1(i: &str) -> IResult<&str, &str> {
take_while1(|c| c == ' ' || c == '\t')(i)
}
fn parse_optional_u64(i: &str) -> IResult<&str, Option<u64>> {
if i.starts_with('-') {
Ok((&i[1..], None))
} else {
let (i, value) = map_res(recognize(digit1), str::parse)(i)?;
Ok((i, Some(value)))
}
}
fn parse_pool_device(i: &str) -> IResult<&str, String> {
let (i, (device, _, _rest)) = tuple((
preceded(multispace1, take_till1(|c| c == ' ' || c == '\t')),
multispace1,
preceded(take_till(|c| c == '\n'), char('\n')),
))(i)?;
Ok((i, device.to_string()))
}
fn parse_pool_header(i: &str) -> IResult<&str, ZFSPoolStatus> {
let (i, (text, total, used, free, _, _eol)) = tuple((
take_while1(|c| char::is_alphanumeric(c)),
preceded(multispace1, parse_optional_u64),
preceded(multispace1, parse_optional_u64),
preceded(multispace1, parse_optional_u64),
preceded(space1, take_till(|c| c == '\n')),
line_ending,
))(i)?;
let status = if let (Some(total), Some(used), Some(free)) = (total, used, free) {
ZFSPoolStatus {
name: text.into(),
usage: Some(ZFSPoolUsage { total, used, free }),
devices: Vec::new(),
}
} else {
ZFSPoolStatus {
name: text.into(), usage: None, devices: Vec::new(),
}
};
Ok((i, status))
}
fn parse_pool_status(i: &str) -> IResult<&str, ZFSPoolStatus> {
let (i, mut stat) = parse_pool_header(i)?;
let (i, devices) = many1(parse_pool_device)(i)?;
for device_path in devices.into_iter().filter(|n| n.starts_with("/dev/")) {
stat.devices.push(device_path);
}
let (i, _) = many0(tuple((multispace0, char('\n'))))(i)?; // skip empty lines
Ok((i, stat))
}
/// Parse zpool list output
///
/// Note: This does not reveal any details on how the pool uses the devices, because
/// the zpool list output format is not really defined...
pub fn parse_zfs_list(i: &str) -> Result<Vec<ZFSPoolStatus>, Error> {
match all_consuming(many1(parse_pool_status))(i) {
Err(nom::Err::Error(err)) |
Err(nom::Err::Failure(err)) => {
bail!("unable to parse zfs list output - {}", nom::error::convert_error(i, err));
}
Err(err) => {
bail!("unable to parse calendar event: {}", err);
}
Ok((_, ce)) => Ok(ce),
}
}
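To make the grammar above easier to follow, this is the rough input shape the parser accepts; pool name, sizes and device paths are invented and the exact column count may differ:
// Illustrative `zpool list -H -v -p -P` style input:
// rpool   10737418240   2147483648   8589934592   -   ONLINE   -
//     /dev/disk/by-id/ata-EXAMPLE-part3   10737418240   2147483648   8589934592   -   ONLINE
//
// parse_pool_header() reads the pool line (name plus three optional numbers, '-'
// meaning "not available"); parse_pool_device() reads the indented lines, and only
// paths starting with "/dev/" are kept in `devices`.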
/// List devices used by zfs (or a specific zfs pool)
pub fn zfs_devices(
partition_type_map: &HashMap<String, Vec<String>>,
pool: Option<&OsStr>,
) -> Result<HashSet<String>, Error> {
// Note: zpool list output can include entries for 'special', 'cache' and 'logs'
// and maybe other things.
let mut command = std::process::Command::new("/sbin/zpool");
command.args(&["list", "-H", "-v", "-p", "-P"]);
if let Some(pool) = pool { command.arg(pool); }
let output = command.output()
.map_err(|err| format_err!("failed to execute '/sbin/zpool' - {}", err))?;
let output = crate::tools::command_output(output, None)
.map_err(|err| format_err!("zpool list command failed: {}", err))?;
let list = parse_zfs_list(&output)?;
let mut device_set = HashSet::new();
for entry in list {
for device in entry.devices {
device_set.insert(device.clone());
}
}
for device_list in partition_type_map.iter()
.filter_map(|(uuid, list)| if ZFS_UUIDS.contains(uuid.as_str()) { Some(list) } else { None })
{
for device in device_list {
device_set.insert(device.clone());
}
}
Ok(device_set)
}

View File

@ -4,7 +4,7 @@ use std::io::Write;
/// Log messages with timestamps into files
///
/// Logs messages to file, and optionaly to standart output.
/// Logs messages to file, and optionally to standard output.
///
///
/// #### Example:

View File

@ -107,7 +107,7 @@ pub fn read_subdir<P: ?Sized + nix::NixPath>(dirfd: RawFd, path: &P) -> nix::Res
}
/// Scan through a directory with a regular expression. This is simply a shortcut filtering the
/// results of `read_subdir`. Non-UTF8 comaptible file names are silently ignored.
/// results of `read_subdir`. Non-UTF8 compatible file names are silently ignored.
pub fn scan_subdir<'a, P: ?Sized + nix::NixPath>(
dirfd: RawFd,
path: &P,

View File

@ -1,6 +1,6 @@
//! Inter-process reader-writer lock builder.
//!
//! This implemenation uses fcntl record locks with non-blocking
//! This implementation uses fcntl record locks with non-blocking
//! F_SETLK command (never blocks).
//!
//! We maintain a map of shared locks with time stamps, so you can get
@ -127,9 +127,9 @@ impl ProcessLocker {
Ok(())
}
/// Try to aquire a shared lock
/// Try to acquire a shared lock
///
/// On sucess, this makes sure that no other process can get an exclusive lock for the file.
/// On success, this makes sure that no other process can get an exclusive lock for the file.
pub fn try_shared_lock(locker: Arc<Mutex<Self>>) -> Result<ProcessLockSharedGuard, Error> {
let mut data = locker.lock().unwrap();
@ -168,7 +168,7 @@ impl ProcessLocker {
result
}
/// Try to aquire a exclusive lock
/// Try to acquire a exclusive lock
///
/// Make sure the we are the only process which has locks for this file (shared or exclusive).
pub fn try_exclusive_lock(locker: Arc<Mutex<Self>>) -> Result<ProcessLockExclusiveGuard, Error> {

View File

@ -163,12 +163,11 @@ pub fn compute_next_event(
if event.days.contains(day) {
t.changes.remove(TMChanges::WDAY);
} else {
if let Some(n) = (day_num+1..6)
.map(|d| WeekDays::from_bits(1<<d).unwrap())
.find(|d| event.days.contains(*d))
if let Some(n) = ((day_num+1)..7)
.find(|d| event.days.contains(WeekDays::from_bits(1<<d).unwrap()))
{
// try next day
t.add_days((n.bits() as i32) - day_num, true);
t.add_days(n - day_num, true);
continue;
} else {
// try next week
@ -296,6 +295,16 @@ mod test {
test_value("mon 2:*", THURSDAY_00_00, THURSDAY_00_00 + 4*DAY + 2*HOUR)?;
test_value("mon 2:50", THURSDAY_00_00, THURSDAY_00_00 + 4*DAY + 2*HOUR + 50*MIN)?;
test_value("tue", THURSDAY_00_00, THURSDAY_00_00 + 5*DAY)?;
test_value("wed", THURSDAY_00_00, THURSDAY_00_00 + 6*DAY)?;
test_value("thu", THURSDAY_00_00, THURSDAY_00_00 + 7*DAY)?;
test_value("fri", THURSDAY_00_00, THURSDAY_00_00 + 1*DAY)?;
test_value("sat", THURSDAY_00_00, THURSDAY_00_00 + 2*DAY)?;
test_value("sun", THURSDAY_00_00, THURSDAY_00_00 + 3*DAY)?;
test_value("daily", THURSDAY_00_00, THURSDAY_00_00 + DAY)?;
test_value("daily", THURSDAY_00_00+1, THURSDAY_00_00 + DAY)?;
let n = test_value("5/2:0", THURSDAY_00_00, THURSDAY_00_00 + 5*HOUR)?;
let n = test_value("5/2:0", n, THURSDAY_00_00 + 7*HOUR)?;
let n = test_value("5/2:0", n, THURSDAY_00_00 + 9*HOUR)?;

View File

@ -1,4 +1,4 @@
//! Generate and verify Authentification tickets
//! Generate and verify Authentication tickets
use anyhow::{bail, Error};
use base64;

View File

@ -9,6 +9,7 @@ Ext.define('pbs-data-store-snapshots', {
dateFormat: 'timestamp'
},
'files',
'owner',
{ name: 'size', type: 'int' },
]
});
@ -29,109 +30,113 @@ Ext.define('PBS.DataStoreContent', {
throw "no datastore specified";
}
this.data_store = Ext.create('Ext.data.Store', {
this.store = Ext.create('Ext.data.Store', {
model: 'pbs-data-store-snapshots',
sorters: 'backup-group',
groupField: 'backup-group',
});
this.store.on('load', this.onLoad, this);
Proxmox.Utils.monStoreErrors(view, view.store, true);
this.reload(); // initial load
},
reload: function() {
var view = this.getView();
let view = this.getView();
if (!view.store || !this.store) {
console.warn('cannot reload, no store(s)');
return;
}
let url = `/api2/json/admin/datastore/${view.datastore}/snapshots`;
this.data_store.setProxy({
this.store.setProxy({
type: 'proxmox',
url: url
});
this.data_store.load(function(records, operation, success) {
let groups = {};
this.store.load();
},
records.forEach(function(item) {
var btype = item.data["backup-type"];
let group = btype + "/" + item.data["backup-id"];
getRecordGroups: function(records) {
let groups = {};
if (groups[group] !== undefined)
return;
for (const item of records) {
var btype = item.data["backup-type"];
let group = btype + "/" + item.data["backup-id"];
var cls = '';
if (btype === 'vm') {
cls = 'fa-desktop';
} else if (btype === 'ct') {
cls = 'fa-cube';
} else if (btype === 'host') {
cls = 'fa-building';
} else {
return btype + '/' + value;
}
if (groups[group] !== undefined) {
continue;
}
groups[group] = {
text: group,
leaf: false,
iconCls: "fa " + cls,
expanded: false,
backup_type: item.data["backup-type"],
backup_id: item.data["backup-id"],
children: []
};
});
var cls = '';
if (btype === 'vm') {
cls = 'fa-desktop';
} else if (btype === 'ct') {
cls = 'fa-cube';
} else if (btype === 'host') {
cls = 'fa-building';
} else {
console.warn(`got unknown backup-type '${btype}'`);
continue; // FIXME: auto render? what do?
}
let backup_time_to_string = function(backup_time) {
let pad = function(number) {
if (number < 10) {
return '0' + number;
}
return number;
};
return backup_time.getUTCFullYear() +
'-' + pad(backup_time.getUTCMonth() + 1) +
'-' + pad(backup_time.getUTCDate()) +
'T' + pad(backup_time.getUTCHours()) +
':' + pad(backup_time.getUTCMinutes()) +
':' + pad(backup_time.getUTCSeconds()) +
'Z';
groups[group] = {
text: group,
leaf: false,
iconCls: "fa " + cls,
expanded: false,
backup_type: item.data["backup-type"],
backup_id: item.data["backup-id"],
children: []
};
}
records.forEach(function(item) {
let group = item.data["backup-type"] + "/" + item.data["backup-id"];
let children = groups[group].children;
return groups;
},
let data = item.data;
onLoad: function(store, records, success) {
let view = this.getView();
data.text = Ext.Date.format(data["backup-time"], 'Y-m-d H:i:s');
data.text = group + '/' + backup_time_to_string(data["backup-time"]);
data.leaf = true;
data.cls = 'no-leaf-icons';
if (!success) {
return;
}
children.push(data);
});
let groups = this.getRecordGroups(records);
let children = [];
Ext.Object.each(groups, function(key, group) {
let last_backup = 0;
group.children.forEach(function(item) {
if (item["backup-time"] > last_backup) {
last_backup = item["backup-time"];
group["backup-time"] = last_backup;
group.files = item.files;
group.size = item.size;
}
});
group.count = group.children.length;
children.push(group)
})
for (const item of records) {
let group = item.data["backup-type"] + "/" + item.data["backup-id"];
let children = groups[group].children;
view.setRootNode({
expanded: true,
children: children
});
let data = item.data;
data.text = group + '/' + PBS.Utils.render_datetime_utc(data["backup-time"]);
data.leaf = true;
data.cls = 'no-leaf-icons';
children.push(data);
}
let children = [];
for (const [_key, group] of Object.entries(groups)) {
let last_backup = 0;
for (const item of group.children) {
if (item["backup-time"] > last_backup) {
last_backup = item["backup-time"];
group["backup-time"] = last_backup;
group.files = item.files;
group.size = item.size;
group.owner = item.owner;
}
}
group.count = group.children.length;
children.push(group);
}
view.setRootNode({
expanded: true,
children: children
});
},
onPrune: function() {
@ -154,67 +159,59 @@ Ext.define('PBS.DataStoreContent', {
}
},
initComponent: function() {
var me = this;
columns: [
{
xtype: 'treecolumn',
header: gettext("Backup Group"),
dataIndex: 'text',
flex: 1
},
{
xtype: 'datecolumn',
header: gettext('Backup Time'),
sortable: true,
dataIndex: 'backup-time',
format: 'Y-m-d H:i:s',
width: 150
},
{
header: gettext("Size"),
sortable: true,
dataIndex: 'size',
renderer: Proxmox.Utils.format_size,
},
{
xtype: 'numbercolumn',
format: '0',
header: gettext("Count"),
sortable: true,
dataIndex: 'count',
},
{
header: gettext("Owner"),
sortable: true,
dataIndex: 'owner',
},
{
header: gettext("Files"),
sortable: false,
dataIndex: 'files',
flex: 2
},
],
var sm = Ext.create('Ext.selection.RowModel', {});
var prune_btn = new Proxmox.button.Button({
tbar: [
{
text: gettext('Reload'),
iconCls: 'fa fa-refresh',
handler: 'reload',
},
{
xtype: 'proxmoxButton',
text: gettext('Prune'),
disabled: true,
selModel: sm,
enableFn: function(record) { return !record.data.leaf; },
handler: 'onPrune',
});
Ext.apply(me, {
selModel: sm,
columns: [
{
xtype: 'treecolumn',
header: gettext("Backup Group"),
dataIndex: 'text',
flex: 1
},
{
xtype: 'datecolumn',
header: gettext('Backup Time'),
sortable: true,
dataIndex: 'backup-time',
format: 'Y-m-d H:i:s',
width: 150
},
{
header: gettext("Size"),
sortable: true,
dataIndex: 'size',
renderer: Proxmox.Utils.format_size,
},
{
xtype: 'numbercolumn',
format: '0',
header: gettext("Count"),
sortable: true,
dataIndex: 'count',
},
{
header: gettext("Files"),
sortable: false,
dataIndex: 'files',
flex: 2
}
],
tbar: [
{
text: gettext('Reload'),
iconCls: 'fa fa-refresh',
handler: 'reload',
},
prune_btn
],
});
me.callParent();
},
}
],
});

View File

@ -22,6 +22,12 @@ Ext.define('PBS.DataStorePanel', {
datastore: '{datastore}',
},
},
{
xtype: 'pbsDataStoreStatistic',
cbind: {
datastore: '{datastore}',
},
},
{
itemId: 'acl',
xtype: 'pbsACLView',

View File

@ -52,14 +52,13 @@ Ext.define('PBS.DataStorePruneInputPanel', {
method: "POST",
params: params,
callback: function() {
console.log("DONE");
return; // for easy breakpoint setting
},
failure: function (response, opts) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
success: function(response, options) {
var data = response.result.data;
console.log(data);
view.prune_store.setData(data);
}
});

www/DataStoreStatistic.js (new file, 102 lines)
View File

@ -0,0 +1,102 @@
Ext.define('pve-rrd-datastore', {
extend: 'Ext.data.Model',
fields: [
'used',
'total',
'read_ios',
'read_bytes',
'write_ios',
'write_bytes',
'io_ticks',
{
name: 'io_delay', calculate: function(data) {
let ios = 0;
if (data.read_ios !== undefined) { ios += data.read_ios; }
if (data.write_ios !== undefined) { ios += data.write_ios; }
if (ios == 0 || data.io_ticks === undefined) {
return undefined;
}
return (data.io_ticks*1000.0)/ios;
}
},
{ type: 'date', dateFormat: 'timestamp', name: 'time' }
]
});
Ext.define('PBS.DataStoreStatistic', {
extend: 'Ext.panel.Panel',
alias: 'widget.pbsDataStoreStatistic',
title: gettext('Statistics'),
scrollable: true,
initComponent: function() {
var me = this;
if (!me.datastore) {
throw "no datastore specified";
}
me.tbar = [ '->', { xtype: 'proxmoxRRDTypeSelector' } ];
var rrdstore = Ext.create('Proxmox.data.RRDStore', {
rrdurl: "/api2/json/admin/datastore/" + me.datastore + "/rrd",
model: 'pve-rrd-datastore'
});
me.items = {
xtype: 'container',
itemId: 'itemcontainer',
layout: 'column',
minWidth: 700,
defaults: {
minHeight: 320,
padding: 5,
columnWidth: 1
},
items: [
{
xtype: 'proxmoxRRDChart',
title: gettext('Storage usage (bytes)'),
fields: ['total','used'],
fieldTitles: [gettext('Total'), gettext('Storage usage')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Transfer Rate (bytes/second)'),
fields: ['read_bytes','write_bytes'],
fieldTitles: [gettext('Read'), gettext('Write')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Input/Output Operations per Second (IOPS)'),
fields: ['read_ios','write_ios'],
fieldTitles: [gettext('Read'), gettext('Write')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('IO Delay (ms)'),
fields: ['io_delay'],
fieldTitles: [gettext('IO Delay')],
store: rrdstore
},
]
};
me.listeners = {
activate: function() {
rrdstore.startUpdate();
},
destroy: function() {
rrdstore.stopUpdate();
},
};
me.callParent();
}
});

View File

@ -8,120 +8,13 @@ Ext.define('pbs-datastore-list', {
idProperty: 'store'
});
Ext.define('pve-rrd-node', {
extend: 'Ext.data.Model',
fields: [
{
name: 'cpu',
// percentage
convert: function(value) {
return value*100;
}
},
{
name: 'iowait',
// percentage
convert: function(value) {
return value*100;
}
},
'netin',
'netout',
'memtotal',
'memused',
'swaptotal',
'swapused',
'roottotal',
'rootused',
'loadavg',
{ type: 'date', dateFormat: 'timestamp', name: 'time' }
]
});
Ext.define('PBS.DataStoreStatus', {
extend: 'Ext.panel.Panel',
alias: 'widget.pbsDataStoreStatus',
title: gettext('Data Store Status'),
tbar: ['->', { xtype: 'proxmoxRRDTypeSelector' } ],
scrollable: true,
initComponent: function() {
var me = this;
// this is just a test for the RRD api
var rrdstore = Ext.create('Proxmox.data.RRDStore', {
rrdurl: "/api2/json/nodes/localhost/rrd",
model: 'pve-rrd-node'
});
me.items = {
xtype: 'container',
itemId: 'itemcontainer',
layout: 'column',
minWidth: 700,
defaults: {
minHeight: 320,
padding: 5,
columnWidth: 1
},
items: [
{
xtype: 'proxmoxRRDChart',
title: gettext('CPU usage'),
fields: ['cpu','iowait'],
fieldTitles: [gettext('CPU usage'), gettext('IO delay')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Server load'),
fields: ['loadavg'],
fieldTitles: [gettext('Load average')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Memory usage'),
fields: ['memtotal','memused'],
fieldTitles: [gettext('Total'), gettext('RAM usage')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Swap usage'),
fields: ['swaptotal','swapused'],
fieldTitles: [gettext('Total'), gettext('Swap usage')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Network traffic'),
fields: ['netin','netout'],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Root Disk usage'),
fields: ['roottotal','rootused'],
fieldTitles: [gettext('Total'), gettext('Disk usage')],
store: rrdstore
},
]
};
me.listeners = {
activate: function() {
rrdstore.startUpdate();
},
destroy: function() {
rrdstore.stopUpdate();
},
};
me.callParent();
}
html: "fixme: Add Datastore status",
});

View File

@ -6,9 +6,15 @@ IMAGES := \
JSSRC= \
form/UserSelector.js \
form/RemoteSelector.js \
form/DataStoreSelector.js \
config/UserView.js \
config/RemoteView.js \
config/ACLView.js \
config/SyncView.js \
window/UserEdit.js \
window/RemoteEdit.js \
window/SyncJobEdit.js \
window/ACLEdit.js \
Utils.js \
LoginView.js \
@ -18,6 +24,7 @@ JSSRC= \
DataStorePrune.js \
DataStoreConfig.js \
DataStoreStatus.js \
DataStoreStatistic.js \
DataStoreContent.js \
DataStorePanel.js \
ServerStatus.js \

View File

@ -30,6 +30,18 @@ Ext.define('PBS.store.NavigationStore', {
path: 'pbsACLView',
leaf: true
},
{
text: gettext('Remotes'),
iconCls: 'fa fa-server',
path: 'pbsRemoteView',
leaf: true,
},
{
text: gettext('Sync Jobs'),
iconCls: 'fa fa-refresh',
path: 'pbsSyncJobView',
leaf: true,
},
{
text: gettext('Data Store'),
iconCls: 'fa fa-archive',

View File

@ -1,10 +1,55 @@
Ext.define('pve-rrd-node', {
extend: 'Ext.data.Model',
fields: [
{
name: 'cpu',
// percentage
convert: function(value) {
return value*100;
}
},
{
name: 'iowait',
// percentage
convert: function(value) {
return value*100;
}
},
'netin',
'netout',
'memtotal',
'memused',
'swaptotal',
'swapused',
'total',
'used',
'read_ios',
'read_bytes',
'write_ios',
'write_bytes',
'io_ticks',
{
name: 'io_delay', calculate: function(data) {
let ios = 0;
if (data.read_ios !== undefined) { ios += data.read_ios; }
if (data.write_ios !== undefined) { ios += data.write_ios; }
if (ios == 0 || data.io_ticks === undefined) {
return undefined;
}
return (data.io_ticks*1000.0)/ios;
}
},
'loadavg',
{ type: 'date', dateFormat: 'timestamp', name: 'time' }
]
});
Ext.define('PBS.ServerStatus', {
extend: 'Ext.panel.Panel',
alias: 'widget.pbsServerStatus',
title: gettext('ServerStatus'),
html: "Add Something usefule here ?",
scrollable: true,
initComponent: function() {
var me = this;
@ -41,7 +86,97 @@ Ext.define('PBS.ServerStatus', {
iconCls: 'fa fa-power-off'
});
me.tbar = [ restartBtn, shutdownBtn ];
me.tbar = [ restartBtn, shutdownBtn, '->', { xtype: 'proxmoxRRDTypeSelector' } ];
var rrdstore = Ext.create('Proxmox.data.RRDStore', {
rrdurl: "/api2/json/nodes/localhost/rrd",
model: 'pve-rrd-node'
});
me.items = {
xtype: 'container',
itemId: 'itemcontainer',
layout: 'column',
minWidth: 700,
defaults: {
minHeight: 320,
padding: 5,
columnWidth: 1
},
items: [
{
xtype: 'proxmoxRRDChart',
title: gettext('CPU usage'),
fields: ['cpu','iowait'],
fieldTitles: [gettext('CPU usage'), gettext('IO wait')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Server load'),
fields: ['loadavg'],
fieldTitles: [gettext('Load average')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Memory usage'),
fields: ['memtotal','memused'],
fieldTitles: [gettext('Total'), gettext('RAM usage')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Swap usage'),
fields: ['swaptotal','swapused'],
fieldTitles: [gettext('Total'), gettext('Swap usage')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Network traffic'),
fields: ['netin','netout'],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Root Disk usage'),
fields: ['total','used'],
fieldTitles: [gettext('Total'), gettext('Disk usage')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Root Disk Transfer Rate (bytes/second)'),
fields: ['read_bytes','write_bytes'],
fieldTitles: [gettext('Read'), gettext('Write')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Root Disk Input/Output Operations per Second (IOPS)'),
fields: ['read_ios','write_ios'],
fieldTitles: [gettext('Read'), gettext('Write')],
store: rrdstore
},
{
xtype: 'proxmoxRRDChart',
title: gettext('Root Disk IO Delay (ms)'),
fields: ['io_delay'],
fieldTitles: [gettext('IO Delay')],
store: rrdstore
},
]
};
me.listeners = {
activate: function() {
rrdstore.startUpdate();
},
destroy: function() {
rrdstore.stopUpdate();
},
};
me.callParent();
}

View File

@ -7,12 +7,8 @@ Ext.define('PBS.Utils', {
singleton: true,
updateLoginData: function(data) {
Proxmox.CSRFPreventionToken = data.CSRFPreventionToken;
Proxmox.UserName = data.username;
//console.log(data.ticket);
// fixme: use secure flag once we have TLS
//Ext.util.Cookies.set('PBSAuthCookie', data.ticket, null, '/', null, true );
Ext.util.Cookies.set('PBSAuthCookie', data.ticket, null, '/', null, false);
Proxmox.Utils.setAuthData(data);
},
dataStorePrefix: 'DataStore-',
@@ -25,14 +21,53 @@
return path.indexOf(PBS.Utils.dataStorePrefix) === 0;
},
render_datetime_utc: function(datetime) {
let pad = (number) => number < 10 ? '0' + number : number;
return datetime.getUTCFullYear() +
'-' + pad(datetime.getUTCMonth() + 1) +
'-' + pad(datetime.getUTCDate()) +
'T' + pad(datetime.getUTCHours()) +
':' + pad(datetime.getUTCMinutes()) +
':' + pad(datetime.getUTCSeconds()) +
'Z';
},
render_datastore_worker_id: function(id, what) {
const result = id.match(/^(\S+)_([^_\s]+)_([^_\s]+)$/);
if (result) {
let datastore = result[1], type = result[2], id = result[3];
return `Datastore ${datastore} - ${what} ${type}/${id}`;
}
return what;
},
render_datastore_time_worker_id: function(id, what) {
const res = id.match(/^(\S+)_([^_\s]+)_([^_\s]+)_([^_\s]+)$/);
if (res) {
let datastore = res[1], type = res[2], id = res[3];
let datetime = Ext.Date.parse(parseInt(res[4], 16), 'U');
let utctime = PBS.Utils.render_datetime_utc(datetime);
return `Datastore ${datastore} - ${what} ${type}/${id}/${utctime}`;
}
return what;
},
constructor: function() {
var me = this;
// do whatever you want here
Proxmox.Utils.override_task_descriptions({
garbage_collection: ['Datastore', gettext('Garbage collect') ],
backup: [ '', gettext('Backup') ],
reader: [ '', gettext('Read datastore objects') ], // FIXME: better one
sync: ['Datastore', gettext('Remote Sync') ],
syncjob: [gettext('Sync Job'), gettext('Remote Sync') ],
prune: (type, id) => {
return PBS.Utils.render_datastore_worker_id(id, gettext('Prune'));
},
backup: (type, id) => {
return PBS.Utils.render_datastore_worker_id(id, gettext('Backup'));
},
reader: (type, id) => {
return PBS.Utils.render_datastore_time_worker_id(id, gettext('Read objects'));
},
});
}
});
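For reference, how the helpers above behave for typical inputs. The concrete worker IDs below are invented, and the assumption that the trailing component of a "time" worker ID is a Unix timestamp in hex is mine (it follows from the Ext.Date.parse(parseInt(res[4], 16), 'U') call).
// render_datetime_utc: fixed-width UTC timestamp (JS months are 0-based, 5 = June).
let dt = new Date(Date.UTC(2020, 5, 4, 10, 39, 15));
console.log(PBS.Utils.render_datetime_utc(dt));
// => "2020-06-04T10:39:15Z"

// render_datastore_worker_id: "<datastore>_<type>_<id>" worker IDs.
console.log(PBS.Utils.render_datastore_worker_id('store1_vm_100', gettext('Backup')));
// => "Datastore store1 - Backup vm/100"

// render_datastore_time_worker_id: same, with a hex Unix timestamp appended.
console.log(PBS.Utils.render_datastore_time_worker_id('store1_vm_100_5ed8a2a3', gettext('Read objects')));
// => "Datastore store1 - Read objects vm/100/<UTC rendering of 0x5ed8a2a3>"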

View File

@@ -5,7 +5,7 @@ Ext.define('pmx-acls', {
{
name: 'aclid',
calculate: function(data) {
return `${data.path} for ${data.ugid}`;
return `${data.path} for ${data.ugid} - ${data.roleid}`;
},
},
],
@@ -77,6 +77,17 @@ Ext.define('PBS.config.ACLView', {
params.exact = view.aclExact;
}
proxy.setExtraParams(params);
Proxmox.Utils.monStoreErrors(view, view.getStore().rstore);
},
control: {
'#': { // view
activate: function() {
this.getView().getStore().rstore.startUpdate();
},
deactivate: function() {
this.getView().getStore().rstore.stopUpdate();
},
},
},
},
@@ -84,12 +95,11 @@
type: 'diff',
autoDestroy: true,
autoDestroyRstore: true,
sorters: 'userid',
sorters: 'aclid',
rstore: {
type: 'update',
storeid: 'pmx-acls',
model: 'pmx-acls',
autoStart: true,
interval: 5000,
},
},
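The store setup above is the usual Proxmox widget-toolkit pairing: an 'update' rstore polls the API every 5 seconds, the 'diff' store applies the changes to the grid so sorting and selection survive each refresh, monStoreErrors surfaces load failures on the view, and the activate/deactivate handlers keep polling limited to the visible tab. A condensed sketch of that wiring (columns omitted, grid config invented for illustration):
// Condensed sketch of the polling diff-store pattern used above.
Ext.create('Ext.grid.Panel', {
    store: {
        type: 'diff',           // applies changes in place, keeps selection
        autoDestroy: true,
        autoDestroyRstore: true,
        sorters: 'aclid',
        rstore: {               // the store that actually polls the API
            type: 'update',
            storeid: 'pmx-acls',
            model: 'pmx-acls',
            interval: 5000,     // milliseconds
        },
    },
    listeners: {
        activate: function() { this.getStore().rstore.startUpdate(); },
        deactivate: function() { this.getStore().rstore.stopUpdate(); },
    },
});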

136
www/config/RemoteView.js Normal file
View File

@@ -0,0 +1,136 @@
Ext.define('pmx-remotes', {
extend: 'Ext.data.Model',
fields: [ 'name', 'host', 'userid', 'fingerprint', 'comment' ],
idProperty: 'name',
proxy: {
type: 'proxmox',
url: '/api2/json/config/remote',
},
});
Ext.define('PBS.config.RemoteView', {
extend: 'Ext.grid.GridPanel',
alias: 'widget.pbsRemoteView',
stateful: true,
stateId: 'grid-remotes',
title: gettext('Remotes'),
controller: {
xclass: 'Ext.app.ViewController',
addRemote: function() {
let me = this;
Ext.create('PBS.window.RemoteEdit', {
listeners: {
destroy: function() {
me.reload();
},
},
}).show();
},
editRemote: function() {
let me = this;
let view = me.getView();
let selection = view.getSelection();
if (selection.length < 1) return;
Ext.create('PBS.window.RemoteEdit', {
name: selection[0].data.name,
listeners: {
destroy: function() {
me.reload();
},
},
}).show();
},
reload: function() { this.getView().getStore().rstore.load(); },
init: function(view) {
Proxmox.Utils.monStoreErrors(view, view.getStore().rstore);
},
},
listeners: {
activate: 'reload',
itemdblclick: 'editRemote',
},
store: {
type: 'diff',
autoDestroy: true,
autoDestroyRstore: true,
sorters: 'name',
rstore: {
type: 'update',
storeid: 'pmx-remotes',
model: 'pmx-remotes',
autoStart: true,
interval: 5000,
},
},
tbar: [
{
xtype: 'proxmoxButton',
text: gettext('Add'),
handler: 'addRemote',
selModel: false,
},
{
xtype: 'proxmoxButton',
text: gettext('Edit'),
handler: 'editRemote',
disabled: true,
},
{
xtype: 'proxmoxStdRemoveButton',
baseurl: '/config/remote',
callback: 'reload',
},
],
viewConfig: {
trackOver: false,
},
columns: [
{
header: gettext('Remote'),
width: 200,
sortable: true,
renderer: Ext.String.htmlEncode,
dataIndex: 'name',
},
{
header: gettext('Host'),
width: 200,
sortable: true,
dataIndex: 'host',
},
{
header: gettext('User name'),
width: 200,
sortable: true,
renderer: Ext.String.htmlEncode,
dataIndex: 'userid',
},
{
header: gettext('Fingerprint'),
sortable: false,
renderer: Ext.String.htmlEncode,
dataIndex: 'fingerprint',
width: 200,
},
{
header: gettext('Comment'),
sortable: false,
renderer: Ext.String.htmlEncode,
dataIndex: 'comment',
flex: 1,
},
],
});

263
www/config/SyncView.js Normal file
View File

@@ -0,0 +1,263 @@
Ext.define('pbs-sync-jobs-status', {
extend: 'Ext.data.Model',
fields: [
'id', 'remote', 'remote-store', 'store', 'schedule',
'next-run', 'last-run-upid', 'last-run-state', 'last-run-endtime',
{
name: 'duration',
calculate: function(data) {
let endtime = data['last-run-endtime'];
if (!endtime) return undefined;
let task = Proxmox.Utils.parse_task_upid(data['last-run-upid']);
return endtime - task.starttime;
},
},
],
idProperty: 'id',
proxy: {
type: 'proxmox',
url: '/api2/json/admin/sync',
},
});
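The calculated duration field above relies on the start time encoded in the task's UPID: Proxmox.Utils.parse_task_upid() yields, among other fields, starttime, and the duration is simply last-run-endtime minus that (both are Unix epoch seconds). A small sketch of the idea, without fabricating a concrete UPID string:
// Sketch of the duration calculation: the UPID encodes the task's start
// time, the API reports the end time, the duration is the gap in seconds.
function syncDuration(data) {
    let endtime = data['last-run-endtime'];
    if (!endtime) return undefined;         // job never finished (or never ran)
    let task = Proxmox.Utils.parse_task_upid(data['last-run-upid']);
    return endtime - task.starttime;        // seconds, fed to render_duration below
}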
Ext.define('PBS.config.SyncJobView', {
extend: 'Ext.grid.GridPanel',
alias: 'widget.pbsSyncJobView',
stateful: true,
stateId: 'grid-sync-jobs',
title: gettext('Sync Jobs'),
controller: {
xclass: 'Ext.app.ViewController',
addSyncJob: function() {
let me = this;
Ext.create('PBS.window.SyncJobEdit', {
listeners: {
destroy: function() {
me.reload();
},
},
}).show();
},
editSyncJob: function() {
let me = this;
let view = me.getView();
let selection = view.getSelection();
if (selection.length < 1) return;
Ext.create('PBS.window.SyncJobEdit', {
id: selection[0].data.id,
listeners: {
destroy: function() {
me.reload();
},
},
}).show();
},
openTaskLog: function() {
let me = this;
let view = me.getView();
let selection = view.getSelection();
if (selection.length < 1) return;
let upid = selection[0].data['last-run-upid'];
if (!upid) return;
Ext.create('Proxmox.window.TaskViewer', {
upid
}).show();
},
runSyncJob: function() {
let me = this;
let view = me.getView();
let selection = view.getSelection();
if (selection.length < 1) return;
let id = selection[0].data.id;
Proxmox.Utils.API2Request({
method: 'POST',
url: `/admin/sync/${id}/run`,
success: function(response, opt) {
Ext.create('Proxmox.window.TaskViewer', {
upid: response.result.data,
taskDone: function(success) {
me.reload();
},
}).show();
},
failure: function(response, opt) {
Ext.Msg.alert(gettext('Error'), response.htmlStatus);
},
});
},
render_sync_status: function(value, metadata, record) {
if (!record.data['last-run-upid']) {
return '-';
}
if (!record.data['last-run-endtime']) {
metadata.tdCls = 'x-grid-row-loading';
return '';
}
if (value === 'OK') {
return `<i class="fa fa-check good"></i> ${gettext("OK")}`;
}
return `<i class="fa fa-times critical"></i> ${gettext("Error")}: ${value}`;
},
render_next_run: function(value, metadata, record) {
if (!value) return '-';
let now = new Date();
let next = new Date(value*1000);
if (next < now) {
return gettext('pending');
}
return Proxmox.Utils.render_timestamp(value);
},
render_optional_timestamp: function(value, metadata, record) {
if (!value) return '-';
return Proxmox.Utils.render_timestamp(value);
},
reload: function() { this.getView().getStore().rstore.load(); },
init: function(view) {
Proxmox.Utils.monStoreErrors(view, view.getStore().rstore);
},
},
listeners: {
activate: 'reload',
itemdblclick: 'editSyncJob',
},
store: {
type: 'diff',
autoDestroy: true,
autoDestroyRstore: true,
sorters: 'id',
rstore: {
type: 'update',
storeid: 'pbs-sync-jobs-status',
model: 'pbs-sync-jobs-status',
autoStart: true,
interval: 5000,
},
},
tbar: [
{
xtype: 'proxmoxButton',
text: gettext('Add'),
handler: 'addSyncJob',
selModel: false,
},
{
xtype: 'proxmoxButton',
text: gettext('Edit'),
handler: 'editSyncJob',
disabled: true,
},
{
xtype: 'proxmoxStdRemoveButton',
baseurl: '/config/sync/',
callback: 'reload',
},
'-',
{
xtype: 'proxmoxButton',
text: gettext('Log'),
handler: 'openTaskLog',
enableFn: (rec) => !!rec.data['last-run-upid'],
disabled: true,
},
{
xtype: 'proxmoxButton',
text: gettext('Run now'),
handler: 'runSyncJob',
disabled: true,
},
],
viewConfig: {
trackOver: false,
},
columns: [
{
header: gettext('Sync Job'),
width: 200,
sortable: true,
renderer: Ext.String.htmlEncode,
dataIndex: 'id',
},
{
header: gettext('Remote'),
width: 200,
sortable: true,
dataIndex: 'remote',
},
{
header: gettext('Remote Store'),
width: 200,
sortable: true,
dataIndex: 'remote-store',
},
{
header: gettext('Local Store'),
width: 200,
sortable: true,
dataIndex: 'store',
},
{
header: gettext('Schedule'),
sortable: true,
dataIndex: 'schedule',
},
{
header: gettext('Status'),
dataIndex: 'last-run-state',
flex: 1,
renderer: 'render_sync_status',
},
{
header: gettext('Last Sync'),
sortable: true,
minWidth: 200,
renderer: 'render_optional_timestamp',
dataIndex: 'last-run-endtime',
},
{
text: gettext('Duration'),
dataIndex: 'duration',
width: 60,
renderer: Proxmox.Utils.render_duration,
},
{
header: gettext('Next Run'),
sortable: true,
minWidth: 200,
renderer: 'render_next_run',
dataIndex: 'next-run',
},
{
header: gettext('Comment'),
hidden: true,
sortable: true,
renderer: Ext.String.htmlEncode,
dataIndex: 'comment',
},
],
});
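Two details of the renderers above are easy to miss: the status column shows a loading indicator (x-grid-row-loading) while a job has a UPID but no end time yet, and render_next_run shows 'pending' whenever the computed next run is already in the past, meaning the job will be picked up on the scheduler's next iteration. A quick stand-alone illustration of the next-run logic, with invented timestamps and a plain Date used as a stand-in for Proxmox.Utils.render_timestamp:
// next-run values are Unix epoch seconds; anything already due is shown
// as "pending" instead of a stale timestamp.
function nextRunText(value, nowSeconds) {
    if (!value) return '-';
    if (value < nowSeconds) return 'pending';        // already due
    return new Date(value * 1000).toUTCString();     // stand-in renderer
}
console.log(nextRunText(undefined, 1591250000));     // "-"
console.log(nextRunText(1591249000, 1591250000));    // "pending"
console.log(nextRunText(1591260000, 1591250000));    // a future timestamp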

View File

@@ -60,6 +60,10 @@ Ext.define('PBS.config.UserView', {
},
reload: function() { this.getView().getStore().rstore.load(); },
init: function(view) {
Proxmox.Utils.monStoreErrors(view, view.getStore().rstore);
},
},
listeners: {

View File

@@ -0,0 +1,34 @@
Ext.define('PBS.form.DataStoreSelector', {
extend: 'Proxmox.form.ComboGrid',
alias: 'widget.pbsDataStoreSelector',
allowBlank: false,
autoSelect: false,
valueField: 'store',
displayField: 'store',
store: {
model: 'pbs-datastore-list',
autoLoad: true,
sorters: 'store',
},
listConfig: {
columns: [
{
header: gettext('DataStore'),
sortable: true,
dataIndex: 'store',
renderer: Ext.String.htmlEncode,
flex: 1,
},
{
header: gettext('Comment'),
sortable: true,
dataIndex: 'comment',
renderer: Ext.String.htmlEncode,
flex: 1,
},
],
},
});
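A short usage sketch for the selector above: the window and field layout here are hypothetical, chosen only to show that the selected value is the 'store' field of the chosen record (the valueField above); the real usage is the 'Local Datastore' field of the sync job editor further down.
// Hypothetical dialog embedding the datastore selector.
let win = Ext.create('Ext.window.Window', {
    title: 'Pick a datastore (sketch)',
    layout: 'fit',
    items: [{
        xtype: 'form',
        bodyPadding: 10,
        items: [{
            xtype: 'pbsDataStoreSelector',
            name: 'store',
            fieldLabel: gettext('Datastore'),
        }],
    }],
});
win.show();
// later: win.down('pbsDataStoreSelector').getValue() => e.g. "store1"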

View File

@@ -0,0 +1,40 @@
Ext.define('PBS.form.RemoteSelector', {
extend: 'Proxmox.form.ComboGrid',
alias: 'widget.pbsRemoteSelector',
allowBlank: false,
autoSelect: false,
valueField: 'name',
displayField: 'name',
store: {
model: 'pmx-remotes',
autoLoad: true,
sorters: 'name',
},
listConfig: {
columns: [
{
header: gettext('Remote'),
sortable: true,
dataIndex: 'name',
renderer: Ext.String.htmlEncode,
flex: 1,
},
{
header: gettext('Host'),
sortable: true,
dataIndex: 'host',
flex: 1,
},
{
header: gettext('User name'),
sortable: true,
dataIndex: 'userid',
renderer: Ext.String.htmlEncode,
flex: 1,
},
],
},
});

94
www/window/RemoteEdit.js Normal file
View File

@@ -0,0 +1,94 @@
Ext.define('PBS.window.RemoteEdit', {
extend: 'Proxmox.window.Edit',
alias: 'widget.pbsRemoteEdit',
mixins: ['Proxmox.Mixin.CBind'],
userid: undefined,
isAdd: true,
subject: gettext('Remote'),
fieldDefaults: { labelWidth: 120 },
cbindData: function(initialConfig) {
let me = this;
let baseurl = '/api2/extjs/config/remote';
let name = initialConfig.name;
me.isCreate = !name;
me.url = name ? `${baseurl}/${name}` : baseurl;
me.method = name ? 'PUT' : 'POST';
me.autoLoad = !!name;
return {
passwordEmptyText: me.isCreate ? '' : gettext('Unchanged'),
};
},
items: {
xtype: 'inputpanel',
column1: [
{
xtype: 'pmxDisplayEditField',
name: 'name',
fieldLabel: gettext('Remote'),
renderer: Ext.htmlEncode,
allowBlank: false,
minLength: 4,
cbind: {
editable: '{isCreate}',
},
},
{
xtype: 'proxmoxtextfield',
allowBlank: false,
name: 'host',
fieldLabel: gettext('Host'),
},
],
column2: [
{
xtype: 'proxmoxtextfield',
allowBlank: false,
name: 'userid',
fieldLabel: gettext('Userid'),
},
{
xtype: 'textfield',
inputType: 'password',
fieldLabel: gettext('Password'),
name: 'password',
cbind: {
emptyText: '{passwordEmptyText}',
allowBlank: '{!isCreate}',
},
},
],
columnB: [
{
xtype: 'proxmoxtextfield',
name: 'fingerprint',
fieldLabel: gettext('Fingerprint'),
},
{
xtype: 'proxmoxtextfield',
name: 'comment',
fieldLabel: gettext('Comment'),
},
],
},
getValues: function() {
let me = this;
let values = me.callParent(arguments);
if (values.password === '') {
delete values.password;
}
return values;
},
});
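The cbindData hook above is what switches the dialog between create and edit mode: with no name it POSTs to the collection URL, with a name it loads the existing entry and PUTs to it, and getValues drops an empty password so editing never overwrites a stored password by accident. Roughly, with the remote name below invented:
// Create mode: no name, so the window POSTs a new remote to /api2/extjs/config/remote.
Ext.create('PBS.window.RemoteEdit', {
    listeners: { destroy: () => console.log('refresh the remotes grid here') },
}).show();

// Edit mode (remote name invented): autoloads /api2/extjs/config/remote/backup-site
// and submits changes with PUT; leaving the password field empty keeps the stored one.
Ext.create('PBS.window.RemoteEdit', { name: 'backup-site' }).show();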

84
www/window/SyncJobEdit.js Normal file
View File

@@ -0,0 +1,84 @@
Ext.define('PBS.window.SyncJobEdit', {
extend: 'Proxmox.window.Edit',
alias: 'widget.pbsSyncJobEdit',
mixins: ['Proxmox.Mixin.CBind'],
userid: undefined,
isAdd: true,
subject: gettext('SyncJob'),
fieldDefaults: { labelWidth: 120 },
cbindData: function(initialConfig) {
let me = this;
let baseurl = '/api2/extjs/config/sync';
let id = initialConfig.id;
me.isCreate = !id;
me.url = id ? `${baseurl}/${id}` : baseurl;
me.method = id ? 'PUT' : 'POST';
me.autoLoad = !!id;
return { };
},
items: {
xtype: 'inputpanel',
column1: [
{
fieldLabel: gettext('Sync Job'),
xtype: 'pmxDisplayEditField',
name: 'id',
renderer: Ext.htmlEncode,
allowBlank: false,
minLength: 4,
cbind: {
editable: '{isCreate}',
},
},
{
fieldLabel: gettext('Remote'),
xtype: 'pbsRemoteSelector',
allowBlank: false,
name: 'remote',
},
{
fieldLabel: gettext('Local Datastore'),
xtype: 'pbsDataStoreSelector',
allowBlank: false,
name: 'store',
},
{
fieldLabel: gettext('Remote Datastore'),
xtype: 'proxmoxtextfield',
allowBlank: false,
name: 'remote-store',
},
],
column2: [
{
fieldLabel: gettext('Remove vanished'),
xtype: 'proxmoxcheckbox',
name: 'remove-vanished',
uncheckedValue: false,
value: true,
},
{
fieldLabel: gettext('Schedule'),
xtype: 'proxmoxtextfield',
name: 'schedule',
},
],
columnB: [
{
fieldLabel: gettext('Comment'),
xtype: 'proxmoxtextfield',
name: 'comment',
},
],
},
});
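Finally, a usage sketch tying the pieces together: the sync job grid above opens this window either empty (create) or with an existing job id (edit), and reloads its store once the window is destroyed. The job id below is invented.
// Create a new sync job, then refresh the calling grid when the dialog closes.
Ext.create('PBS.window.SyncJobEdit', {
    listeners: {
        destroy: function() {
            // in SyncJobView this is me.reload(), i.e. rstore.load()
            console.log('reload sync job list here');
        },
    },
}).show();

// Edit an existing job: autoloads /api2/extjs/config/sync/job-1 and PUTs changes.
Ext.create('PBS.window.SyncJobEdit', { id: 'job-1' }).show();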