Compare commits

...

42 Commits

Author SHA1 Message Date
c9299e76fc bump version to 0.9.3-2
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 17:20:04 +01:00
2f1a46f748 ui: move user, token and permissions into an access control tab panel
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 16:47:18 +01:00
2b38dfb456 d/control: update
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 16:18:40 +01:00
f487a622ce ui: datastore summary: handle missing snapshots of a type
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 15:52:53 +01:00
906ef6c5bd api2/access/user: fix return type schema
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:20:10 +01:00
ea1853a17b api2/access/user: drop Option, treat empty Vec as None
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:17:54 +01:00
221177ba41 fixup hardcoded paths
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:15:17 +01:00
184a37635b gui: add API token ACLs
and the needed API token selector.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
b2da7fbd1c acls: allow viewing/editing user's token ACLs
even for otherwise unprivileged users.

since effective privileges of an API token are always intersected with
those of their owning user, this does not allow an unprivileged user to
elevate their privileges in practice, but avoids the need to involve a
privileged user to deploy API tokens.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
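The intersection rule above is the security core of the token feature. A minimal sketch of it (the helper name is hypothetical; the real logic lives in the ACL / cached-user-info code):

```rust
/// Hypothetical sketch of the rule described above: a token's
/// effective privileges are the bitwise AND of the privileges granted
/// to the token and those of its owning user, so an ACL entry for a
/// token can never grant more than the owner already has.
fn effective_token_privs(owner_privs: u64, token_privs: u64) -> u64 {
    owner_privs & token_privs
}
```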
7fe76d3491 gui: add API token UI
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
e6b5bf69a3 gui: add permissions button to user view
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
4615325f9e manager: add user permissions command
useful for debugging complex ACL setups.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
2156dec5a9 manager: add token commands
to generate, list and delete tokens. adding them to ACLs already works
out of the box.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
16245d540c tasks: allow unpriv users to read their tokens' tasks
and tighten down the return schema while we're at it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
bff8557298 owner checks: handle backups owned by API tokens
a user should be allowed to read/list/overwrite backups owned by their
own tokens, but a token should not be able to read/list/overwrite
backups owned by their owning user.

when changing ownership of a backup group, a user should be able to
transfer ownership to/from their own tokens if the backup is owned by
them (or one of their tokens).

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
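A self-contained sketch of the asymmetric ownership rule described above, mirroring the check_backup_owner() change visible in the datastore diff further down (the Authid stand-in here is simplified):

```rust
// Simplified stand-in for the crate's Authid type.
#[derive(PartialEq)]
struct Authid { user: String, token: Option<String> }

impl Authid {
    fn is_token(&self) -> bool { self.token.is_some() }
}

// A user may access backups owned by their own tokens, but a token may
// not access backups owned by its owning user (or by sibling tokens).
fn backup_owner_matches(owner: &Authid, auth_id: &Authid) -> bool {
    owner == auth_id
        || (owner.is_token() && !auth_id.is_token() && owner.user == auth_id.user)
}
```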
34aa8e13b6 client/remote: allow using ApiToken + secret
in place of user + password.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
babab85b56 api: add permissions endpoint
and adapt privilege calculation to return propagate flag

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
6746bbb1a2 api: allow listing users + tokens
since it's not possible to extend existing structs, UserWithTokens
duplicates most of user::User. To avoid duplicating user::ApiToken as
well, this returns full API token IDs, not just the token name part.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
942078c40b api: add API token endpoints
beneath the user endpoint.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
c30816c1f8 REST: extract and handle API tokens
and refactor handling of headers in the REST server while we're at it.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:14:27 +01:00
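For illustration, a sketch of the header handling this implies; the exact wire format in the comment is an assumption (PBS documents API tokens as an Authorization header carrying the token id and secret):

```rust
// Assumed wire format: Authorization: PBSAPIToken=<user@realm!name>:<secret>
// The REST server would split this into the token id (an Authid) and the
// secret, then verify the secret against token.shadow.
fn parse_token_header(value: &str) -> Option<(&str, &str)> {
    let raw = value.strip_prefix("PBSAPIToken=")?;
    let mut parts = raw.splitn(2, ':');
    Some((parts.next()?, parts.next()?))
}
```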
e6dc35acb8 replace Userid with Authid
in most generic places. this is accompanied by a change in
RpcEnvironment to purposefully break existing call sites.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
e10c5c74f6 bump proxmox dependency to 0.6.0 for api tokens and tfa
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2020-10-29 15:11:39 +01:00
f8adf8f83f config: add token.shadow file
containing pairs of token ids and hashed secret values.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
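A sketch of the lookup this enables; the hash function is passed in because the concrete hashing scheme is not shown in this diff (real implementations use a salted hash, simplified here to plain recompute-and-compare):

```rust
use std::collections::HashMap;

// token.shadow maps a token id to a hashed secret; authentication
// recomputes the hash of the presented secret and compares.
fn verify_token_secret(
    shadow: &HashMap<String, String>,
    tokenid: &str,
    secret: &str,
    hash: impl Fn(&str) -> String,
) -> bool {
    shadow.get(tokenid).map_or(false, |h| *h == hash(secret))
}
```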
e0538349e2 api: add Authid as wrapper around Userid
with an optional Tokenname, appended with '!' as delimiter in the string
representation like for PVE.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2020-10-29 15:11:39 +01:00
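A stand-in sketch of the string form described above ("user@realm" for plain users, "user@realm!tokenname" for API tokens):

```rust
// Simplified stand-in for the crate's Authid type.
struct Authid { userid: String, tokenname: Option<String> }

// Parse "user@realm" or "user@realm!tokenname".
fn parse_authid(s: &str) -> Authid {
    let mut parts = s.splitn(2, '!');
    let userid = parts.next().unwrap().to_string();
    let tokenname = parts.next().map(str::to_string);
    Authid { userid, tokenname }
}

fn authid_to_string(a: &Authid) -> String {
    match &a.tokenname {
        Some(t) => format!("{}!{}", a.userid, t),
        None => a.userid.clone(),
    }
}
```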
0903403ce7 bump version to 0.9.3-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:58:21 +01:00
b6563f48ad GC: improve task logs
Make it clearer that the removed files are chunks (not indexes or
anything like that; the user cannot know that we do not touch those here)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:47:39 +01:00
932390bd46 GC: fix logging leftover bad chunks
fixes commit b4fb262335, which copied
over the "Removed bad files:" block, but only adapted the log text,
not the actual variable.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:40:29 +01:00
6b7688aa98 ui: datastore: fix sync/verify job removal prompt
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:34:31 +01:00
ab0cf7e6a1 ui: drop id field from verify/sync add window
the config is shared between multiple datastores, with the ID as the
unique key, but we only show the jobs of a single datastore.

So if a user adds a new job with a fixed ID "12345" while a job with
that ID already exists on another store, they get an error about
duplicate IDs, but cannot make sense of it, as the duplicate job is not
visible (filtered away)

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 14:22:43 +01:00
264779e704 server/worker_task: simplify task log writing
instead of prerotating 1000 tasks
(which resulted in 2 writes each time an active worker was finished)
simply append finished tasks to the archive (which will be rotated)

page cache should be good enough so that we can get the task logs fast

since existing installations might have an 'index' file, we
still have to read tasks from there, but only if it exists

this simplifies the TaskListInfoIterator a good amount

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-29 12:41:20 +01:00
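A sketch of the append-on-finish idea (file name and line format are placeholders, not the crate's actual on-disk format):

```rust
use std::fs::OpenOptions;
use std::io::Write;

// Append one line per finished task to the archive instead of
// rewriting a pre-rotated index; rotation happens separately.
fn append_finished_task(
    archive_path: &str,
    upid: &str,
    endtime: i64,
    status: &str,
) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(archive_path)?;
    writeln!(file, "{} {:08X} {}", upid, endtime, status)
}
```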
7f3d91003c worker task: remove debug print, faster modulo
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 12:35:33 +01:00
14e0862509 api: datastore status: introduce proper structs and restore compatibility
by moving the properties of the storage status out again to the top
level object

also introduce proper structs for the types used, to get type-safety
and better documentation for the api calls

this changes the backup counts from an array of [groups,snapshots] to
an object/struct with { groups, snapshots } and includes 'other' types
(though we do not have any at this moment)

this way it is better documented

this also adapts the ui code to cope with the api changes

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-29 12:31:27 +01:00
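The shape change boils down to the structs below (field set taken from the Counts usage visible in the datastore diff further down; 'other' collects every type that is not ct/host/vm):

```rust
// One count pair per backup type: number of backup groups and total
// snapshots of that type.
#[derive(Default)]
struct TypeCounts { groups: u64, snapshots: u64 }

// Replaces the old HashMap<String, (usize, usize)> / JSON array form.
struct Counts {
    ct: Option<TypeCounts>,
    host: Option<TypeCounts>,
    vm: Option<TypeCounts>,
    other: Option<TypeCounts>,
}
```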
9e733dae48 send sync job status emails 2020-10-29 12:22:50 +01:00
bfea476be2 schedule_datastore_sync_jobs: remove unnecessary clone() 2020-10-29 12:22:41 +01:00
385cf2bd9d send_job_status_mail: correctly escape html characters 2020-10-29 11:22:08 +01:00
d6373f3525 garbage_collection: log deduplication factor 2020-10-29 11:13:01 +01:00
01f37e01c3 ui: datastore: use pointer cursor for edit notes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-29 10:45:37 +01:00
b4fb262335 garbage_collection: log bad chunks (still_bad value) 2020-10-29 10:24:31 +01:00
5499bd3dee fix #2998: encode mtime as i64 instead of u64
save files' mtime as i64 instead of u64, which enables backup of
files with negative mtime

catalog_decode_i64 is compatible with encoded u64 values (if < 2^63),
but not the reverse, so all "old" catalogs can be read with the new
decoder, while catalogs that contain negative mtimes will decode
incorrectly on older clients

also remove the arbitrary maximum value of 2^63 - 1 for
encode_u64 (we just use up to 10 bytes now), decode such values
correctly, and update the comments accordingly

also adds tests for i64 encode/decode and for compatibility between
u64 encode and i64 decode

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2020-10-29 08:51:10 +01:00
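A sketch of the compatibility argument, assuming a little-endian base-128 varint like the catalog uses (7 payload bits per byte, high bit as continuation, at most 10 bytes for 64 bits): encoding an i64 as its two's-complement u64 bit pattern leaves all values < 2^63 byte-identical to the old u64 encoding, while negative mtimes occupy the full 10 bytes and are misread by old decoders.

```rust
fn encode_u64(mut v: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 { out.push(byte); return; }
        out.push(byte | 0x80); // continuation bit
    }
}

// i64 reuses the u64 encoder via the two's-complement bit pattern.
fn encode_i64(v: i64, out: &mut Vec<u8>) {
    encode_u64(v as u64, out);
}

fn decode_i64(data: &[u8]) -> Option<i64> {
    let mut v: u64 = 0;
    for (i, b) in data.iter().take(10).enumerate() {
        v |= ((b & 0x7f) as u64) << (7 * i);
        if b & 0x80 == 0 { return Some(v as i64); }
    }
    None // truncated, or longer than 10 bytes
}
```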
d771a608f5 verify: directly pass manifest to filter function
In order to avoid loading the manifest twice during verify.
2020-10-29 07:59:19 +01:00
227a39b34b bump version to 0.9.2-2
re-use the changelog as this was not released publicly and it's just
a small fix

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 23:05:58 +01:00
f9beae9cc9 client: adapt to changed datastore status return schema
fixes commit 16f9f244cf, which extended
the return schema of the status API but did not adapt the client
status command to match.

Simply define our own tiny return schema and use that.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2020-10-28 22:59:40 +01:00
77 changed files with 3058 additions and 760 deletions

Cargo.toml

@ -1,6 +1,6 @@
[package]
name = "proxmox-backup"
-version = "0.9.2"
+version = "0.9.4"
authors = ["Dietmar Maurer <dietmar@proxmox.com>"]
edition = "2018"
license = "AGPL-3"

@ -38,7 +38,7 @@ pam-sys = "0.5"
percent-encoding = "2.1"
pin-utils = "0.1.0"
pathpatterns = "0.1.2"
-proxmox = { version = "0.5.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
+proxmox = { version = "0.6.0", features = [ "sortable-macro", "api-macro", "websocket" ] }
#proxmox = { git = "git://git.proxmox.com/git/proxmox", version = "0.1.2", features = [ "sortable-macro", "api-macro" ] }
#proxmox = { path = "../proxmox/proxmox", features = [ "sortable-macro", "api-macro", "websocket" ] }
proxmox-fuse = "0.1.0"

debian/changelog

@ -1,4 +1,36 @@
-rust-proxmox-backup (0.9.2-1) unstable; urgency=medium
+rust-proxmox-backup (0.9.4-1) unstable; urgency=medium
+
+  * implement API-token
+
+  * client/remote: allow using API-token + secret
+
+  * ui/cli: implement API-token management interface and commands
+
+  * ui: add widget to view the effective permissions of a user or token
+
+  * ui: datastore summary: handle error when having zero snapshots of any type
+
+  * ui: move user, token and permissions into an access control tab panel
+
+ -- Proxmox Support Team <support@proxmox.com>  Thu, 29 Oct 2020 17:19:13 +0100
+
+rust-proxmox-backup (0.9.3-1) unstable; urgency=medium
+
+  * fix #2998: encode mtime as i64 instead of u64
+
+  * GC: log the number of leftover bad chunks we could not yet cleanup, as no
+    valid one replaced them. Also log deduplication factor.
+
+  * send sync job status emails
+
+  * api: datastore status: introduce proper structs and restore compatibility
+    to 0.9.1
+
+  * ui: drop id field from verify/sync add window, they are now seen as internal
+
+ -- Proxmox Support Team <support@proxmox.com>  Thu, 29 Oct 2020 14:58:13 +0100
+
+rust-proxmox-backup (0.9.2-2) unstable; urgency=medium

  * rework server web-interface, move more datastore related panels as tabs
    inside the datastore view

@ -76,7 +108,7 @@ rust-proxmox-backup (0.9.2-1) unstable; urgency=medium

  * ui: datastore: show snapshot manifest comment and allow to edit them

- -- Proxmox Support Team <support@proxmox.com> Wed, 28 Oct 2020 21:27:02 +0100
+ -- Proxmox Support Team <support@proxmox.com> Wed, 28 Oct 2020 23:05:41 +0100

rust-proxmox-backup (0.9.1-1) unstable; urgency=medium
debian/control

@ -34,10 +34,10 @@ Build-Depends: debhelper (>= 11),
librust-pathpatterns-0.1+default-dev (>= 0.1.2-~~),
librust-percent-encoding-2+default-dev (>= 2.1-~~),
librust-pin-utils-0.1+default-dev,
-librust-proxmox-0.5+api-macro-dev,
-librust-proxmox-0.5+default-dev,
-librust-proxmox-0.5+sortable-macro-dev,
-librust-proxmox-0.5+websocket-dev,
+librust-proxmox-0.6+api-macro-dev,
+librust-proxmox-0.6+default-dev,
+librust-proxmox-0.6+sortable-macro-dev,
+librust-proxmox-0.6+websocket-dev,
librust-proxmox-fuse-0.1+default-dev,
librust-pxar-0.6+default-dev (>= 0.6.1-~~),
librust-pxar-0.6+futures-io-dev (>= 0.6.1-~~),

examples/download-speed.rs

@ -2,7 +2,7 @@ use std::io::Write;
use anyhow::{Error};

-use proxmox_backup::api2::types::Userid;
+use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::{HttpClient, HttpClientOptions, BackupReader};

pub struct DummyWriter {

@ -26,13 +26,13 @@ async fn run() -> Result<(), Error> {
let host = "localhost";

-let username = Userid::root_userid();
+let auth_id = Authid::root_auth_id();

let options = HttpClientOptions::new()
.interactive(true)
.ticket_cache(true);

-let client = HttpClient::new(host, 8007, username, options)?;
+let client = HttpClient::new(host, 8007, auth_id, options)?;

let backup_time = proxmox::tools::time::parse_rfc3339("2019-06-28T10:49:48Z")?;

examples/upload-speed.rs

@ -1,6 +1,6 @@
use anyhow::{Error};

-use proxmox_backup::api2::types::Userid;
+use proxmox_backup::api2::types::Authid;
use proxmox_backup::client::*;

async fn upload_speed() -> Result<f64, Error> {

@ -8,13 +8,13 @@ async fn upload_speed() -> Result<f64, Error> {
let host = "localhost";
let datastore = "store2";

-let username = Userid::root_userid();
+let auth_id = Authid::root_auth_id();

let options = HttpClientOptions::new()
.interactive(true)
.ticket_cache(true);

-let client = HttpClient::new(host, 8007, username, options)?;
+let client = HttpClient::new(host, 8007, auth_id, options)?;

let backup_time = proxmox::tools::time::epoch_i64();

src/api2/access.rs

@ -1,6 +1,8 @@
use anyhow::{bail, format_err, Error};
use serde_json::{json, Value};
+use std::collections::HashMap;
+use std::collections::HashSet;

use proxmox::api::{api, RpcEnvironment, Permission};
use proxmox::api::router::{Router, SubdirMap};

@ -12,8 +14,9 @@ use crate::auth_helpers::*;
use crate::api2::types::*;
use crate::tools::{FileLogOptions, FileLogger};

+use crate::config::acl as acl_config;
+use crate::config::acl::{PRIVILEGES, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
-use crate::config::acl::{PRIVILEGES, PRIV_PERMISSIONS_MODIFY};

pub mod user;
pub mod domain;

@ -31,7 +34,8 @@ fn authenticate_user(
) -> Result<bool, Error> {
let user_info = CachedUserInfo::new()?;

-if !user_info.is_active_user(&userid) {
+let auth_id = Authid::from(userid.clone());
+if !user_info.is_active_auth_id(&auth_id) {
bail!("user account disabled or expired.");
}

@ -69,8 +73,7 @@ fn authenticate_user(
path_vec.push(part);
}
}
-user_info.check_privs(userid, &path_vec, *privilege, false)?;
+user_info.check_privs(&auth_id, &path_vec, *privilege, false)?;
return Ok(false);
}
}

@ -213,9 +216,10 @@ fn change_password(
) -> Result<Value, Error> {

let current_user: Userid = rpcenv
-.get_user()
+.get_auth_id()
.ok_or_else(|| format_err!("unknown user"))?
.parse()?;
+let current_auth = Authid::from(current_user.clone());

let mut allowed = userid == current_user;

@ -223,7 +227,7 @@ fn change_password(
if !allowed {
let user_info = CachedUserInfo::new()?;
-let privs = user_info.lookup_privs(&current_user, &[]);
+let privs = user_info.lookup_privs(&current_auth, &[]);
if (privs & PRIV_PERMISSIONS_MODIFY) != 0 { allowed = true; }
}

@ -237,6 +241,128 @@ fn change_password(
Ok(Value::Null)
}
#[api(
input: {
properties: {
auth_id: {
type: Authid,
optional: true,
},
path: {
schema: ACL_PATH_SCHEMA,
optional: true,
},
},
},
access: {
permission: &Permission::Anybody,
description: "Requires Sys.Audit on '/access', limited to own privileges otherwise.",
},
returns: {
description: "Map of ACL path to Map of privilege to propagate bit",
type: Object,
properties: {},
additional_properties: true,
},
)]
/// List permissions of given or currently authenticated user / API token.
///
/// Optionally limited to specific path.
pub fn list_permissions(
auth_id: Option<Authid>,
path: Option<String>,
rpcenv: &dyn RpcEnvironment,
) -> Result<HashMap<String, HashMap<String, bool>>, Error> {
let current_auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&current_auth_id, &["access"]);
let auth_id = if user_privs & PRIV_SYS_AUDIT == 0 {
match auth_id {
Some(auth_id) => {
if auth_id == current_auth_id {
auth_id
} else if auth_id.is_token()
&& !current_auth_id.is_token()
&& auth_id.user() == current_auth_id.user() {
auth_id
} else {
bail!("not allowed to list permissions of {}", auth_id);
}
},
None => current_auth_id,
}
} else {
match auth_id {
Some(auth_id) => auth_id,
None => current_auth_id,
}
};
fn populate_acl_paths(
mut paths: HashSet<String>,
node: acl_config::AclTreeNode,
path: &str
) -> HashSet<String> {
for (sub_path, child_node) in node.children {
let sub_path = format!("{}/{}", path, &sub_path);
paths = populate_acl_paths(paths, child_node, &sub_path);
paths.insert(sub_path);
}
paths
}
let paths = match path {
Some(path) => {
let mut paths = HashSet::new();
paths.insert(path);
paths
},
None => {
let mut paths = HashSet::new();
let (acl_tree, _) = acl_config::config()?;
paths = populate_acl_paths(paths, acl_tree.root, "");
// default paths, returned even if no ACL exists
paths.insert("/".to_string());
paths.insert("/access".to_string());
paths.insert("/datastore".to_string());
paths.insert("/remote".to_string());
paths.insert("/system".to_string());
paths
},
};
let map = paths
.into_iter()
.fold(HashMap::new(), |mut map: HashMap<String, HashMap<String, bool>>, path: String| {
let split_path = acl_config::split_acl_path(path.as_str());
let (privs, propagated_privs) = user_info.lookup_privs_details(&auth_id, &split_path);
match privs {
0 => map, // Don't leak ACL paths where we don't have any privileges
_ => {
let priv_map = PRIVILEGES
.iter()
.fold(HashMap::new(), |mut priv_map, (name, value)| {
if value & privs != 0 {
priv_map.insert(name.to_string(), value & propagated_privs != 0);
}
priv_map
});
map.insert(path, priv_map);
map
},
}});
Ok(map)
}
#[sortable]
const SUBDIRS: SubdirMap = &sorted!([
("acl", &acl::ROUTER),

@ -244,6 +370,10 @@ const SUBDIRS: SubdirMap = &sorted!([
"password", &Router::new()
.put(&API_METHOD_CHANGE_PASSWORD)
),
+(
+"permissions", &Router::new()
+.get(&API_METHOD_LIST_PERMISSIONS)
+),
(
"ticket", &Router::new()
.post(&API_METHOD_CREATE_TICKET)

src/api2/access/acl.rs

@ -7,6 +7,7 @@ use proxmox::tools::fs::open_file_locked;
use crate::api2::types::*;
use crate::config::acl;
use crate::config::acl::{Role, PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
+use crate::config::cached_user_info::CachedUserInfo;

#[api(
properties: {

@ -43,8 +44,23 @@ fn extract_acl_node_data(
path: &str,
list: &mut Vec<AclListItem>,
exact: bool,
+token_user: &Option<Authid>,
) {
+// tokens can't have tokens, so we can early return
+if let Some(token_user) = token_user {
+if token_user.is_token() {
+return;
+}
+}
+
for (user, roles) in &node.users {
+if let Some(token_user) = token_user {
+if !user.is_token()
+|| user.user() != token_user.user() {
+continue;
+}
+}
+
for (role, propagate) in roles {
list.push(AclListItem {
path: if path.is_empty() { String::from("/") } else { path.to_string() },

@ -56,6 +72,10 @@ fn extract_acl_node_data(
}
}
for (group, roles) in &node.groups {
+if let Some(_) = token_user {
+continue;
+}
+
for (role, propagate) in roles {
list.push(AclListItem {
path: if path.is_empty() { String::from("/") } else { path.to_string() },

@ -71,7 +91,7 @@ fn extract_acl_node_data(
}
for (comp, child) in &node.children {
let new_path = format!("{}/{}", path, comp);
-extract_acl_node_data(child, &new_path, list, exact);
+extract_acl_node_data(child, &new_path, list, exact, token_user);
}
}

@ -98,7 +118,8 @@ fn extract_acl_node_data(
}
},
access: {
-permission: &Permission::Privilege(&["access", "acl"], PRIV_SYS_AUDIT, false),
+permission: &Permission::Anybody,
+description: "Returns all ACLs if user has Sys.Audit on '/access/acl', or just the ACLs containing the user's API tokens.",
},
)]
/// Read Access Control List (ACLs).

@ -107,18 +128,26 @@ pub fn read_acl(
exact: bool,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<AclListItem>, Error> {
-//let auth_user = rpcenv.get_user().unwrap();
+let auth_id = rpcenv.get_auth_id().unwrap().parse()?;
+
+let user_info = CachedUserInfo::new()?;
+
+let top_level_privs = user_info.lookup_privs(&auth_id, &["access", "acl"]);
+let auth_id_filter = if (top_level_privs & PRIV_SYS_AUDIT) == 0 {
+Some(auth_id)
+} else {
+None
+};

let (mut tree, digest) = acl::config()?;

let mut list: Vec<AclListItem> = Vec::new();
if let Some(path) = &path {
if let Some(node) = &tree.find_node(path) {
-extract_acl_node_data(&node, path, &mut list, exact);
+extract_acl_node_data(&node, path, &mut list, exact, &auth_id_filter);
}
} else {
-extract_acl_node_data(&tree.root, "", &mut list, exact);
+extract_acl_node_data(&tree.root, "", &mut list, exact, &auth_id_filter);
}

rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();

@ -140,9 +169,9 @@ pub fn read_acl(
optional: true,
schema: ACL_PROPAGATE_SCHEMA,
},
-userid: {
+auth_id: {
optional: true,
-type: Userid,
+type: Authid,
},
group: {
optional: true,

@ -160,7 +189,8 @@ pub fn read_acl(
},
},
access: {
-permission: &Permission::Privilege(&["access", "acl"], PRIV_PERMISSIONS_MODIFY, false),
+permission: &Permission::Anybody,
+description: "Requires Permissions.Modify on '/access/acl', limited to updating ACLs of the user's API tokens otherwise."
},
)]
/// Update Access Control List (ACLs).

@ -168,12 +198,35 @@ pub fn update_acl(
path: String,
role: String,
propagate: Option<bool>,
-userid: Option<Userid>,
+auth_id: Option<Authid>,
group: Option<String>,
delete: Option<bool>,
digest: Option<String>,
-_rpcenv: &mut dyn RpcEnvironment,
+rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> {
+let current_auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+
+let user_info = CachedUserInfo::new()?;
+
+let top_level_privs = user_info.lookup_privs(&current_auth_id, &["access", "acl"]);
+if top_level_privs & PRIV_PERMISSIONS_MODIFY == 0 {
+if let Some(_) = group {
+bail!("Unprivileged users are not allowed to create group ACL item.");
+}
+
+match &auth_id {
+Some(auth_id) => {
+if current_auth_id.is_token() {
+bail!("Unprivileged API tokens can't set ACL items.");
+} else if !auth_id.is_token() {
+bail!("Unprivileged users can only set ACL items for API tokens.");
+} else if auth_id.user() != current_auth_id.user() {
+bail!("Unprivileged users can only set ACL items for their own API tokens.");
+}
+},
+None => { bail!("Unprivileged user needs to provide auth_id to update ACL item."); },
+};
+}
+
let _lock = open_file_locked(acl::ACL_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;

@ -190,11 +243,12 @@ pub fn update_acl(
if let Some(ref _group) = group {
bail!("parameter 'group' - groups are currently not supported.");
-} else if let Some(ref userid) = userid {
+} else if let Some(ref auth_id) = auth_id {
if !delete { // Note: we allow to delete non-existent users
let user_cfg = crate::config::user::cached_config()?;
-if user_cfg.sections.get(&userid.to_string()).is_none() {
-bail!("no such user.");
+if user_cfg.sections.get(&auth_id.to_string()).is_none() {
+bail!(format!("no such {}.",
+if auth_id.is_token() { "API token" } else { "user" }));
}
}
} else {

@ -205,11 +259,11 @@ pub fn update_acl(
acl::check_acl_path(&path)?;
}

-if let Some(userid) = userid {
+if let Some(auth_id) = auth_id {
if delete {
-tree.delete_user_role(&path, &userid, &role);
+tree.delete_user_role(&path, &auth_id, &role);
} else {
-tree.insert_user_role(&path, &userid, &role, propagate);
+tree.insert_user_role(&path, &auth_id, &role, propagate);
}
} else if let Some(group) = group {
if delete {

src/api2/access/user.rs

@ -1,12 +1,16 @@
use anyhow::{bail, Error};
-use serde_json::Value;
+use serde::{Serialize, Deserialize};
+use serde_json::{json, Value};
+use std::collections::HashMap;

use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
+use proxmox::api::router::SubdirMap;
use proxmox::api::schema::{Schema, StringSchema};
use proxmox::tools::fs::open_file_locked;

use crate::api2::types::*;
use crate::config::user;
+use crate::config::token_shadow;
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_PERMISSIONS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;

@ -16,14 +20,96 @@ pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
.max_length(64)
.schema();
#[api(
properties: {
userid: {
type: Userid,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: user::ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: user::EXPIRE_USER_SCHEMA,
},
firstname: {
optional: true,
schema: user::FIRST_NAME_SCHEMA,
},
lastname: {
schema: user::LAST_NAME_SCHEMA,
optional: true,
},
email: {
schema: user::EMAIL_SCHEMA,
optional: true,
},
tokens: {
type: Array,
optional: true,
description: "List of user's API tokens.",
items: {
type: user::ApiToken
},
},
}
)]
#[derive(Serialize,Deserialize)]
/// User properties with added list of ApiTokens
pub struct UserWithTokens {
pub userid: Userid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
#[serde(skip_serializing_if="Option::is_none")]
pub firstname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub lastname: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub email: Option<String>,
#[serde(skip_serializing_if="Vec::is_empty")]
pub tokens: Vec<user::ApiToken>,
}
impl UserWithTokens {
fn new(user: user::User) -> Self {
Self {
userid: user.userid,
comment: user.comment,
enable: user.enable,
expire: user.expire,
firstname: user.firstname,
lastname: user.lastname,
email: user.email,
tokens: Vec::new(),
}
}
}
#[api(
input: {
-properties: {},
+properties: {
+include_tokens: {
+type: bool,
+description: "Include user's API tokens in returned list.",
+optional: true,
+default: false,
+},
+},
},
returns: {
description: "List users (with config digest).",
type: Array,
-items: { type: user::User },
+items: { type: UserWithTokens },
},
access: {
permission: &Permission::Anybody,

@ -32,28 +118,60 @@ pub const PBS_PASSWORD_SCHEMA: Schema = StringSchema::new("User Password.")
)]
/// List users
pub fn list_users(
-_param: Value,
+include_tokens: bool,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
-) -> Result<Vec<user::User>, Error> {
+) -> Result<Vec<UserWithTokens>, Error> {

let (config, digest) = user::config()?;

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+// intentionally user only for now
+let userid: Userid = rpcenv.get_auth_id().unwrap().parse()?;
+let auth_id = Authid::from(userid.clone());

let user_info = CachedUserInfo::new()?;

-let top_level_privs = user_info.lookup_privs(&userid, &["access", "users"]);
+let top_level_privs = user_info.lookup_privs(&auth_id, &["access", "users"]);
let top_level_allowed = (top_level_privs & PRIV_SYS_AUDIT) != 0;

let filter_by_privs = |user: &user::User| {
top_level_allowed || user.userid == userid
};

let list:Vec<user::User> = config.convert_to_typed_array("user")?;

rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();

-Ok(list.into_iter().filter(filter_by_privs).collect())
+let iter = list.into_iter().filter(filter_by_privs);
+let list = if include_tokens {
+let tokens: Vec<user::ApiToken> = config.convert_to_typed_array("token")?;
+let mut user_to_tokens = tokens
+.into_iter()
+.fold(
+HashMap::new(),
+|mut map: HashMap<Userid, Vec<user::ApiToken>>, token: user::ApiToken| {
+if token.tokenid.is_token() {
+map
+.entry(token.tokenid.user().clone())
+.or_default()
+.push(token);
+}
+map
+});
+iter
+.map(|user: user::User| {
+let mut user = UserWithTokens::new(user);
+user.tokens = user_to_tokens.remove(&user.userid).unwrap_or_default();
+user
+})
+.collect()
+} else {
+iter.map(|user: user::User| UserWithTokens::new(user))
+.collect()
+};
+
+Ok(list)
}
#[api( #[api(
@ -304,12 +422,340 @@ pub fn delete_user(userid: Userid, digest: Option<String>) -> Result<(), Error>
Ok(())
}

-const ITEM_ROUTER: Router = Router::new()
#[api(
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
},
},
returns: {
description: "Get API token metadata (with config digest).",
type: user::ApiToken,
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Read user's API token metadata
pub fn read_token(
userid: Userid,
tokenname: Tokenname,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<user::ApiToken, Error> {
let (config, digest) = user::config()?;
let tokenid = Authid::from((userid, Some(tokenname)));
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
config.lookup("token", &tokenid.to_string())
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
schema: user::ENABLE_USER_SCHEMA,
optional: true,
},
expire: {
schema: user::EXPIRE_USER_SCHEMA,
optional: true,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
returns: {
description: "API token identifier + generated secret.",
properties: {
value: {
type: String,
description: "The API token secret",
},
tokenid: {
type: String,
description: "The API token identifier",
},
},
},
)]
/// Generate a new API token with given metadata
pub fn generate_token(
userid: Userid,
tokenname: Tokenname,
comment: Option<String>,
enable: Option<bool>,
expire: Option<i64>,
digest: Option<String>,
) -> Result<Value, Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let tokenid = Authid::from((userid.clone(), Some(tokenname.clone())));
let tokenid_string = tokenid.to_string();
if let Some(_) = config.sections.get(&tokenid_string) {
bail!("token '{}' for user '{}' already exists.", tokenname.as_str(), userid);
}
let secret = format!("{:x}", proxmox::tools::uuid::Uuid::generate());
token_shadow::set_secret(&tokenid, &secret)?;
let token = user::ApiToken {
tokenid: tokenid.clone(),
comment,
enable,
expire,
};
config.set_data(&tokenid_string, "token", &token)?;
user::save_config(&config)?;
Ok(json!({
"tokenid": tokenid_string,
"value": secret
}))
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
schema: user::ENABLE_USER_SCHEMA,
optional: true,
},
expire: {
schema: user::EXPIRE_USER_SCHEMA,
optional: true,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Update user's API token metadata
pub fn update_token(
userid: Userid,
tokenname: Tokenname,
comment: Option<String>,
enable: Option<bool>,
expire: Option<i64>,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let tokenid = Authid::from((userid, Some(tokenname)));
let tokenid_string = tokenid.to_string();
let mut data: user::ApiToken = config.lookup("token", &tokenid_string)?;
if let Some(comment) = comment {
let comment = comment.trim().to_string();
if comment.is_empty() {
data.comment = None;
} else {
data.comment = Some(comment);
}
}
if let Some(enable) = enable {
data.enable = if enable { None } else { Some(false) };
}
if let Some(expire) = expire {
data.expire = if expire > 0 { Some(expire) } else { None };
}
config.set_data(&tokenid_string, "token", &data)?;
user::save_config(&config)?;
Ok(())
}
#[api(
protected: true,
input: {
properties: {
userid: {
type: Userid,
},
tokenname: {
type: Tokenname,
},
digest: {
optional: true,
schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
},
},
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_PERMISSIONS_MODIFY, false),
&Permission::UserParam("userid"),
]),
},
)]
/// Delete a user's API token
pub fn delete_token(
userid: Userid,
tokenname: Tokenname,
digest: Option<String>,
) -> Result<(), Error> {
let _lock = open_file_locked(user::USER_CFG_LOCKFILE, std::time::Duration::new(10, 0), true)?;
let (mut config, expected_digest) = user::config()?;
if let Some(ref digest) = digest {
let digest = proxmox::tools::hex_to_digest(digest)?;
crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
}
let tokenid = Authid::from((userid.clone(), Some(tokenname.clone())));
let tokenid_string = tokenid.to_string();
match config.sections.get(&tokenid_string) {
Some(_) => { config.sections.remove(&tokenid_string); },
None => bail!("token '{}' of user '{}' does not exist.", tokenname.as_str(), userid),
}
token_shadow::delete_secret(&tokenid)?;
user::save_config(&config)?;
Ok(())
}
#[api(
input: {
properties: {
userid: {
type: Userid,
},
},
},
returns: {
description: "List user's API tokens (with config digest).",
type: Array,
items: { type: user::ApiToken },
},
access: {
permission: &Permission::Or(&[
&Permission::Privilege(&["access", "users"], PRIV_SYS_AUDIT, false),
&Permission::UserParam("userid"),
]),
},
)]
/// List user's API tokens
pub fn list_tokens(
userid: Userid,
_info: &ApiMethod,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<user::ApiToken>, Error> {
let (config, digest) = user::config()?;
let list:Vec<user::ApiToken> = config.convert_to_typed_array("token")?;
rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();
let filter_by_owner = |token: &user::ApiToken| {
if token.tokenid.is_token() {
token.tokenid.user() == &userid
} else {
false
}
};
Ok(list.into_iter().filter(filter_by_owner).collect())
}
const TOKEN_ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_TOKEN)
.put(&API_METHOD_UPDATE_TOKEN)
.post(&API_METHOD_GENERATE_TOKEN)
.delete(&API_METHOD_DELETE_TOKEN);
const TOKEN_ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_TOKENS)
.match_all("tokenname", &TOKEN_ITEM_ROUTER);
const USER_SUBDIRS: SubdirMap = &[
("token", &TOKEN_ROUTER),
];
const USER_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_USER)
.put(&API_METHOD_UPDATE_USER)
-.delete(&API_METHOD_DELETE_USER);
+.delete(&API_METHOD_DELETE_USER)
+.subdirs(USER_SUBDIRS);

pub const ROUTER: Router = Router::new()
.get(&API_METHOD_LIST_USERS)
.post(&API_METHOD_CREATE_USER)
-.match_all("userid", &ITEM_ROUTER);
+.match_all("userid", &USER_ROUTER);

src/api2/admin/datastore.rs

@ -44,14 +44,30 @@ use crate::config::acl::{
PRIV_DATASTORE_BACKUP,
};

-fn check_backup_owner(
+fn check_priv_or_backup_owner(
store: &DataStore,
group: &BackupGroup,
-userid: &Userid,
+auth_id: &Authid,
+required_privs: u64,
) -> Result<(), Error> {
-let owner = store.get_owner(group)?;
-if &owner != userid {
-bail!("backup owner check failed ({} != {})", userid, owner);
+let user_info = CachedUserInfo::new()?;
+let privs = user_info.lookup_privs(&auth_id, &["datastore", store.name()]);
+
+if privs & required_privs == 0 {
+let owner = store.get_owner(group)?;
+check_backup_owner(&owner, auth_id)?;
+}
+Ok(())
+}
+
+fn check_backup_owner(
+owner: &Authid,
+auth_id: &Authid,
+) -> Result<(), Error> {
+let correct_owner = owner == auth_id
+|| (owner.is_token() && &Authid::from(owner.user().clone()) == auth_id);
+if !correct_owner {
+bail!("backup owner check failed ({} != {})", auth_id, owner);
}
Ok(())
}
@ -149,9 +165,9 @@ fn list_groups(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<GroupListItem>, Error> {

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
-let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);

let datastore = DataStore::lookup_datastore(&store)?;

@ -171,7 +187,7 @@ fn list_groups(
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;

-if !list_all && owner != userid {
+if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}
@ -230,16 +246,12 @@ pub fn list_snapshot_files(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<BackupContent>, Error> {

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
-let user_info = CachedUserInfo::new()?;
-let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

let datastore = DataStore::lookup_datastore(&store)?;

let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;

-let allowed = (user_privs & (PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)) != 0;
-if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
+check_priv_or_backup_owner(&datastore, snapshot.group(), &auth_id, PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_READ)?;

let info = BackupInfo::new(&datastore.base_path(), snapshot)?;
@ -282,16 +294,12 @@ fn delete_snapshot(
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
-let user_info = CachedUserInfo::new()?;
-let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

let snapshot = BackupDir::new(backup_type, backup_id, backup_time)?;

let datastore = DataStore::lookup_datastore(&store)?;

-let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0;
-if !allowed { check_backup_owner(&datastore, snapshot.group(), &userid)?; }
+check_priv_or_backup_owner(&datastore, snapshot.group(), &auth_id, PRIV_DATASTORE_MODIFY)?;

datastore.remove_backup_dir(&snapshot, false)?;
@ -338,9 +346,9 @@ pub fn list_snapshots (
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<SnapshotListItem>, Error> {

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
-let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);

let datastore = DataStore::lookup_datastore(&store)?;

@ -362,7 +370,7 @@ pub fn list_snapshots (
let list_all = (user_privs & PRIV_DATASTORE_AUDIT) != 0;
let owner = datastore.get_owner(group)?;

-if !list_all && owner != userid {
+if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
continue;
}

@ -423,12 +431,18 @@ pub fn list_snapshots (
Ok(snapshots)
}
-// returns a map from type to (group_count, snapshot_count)
-fn get_snaphots_count(store: &DataStore) -> Result<HashMap<String, (usize, usize)>, Error> {
+fn get_snapshots_count(store: &DataStore) -> Result<Counts, Error> {
let base_path = store.base_path();
let backup_list = BackupInfo::list_backups(&base_path)?;
let mut groups = HashSet::new();
-let mut result: HashMap<String, (usize, usize)> = HashMap::new();
+
+let mut result = Counts {
+ct: None,
+host: None,
+vm: None,
+other: None,
+};

for info in backup_list {
let group = info.backup_dir.group();

@ -441,13 +455,23 @@ fn get_snaphots_count(store: &DataStore) -> Result<HashMap<String, (usize, usize
new_id = true;
}

-if let Some(mut counts) = result.get_mut(backup_type) {
-counts.1 += 1;
-if new_id {
-counts.0 +=1;
-}
-} else {
-result.insert(backup_type.to_string(), (1, 1));
+let mut counts = match backup_type {
+"ct" => result.ct.take().unwrap_or(Default::default()),
+"host" => result.host.take().unwrap_or(Default::default()),
+"vm" => result.vm.take().unwrap_or(Default::default()),
+_ => result.other.take().unwrap_or(Default::default()),
+};
+
+counts.snapshots += 1;
+if new_id {
+counts.groups +=1;
+}
+
+match backup_type {
+"ct" => result.ct = Some(counts),
+"host" => result.host = Some(counts),
+"vm" => result.vm = Some(counts),
+_ => result.other = Some(counts),
}
}
@ -463,21 +487,7 @@ fn get_snaphots_count(store: &DataStore) -> Result<HashMap<String, (usize, usize
},
},
returns: {
-description: "The overall Datastore status and information.",
-type: Object,
-properties: {
-storage: {
-type: StorageStatus,
-},
-counts: {
-description: "Group and Snapshot counts per Type",
-type: Object,
-properties: { },
-},
-"gc-status": {
-type: GarbageCollectionStatus,
-},
-},
+type: DataStoreStatus,
},
access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_BACKUP, true),

@ -488,19 +498,19 @@ pub fn status(
store: String,
_info: &ApiMethod,
_rpcenv: &mut dyn RpcEnvironment,
-) -> Result<Value, Error> {
+) -> Result<DataStoreStatus, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
-let storage_status = crate::tools::disks::disk_usage(&datastore.base_path())?;
-let counts = get_snaphots_count(&datastore)?;
+let storage = crate::tools::disks::disk_usage(&datastore.base_path())?;
+let counts = get_snapshots_count(&datastore)?;
let gc_status = datastore.last_gc_status();

-let res = json!({
-"storage": storage_status,
-"counts": counts,
-"gc-status": gc_status,
-});
-
-Ok(res)
+Ok(DataStoreStatus {
+total: storage.total,
+used: storage.used,
+avail: storage.avail,
+gc_status,
+counts,
+})
}
#[api(

@ -568,18 +578,17 @@ pub fn verify(
_ => bail!("parameters do not specify a backup group or snapshot"),
}

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };

let upid_str = WorkerTask::new_thread(
worker_type,
Some(worker_id.clone()),
-userid,
+auth_id,
to_stdout,
move |worker| {
let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
-let filter = |_backup_info: &BackupInfo| { true };

let failed_dirs = if let Some(backup_dir) = backup_dir {
let mut res = Vec::new();

@ -590,6 +599,7 @@ pub fn verify(
corrupt_chunks,
worker.clone(),
worker.upid().clone(),
+None,
)? {
res.push(backup_dir.to_string());
}

@ -603,11 +613,11 @@ pub fn verify(
None,
worker.clone(),
worker.upid(),
-&filter,
+None,
)?;
failed_dirs
} else {
-verify_all_backups(datastore, worker.clone(), worker.upid(), &filter)?
+verify_all_backups(datastore, worker.clone(), worker.upid(), None)?
};
if failed_dirs.len() > 0 {
worker.log("Failed to verify following snapshots:");
@ -703,9 +713,7 @@ fn prune(
let backup_type = tools::required_string_param(&param, "backup-type")?;
let backup_id = tools::required_string_param(&param, "backup-id")?;

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
-let user_info = CachedUserInfo::new()?;
-let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

let dry_run = param["dry-run"].as_bool().unwrap_or(false);

@ -713,8 +721,7 @@ fn prune(
let datastore = DataStore::lookup_datastore(&store)?;

-let allowed = (user_privs & PRIV_DATASTORE_MODIFY) != 0;
-if !allowed { check_backup_owner(&datastore, &group, &userid)?; }
+check_priv_or_backup_owner(&datastore, &group, &auth_id, PRIV_DATASTORE_MODIFY)?;

let prune_options = PruneOptions {
keep_last: param["keep-last"].as_u64(),

@ -756,7 +763,7 @@ fn prune(
// We use a WorkerTask just to have a task log, but run synchrounously
-let worker = WorkerTask::new("prune", Some(worker_id), Userid::root_userid().clone(), true)?;
+let worker = WorkerTask::new("prune", Some(worker_id), auth_id.clone(), true)?;

if keep_all {
worker.log("No prune selection - keeping all files.");

@ -831,6 +838,7 @@ fn start_garbage_collection(
) -> Result<Value, Error> {

let datastore = DataStore::lookup_datastore(&store)?;
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

println!("Starting garbage collection on store {}", store);

@ -839,7 +847,7 @@ fn start_garbage_collection(
let upid_str = WorkerTask::new_thread(
"garbage_collection",
Some(store.clone()),
-Userid::root_userid().clone(),
+auth_id.clone(),
to_stdout,
move |worker| {
worker.log(format!("starting garbage collection on store {}", store));
@ -909,13 +917,13 @@ fn get_datastore_list(
let (config, _digest) = datastore::config()?;

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;

let mut list = Vec::new();

for (store, (_, data)) in &config.sections {
-let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0;
if allowed {
let mut entry = json!({ "store": store });

@ -960,9 +968,7 @@ fn download_file(
let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(store)?;

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
-let user_info = CachedUserInfo::new()?;
-let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

let file_name = tools::required_string_param(&param, "file-name")?.to_owned();

@ -972,8 +978,7 @@ fn download_file(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;

-let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
-if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
+check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;

println!("Download {} from {} ({}/{})", file_name, store, backup_dir, file_name);
@ -1033,9 +1038,7 @@ fn download_file_decoded(
let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(store)?;

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
-let user_info = CachedUserInfo::new()?;
-let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

let file_name = tools::required_string_param(&param, "file-name")?.to_owned();

@ -1045,8 +1048,7 @@ fn download_file_decoded(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;

-let allowed = (user_privs & PRIV_DATASTORE_READ) != 0;
-if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
+check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;

let (manifest, files) = read_backup_index(&datastore, &backup_dir)?;
for file in files {
@ -1158,8 +1160,9 @@ fn upload_backup_log(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;

-let userid: Userid = rpcenv.get_user().unwrap().parse()?;
-check_backup_owner(&datastore, backup_dir.group(), &userid)?;
+let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+let owner = datastore.get_owner(backup_dir.group())?;
+check_backup_owner(&owner, &auth_id)?;

let mut path = datastore.base_path();
path.push(backup_dir.relative_path());
@ -1228,14 +1231,11 @@ fn catalog(
) -> Result<Value, Error> { ) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?; let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
let file_name = CATALOG_NAME; let file_name = CATALOG_NAME;
@ -1399,9 +1399,7 @@ fn pxar_file_download(
let store = tools::required_string_param(&param, "store")?; let store = tools::required_string_param(&param, "store")?;
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let filepath = tools::required_string_param(&param, "filepath")?.to_owned(); let filepath = tools::required_string_param(&param, "filepath")?.to_owned();
@ -1411,8 +1409,7 @@ fn pxar_file_download(
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?; let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
let mut components = base64::decode(&filepath)?; let mut components = base64::decode(&filepath)?;
if components.len() > 0 && components[0] == '/' as u8 { if components.len() > 0 && components[0] == '/' as u8 {
@ -1578,14 +1575,10 @@ fn get_notes(
) -> Result<String, Error> { ) -> Result<String, Error> {
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?; let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
let (manifest, _) = datastore.load_manifest(&backup_dir)?; let (manifest, _) = datastore.load_manifest(&backup_dir)?;
@ -1631,14 +1624,10 @@ fn set_notes(
) -> Result<(), Error> { ) -> Result<(), Error> {
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let userid: Userid = rpcenv.get_user().unwrap().parse()?; let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?;
let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?; let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
let allowed = (user_privs & PRIV_DATASTORE_READ) != 0; check_priv_or_backup_owner(&datastore, backup_dir.group(), &auth_id, PRIV_DATASTORE_READ)?;
if !allowed { check_backup_owner(&datastore, backup_dir.group(), &userid)?; }
datastore.update_manifest(&backup_dir,|manifest| { datastore.update_manifest(&backup_dir,|manifest| {
manifest.unprotected["notes"] = notes.into(); manifest.unprotected["notes"] = notes.into();
@ -1660,12 +1649,13 @@ fn set_notes(
schema: BACKUP_ID_SCHEMA, schema: BACKUP_ID_SCHEMA,
}, },
"new-owner": { "new-owner": {
type: Userid, type: Authid,
}, },
}, },
}, },
access: { access: {
permission: &Permission::Privilege(&["datastore", "{store}"], PRIV_DATASTORE_MODIFY, true), permission: &Permission::Anybody,
description: "Datastore.Modify on whole datastore, or changing ownership between user and a user's token for owned backups with Datastore.Backup"
}, },
)] )]
/// Change owner of a backup group /// Change owner of a backup group
@ -1673,18 +1663,69 @@ fn set_backup_owner(
store: String, store: String,
backup_type: String, backup_type: String,
backup_id: String, backup_id: String,
new_owner: Userid, new_owner: Authid,
_rpcenv: &mut dyn RpcEnvironment, rpcenv: &mut dyn RpcEnvironment,
) -> Result<(), Error> { ) -> Result<(), Error> {
let datastore = DataStore::lookup_datastore(&store)?; let datastore = DataStore::lookup_datastore(&store)?;
let backup_group = BackupGroup::new(backup_type, backup_id); let backup_group = BackupGroup::new(backup_type, backup_id);
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
let user_info = CachedUserInfo::new()?; let user_info = CachedUserInfo::new()?;
if !user_info.is_active_user(&new_owner) { let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
bail!("user '{}' is inactive or non-existent", new_owner);
let allowed = if (privs & PRIV_DATASTORE_MODIFY) != 0 {
// High-privilege user/token
true
} else if (privs & PRIV_DATASTORE_BACKUP) != 0 {
let owner = datastore.get_owner(&backup_group)?;
match (owner.is_token(), new_owner.is_token()) {
(true, true) => {
// API token to API token, owned by same user
let owner = owner.user();
let new_owner = new_owner.user();
owner == new_owner && Authid::from(owner.clone()) == auth_id
},
(true, false) => {
// API token to API token owner
Authid::from(owner.user().clone()) == auth_id
&& new_owner == auth_id
},
(false, true) => {
// API token owner to API token
owner == auth_id
&& Authid::from(new_owner.user().clone()) == auth_id
},
(false, false) => {
// User to User, not allowed for unprivileged users
false
},
}
} else {
false
};
if !allowed {
return Err(http_err!(UNAUTHORIZED,
"{} does not have permission to change owner of backup group '{}' to {}",
auth_id,
backup_group,
new_owner,
));
}
if !user_info.is_active_auth_id(&new_owner) {
bail!("{} '{}' is inactive or non-existent",
if new_owner.is_token() {
"API token".to_string()
} else {
"user".to_string()
},
new_owner);
} }
datastore.set_owner(&backup_group, &new_owner, true)?; datastore.set_owner(&backup_group, &new_owner, true)?;

View File

@@ -101,11 +101,11 @@ fn run_sync_job(
     let (config, _digest) = sync::config()?;
     let sync_job: SyncJobConfig = config.lookup("sync", &id)?;

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     let job = Job::new("syncjob", &id)?;

-    let upid_str = do_sync_job(job, sync_job, &userid, None)?;
+    let upid_str = do_sync_job(job, sync_job, &auth_id, None)?;

     Ok(upid_str)
 }

View File

@@ -101,11 +101,11 @@ fn run_verification_job(
     let (config, _digest) = verify::config()?;
     let verification_job: VerificationJobConfig = config.lookup("verification", &id)?;

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     let job = Job::new("verificationjob", &id)?;

-    let upid_str = do_verification_job(job, verification_job, &userid, None)?;
+    let upid_str = do_verification_job(job, verification_job, &auth_id, None)?;

     Ok(upid_str)
 }

View File

@@ -59,12 +59,12 @@ async move {
     let debug = param["debug"].as_bool().unwrap_or(false);
     let benchmark = param["benchmark"].as_bool().unwrap_or(false);

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     let store = tools::required_string_param(&param, "store")?.to_owned();

     let user_info = CachedUserInfo::new()?;
-    user_info.check_privs(&userid, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;
+    user_info.check_privs(&auth_id, &["datastore", &store], PRIV_DATASTORE_BACKUP, false)?;

     let datastore = DataStore::lookup_datastore(&store)?;

@@ -105,12 +105,15 @@ async move {
     };

     // lock backup group to only allow one backup per group at a time
-    let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &userid)?;
+    let (owner, _group_guard) = datastore.create_locked_backup_group(&backup_group, &auth_id)?;

     // permission check
-    if owner != userid && worker_type != "benchmark" {
+    let correct_owner = owner == auth_id
+        || (owner.is_token()
+            && Authid::from(owner.user().clone()) == auth_id);
+    if !correct_owner && worker_type != "benchmark" {
         // only the owner is allowed to create additional snapshots
-        bail!("backup owner check failed ({} != {})", userid, owner);
+        bail!("backup owner check failed ({} != {})", auth_id, owner);
     }

     let last_backup = {

@@ -153,9 +156,9 @@ async move {
     if !is_new { bail!("backup directory already exists."); }

-    WorkerTask::spawn(worker_type, Some(worker_id), userid.clone(), true, move |worker| {
+    WorkerTask::spawn(worker_type, Some(worker_id), auth_id.clone(), true, move |worker| {
         let mut env = BackupEnvironment::new(
-            env_type, userid, worker.clone(), datastore, backup_dir);
+            env_type, auth_id, worker.clone(), datastore, backup_dir);

         env.debug = debug;
         env.last_backup = last_backup;
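The relaxed owner test introduced here (and reused for the reader protocol further down) can be read as a standalone predicate: a user passes for backups owned by their own tokens, but a token never passes for backups owned by its user. A sketch, assuming the Authid API added in this series:

    // Sketch of the check inlined above.
    fn is_correct_owner(owner: &Authid, auth_id: &Authid) -> bool {
        owner == auth_id
            || (owner.is_token() && &Authid::from(owner.user().clone()) == auth_id)
    }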

View File

@@ -10,7 +10,7 @@ use proxmox::tools::digest_to_hex;
 use proxmox::tools::fs::{replace_file, CreateOptions};
 use proxmox::api::{RpcEnvironment, RpcEnvironmentType};

-use crate::api2::types::Userid;
+use crate::api2::types::Authid;
 use crate::backup::*;
 use crate::server::WorkerTask;
 use crate::server::formatter::*;

@@ -104,7 +104,7 @@ impl SharedBackupState {
 pub struct BackupEnvironment {
     env_type: RpcEnvironmentType,
     result_attributes: Value,
-    user: Userid,
+    auth_id: Authid,
     pub debug: bool,
     pub formatter: &'static OutputFormatter,
     pub worker: Arc<WorkerTask>,

@@ -117,7 +117,7 @@ pub struct BackupEnvironment {
 impl BackupEnvironment {
     pub fn new(
         env_type: RpcEnvironmentType,
-        user: Userid,
+        auth_id: Authid,
         worker: Arc<WorkerTask>,
         datastore: Arc<DataStore>,
         backup_dir: BackupDir,

@@ -137,7 +137,7 @@ impl BackupEnvironment {
         Self {
             result_attributes: json!({}),
             env_type,
-            user,
+            auth_id,
             worker,
             datastore,
             debug: false,

@@ -518,7 +518,7 @@ impl BackupEnvironment {
         WorkerTask::new_thread(
             "verify",
             Some(worker_id),
-            self.user.clone(),
+            self.auth_id.clone(),
             false,
             move |worker| {
                 worker.log("Automatically verifying newly added snapshot");

@@ -533,6 +533,7 @@ impl BackupEnvironment {
                     corrupt_chunks,
                     worker.clone(),
                     worker.upid().clone(),
+                    None,
                     snap_lock,
                 )? {
                     bail!("verification failed - please check the log for details");

@@ -598,12 +599,12 @@ impl RpcEnvironment for BackupEnvironment {
         self.env_type
     }

-    fn set_user(&mut self, _user: Option<String>) {
-        panic!("unable to change user");
+    fn set_auth_id(&mut self, _auth_id: Option<String>) {
+        panic!("unable to change auth_id");
     }

-    fn get_user(&self) -> Option<String> {
-        Some(self.user.to_string())
+    fn get_auth_id(&self) -> Option<String> {
+        Some(self.auth_id.to_string())
     }
 }

View File

@@ -35,14 +35,14 @@ pub fn list_datastores(
     let (config, digest) = datastore::config()?;

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
     let user_info = CachedUserInfo::new()?;

     rpcenv["digest"] = proxmox::tools::digest_to_hex(&digest).into();

     let list:Vec<DataStoreConfig> = config.convert_to_typed_array("datastore")?;
     let filter_by_privs = |store: &DataStoreConfig| {
-        let user_privs = user_info.lookup_privs(&userid, &["datastore", &store.name]);
+        let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store.name]);
         (user_privs & PRIV_DATASTORE_AUDIT) != 0
     };

View File

@@ -66,7 +66,7 @@ pub fn list_remotes(
             default: 8007,
         },
         userid: {
-            type: Userid,
+            type: Authid,
         },
         password: {
             schema: remote::REMOTE_PASSWORD_SCHEMA,

@@ -167,7 +167,7 @@ pub enum DeletableProperty {
         },
         userid: {
             optional: true,
-            type: Userid,
+            type: Authid,
         },
         password: {
             optional: true,

@@ -201,7 +201,7 @@ pub fn update_remote(
     comment: Option<String>,
     host: Option<String>,
     port: Option<u16>,
-    userid: Option<Userid>,
+    userid: Option<Authid>,
     password: Option<String>,
     fingerprint: Option<String>,
     delete: Option<Vec<DeletableProperty>>,

View File

@@ -91,10 +91,12 @@ async fn termproxy(
     cmd: Option<String>,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {
+    // intentionally user only for now
     let userid: Userid = rpcenv
-        .get_user()
+        .get_auth_id()
         .ok_or_else(|| format_err!("unknown user"))?
         .parse()?;
+    let auth_id = Authid::from(userid.clone());

     if userid.realm() != "pam" {
         bail!("only pam users can use the console");

@@ -137,7 +139,7 @@ async fn termproxy(
     let upid = WorkerTask::spawn(
         "termproxy",
         None,
-        userid,
+        auth_id,
         false,
         move |worker| async move {
             // move inside the worker so that it survives and does not close the port

@@ -272,7 +274,8 @@ fn upgrade_to_websocket(
     rpcenv: Box<dyn RpcEnvironment>,
 ) -> ApiResponseFuture {
     async move {
-        let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+        // intentionally user only for now
+        let userid: Userid = rpcenv.get_auth_id().unwrap().parse()?;

         let ticket = tools::required_string_param(&param, "vncticket")?;
         let port: u16 = tools::required_integer_param(&param, "port")? as u16;

View File

@@ -12,7 +12,7 @@ use crate::server::WorkerTask;
 use crate::tools::http;

 use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
-use crate::api2::types::{APTUpdateInfo, NODE_SCHEMA, Userid, UPID_SCHEMA};
+use crate::api2::types::{Authid, APTUpdateInfo, NODE_SCHEMA, UPID_SCHEMA};

 const_regex! {
     VERSION_EPOCH_REGEX = r"^\d+:";

@@ -351,11 +351,11 @@ pub fn apt_update_database(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
     let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };
     let quiet = quiet.unwrap_or(API_METHOD_APT_UPDATE_DATABASE_PARAM_DEFAULT_QUIET);

-    let upid_str = WorkerTask::new_thread("aptupdate", None, userid, to_stdout, move |worker| {
+    let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |worker| {
         if !quiet { worker.log("starting apt-get update") }
         // TODO: set proxy /etc/apt/apt.conf.d/76pbsproxy like PVE

View File

@@ -13,7 +13,7 @@ use crate::tools::disks::{
 };

 use crate::server::WorkerTask;
-use crate::api2::types::{Userid, UPID_SCHEMA, NODE_SCHEMA, BLOCKDEVICE_NAME_SCHEMA};
+use crate::api2::types::{Authid, UPID_SCHEMA, NODE_SCHEMA, BLOCKDEVICE_NAME_SCHEMA};

 pub mod directory;
 pub mod zfs;

@@ -140,7 +140,7 @@ pub fn initialize_disk(
     let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     let info = get_disk_usage_info(&disk, true)?;

@@ -149,7 +149,7 @@ pub fn initialize_disk(
     }

     let upid_str = WorkerTask::new_thread(
-        "diskinit", Some(disk.clone()), userid, to_stdout, move |worker|
+        "diskinit", Some(disk.clone()), auth_id, to_stdout, move |worker|
         {
             worker.log(format!("initialize disk {}", disk));

View File

@@ -134,7 +134,7 @@ pub fn create_datastore_disk(
     let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     let info = get_disk_usage_info(&disk, true)?;

@@ -143,7 +143,7 @@ pub fn create_datastore_disk(
     }

     let upid_str = WorkerTask::new_thread(
-        "dircreate", Some(name.clone()), userid, to_stdout, move |worker|
+        "dircreate", Some(name.clone()), auth_id, to_stdout, move |worker|
         {
             worker.log(format!("create datastore '{}' on disk {}", name, disk));

View File

@@ -256,7 +256,7 @@ pub fn create_zpool(
     let to_stdout = if rpcenv.env_type() == RpcEnvironmentType::CLI { true } else { false };

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     let add_datastore = add_datastore.unwrap_or(false);

@@ -316,7 +316,7 @@ pub fn create_zpool(
     }

     let upid_str = WorkerTask::new_thread(
-        "zfscreate", Some(name.clone()), userid, to_stdout, move |worker|
+        "zfscreate", Some(name.clone()), auth_id, to_stdout, move |worker|
         {
             worker.log(format!("create {:?} zpool '{}' on devices '{}'", raidlevel, name, devices_text));

View File

@@ -684,9 +684,9 @@ pub async fn reload_network_config(
     network::assert_ifupdown2_installed()?;

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

-    let upid_str = WorkerTask::spawn("srvreload", Some(String::from("networking")), userid, true, |_worker| async {
+    let upid_str = WorkerTask::spawn("srvreload", Some(String::from("networking")), auth_id, true, |_worker| async {

         let _ = std::fs::rename(network::NETWORK_INTERFACES_NEW_FILENAME, network::NETWORK_INTERFACES_FILENAME);

View File

@@ -182,7 +182,7 @@ fn get_service_state(
     Ok(json_service_state(&service, status))
 }

-fn run_service_command(service: &str, cmd: &str, userid: Userid) -> Result<Value, Error> {
+fn run_service_command(service: &str, cmd: &str, auth_id: Authid) -> Result<Value, Error> {

     let workerid = format!("srv{}", &cmd);

@@ -196,7 +196,7 @@ fn run_service_command(service: &str, cmd: &str, userid: Userid) -> Result<Value
     let upid = WorkerTask::new_thread(
         &workerid,
         Some(service.clone()),
-        userid,
+        auth_id,
         false,
         move |_worker| {

@@ -244,11 +244,11 @@ fn start_service(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     log::info!("starting service {}", service);

-    run_service_command(&service, "start", userid)
+    run_service_command(&service, "start", auth_id)
 }

 #[api(

@@ -274,11 +274,11 @@ fn stop_service(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     log::info!("stopping service {}", service);

-    run_service_command(&service, "stop", userid)
+    run_service_command(&service, "stop", auth_id)
 }

 #[api(

@@ -304,15 +304,15 @@ fn restart_service(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     log::info!("re-starting service {}", service);

     if &service == "proxmox-backup-proxy" {
         // special case, avoid aborting running tasks
-        run_service_command(&service, "reload", userid)
+        run_service_command(&service, "reload", auth_id)
     } else {
-        run_service_command(&service, "restart", userid)
+        run_service_command(&service, "restart", auth_id)
     }
 }

@@ -339,11 +339,11 @@ fn reload_service(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

     log::info!("reloading service {}", service);

-    run_service_command(&service, "reload", userid)
+    run_service_command(&service, "reload", auth_id)
 }

View File

@@ -7,7 +7,7 @@ use crate::tools;
 use crate::tools::subscription::{self, SubscriptionStatus, SubscriptionInfo};
 use crate::config::acl::{PRIV_SYS_AUDIT,PRIV_SYS_MODIFY};
 use crate::config::cached_user_info::CachedUserInfo;
-use crate::api2::types::{NODE_SCHEMA, Userid};
+use crate::api2::types::{NODE_SCHEMA, Authid};

 #[api(
     input: {

@@ -100,9 +100,9 @@ fn get_subscription(
         },
     };

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
     let user_info = CachedUserInfo::new()?;
-    let user_privs = user_info.lookup_privs(&userid, &[]);
+    let user_privs = user_info.lookup_privs(&auth_id, &[]);

     if (user_privs & PRIV_SYS_AUDIT) == 0 {
         // not enough privileges for full state

View File

@@ -14,6 +14,16 @@ use crate::server::{self, UPID, TaskState, TaskListInfoIterator};
 use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
 use crate::config::cached_user_info::CachedUserInfo;

+fn check_task_access(auth_id: &Authid, upid: &UPID) -> Result<(), Error> {
+    let task_auth_id = &upid.auth_id;
+    if auth_id == task_auth_id
+        || (task_auth_id.is_token() && &Authid::from(task_auth_id.user().clone()) == auth_id) {
+        Ok(())
+    } else {
+        let user_info = CachedUserInfo::new()?;
+        user_info.check_privs(auth_id, &["system", "tasks"], PRIV_SYS_AUDIT, false)
+    }
+}
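The effect, for a task started by a hypothetical token alice@pbs!tok1:

    // check_task_access(&auth_id, &upid) with upid.auth_id == alice@pbs!tok1:
    //   alice@pbs!tok1 -> Ok              (same authentication id)
    //   alice@pbs      -> Ok              (owning user of the token)
    //   bob@pbs        -> Ok only with Sys.Audit on /system/tasks
    // Note the asymmetry: a token gets no implicit access to tasks started
    // by its owning user.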
 #[api(
     input: {

@@ -57,9 +67,13 @@ use crate::config::cached_user_info::CachedUserInfo;
                 description: "Worker ID (arbitrary ASCII string)",
             },
             user: {
-                type: String,
+                type: Userid,
                 description: "The user who started the task.",
             },
+            tokenid: {
+                type: Tokenname,
+                optional: true,
+            },
             status: {
                 type: String,
                 description: "'running' or 'stopped'",

@@ -84,12 +98,8 @@ async fn get_task_status(
     let upid = extract_upid(&param)?;

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
-
-    if userid != upid.userid {
-        let user_info = CachedUserInfo::new()?;
-        user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
-    }
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+
+    check_task_access(&auth_id, &upid)?;

     let mut result = json!({
         "upid": param["upid"],

@@ -99,9 +109,13 @@ async fn get_task_status(
         "starttime": upid.starttime,
         "type": upid.worker_type,
         "id": upid.worker_id,
-        "user": upid.userid,
+        "user": upid.auth_id.user(),
     });
+
+    if upid.auth_id.is_token() {
+        result["tokenid"] = Value::from(upid.auth_id.tokenname().unwrap().as_str());
+    }

     if crate::server::worker_is_active(&upid).await? {
         result["status"] = Value::from("running");
     } else {
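For a token-started task the response now carries the owning user in "user" plus the token name separately; roughly (hypothetical values, shape per the code above):

    let example = serde_json::json!({
        "upid": "...",
        "starttime": 1603900000,
        "type": "backup",
        "id": "store1:vm/100",
        "user": "alice@pbs",   // upid.auth_id.user()
        "tokenid": "tok1",     // only set when upid.auth_id.is_token()
        "status": "running"
    });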
@@ -161,12 +175,9 @@ async fn read_task_log(
     let upid = extract_upid(&param)?;

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
-
-    if userid != upid.userid {
-        let user_info = CachedUserInfo::new()?;
-        user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_AUDIT, false)?;
-    }
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
+
+    check_task_access(&auth_id, &upid)?;

     let test_status = param["test-status"].as_bool().unwrap_or(false);

@@ -234,11 +245,11 @@ fn stop_task(
     let upid = extract_upid(&param)?;

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;

-    if userid != upid.userid {
+    if auth_id != upid.auth_id {
         let user_info = CachedUserInfo::new()?;
-        user_info.check_privs(&userid, &["system", "tasks"], PRIV_SYS_MODIFY, false)?;
+        user_info.check_privs(&auth_id, &["system", "tasks"], PRIV_SYS_MODIFY, false)?;
     }

     server::abort_worker_async(upid);

@@ -308,9 +319,9 @@ pub fn list_tasks(
     mut rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Vec<TaskListItem>, Error> {

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
     let user_info = CachedUserInfo::new()?;
-    let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
+    let user_privs = user_info.lookup_privs(&auth_id, &["system", "tasks"]);

     let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;

@@ -326,10 +337,12 @@ pub fn list_tasks(
             Err(_) => return None,
         };

-        if !list_all && info.upid.userid != userid { return None; }
+        if !list_all && check_task_access(&auth_id, &info.upid).is_err() {
+            return None;
+        }

-        if let Some(userid) = &userfilter {
-            if !info.upid.userid.as_str().contains(userid) { return None; }
+        if let Some(needle) = &userfilter {
+            if !info.upid.auth_id.to_string().contains(needle) { return None; }
         }

         if let Some(store) = store {

View File

@@ -20,7 +20,7 @@ use crate::config::{
 pub fn check_pull_privs(
-    userid: &Userid,
+    auth_id: &Authid,
     store: &str,
     remote: &str,
     remote_store: &str,

@@ -29,11 +29,11 @@ pub fn check_pull_privs(
     let user_info = CachedUserInfo::new()?;

-    user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?;
-    user_info.check_privs(userid, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?;
+    user_info.check_privs(auth_id, &["datastore", store], PRIV_DATASTORE_BACKUP, false)?;
+    user_info.check_privs(auth_id, &["remote", remote, remote_store], PRIV_REMOTE_READ, false)?;

     if delete {
-        user_info.check_privs(userid, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?;
+        user_info.check_privs(auth_id, &["datastore", store], PRIV_DATASTORE_PRUNE, false)?;
     }

     Ok(())

@@ -56,7 +56,7 @@ pub async fn get_pull_parameters(
     let src_repo = BackupRepository::new(Some(remote.userid.clone()), Some(remote.host.clone()), remote.port, remote_store.to_string());

-    let client = HttpClient::new(&src_repo.host(), src_repo.port(), &src_repo.user(), options)?;
+    let client = HttpClient::new(&src_repo.host(), src_repo.port(), &src_repo.auth_id(), options)?;
     let _auth_info = client.login() // make sure we can auth
         .await
         .map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;

@@ -68,23 +68,26 @@ pub async fn get_pull_parameters(
 pub fn do_sync_job(
     mut job: Job,
     sync_job: SyncJobConfig,
-    userid: &Userid,
+    auth_id: &Authid,
     schedule: Option<String>,
 ) -> Result<String, Error> {

     let job_id = job.jobname().to_string();
     let worker_type = job.jobtype().to_string();

+    let email = crate::server::lookup_user_email(auth_id.user());
+
     let upid_str = WorkerTask::spawn(
         &worker_type,
         Some(job.jobname().to_string()),
-        userid.clone(),
+        auth_id.clone(),
         false,
         move |worker| async move {
             job.start(&worker.upid().to_string())?;

             let worker2 = worker.clone();
+            let sync_job2 = sync_job.clone();

             let worker_future = async move {

@@ -98,7 +101,9 @@ pub fn do_sync_job(
                 worker.log(format!("Sync datastore '{}' from '{}/{}'",
                     sync_job.store, sync_job.remote, sync_job.remote_store));

-                crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, Userid::backup_userid().clone()).await?;
+                let backup_auth_id = Authid::backup_auth_id();
+
+                crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, backup_auth_id.clone()).await?;

                 worker.log(format!("sync job '{}' end", &job_id));

@@ -107,12 +112,12 @@ pub fn do_sync_job(
             let mut abort_future = worker2.abort_future().map(|_| Err(format_err!("sync aborted")));

-            let res = select!{
+            let result = select!{
                 worker = worker_future.fuse() => worker,
                 abort = abort_future => abort,
             };

-            let status = worker2.create_state(&res);
+            let status = worker2.create_state(&result);

             match job.finish(status) {
                 Ok(_) => {},

@@ -121,7 +126,13 @@ pub fn do_sync_job(
                 }
             }

-            res
+            if let Some(email) = email {
+                if let Err(err) = crate::server::send_sync_status(&email, &sync_job2, &result) {
+                    eprintln!("send sync notification failed: {}", err);
+                }
+            }
+
+            result
         })?;

     Ok(upid_str)

@@ -164,19 +175,19 @@ async fn pull (
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
     let delete = remove_vanished.unwrap_or(true);

-    check_pull_privs(&userid, &store, &remote, &remote_store, delete)?;
+    check_pull_privs(&auth_id, &store, &remote, &remote_store, delete)?;

     let (client, src_repo, tgt_store) = get_pull_parameters(&store, &remote, &remote_store).await?;

     // fixme: set to_stdout to false?
-    let upid_str = WorkerTask::spawn("sync", Some(store.clone()), userid.clone(), true, move |worker| async move {
+    let upid_str = WorkerTask::spawn("sync", Some(store.clone()), auth_id.clone(), true, move |worker| async move {

         worker.log(format!("sync datastore '{}' start", store));

-        let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, userid);
+        let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, auth_id);
         let future = select!{
             success = pull_future.fuse() => success,
             abort = worker.abort_future().map(|_| Err(format_err!("pull aborted"))) => abort,

View File

@@ -55,11 +55,11 @@ fn upgrade_to_backup_reader_protocol(
     async move {
         let debug = param["debug"].as_bool().unwrap_or(false);

-        let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+        let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
         let store = tools::required_string_param(&param, "store")?.to_owned();

         let user_info = CachedUserInfo::new()?;
-        let privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+        let privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);

         let priv_read = privs & PRIV_DATASTORE_READ != 0;
         let priv_backup = privs & PRIV_DATASTORE_BACKUP != 0;

@@ -94,7 +94,10 @@ fn upgrade_to_backup_reader_protocol(
         let backup_dir = BackupDir::new(backup_type, backup_id, backup_time)?;
         if !priv_read {
             let owner = datastore.get_owner(backup_dir.group())?;
-            if owner != userid {
+            let correct_owner = owner == auth_id
+                || (owner.is_token()
+                    && Authid::from(owner.user().clone()) == auth_id);
+            if !correct_owner {
                 bail!("backup owner check failed!");
             }
         }

@@ -110,10 +113,10 @@ fn upgrade_to_backup_reader_protocol(
         let worker_id = format!("{}:{}/{}/{:08X}", store, backup_type, backup_id, backup_dir.backup_time());

-        WorkerTask::spawn("reader", Some(worker_id), userid.clone(), true, move |worker| {
+        WorkerTask::spawn("reader", Some(worker_id), auth_id.clone(), true, move |worker| {
             let mut env = ReaderEnvironment::new(
                 env_type,
-                userid,
+                auth_id,
                 worker.clone(),
                 datastore,
                 backup_dir,

View File

@@ -5,7 +5,7 @@ use serde_json::{json, Value};

 use proxmox::api::{RpcEnvironment, RpcEnvironmentType};

-use crate::api2::types::Userid;
+use crate::api2::types::Authid;
 use crate::backup::*;
 use crate::server::formatter::*;
 use crate::server::WorkerTask;

@@ -17,7 +17,7 @@ use crate::server::WorkerTask;
 pub struct ReaderEnvironment {
     env_type: RpcEnvironmentType,
     result_attributes: Value,
-    user: Userid,
+    auth_id: Authid,
     pub debug: bool,
     pub formatter: &'static OutputFormatter,
     pub worker: Arc<WorkerTask>,

@@ -29,7 +29,7 @@ pub struct ReaderEnvironment {
 impl ReaderEnvironment {
     pub fn new(
         env_type: RpcEnvironmentType,
-        user: Userid,
+        auth_id: Authid,
         worker: Arc<WorkerTask>,
         datastore: Arc<DataStore>,
         backup_dir: BackupDir,

@@ -39,7 +39,7 @@ impl ReaderEnvironment {
         Self {
             result_attributes: json!({}),
             env_type,
-            user,
+            auth_id,
             worker,
             datastore,
             debug: false,

@@ -82,12 +82,12 @@ impl RpcEnvironment for ReaderEnvironment {
         self.env_type
     }

-    fn set_user(&mut self, _user: Option<String>) {
-        panic!("unable to change user");
+    fn set_auth_id(&mut self, _auth_id: Option<String>) {
+        panic!("unable to change auth_id");
     }

-    fn get_user(&self) -> Option<String> {
-        Some(self.user.to_string())
+    fn get_auth_id(&self) -> Option<String> {
+        Some(self.auth_id.to_string())
     }
 }

View File

@@ -16,9 +16,9 @@ use crate::api2::types::{
     DATASTORE_SCHEMA,
     RRDMode,
     RRDTimeFrameResolution,
+    Authid,
     TaskListItem,
     TaskStateType,
-    Userid,
 };

 use crate::server;

@@ -87,13 +87,13 @@ fn datastore_status(
     let (config, _digest) = datastore::config()?;

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
     let user_info = CachedUserInfo::new()?;

     let mut list = Vec::new();

     for (store, (_, _)) in &config.sections {
-        let user_privs = user_info.lookup_privs(&userid, &["datastore", &store]);
+        let user_privs = user_info.lookup_privs(&auth_id, &["datastore", &store]);
         let allowed = (user_privs & (PRIV_DATASTORE_AUDIT| PRIV_DATASTORE_BACKUP)) != 0;
         if !allowed {
             continue;

@@ -221,9 +221,9 @@ pub fn list_tasks(
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Vec<TaskListItem>, Error> {

-    let userid: Userid = rpcenv.get_user().unwrap().parse()?;
+    let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
     let user_info = CachedUserInfo::new()?;
-    let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
+    let user_privs = user_info.lookup_privs(&auth_id, &["system", "tasks"]);

     let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
     let since = since.unwrap_or_else(|| 0);

@@ -238,7 +238,7 @@ pub fn list_tasks(
         .filter_map(|info| {
             match info {
                 Ok(info) => {
-                    if list_all || info.upid.userid == userid {
+                    if list_all || info.upid.auth_id == auth_id {
                         if let Some(filter) = &typefilter {
                             if !info.upid.worker_type.contains(filter) {
                                 return None;

View File

@@ -14,9 +14,11 @@ mod macros;
 #[macro_use]
 mod userid;
 pub use userid::{Realm, RealmRef};
+pub use userid::{Tokenname, TokennameRef};
 pub use userid::{Username, UsernameRef};
 pub use userid::Userid;
-pub use userid::PROXMOX_GROUP_ID_SCHEMA;
+pub use userid::Authid;
+pub use userid::{PROXMOX_TOKEN_ID_SCHEMA, PROXMOX_TOKEN_NAME_SCHEMA, PROXMOX_GROUP_ID_SCHEMA};

 // File names: may not contain slashes, may not start with "."
 pub const FILENAME_FORMAT: ApiStringFormat = ApiStringFormat::VerifyFn(|name| {

@@ -65,7 +67,7 @@ const_regex!{
     pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");

-    pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");
+    pub BACKUP_REPO_URL_REGEX = concat!(r"^^(?:(?:(", USER_ID_REGEX_STR!(), "|", APITOKEN_ID_REGEX_STR!(), ")@)?(", DNS_NAME!(), "|", IPRE_BRACKET!() ,"):)?(?:([0-9]{1,5}):)?(", PROXMOX_SAFE_ID_REGEX_STR!(), r")$");

     pub CERT_FINGERPRINT_SHA256_REGEX = r"^(?:[0-9a-fA-F][0-9a-fA-F])(?::[0-9a-fA-F][0-9a-fA-F]){31}$";
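With APITOKEN_ID_REGEX_STR added as an alternative in the userid position, a repository specification can now name an API token directly. Hypothetical ids and host, assuming the const_regex statics expose is_match:

    assert!(BACKUP_REPO_URL_REGEX.is_match("alice@pbs@backup.example.org:8007:store1"));
    assert!(BACKUP_REPO_URL_REGEX.is_match("alice@pbs!tok1@backup.example.org:8007:store1"));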
@@ -374,7 +376,7 @@ pub const BLOCKDEVICE_NAME_SCHEMA: Schema = StringSchema::new("Block device name
         },
     },
     owner: {
-        type: Userid,
+        type: Authid,
         optional: true,
     },
 },

@@ -392,7 +394,7 @@ pub struct GroupListItem {
     pub files: Vec<String>,
     /// The owner of group
     #[serde(skip_serializing_if="Option::is_none")]
-    pub owner: Option<Userid>,
+    pub owner: Option<Authid>,
 }

 #[api()]

@@ -450,7 +452,7 @@ pub struct SnapshotVerifyState {
         },
     },
     owner: {
-        type: Userid,
+        type: Authid,
         optional: true,
     },
 },

@@ -475,7 +477,7 @@ pub struct SnapshotListItem {
     pub size: Option<u64>,
     /// The owner of the snapshots group
     #[serde(skip_serializing_if="Option::is_none")]
-    pub owner: Option<Userid>,
+    pub owner: Option<Authid>,
 }

 #[api(

@@ -622,10 +624,75 @@ pub struct StorageStatus {
     pub avail: u64,
 }
+#[api()]
+#[derive(Serialize, Deserialize, Default)]
+/// Backup Type group/snapshot counts.
+pub struct TypeCounts {
+    /// The number of groups of the type.
+    pub groups: u64,
+    /// The number of snapshots of the type.
+    pub snapshots: u64,
+}
+
+#[api(
+    properties: {
+        ct: {
+            type: TypeCounts,
+            optional: true,
+        },
+        host: {
+            type: TypeCounts,
+            optional: true,
+        },
+        vm: {
+            type: TypeCounts,
+            optional: true,
+        },
+        other: {
+            type: TypeCounts,
+            optional: true,
+        },
+    },
+)]
+#[derive(Serialize, Deserialize)]
+/// Counts of groups/snapshots per BackupType.
+pub struct Counts {
+    /// The counts for CT backups
+    pub ct: Option<TypeCounts>,
+    /// The counts for Host backups
+    pub host: Option<TypeCounts>,
+    /// The counts for VM backups
+    pub vm: Option<TypeCounts>,
+    /// The counts for other backup types
+    pub other: Option<TypeCounts>,
+}
+
+#[api(
+    properties: {
+        "gc-status": { type: GarbageCollectionStatus, },
+        counts: { type: Counts, }
+    },
+)]
+#[derive(Serialize, Deserialize)]
+#[serde(rename_all="kebab-case")]
+/// Overall Datastore status and useful information.
+pub struct DataStoreStatus {
+    /// Total space (bytes).
+    pub total: u64,
+    /// Used space (bytes).
+    pub used: u64,
+    /// Available space (bytes).
+    pub avail: u64,
+    /// Status of last GC
+    pub gc_status: GarbageCollectionStatus,
+    /// Group/Snapshot counts
+    pub counts: Counts,
+}
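Serialized with the kebab-case rename, a DataStoreStatus response would look roughly like this (values invented; plain Option fields serialize as null when absent):

    let example = serde_json::json!({
        "total": 502000000000u64,
        "used": 180000000000u64,
        "avail": 322000000000u64,
        "gc-status": { /* GarbageCollectionStatus fields */ },
        "counts": {
            "ct":    { "groups": 2, "snapshots": 14 },
            "host":  { "groups": 1, "snapshots": 7 },
            "vm":    { "groups": 5, "snapshots": 40 },
            "other": null
        }
    });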
 #[api(
     properties: {
         upid: { schema: UPID_SCHEMA },
-        user: { type: Userid },
+        user: { type: Authid },
     },
 )]
 #[derive(Serialize, Deserialize)]

@@ -644,8 +711,8 @@ pub struct TaskListItem {
     pub worker_type: String,
     /// Worker ID (arbitrary ASCII string)
     pub worker_id: Option<String>,
-    /// The user who started the task
-    pub user: Userid,
+    /// The authenticated entity who started the task
+    pub user: Authid,
     /// The task end time (Epoch)
     #[serde(skip_serializing_if="Option::is_none")]
     pub endtime: Option<i64>,

@@ -668,7 +735,7 @@ impl From<crate::server::TaskListInfo> for TaskListItem {
             starttime: info.upid.starttime,
             worker_type: info.upid.worker_type,
             worker_id: info.upid.worker_id,
-            user: info.upid.userid,
+            user: info.upid.auth_id,
             endtime,
             status,
         }

View File

@@ -1,6 +1,7 @@
 //! Types for user handling.
 //!
-//! We have [`Username`]s and [`Realm`]s. To uniquely identify a user, they must be combined into a [`Userid`].
+//! We have [`Username`]s, [`Realm`]s and [`Tokenname`]s. To uniquely identify a user/API token, they
+//! must be combined into a [`Userid`] or [`Authid`].
 //!
 //! Since they're all string types, they're organized as follows:
 //!

@@ -9,13 +10,16 @@
 //!   with `String`, meaning you can only make references to it.
 //! * [`Realm`]: an owned realm (`String` equivalent).
 //! * [`RealmRef`]: a borrowed realm (`str` equivalent).
-//! * [`Userid`]: an owned user id (`"user@realm"`). Note that this does not have a separate
-//!   borrowed type.
+//! * [`Tokenname`]: an owned API token name (`String` equivalent)
+//! * [`TokennameRef`]: a borrowed `Tokenname` (`str` equivalent).
+//! * [`Userid`]: an owned user id (`"user@realm"`).
+//! * [`Authid`]: an owned Authentication ID (a `Userid` with an optional `Tokenname`).
+//!   Note that `Userid` and `Authid` do not have a separate borrowed type.
 //!
-//! Note that `Username`s are not unique, therefore they do not implement `Eq` and cannot be
-//! compared directly. If a direct comparison is really required, they can be compared as strings
-//! via the `as_str()` method. [`Realm`]s and [`Userid`]s on the other hand can be compared with
-//! each other, as in those two cases the comparison has meaning.
+//! Note that `Username`s and `Tokenname`s are not unique, therefore they do not implement `Eq` and cannot be
+//! compared directly. If a direct comparison is really required, they can be compared as strings
+//! via the `as_str()` method. [`Realm`]s, [`Userid`]s and [`Authid`]s on the other
+//! hand can be compared with each other, as in those cases the comparison has meaning.

 use std::borrow::Borrow;
 use std::convert::TryFrom;
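A short illustration of the comparison rules the module docs describe (inside a function returning Result):

    let a: Userid = "alice@pbs".parse()?;
    let b: Userid = "alice@pbs".parse()?;
    assert_eq!(a, b);                       // Userid implements Eq

    let t: Authid = "alice@pbs!tok1".parse()?;
    assert!(t.is_token());
    assert_eq!(t.user(), &a);               // the token's owning user
    // Tokenname/TokennameRef deliberately have no Eq; compare as strings:
    assert_eq!(t.tokenname().unwrap().as_str(), "tok1");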
@@ -36,19 +40,42 @@ use proxmox::const_regex;
 // also see "man useradd"
 macro_rules! USER_NAME_REGEX_STR { () => (r"(?:[^\s:/[:cntrl:]]+)") }
 macro_rules! GROUP_NAME_REGEX_STR { () => (USER_NAME_REGEX_STR!()) }
+macro_rules! TOKEN_NAME_REGEX_STR { () => (PROXMOX_SAFE_ID_REGEX_STR!()) }
 macro_rules! USER_ID_REGEX_STR { () => (concat!(USER_NAME_REGEX_STR!(), r"@", PROXMOX_SAFE_ID_REGEX_STR!())) }
+macro_rules! APITOKEN_ID_REGEX_STR { () => (concat!(USER_ID_REGEX_STR!() , r"!", TOKEN_NAME_REGEX_STR!())) }

 const_regex! {
     pub PROXMOX_USER_NAME_REGEX = concat!(r"^", USER_NAME_REGEX_STR!(), r"$");
+    pub PROXMOX_TOKEN_NAME_REGEX = concat!(r"^", TOKEN_NAME_REGEX_STR!(), r"$");
     pub PROXMOX_USER_ID_REGEX = concat!(r"^", USER_ID_REGEX_STR!(), r"$");
+    pub PROXMOX_APITOKEN_ID_REGEX = concat!(r"^", APITOKEN_ID_REGEX_STR!(), r"$");
+    pub PROXMOX_AUTH_ID_REGEX = concat!(r"^", r"(?:", USER_ID_REGEX_STR!(), r"|", APITOKEN_ID_REGEX_STR!(), r")$");
     pub PROXMOX_GROUP_ID_REGEX = concat!(r"^", GROUP_NAME_REGEX_STR!(), r"$");
 }

 pub const PROXMOX_USER_NAME_FORMAT: ApiStringFormat =
     ApiStringFormat::Pattern(&PROXMOX_USER_NAME_REGEX);
+pub const PROXMOX_TOKEN_NAME_FORMAT: ApiStringFormat =
+    ApiStringFormat::Pattern(&PROXMOX_TOKEN_NAME_REGEX);

 pub const PROXMOX_USER_ID_FORMAT: ApiStringFormat =
     ApiStringFormat::Pattern(&PROXMOX_USER_ID_REGEX);
+pub const PROXMOX_TOKEN_ID_FORMAT: ApiStringFormat =
+    ApiStringFormat::Pattern(&PROXMOX_APITOKEN_ID_REGEX);
+pub const PROXMOX_AUTH_ID_FORMAT: ApiStringFormat =
+    ApiStringFormat::Pattern(&PROXMOX_AUTH_ID_REGEX);
+
+pub const PROXMOX_TOKEN_ID_SCHEMA: Schema = StringSchema::new("API Token ID")
+    .format(&PROXMOX_TOKEN_ID_FORMAT)
+    .min_length(3)
+    .max_length(64)
+    .schema();
+
+pub const PROXMOX_TOKEN_NAME_SCHEMA: Schema = StringSchema::new("API Token name")
+    .format(&PROXMOX_TOKEN_NAME_FORMAT)
+    .min_length(3)
+    .max_length(64)
+    .schema();

 pub const PROXMOX_GROUP_ID_FORMAT: ApiStringFormat =
     ApiStringFormat::Pattern(&PROXMOX_GROUP_ID_REGEX);
@@ -91,26 +118,6 @@ pub struct Username(String);
 #[derive(Debug, Hash)]
 pub struct UsernameRef(str);

-#[doc(hidden)]
-/// ```compile_fail
-/// let a: Username = unsafe { std::mem::zeroed() };
-/// let b: Username = unsafe { std::mem::zeroed() };
-/// let _ = <Username as PartialEq>::eq(&a, &b);
-/// ```
-///
-/// ```compile_fail
-/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
-/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
-/// let _ = <&UsernameRef as PartialEq>::eq(a, b);
-/// ```
-///
-/// ```compile_fail
-/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
-/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
-/// let _ = <&UsernameRef as PartialEq>::eq(&a, &b);
-/// ```
-struct _AssertNoEqImpl;
-
 impl UsernameRef {
     fn new(s: &str) -> &Self {
         unsafe { &*(s as *const str as *const UsernameRef) }

@@ -286,7 +293,132 @@ impl PartialEq<Realm> for &RealmRef {
     }
 }
-/// A complete user id consting of a user name and a realm.
+#[api(
+    type: String,
+    format: &PROXMOX_TOKEN_NAME_FORMAT,
+)]
+/// The token ID part of an API token authentication id.
+///
+/// This alone does NOT uniquely identify the API token and therefore does not implement `Eq`. In
+/// order to compare token IDs directly, they need to be explicitly compared as strings by calling
+/// `.as_str()`.
+///
+/// ```compile_fail
+/// fn test(a: Tokenname, b: Tokenname) -> bool {
+///     a == b // illegal and does not compile
+/// }
+/// ```
+#[derive(Clone, Debug, Hash, Deserialize, Serialize)]
+pub struct Tokenname(String);
+
+/// A reference to a token name part of an authentication id. This alone does NOT uniquely identify
+/// the user.
+///
+/// This is like a `str` to the `String` of a [`Tokenname`].
+#[derive(Debug, Hash)]
+pub struct TokennameRef(str);
+
+#[doc(hidden)]
+/// ```compile_fail
+/// let a: Username = unsafe { std::mem::zeroed() };
+/// let b: Username = unsafe { std::mem::zeroed() };
+/// let _ = <Username as PartialEq>::eq(&a, &b);
+/// ```
+///
+/// ```compile_fail
+/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
+/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
+/// let _ = <&UsernameRef as PartialEq>::eq(a, b);
+/// ```
+///
+/// ```compile_fail
+/// let a: &UsernameRef = unsafe { std::mem::zeroed() };
+/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
+/// let _ = <&UsernameRef as PartialEq>::eq(&a, &b);
+/// ```
+///
+/// ```compile_fail
+/// let a: Tokenname = unsafe { std::mem::zeroed() };
+/// let b: Tokenname = unsafe { std::mem::zeroed() };
+/// let _ = <Tokenname as PartialEq>::eq(&a, &b);
+/// ```
+///
+/// ```compile_fail
+/// let a: &TokennameRef = unsafe { std::mem::zeroed() };
+/// let b: &TokennameRef = unsafe { std::mem::zeroed() };
+/// let _ = <&TokennameRef as PartialEq>::eq(a, b);
+/// ```
+///
+/// ```compile_fail
+/// let a: &TokennameRef = unsafe { std::mem::zeroed() };
+/// let b: &TokennameRef = unsafe { std::mem::zeroed() };
+/// let _ = <&TokennameRef as PartialEq>::eq(&a, &b);
+/// ```
+struct _AssertNoEqImpl;
+
+impl TokennameRef {
+    fn new(s: &str) -> &Self {
+        unsafe { &*(s as *const str as *const TokennameRef) }
+    }
+
+    pub fn as_str(&self) -> &str {
+        &self.0
+    }
+}
+
+impl std::ops::Deref for Tokenname {
+    type Target = TokennameRef;
+
+    fn deref(&self) -> &TokennameRef {
+        self.borrow()
+    }
+}
+
+impl Borrow<TokennameRef> for Tokenname {
+    fn borrow(&self) -> &TokennameRef {
+        TokennameRef::new(self.0.as_str())
+    }
+}
+
+impl AsRef<TokennameRef> for Tokenname {
+    fn as_ref(&self) -> &TokennameRef {
+        self.borrow()
+    }
+}
+
+impl ToOwned for TokennameRef {
+    type Owned = Tokenname;
+
+    fn to_owned(&self) -> Self::Owned {
+        Tokenname(self.0.to_owned())
+    }
+}
+
+impl TryFrom<String> for Tokenname {
+    type Error = Error;
+
+    fn try_from(s: String) -> Result<Self, Error> {
+        if !PROXMOX_TOKEN_NAME_REGEX.is_match(&s) {
+            bail!("invalid token name");
+        }
+
+        Ok(Self(s))
+    }
+}
+
+impl<'a> TryFrom<&'a str> for &'a TokennameRef {
+    type Error = Error;
+
+    fn try_from(s: &'a str) -> Result<&'a TokennameRef, Error> {
+        if !PROXMOX_TOKEN_NAME_REGEX.is_match(s) {
+            bail!("invalid token name in user id");
+        }
+
+        Ok(TokennameRef::new(s))
+    }
+}
+
+/// A complete user id consisting of a user name and a realm
 #[derive(Clone, Debug, Hash)]
 pub struct Userid {
     data: String,
@ -366,10 +498,18 @@ impl std::str::FromStr for Userid {
type Err = Error; type Err = Error;
fn from_str(id: &str) -> Result<Self, Error> { fn from_str(id: &str) -> Result<Self, Error> {
let (name, realm) = match id.as_bytes().iter().rposition(|&b| b == b'@') { let name_len = id
Some(pos) => (&id[..pos], &id[(pos + 1)..]), .as_bytes()
None => bail!("not a valid user id"), .iter()
}; .rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
let name = &id[..name_len];
let realm = &id[(name_len + 1)..];
if !PROXMOX_USER_NAME_REGEX.is_match(name) {
bail!("invalid user name in user id");
}
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(realm) PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(realm)
.map_err(|_| format_err!("invalid realm in user id"))?; .map_err(|_| format_err!("invalid realm in user id"))?;
@ -388,6 +528,10 @@ impl TryFrom<String> for Userid {
.rposition(|&b| b == b'@') .rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?; .ok_or_else(|| format_err!("not a valid user id"))?;
if !PROXMOX_USER_NAME_REGEX.is_match(&data[..name_len]) {
bail!("invalid user name in user id");
}
PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(&data[(name_len + 1)..]) PROXMOX_AUTH_REALM_STRING_SCHEMA.check_constraints(&data[(name_len + 1)..])
.map_err(|_| format_err!("invalid realm in user id"))?; .map_err(|_| format_err!("invalid realm in user id"))?;
@ -413,5 +557,182 @@ impl PartialEq<String> for Userid {
} }
} }
/// A complete authentication id consisting of a user id and an optional token name.
#[derive(Clone, Debug, Hash)]
pub struct Authid {
user: Userid,
tokenname: Option<Tokenname>
}
impl Authid {
pub const API_SCHEMA: Schema = StringSchema::new("Authentication ID")
.format(&PROXMOX_AUTH_ID_FORMAT)
.min_length(3)
.max_length(64)
.schema();
const fn new(user: Userid, tokenname: Option<Tokenname>) -> Self {
Self { user, tokenname }
}
pub fn user(&self) -> &Userid {
&self.user
}
pub fn is_token(&self) -> bool {
self.tokenname.is_some()
}
pub fn tokenname(&self) -> Option<&TokennameRef> {
match &self.tokenname {
Some(name) => Some(&name),
None => None,
}
}
/// Get the "backup@pam" auth id.
pub fn backup_auth_id() -> &'static Self {
&*BACKUP_AUTHID
}
/// Get the "root@pam" auth id.
pub fn root_auth_id() -> &'static Self {
&*ROOT_AUTHID
}
}
lazy_static! {
pub static ref BACKUP_AUTHID: Authid = Authid::from(Userid::new("backup@pam".to_string(), 6));
pub static ref ROOT_AUTHID: Authid = Authid::from(Userid::new("root@pam".to_string(), 4));
}
impl Eq for Authid {}
impl PartialEq for Authid {
fn eq(&self, rhs: &Self) -> bool {
self.user == rhs.user && match (&self.tokenname, &rhs.tokenname) {
(Some(ours), Some(theirs)) => ours.as_str() == theirs.as_str(),
(None, None) => true,
_ => false,
}
}
}
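A small sketch of what this equality means in practice: a user and one of their tokens are distinct authentication ids, even though they share the same `Userid`:

```rust
fn authid_equality() -> Result<(), Error> {
    let user: Authid = "test@pam".parse()?;
    let token: Authid = "test@pam!mytoken".parse()?;

    assert_eq!(user, "test@pam".parse::<Authid>()?);
    assert_ne!(user, token); // the token is a separate entity...
    assert_eq!(token.user(), user.user()); // ...owned by the same user
    Ok(())
}
```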
impl From<Userid> for Authid {
fn from(parts: Userid) -> Self {
Self::new(parts, None)
}
}
impl From<(Userid, Option<Tokenname>)> for Authid {
fn from(parts: (Userid, Option<Tokenname>)) -> Self {
Self::new(parts.0, parts.1)
}
}
impl fmt::Display for Authid {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match &self.tokenname {
Some(token) => write!(f, "{}!{}", self.user, token.as_str()),
None => self.user.fmt(f),
}
}
}
impl std::str::FromStr for Authid {
type Err = Error;
fn from_str(id: &str) -> Result<Self, Error> {
let name_len = id
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
let realm_end = id
.as_bytes()
.iter()
.rposition(|&b| b == b'!')
.map(|pos| if pos < name_len { id.len() } else { pos })
.unwrap_or(id.len());
if realm_end == id.len() - 1 {
bail!("empty token name in userid");
}
let user = Userid::from_str(&id[..realm_end])?;
if id.len() > realm_end {
let token = Tokenname::try_from(id[(realm_end + 1)..].to_string())?;
Ok(Self::new(user, Some(token)))
} else {
Ok(Self::new(user, None))
}
}
}
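To make the split logic concrete, a hedged sketch: the realm ends at the last '!' only if that '!' comes after the last '@'; an earlier '!' stays in the user name part (and is then rejected, assuming the user name regex does not allow '!'):

```rust
fn authid_split_examples() -> Result<(), Error> {
    // no '!' after the '@': the whole string is user@realm
    let plain: Authid = "backup@pam".parse()?;
    assert!(!plain.is_token());

    // '!' after the '@': everything behind it is the token name
    let token: Authid = "backup@pam!sync".parse()?;
    assert_eq!(token.tokenname().map(|t| t.as_str()), Some("sync"));

    // '!' before the '@' is not a token separator; it stays in the user
    // name and fails validation (assuming the regex rejects '!')
    assert!("strange!name@pam".parse::<Authid>().is_err());
    Ok(())
}
```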
impl TryFrom<String> for Authid {
type Error = Error;
fn try_from(mut data: String) -> Result<Self, Error> {
let name_len = data
.as_bytes()
.iter()
.rposition(|&b| b == b'@')
.ok_or_else(|| format_err!("not a valid user id"))?;
let realm_end = data
.as_bytes()
.iter()
.rposition(|&b| b == b'!')
.map(|pos| if pos < name_len { data.len() } else { pos })
.unwrap_or(data.len());
if realm_end == data.len() - 1 {
bail!("empty token name in userid");
}
let tokenname = if data.len() > realm_end {
Some(Tokenname::try_from(data[(realm_end + 1)..].to_string())?)
} else {
None
};
data.truncate(realm_end);
let user: Userid = data.parse()?;
Ok(Self { user, tokenname })
}
}
#[test]
fn test_token_id() {
let userid: Userid = "test@pam".parse().expect("parsing Userid failed");
assert_eq!(userid.name().as_str(), "test");
assert_eq!(userid.realm(), "pam");
assert_eq!(userid, "test@pam");
let auth_id: Authid = "test@pam".parse().expect("parsing user Authid failed");
assert_eq!(auth_id.to_string(), "test@pam".to_string());
assert!(!auth_id.is_token());
assert_eq!(auth_id.user(), &userid);
let user_auth_id = Authid::from(userid.clone());
assert_eq!(user_auth_id, auth_id);
assert!(!user_auth_id.is_token());
let auth_id: Authid = "test@pam!bar".parse().expect("parsing token Authid failed");
let token_userid = auth_id.user();
assert_eq!(&userid, token_userid);
assert!(auth_id.is_token());
assert_eq!(auth_id.tokenname().expect("Token has tokenname").as_str(), TokennameRef::new("bar").as_str());
assert_eq!(auth_id.to_string(), "test@pam!bar".to_string());
}
proxmox::forward_deserialize_to_from_str!(Userid);
proxmox::forward_serialize_to_display!(Userid);
proxmox::forward_deserialize_to_from_str!(Authid);
proxmox::forward_serialize_to_display!(Authid);

View File

@@ -78,7 +78,7 @@ pub struct DirEntry {
#[derive(Clone, Debug, PartialEq)]
pub enum DirEntryAttribute {
Directory { start: u64 },
-File { size: u64, mtime: u64 },
File { size: u64, mtime: i64 },
Symlink,
Hardlink,
BlockDevice,
@@ -89,7 +89,7 @@ pub enum DirEntryAttribute {
impl DirEntry {
-fn new(etype: CatalogEntryType, name: Vec<u8>, start: u64, size: u64, mtime:u64) -> Self {
fn new(etype: CatalogEntryType, name: Vec<u8>, start: u64, size: u64, mtime: i64) -> Self {
match etype {
CatalogEntryType::Directory => {
DirEntry { name, attr: DirEntryAttribute::Directory { start } }
@@ -184,7 +184,7 @@ impl DirInfo {
catalog_encode_u64(writer, name.len() as u64)?;
writer.write_all(name)?;
catalog_encode_u64(writer, *size)?;
-catalog_encode_u64(writer, *mtime)?;
catalog_encode_i64(writer, *mtime)?;
}
DirEntry { name, attr: DirEntryAttribute::Symlink } => {
writer.write_all(&[CatalogEntryType::Symlink as u8])?;
@@ -234,7 +234,7 @@ impl DirInfo {
Ok((self.name, data))
}
-fn parse<C: FnMut(CatalogEntryType, &[u8], u64, u64, u64) -> Result<bool, Error>>(
fn parse<C: FnMut(CatalogEntryType, &[u8], u64, u64, i64) -> Result<bool, Error>>(
data: &[u8],
mut callback: C,
) -> Result<(), Error> {
@@ -265,7 +265,7 @@ impl DirInfo {
}
CatalogEntryType::File => {
let size = catalog_decode_u64(&mut cursor)?;
-let mtime = catalog_decode_u64(&mut cursor)?;
let mtime = catalog_decode_i64(&mut cursor)?;
callback(etype, name, 0, size, mtime)?
}
_ => {
@@ -362,7 +362,7 @@ impl <W: Write> BackupCatalogWriter for CatalogWriter<W> {
Ok(())
}
-fn add_file(&mut self, name: &CStr, size: u64, mtime: u64) -> Result<(), Error> {
fn add_file(&mut self, name: &CStr, size: u64, mtime: i64) -> Result<(), Error> {
let dir = self.dirstack.last_mut().ok_or_else(|| format_err!("outside root"))?;
let name = name.to_bytes().to_vec();
dir.entries.push(DirEntry { name, attr: DirEntryAttribute::File { size, mtime } });
@@ -587,14 +587,77 @@ impl <R: Read + Seek> CatalogReader<R> {
}
}
/// Serialize i64 as short, variable length byte sequence
///
/// Stores 7 bits per byte, Bit 8 indicates the end of the sequence (when not set).
/// If the value is negative, we end with a zero byte (0x00).
pub fn catalog_encode_i64<W: Write>(writer: &mut W, v: i64) -> Result<(), Error> {
let mut enc = Vec::new();
let mut d = if v < 0 {
(-1 * (v + 1)) as u64 + 1 // also handles i64::MIN
} else {
v as u64
};
loop {
if d < 128 {
if v < 0 {
enc.push(128 | d as u8);
enc.push(0u8);
} else {
enc.push(d as u8);
}
break;
}
enc.push((128 | (d & 127)) as u8);
d = d >> 7;
}
writer.write_all(&enc)?;
Ok(())
}
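A few worked byte sequences may help; these follow directly from the loop above (7-bit groups, least significant first, negative values terminated by a 0x00 sign marker):

```rust
// 0    -> [0x00]
// 126  -> [0x7e]
// 128  -> [0x80, 0x01]   (low 7 bits are 0, then 1)
// -1   -> [0x81, 0x00]   (d = 1, trailing 0x00 marks the sign)
// -126 -> [0xfe, 0x00]   (d = 126)
fn encode_i64_bytes() -> Result<(), Error> {
    let mut buf = Vec::new();
    catalog_encode_i64(&mut buf, -1)?;
    assert_eq!(buf, vec![0x81, 0x00]);

    buf.clear();
    catalog_encode_i64(&mut buf, 128)?;
    assert_eq!(buf, vec![0x80, 0x01]);
    Ok(())
}
```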
/// Deserialize i64 from variable length byte sequence
///
/// We currently read at most 11 bytes, which gives a maximum of 70 bits + sign.
/// This method is compatible with catalog_encode_u64 iff the
/// value encoded is < 2^63 (values >= 2^63 cannot be represented in an i64).
pub fn catalog_decode_i64<R: Read>(reader: &mut R) -> Result<i64, Error> {
let mut v: u64 = 0;
let mut buf = [0u8];
for i in 0..11 { // only allow 11 bytes (70 bits + sign marker)
if buf.is_empty() {
bail!("decode_i64 failed - unexpected EOB");
}
reader.read_exact(&mut buf)?;
let t = buf[0];
if t == 0 {
if v == 0 {
return Ok(0);
}
return Ok(((v - 1) as i64 * -1) - 1); // also handles i64::MIN
} else if t < 128 {
v |= (t as u64) << (i*7);
return Ok(v as i64);
} else {
v |= ((t & 127) as u64) << (i*7);
}
}
bail!("decode_i64 failed - missing end marker");
}
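The compatibility claim can be checked with a small round trip: a positive value written by `catalog_encode_u64` never ends in the 0x00 sign marker, so `catalog_decode_i64` reads it back unchanged. A sketch:

```rust
fn u64_to_i64_roundtrip() -> Result<(), Error> {
    let mut buf = Vec::new();
    catalog_encode_u64(&mut buf, 1_600_000_000)?; // e.g. an mtime from 2020
    let decoded = catalog_decode_i64(&mut &buf[..])?;
    assert_eq!(decoded, 1_600_000_000);
    Ok(())
}
```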
/// Serialize u64 as short, variable length byte sequence
///
/// Stores 7 bits per byte, Bit 8 indicates the end of the sequence (when not set).
-/// We limit values to a maximum of 2^63.
pub fn catalog_encode_u64<W: Write>(writer: &mut W, v: u64) -> Result<(), Error> {
let mut enc = Vec::new();
-if (v & (1<<63)) != 0 { bail!("catalog_encode_u64 failed - value >= 2^63"); }
let mut d = v;
loop {
if d < 128 {
@@ -611,13 +674,14 @@ pub fn catalog_encode_u64<W: Write>(writer: &mut W, v: u64) -> Result<(), Error>
/// Deserialize u64 from variable length byte sequence
///
-/// We currently read maximal 9 bytes, which give a maximum of 63 bits.
/// We currently read at most 10 bytes, which gives a maximum of 70 bits,
/// but we currently only encode up to 64 bits.
pub fn catalog_decode_u64<R: Read>(reader: &mut R) -> Result<u64, Error> {
let mut v: u64 = 0;
let mut buf = [0u8];
-for i in 0..9 { // only allow 9 bytes (63 bits)
for i in 0..10 { // only allow 10 bytes (70 bits)
if buf.is_empty() {
bail!("decode_u64 failed - unexpected EOB");
}
@@ -652,9 +716,58 @@ fn test_catalog_u64_encoder() {
assert!(decoded == value);
}
test_encode_decode(u64::MIN);
test_encode_decode(126);
test_encode_decode((1<<12)-1);
test_encode_decode((1<<20)-1);
test_encode_decode((1<<50)-1);
-test_encode_decode((1<<63)-1);
test_encode_decode(u64::MAX);
}
#[test]
fn test_catalog_i64_encoder() {
fn test_encode_decode(value: i64) {
let mut data = Vec::new();
catalog_encode_i64(&mut data, value).unwrap();
let slice = &mut &data[..];
let decoded = catalog_decode_i64(slice).unwrap();
assert!(decoded == value);
}
test_encode_decode(0);
test_encode_decode(-0);
test_encode_decode(126);
test_encode_decode(-126);
test_encode_decode((1<<12)-1);
test_encode_decode(-(1<<12)-1);
test_encode_decode((1<<20)-1);
test_encode_decode(-(1<<20)-1);
test_encode_decode(i64::MIN);
test_encode_decode(i64::MAX);
}
#[test]
fn test_catalog_i64_compatibility() {
fn test_encode_decode(value: u64) {
let mut data = Vec::new();
catalog_encode_u64(&mut data, value).unwrap();
let slice = &mut &data[..];
let decoded = catalog_decode_i64(slice).unwrap() as u64;
assert!(decoded == value);
}
test_encode_decode(u64::MIN);
test_encode_decode(126);
test_encode_decode((1<<12)-1);
test_encode_decode((1<<20)-1);
test_encode_decode((1<<50)-1);
test_encode_decode(u64::MAX);
}

View File

@@ -23,7 +23,7 @@ use crate::task::TaskState;
use crate::tools;
use crate::tools::format::HumanByte;
use crate::tools::fs::{lock_dir_noblock, DirLockGuard};
-use crate::api2::types::{GarbageCollectionStatus, Userid};
use crate::api2::types::{Authid, GarbageCollectionStatus};
use crate::server::UPID;
lazy_static! {
@@ -276,8 +276,8 @@ impl DataStore {
/// Returns the backup owner.
///
-/// The backup owner is the user who first created the backup group.
/// The backup owner is the entity who first created the backup group.
-pub fn get_owner(&self, backup_group: &BackupGroup) -> Result<Userid, Error> {
pub fn get_owner(&self, backup_group: &BackupGroup) -> Result<Authid, Error> {
let mut full_path = self.base_path();
full_path.push(backup_group.group_path());
full_path.push("owner");
@@ -289,7 +289,7 @@ impl DataStore {
pub fn set_owner(
&self,
backup_group: &BackupGroup,
-userid: &Userid,
auth_id: &Authid,
force: bool,
) -> Result<(), Error> {
let mut path = self.base_path();
@@ -309,7 +309,7 @@ impl DataStore {
let mut file = open_options.open(&path)
.map_err(|err| format_err!("unable to create owner file {:?} - {}", path, err))?;
-writeln!(file, "{}", userid)
writeln!(file, "{}", auth_id)
.map_err(|err| format_err!("unable to write owner file {:?} - {}", path, err))?;
Ok(())
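The body of `get_owner` is elided by this hunk; under the assumption that it simply reverses `set_owner`, reading back the single-line owner file would look roughly like this sketch:

```rust
// Assumption: the owner file contains exactly one line with the Authid,
// e.g. "backup@pam" or "backup@pam!sync".
fn read_owner(path: &std::path::Path) -> Result<Authid, Error> {
    let line = proxmox::tools::fs::file_read_firstline(path)?;
    line.trim_end().parse()
}
```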
@@ -324,8 +324,8 @@ impl DataStore {
pub fn create_locked_backup_group(
&self,
backup_group: &BackupGroup,
-userid: &Userid,
auth_id: &Authid,
-) -> Result<(Userid, DirLockGuard), Error> {
) -> Result<(Authid, DirLockGuard), Error> {
// create intermediate path first:
let base_path = self.base_path();
@@ -339,7 +339,7 @@ impl DataStore {
match std::fs::create_dir(&full_path) {
Ok(_) => {
let guard = lock_dir_noblock(&full_path, "backup group", "another backup is already running")?;
-self.set_owner(backup_group, userid, false)?;
self.set_owner(backup_group, auth_id, false)?;
let owner = self.get_owner(backup_group)?; // just to be sure
Ok((owner, guard))
}
@@ -559,7 +559,11 @@ impl DataStore {
);
}
if gc_status.removed_bad > 0 {
-crate::task_log!(worker, "Removed bad files: {}", gc_status.removed_bad);
crate::task_log!(worker, "Removed bad chunks: {}", gc_status.removed_bad);
}
if gc_status.still_bad > 0 {
crate::task_log!(worker, "Leftover bad chunks: {}", gc_status.still_bad);
}
crate::task_log!(
@@ -580,6 +584,14 @@ impl DataStore {
crate::task_log!(worker, "On-Disk chunks: {}", gc_status.disk_chunks);
let deduplication_factor = if gc_status.disk_bytes > 0 {
(gc_status.index_data_bytes as f64)/(gc_status.disk_bytes as f64)
} else {
1.0
};
crate::task_log!(worker, "Deduplication factor: {:.2}", deduplication_factor);
if gc_status.disk_chunks > 0 {
let avg_chunk = gc_status.disk_bytes/(gc_status.disk_chunks as u64);
crate::task_log!(worker, "Average chunk size: {}", HumanByte::from(avg_chunk));
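A worked example of the factor logged above: with 100 GiB of logical data referenced by the indexes and 25 GiB of unique chunks left on disk, the deduplication factor comes out as 4.00.

```rust
fn deduplication_factor(index_data_bytes: u64, disk_bytes: u64) -> f64 {
    if disk_bytes > 0 {
        index_data_bytes as f64 / disk_bytes as f64
    } else {
        1.0 // nothing on disk yet, report a neutral factor
    }
}

#[test]
fn dedup_factor_example() {
    assert_eq!(deduplication_factor(100u64 << 30, 25u64 << 30), 4.0);
}
```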

View File

@@ -14,6 +14,7 @@ use crate::{
BackupGroup,
BackupDir,
BackupInfo,
BackupManifest,
IndexFile,
CryptMode,
FileInfo,
@@ -284,6 +285,7 @@ pub fn verify_backup_dir(
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<bool, Error> {
let snap_lock = lock_dir_noblock_shared(
&datastore.snapshot_path(&backup_dir),
@@ -297,6 +299,7 @@ pub fn verify_backup_dir(
corrupt_chunks,
worker,
upid,
filter,
snap_lock
),
Err(err) => {
@@ -320,6 +323,7 @@ pub fn verify_backup_dir_with_lock(
corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
_snap_lock: Dir,
) -> Result<bool, Error> {
let manifest = match datastore.load_manifest(&backup_dir) {
@@ -336,6 +340,18 @@ pub fn verify_backup_dir_with_lock(
}
};
if let Some(filter) = filter {
if filter(&manifest) == false {
task_log!(
worker,
"SKIPPED: verify {}:{} (recently verified)",
datastore.name(),
backup_dir,
);
return Ok(true);
}
}
task_log!(worker, "verify {}:{}", datastore.name(), backup_dir);
let mut error_count = 0;
@@ -412,7 +428,7 @@ pub fn verify_backup_group(
progress: Option<(usize, usize)>, // (done, snapshot_count)
worker: Arc<dyn TaskState + Send + Sync>,
upid: &UPID,
-filter: &dyn Fn(&BackupInfo) -> bool,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<(usize, Vec<String>), Error> {
let mut errors = Vec::new();
@@ -439,16 +455,6 @@ pub fn verify_backup_group(
for info in list {
count += 1;
-if filter(&info) == false {
-task_log!(
-worker,
-"SKIPPED: verify {}:{} (recently verified)",
-datastore.name(),
-info.backup_dir,
-);
-continue;
-}
if !verify_backup_dir(
datastore.clone(),
&info.backup_dir,
@@ -456,6 +462,7 @@ pub fn verify_backup_group(
corrupt_chunks.clone(),
worker.clone(),
upid.clone(),
filter,
)? {
errors.push(info.backup_dir.to_string());
}
@@ -486,7 +493,7 @@ pub fn verify_all_backups(
datastore: Arc<DataStore>,
worker: Arc<dyn TaskState + Send + Sync>,
upid: &UPID,
-filter: &dyn Fn(&BackupInfo) -> bool,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
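A sketch of how a caller might use the new `Option`-wrapped filter; the `verify_state` manifest layout used here is an assumption for illustration, not part of this hunk:

```rust
fn verify_outdated_only(
    datastore: Arc<DataStore>,
    worker: Arc<dyn TaskState + Send + Sync>,
    upid: &UPID,
) -> Result<Vec<String>, Error> {
    let now = proxmox::tools::time::epoch_i64();
    let outdated_after = 7 * 24 * 60 * 60; // one week, in seconds

    let filter = move |manifest: &BackupManifest| -> bool {
        // hypothetical lookup of the last verification timestamp
        let last = manifest.unprotected["verify_state"]["starttime"]
            .as_i64()
            .unwrap_or(0);
        now - last > outdated_after // true means: do verify this snapshot
    };

    verify_all_backups(datastore, worker, upid, Some(&filter))
}
```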

View File

@@ -36,7 +36,7 @@ use proxmox_backup::api2::types::*;
use proxmox_backup::api2::version;
use proxmox_backup::client::*;
use proxmox_backup::pxar::catalog::*;
-use proxmox_backup::config::user::complete_user_name;
use proxmox_backup::config::user::complete_userid;
use proxmox_backup::backup::{
archive_type,
decrypt_key,
@@ -193,7 +193,7 @@ pub fn complete_repository(_arg: &str, _param: &HashMap<String, String>) -> Vec<
result
}
-fn connect(server: &str, port: u16, userid: &Userid) -> Result<HttpClient, Error> {
fn connect(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, Error> {
let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
@@ -212,7 +212,7 @@ fn connect(server: &str, port: u16, userid: &Userid) -> Result<HttpClient, Error
.fingerprint_cache(true)
.ticket_cache(true);
-HttpClient::new(server, port, userid, options)
HttpClient::new(server, port, auth_id, options)
}
async fn view_task_result(
@@ -366,7 +366,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/groups", repo.store());
@@ -425,7 +425,7 @@ async fn list_backup_groups(param: Value) -> Result<Value, Error> {
description: "Backup group.",
},
"new-owner": {
-type: Userid,
type: Authid,
},
}
}
@@ -435,7 +435,7 @@ async fn change_backup_owner(group: String, mut param: Value) -> Result<(), Erro
let repo = extract_repository_from_value(&param)?;
-let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
param.as_object_mut().unwrap().remove("repository");
@@ -478,7 +478,7 @@ async fn list_snapshots(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let group: Option<BackupGroup> = if let Some(path) = param["group"].as_str() {
Some(path.parse()?)
@@ -543,7 +543,7 @@ async fn forget_snapshots(param: Value) -> Result<Value, Error> {
let path = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = path.parse()?;
-let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/snapshots", repo.store());
@@ -573,7 +573,7 @@ async fn api_login(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
client.login().await?;
record_repository(&repo);
@@ -630,7 +630,7 @@ async fn api_version(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param);
if let Ok(repo) = repo {
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
match client.get("api2/json/version", None).await {
Ok(mut result) => version_info["server"] = result["data"].take(),
@@ -680,7 +680,7 @@ async fn list_snapshot_files(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/files", repo.store());
@@ -724,7 +724,7 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
-let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/gc", repo.store());
@@ -1036,7 +1036,7 @@ async fn create_backup(
let backup_time = backup_time_opt.unwrap_or_else(|| epoch_i64());
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
record_repository(&repo);
println!("Starting backup: {}/{}/{}", backup_type, backup_id, BackupDir::backup_time_to_string(backup_time)?);
@@ -1339,7 +1339,7 @@ async fn restore(param: Value) -> Result<Value, Error> {
let archive_name = tools::required_string_param(&param, "archive-name")?;
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
record_repository(&repo);
@@ -1512,7 +1512,7 @@ async fn upload_log(param: Value) -> Result<Value, Error> {
let snapshot = tools::required_string_param(&param, "snapshot")?;
let snapshot: BackupDir = snapshot.parse()?;
-let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let (keydata, crypt_mode) = keyfile_parameters(&param)?;
@@ -1583,7 +1583,7 @@ fn prune<'a>(
async fn prune_async(mut param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
-let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/prune", repo.store());
@@ -1657,7 +1657,10 @@ async fn prune_async(mut param: Value) -> Result<Value, Error> {
optional: true,
},
}
-}
},
returns: {
type: StorageStatus,
},
)]
/// Get repository status.
async fn status(param: Value) -> Result<Value, Error> {
@@ -1666,7 +1669,7 @@ async fn status(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/admin/datastore/{}/status", repo.store());
@@ -1690,7 +1693,7 @@ async fn status(param: Value) -> Result<Value, Error> {
.column(ColumnConfig::new("used").renderer(render_total_percentage))
.column(ColumnConfig::new("avail").renderer(render_total_percentage));
-let schema = &proxmox_backup::api2::admin::datastore::API_RETURN_SCHEMA_STATUS;
let schema = &API_RETURN_SCHEMA_STATUS;
format_and_print_result_full(&mut data, schema, &output_format, &options);
@@ -1711,7 +1714,7 @@ async fn try_get(repo: &BackupRepository, url: &str) -> Value {
.fingerprint_cache(true)
.ticket_cache(true);
-let client = match HttpClient::new(repo.host(), repo.port(), repo.user(), options) {
let client = match HttpClient::new(repo.host(), repo.port(), repo.auth_id(), options) {
Ok(v) => v,
_ => return Value::Null,
};
@@ -2010,7 +2013,7 @@ fn main() {
let change_owner_cmd_def = CliCommand::new(&API_METHOD_CHANGE_BACKUP_OWNER)
.arg_param(&["group", "new-owner"])
.completion_cb("group", complete_backup_group)
-.completion_cb("new-owner", complete_user_name)
.completion_cb("new-owner", complete_userid)
.completion_cb("repository", complete_repository);
let cmd_def = CliCommandMap::new()

View File

@@ -62,10 +62,10 @@ fn connect() -> Result<HttpClient, Error> {
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
options = options.password(Some(ticket));
-HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
} else {
options = options.ticket_cache(true).interactive(true);
-HttpClient::new("localhost", 8007, Userid::root_userid(), options)?
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
};
Ok(client)
@@ -388,7 +388,7 @@ fn main() {
let mut rpcenv = CliEnvironment::new();
-rpcenv.set_user(Some(String::from("root@pam")));
rpcenv.set_auth_id(Some(String::from("root@pam")));
proxmox_backup::tools::runtime::main(run_async_cli_command(cmd_def, rpcenv));
}

View File

@@ -30,7 +30,7 @@ use proxmox_backup::{
};
-use proxmox_backup::api2::types::Userid;
use proxmox_backup::api2::types::{Authid, Userid};
use proxmox_backup::configdir;
use proxmox_backup::buildcfg;
use proxmox_backup::server;
@@ -334,7 +334,7 @@ async fn schedule_datastore_garbage_collection() {
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(store.clone()),
-Userid::backup_userid().clone(),
Authid::backup_auth_id().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;
@@ -463,7 +463,7 @@ async fn schedule_datastore_prune() {
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(store.clone()),
-Userid::backup_userid().clone(),
Authid::backup_auth_id().clone(),
false,
move |worker| {
@@ -579,9 +579,9 @@ async fn schedule_datastore_sync_jobs() {
Err(_) => continue, // could not get lock
};
-let userid = Userid::backup_userid().clone();
let auth_id = Authid::backup_auth_id();
-if let Err(err) = do_sync_job(job, job_config, &userid, Some(event_str)) {
if let Err(err) = do_sync_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore sync job {} - {}", &job_id, err);
}
}
@@ -642,8 +642,8 @@ async fn schedule_datastore_verify_jobs() {
Ok(job) => job,
Err(_) => continue, // could not get lock
};
-let userid = Userid::backup_userid().clone();
let auth_id = Authid::backup_auth_id();
-if let Err(err) = do_verification_job(job, job_config, &userid, Some(event_str)) {
if let Err(err) = do_verification_job(job, job_config, &auth_id, Some(event_str)) {
eprintln!("unable to start datastore verification job {} - {}", &job_id, err);
}
}
@@ -704,7 +704,7 @@ async fn schedule_task_log_rotate() {
if let Err(err) = WorkerTask::new_thread(
worker_type,
Some(job_id.to_string()),
-Userid::backup_userid().clone(),
Authid::backup_auth_id().clone(),
false,
move |worker| {
job.start(&worker.upid().to_string())?;

View File

@@ -225,7 +225,7 @@ async fn test_upload_speed(
let backup_time = proxmox::tools::time::epoch_i64();
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
record_repository(&repo);
if verbose { eprintln!("Connecting to backup server"); }

View File

@@ -79,7 +79,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
}
};
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let client = BackupReader::start(
client,
@@ -153,7 +153,7 @@ async fn dump_catalog(param: Value) -> Result<Value, Error> {
/// Shell to interactively inspect and restore snapshots.
async fn catalog_shell(param: Value) -> Result<(), Error> {
let repo = extract_repository_from_value(&param)?;
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = tools::required_string_param(&param, "snapshot")?;
let archive_name = tools::required_string_param(&param, "archive-name")?;

View File

@@ -163,7 +163,7 @@ fn mount(
async fn mount_do(param: Value, pipe: Option<RawFd>) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let archive_name = tools::required_string_param(&param, "archive-name")?;
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let target = param["target"].as_str();

View File

@@ -48,7 +48,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let repo = extract_repository_from_value(&param)?;
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
let limit = param["limit"].as_u64().unwrap_or(50) as usize;
let running = !param["all"].as_bool().unwrap_or(false);
@@ -57,7 +57,7 @@ async fn task_list(param: Value) -> Result<Value, Error> {
"running": running,
"start": 0,
"limit": limit,
-"userfilter": repo.user(),
"userfilter": repo.auth_id(),
"store": repo.store(),
});
@@ -96,7 +96,7 @@ async fn task_log(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid = tools::required_string_param(&param, "upid")?;
-let client = connect(repo.host(), repo.port(), repo.user())?;
let client = connect(repo.host(), repo.port(), repo.auth_id())?;
display_task_log(client, upid, true).await?;
@@ -122,7 +122,7 @@ async fn task_stop(param: Value) -> Result<Value, Error> {
let repo = extract_repository_from_value(&param)?;
let upid_str = tools::required_string_param(&param, "upid")?;
-let mut client = connect(repo.host(), repo.port(), repo.user())?;
let mut client = connect(repo.host(), repo.port(), repo.auth_id())?;
let path = format!("api2/json/nodes/localhost/tasks/{}", upid_str);
let _ = client.delete(&path, None).await?;

View File

@@ -60,7 +60,7 @@ pub fn acl_commands() -> CommandLineInterface {
"update",
CliCommand::new(&api2::access::acl::API_METHOD_UPDATE_ACL)
.arg_param(&["path", "role"])
-.completion_cb("userid", config::user::complete_user_name)
.completion_cb("userid", config::user::complete_userid)
.completion_cb("path", config::datastore::complete_acl_path)
);

View File

@@ -1,11 +1,14 @@
use anyhow::Error;
use serde_json::Value;
use std::collections::HashMap;
use proxmox::api::{api, cli::*, RpcEnvironment, ApiHandler};
use proxmox_backup::config;
use proxmox_backup::tools;
use proxmox_backup::api2;
use proxmox_backup::api2::types::{ACL_PATH_SCHEMA, Authid, Userid};
#[api(
input: {
@@ -48,6 +51,106 @@ fn list_users(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Er
Ok(Value::Null)
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
userid: {
type: Userid,
}
}
}
)]
/// List tokens associated with user.
fn list_tokens(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let info = &api2::access::user::API_METHOD_LIST_TOKENS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
let options = default_table_format_options()
.column(ColumnConfig::new("tokenid"))
.column(
ColumnConfig::new("enable")
.renderer(tools::format::render_bool_with_default_true)
)
.column(
ColumnConfig::new("expire")
.renderer(tools::format::render_epoch)
)
.column(ColumnConfig::new("comment"));
format_and_print_result_full(&mut data, info.returns, &output_format, &options);
Ok(Value::Null)
}
#[api(
input: {
properties: {
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
},
auth_id: {
type: Authid,
},
path: {
schema: ACL_PATH_SCHEMA,
optional: true,
},
}
}
)]
/// List permissions of user/token.
fn list_permissions(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value, Error> {
let output_format = get_output_format(&param);
let info = &api2::access::API_METHOD_LIST_PERMISSIONS;
let mut data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
if output_format == "text" {
println!("Privileges with (*) have the propagate flag set\n");
let data:HashMap<String, HashMap<String, bool>> = serde_json::from_value(data)?;
let mut paths:Vec<String> = data.keys().cloned().collect();
paths.sort_unstable();
for path in paths {
println!("Path: {}", path);
let priv_map = data.get(&path).unwrap();
let mut privs:Vec<String> = priv_map.keys().cloned().collect();
if privs.is_empty() {
println!("- NoAccess");
} else {
privs.sort_unstable();
for privilege in privs {
if *priv_map.get(&privilege).unwrap() {
println!("- {} (*)", privilege);
} else {
println!("- {}", privilege);
}
}
}
}
} else {
format_and_print_result(&mut data, &output_format);
}
Ok(Value::Null)
}
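For illustration, the text output produced by the branch above might look like this (path and privilege names are examples):

```
Privileges with (*) have the propagate flag set

Path: /datastore/store1
- Datastore.Audit (*)
- Datastore.Backup
```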
pub fn user_commands() -> CommandLineInterface {
let cmd_def = CliCommandMap::new()
@@ -62,13 +165,39 @@ pub fn user_commands() -> CommandLineInterface {
"update",
CliCommand::new(&api2::access::user::API_METHOD_UPDATE_USER)
.arg_param(&["userid"])
-.completion_cb("userid", config::user::complete_user_name)
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"remove",
CliCommand::new(&api2::access::user::API_METHOD_DELETE_USER)
.arg_param(&["userid"])
-.completion_cb("userid", config::user::complete_user_name)
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"list-tokens",
CliCommand::new(&&API_METHOD_LIST_TOKENS)
.arg_param(&["userid"])
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"generate-token",
CliCommand::new(&api2::access::user::API_METHOD_GENERATE_TOKEN)
.arg_param(&["userid", "tokenname"])
.completion_cb("userid", config::user::complete_userid)
)
.insert(
"delete-token",
CliCommand::new(&api2::access::user::API_METHOD_DELETE_TOKEN)
.arg_param(&["userid", "tokenname"])
.completion_cb("userid", config::user::complete_userid)
.completion_cb("tokenname", config::user::complete_token_name)
)
.insert(
"permissions",
CliCommand::new(&&API_METHOD_LIST_PERMISSIONS)
.arg_param(&["auth_id"])
.completion_cb("auth_id", config::user::complete_authid)
.completion_cb("path", config::datastore::complete_acl_path)
);
cmd_def.into()

View File

@@ -16,7 +16,7 @@ pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_RE
#[derive(Debug)]
pub struct BackupRepository {
/// The user name used for Authentication
-user: Option<Userid>,
auth_id: Option<Authid>,
/// The host name or IP address
host: Option<String>,
/// The port
@@ -27,20 +27,29 @@ pub struct BackupRepository {
impl BackupRepository {
-pub fn new(user: Option<Userid>, host: Option<String>, port: Option<u16>, store: String) -> Self {
pub fn new(auth_id: Option<Authid>, host: Option<String>, port: Option<u16>, store: String) -> Self {
let host = match host {
Some(host) if (IP_V6_REGEX.regex_obj)().is_match(&host) => {
Some(format!("[{}]", host))
},
other => other,
};
-Self { user, host, port, store }
Self { auth_id, host, port, store }
}
pub fn auth_id(&self) -> &Authid {
if let Some(ref auth_id) = self.auth_id {
return auth_id;
}
&Authid::root_auth_id()
}
pub fn user(&self) -> &Userid {
-if let Some(ref user) = self.user {
if let Some(auth_id) = &self.auth_id {
-return &user;
return auth_id.user();
}
Userid::root_userid()
}
@@ -65,8 +74,8 @@ impl BackupRepository {
impl fmt::Display for BackupRepository {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
-match (&self.user, &self.host, self.port) {
match (&self.auth_id, &self.host, self.port) {
-(Some(user), _, _) => write!(f, "{}@{}:{}:{}", user, self.host(), self.port(), self.store),
(Some(auth_id), _, _) => write!(f, "{}@{}:{}:{}", auth_id, self.host(), self.port(), self.store),
(None, Some(host), None) => write!(f, "{}:{}", host, self.store),
(None, _, Some(port)) => write!(f, "{}:{}:{}", self.host(), port, self.store),
(None, None, None) => write!(f, "{}", self.store),
@@ -88,7 +97,7 @@ impl std::str::FromStr for BackupRepository {
.ok_or_else(|| format_err!("unable to parse repository url '{}'", url))?;
Ok(Self {
-user: cap.get(1).map(|m| Userid::try_from(m.as_str().to_owned())).transpose()?,
auth_id: cap.get(1).map(|m| Authid::try_from(m.as_str().to_owned())).transpose()?,
host: cap.get(2).map(|m| m.as_str().to_owned()),
port: cap.get(3).map(|m| m.as_str().parse::<u16>()).transpose()?,
store: cap[4].to_owned(),
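A sketch of the effect on repository strings, assuming the BACKUP_REPO_URL pattern accepts a full Authid in the user position:

```rust
fn repo_with_token() -> Result<(), Error> {
    let repo: BackupRepository = "backup@pam!sync@localhost:store1".parse()?;
    assert!(repo.auth_id().is_token());
    assert_eq!(repo.user().to_string(), "backup@pam");
    assert_eq!(repo.store(), "store1");

    // without an explicit auth id, root@pam is assumed
    let repo: BackupRepository = "localhost:store1".parse()?;
    assert_eq!(repo.auth_id(), Authid::root_auth_id());
    Ok(())
}
```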

View File

@@ -21,7 +21,7 @@ use proxmox::{
};
use super::pipe_to_stream::PipeToSendStream;
-use crate::api2::types::Userid;
use crate::api2::types::{Authid, Userid};
use crate::tools::{
self,
BroadcastFuture,
@@ -31,7 +31,7 @@ use crate::tools::{
#[derive(Clone)]
pub struct AuthInfo {
-pub userid: Userid,
pub auth_id: Authid,
pub ticket: String,
pub token: String,
}
@@ -102,7 +102,7 @@ pub struct HttpClient {
server: String,
port: u16,
fingerprint: Arc<Mutex<Option<String>>>,
-first_auth: BroadcastFuture<()>,
first_auth: Option<BroadcastFuture<()>>,
auth: Arc<RwLock<AuthInfo>>,
ticket_abort: futures::future::AbortHandle,
_options: HttpClientOptions,
@@ -251,7 +251,7 @@ impl HttpClient {
pub fn new(
server: &str,
port: u16,
-userid: &Userid,
auth_id: &Authid,
mut options: HttpClientOptions,
) -> Result<Self, Error> {
@@ -311,6 +311,11 @@ impl HttpClient {
let password = if let Some(password) = password {
password
} else {
let userid = if auth_id.is_token() {
bail!("API token secret must be provided!");
} else {
auth_id.user()
};
let mut ticket_info = None;
if use_ticket_cache {
ticket_info = load_ticket_info(options.prefix.as_ref().unwrap(), server, userid);
@@ -323,7 +328,7 @@ impl HttpClient {
};
let auth = Arc::new(RwLock::new(AuthInfo {
-userid: userid.clone(),
auth_id: auth_id.clone(),
ticket: password.clone(),
token: "".to_string(),
}));
@@ -336,14 +341,14 @@ impl HttpClient {
let renewal_future = async move {
loop {
tokio::time::delay_for(Duration::new(60*15, 0)).await; // 15 minutes
-let (userid, ticket) = {
let (auth_id, ticket) = {
let authinfo = auth2.read().unwrap().clone();
-(authinfo.userid, authinfo.ticket)
(authinfo.auth_id, authinfo.ticket)
};
-match Self::credentials(client2.clone(), server2.clone(), port, userid, ticket).await {
match Self::credentials(client2.clone(), server2.clone(), port, auth_id.user().clone(), ticket).await {
Ok(auth) => {
if use_ticket_cache && prefix2.is_some() {
-let _ = store_ticket_info(prefix2.as_ref().unwrap(), &server2, &auth.userid.to_string(), &auth.ticket, &auth.token);
let _ = store_ticket_info(prefix2.as_ref().unwrap(), &server2, &auth.auth_id.to_string(), &auth.ticket, &auth.token);
}
*auth2.write().unwrap() = auth;
},
@@ -361,7 +366,7 @@ impl HttpClient {
client.clone(),
server.to_owned(),
port,
-userid.to_owned(),
auth_id.user().clone(),
password.to_owned(),
).map_ok({
let server = server.to_string();
@@ -370,13 +375,20 @@ impl HttpClient {
move |auth| {
if use_ticket_cache && prefix.is_some() {
-let _ = store_ticket_info(prefix.as_ref().unwrap(), &server, &auth.userid.to_string(), &auth.ticket, &auth.token);
let _ = store_ticket_info(prefix.as_ref().unwrap(), &server, &auth.auth_id.to_string(), &auth.ticket, &auth.token);
}
*authinfo.write().unwrap() = auth;
tokio::spawn(renewal_future);
}
});
let first_auth = if auth_id.is_token() {
// TODO check access here?
None
} else {
Some(BroadcastFuture::new(Box::new(login_future)))
};
Ok(Self {
client,
server: String::from(server),
@@ -384,7 +396,7 @@ impl HttpClient {
fingerprint: verified_fingerprint,
auth,
ticket_abort,
-first_auth: BroadcastFuture::new(Box::new(login_future)),
first_auth,
_options: options,
})
}
@@ -394,7 +406,10 @@ impl HttpClient {
/// Login is done on demand, so this is only required if you need
/// access to authentication data in 'AuthInfo'.
pub async fn login(&self) -> Result<AuthInfo, Error> {
-self.first_auth.listen().await?;
if let Some(future) = &self.first_auth {
future.listen().await?;
}
let authinfo = self.auth.read().unwrap();
Ok(authinfo.clone())
}
@@ -477,10 +492,14 @@ impl HttpClient {
let client = self.client.clone();
let auth = self.login().await?;
if auth.auth_id.is_token() {
-let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
let enc_api_token = format!("{}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
-req.headers_mut().insert("Cookie", HeaderValue::from_str(&enc_ticket).unwrap());
req.headers_mut().insert("Authorization", HeaderValue::from_str(&enc_api_token).unwrap());
-req.headers_mut().insert("CSRFPreventionToken", HeaderValue::from_str(&auth.token).unwrap());
} else {
let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Cookie", HeaderValue::from_str(&enc_ticket).unwrap());
req.headers_mut().insert("CSRFPreventionToken", HeaderValue::from_str(&auth.token).unwrap());
}
Self::api_request(client, req).await
}
@@ -579,11 +598,18 @@ impl HttpClient {
protocol_name: String,
) -> Result<(H2Client, futures::future::AbortHandle), Error> {
-let auth = self.login().await?;
let client = self.client.clone();
let auth = self.login().await?;
if auth.auth_id.is_token() {
let enc_api_token = format!("{}:{}", auth.auth_id, percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Authorization", HeaderValue::from_str(&enc_api_token).unwrap());
} else {
let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
req.headers_mut().insert("Cookie", HeaderValue::from_str(&enc_ticket).unwrap());
req.headers_mut().insert("CSRFPreventionToken", HeaderValue::from_str(&auth.token).unwrap());
}
-let enc_ticket = format!("PBSAuthCookie={}", percent_encode(auth.ticket.as_bytes(), DEFAULT_ENCODE_SET));
-req.headers_mut().insert("Cookie", HeaderValue::from_str(&enc_ticket).unwrap());
req.headers_mut().insert("UPGRADE", HeaderValue::from_str(&protocol_name).unwrap());
let resp = client.request(req).await?;
@@ -636,7 +662,7 @@ impl HttpClient {
let req = Self::request_builder(&server, port, "POST", "/api2/json/access/ticket", Some(data))?;
let cred = Self::api_request(client, req).await?;
let auth = AuthInfo {
-userid: cred["data"]["username"].as_str().unwrap().parse()?,
auth_id: cred["data"]["username"].as_str().unwrap().parse()?,
ticket: cred["data"]["ticket"].as_str().unwrap().to_owned(),
token: cred["data"]["CSRFPreventionToken"].as_str().unwrap().to_owned(),
};
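Putting the pieces together, connecting with an API token then looks roughly like this sketch (token id and secret are hypothetical; the secret travels as the password and no ticket handshake is scheduled):

```rust
fn connect_with_token() -> Result<HttpClient, Error> {
    let auth_id: Authid = "backup@pam!sync".parse()?;
    let options = HttpClientOptions::new()
        .password(Some("hypothetical-secret".to_string()))
        .interactive(false);
    HttpClient::new("localhost", 8007, &auth_id, options)
}
```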

View File

@@ -451,7 +451,7 @@ pub async fn pull_group(
.password(Some(auth_info.ticket.clone()))
.fingerprint(fingerprint.clone());
-let new_client = HttpClient::new(src_repo.host(), src_repo.port(), src_repo.user(), options)?;
let new_client = HttpClient::new(src_repo.host(), src_repo.port(), src_repo.auth_id(), options)?;
let reader = BackupReader::start(
new_client,
@@ -491,7 +491,7 @@ pub async fn pull_store(
src_repo: &BackupRepository,
tgt_store: Arc<DataStore>,
delete: bool,
-userid: Userid,
auth_id: Authid,
) -> Result<(), Error> {
// explicit create shared lock to prevent GC on newly created chunks
@@ -524,11 +524,11 @@ pub async fn pull_store(
for (groups_done, item) in list.into_iter().enumerate() {
let group = BackupGroup::new(&item.backup_type, &item.backup_id);
-let (owner, _lock_guard) = tgt_store.create_locked_backup_group(&group, &userid)?;
let (owner, _lock_guard) = tgt_store.create_locked_backup_group(&group, &auth_id)?;
// permission check
-if userid != owner { // only the owner is allowed to create additional snapshots
if auth_id != owner { // only the owner is allowed to create additional snapshots
worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})",
-item.backup_type, item.backup_id, userid, owner));
item.backup_type, item.backup_id, auth_id, owner));
errors = true; // do not stop here, instead continue
} else if let Err(err) = pull_group(

View File

@@ -21,6 +21,7 @@ pub mod datastore;
pub mod network;
pub mod remote;
pub mod sync;
pub mod token_shadow;
pub mod user;
pub mod verify;


@@ -1,5 +1,5 @@
use std::io::Write;
- use std::collections::{HashMap, HashSet, BTreeMap, BTreeSet};
+ use std::collections::{HashMap, BTreeMap, BTreeSet};
use std::path::{PathBuf, Path};
use std::sync::{Arc, RwLock};
use std::str::FromStr;
@@ -15,7 +15,7 @@ use proxmox::tools::{fs::replace_file, fs::CreateOptions};
use proxmox::constnamedbitmap;
use proxmox::api::{api, schema::*};
- use crate::api2::types::Userid;
+ use crate::api2::types::{Authid,Userid};
// define Privilege bitfield
@@ -231,7 +231,7 @@ pub struct AclTree {
}
pub struct AclTreeNode {
- pub users: HashMap<Userid, HashMap<String, bool>>,
+ pub users: HashMap<Authid, HashMap<String, bool>>,
pub groups: HashMap<String, HashMap<String, bool>>,
pub children: BTreeMap<String, AclTreeNode>,
}
@@ -246,43 +246,43 @@ impl AclTreeNode {
}
}
- pub fn extract_roles(&self, user: &Userid, all: bool) -> HashSet<String> {
-     let user_roles = self.extract_user_roles(user, all);
-     if !user_roles.is_empty() {
+ pub fn extract_roles(&self, auth_id: &Authid, all: bool) -> HashMap<String, bool> {
+     let user_roles = self.extract_user_roles(auth_id, all);
+     if !user_roles.is_empty() || auth_id.is_token() {
        // user privs always override group privs
        return user_roles
    };
-     self.extract_group_roles(user, all)
+     self.extract_group_roles(auth_id.user(), all)
}
- pub fn extract_user_roles(&self, user: &Userid, all: bool) -> HashSet<String> {
-     let mut set = HashSet::new();
-     let roles = match self.users.get(user) {
+ pub fn extract_user_roles(&self, auth_id: &Authid, all: bool) -> HashMap<String, bool> {
+     let mut map = HashMap::new();
+     let roles = match self.users.get(auth_id) {
        Some(m) => m,
-         None => return set,
+         None => return map,
    };
    for (role, propagate) in roles {
        if *propagate || all {
            if role == ROLE_NAME_NO_ACCESS {
-                 // return a set with a single role 'NoAccess'
-                 let mut set = HashSet::new();
-                 set.insert(role.to_string());
-                 return set;
+                 // return a map with a single role 'NoAccess'
+                 let mut map = HashMap::new();
+                 map.insert(role.to_string(), false);
+                 return map;
            }
-             set.insert(role.to_string());
+             map.insert(role.to_string(), *propagate);
        }
    }
-     set
+     map
}
- pub fn extract_group_roles(&self, _user: &Userid, all: bool) -> HashSet<String> {
-     let mut set = HashSet::new();
+ pub fn extract_group_roles(&self, _user: &Userid, all: bool) -> HashMap<String, bool> {
+     let mut map = HashMap::new();
    for (_group, roles) in &self.groups {
        let is_member = false; // fixme: check if user is member of the group
@@ -291,17 +291,17 @@ impl AclTreeNode {
        for (role, propagate) in roles {
            if *propagate || all {
                if role == ROLE_NAME_NO_ACCESS {
-                     // return a set with a single role 'NoAccess'
-                     let mut set = HashSet::new();
-                     set.insert(role.to_string());
-                     return set;
+                     // return a map with a single role 'NoAccess'
+                     let mut map = HashMap::new();
+                     map.insert(role.to_string(), false);
+                     return map;
                }
-                 set.insert(role.to_string());
+                 map.insert(role.to_string(), *propagate);
            }
        }
    }
-     set
+     map
}
pub fn delete_group_role(&mut self, group: &str, role: &str) {
@@ -312,8 +312,8 @@ impl AclTreeNode {
    roles.remove(role);
}
- pub fn delete_user_role(&mut self, userid: &Userid, role: &str) {
-     let roles = match self.users.get_mut(userid) {
+ pub fn delete_user_role(&mut self, auth_id: &Authid, role: &str) {
+     let roles = match self.users.get_mut(auth_id) {
        Some(r) => r,
        None => return,
    };
@@ -331,8 +331,8 @@ impl AclTreeNode {
    }
}
- pub fn insert_user_role(&mut self, user: Userid, role: String, propagate: bool) {
-     let map = self.users.entry(user).or_insert_with(|| HashMap::new());
+ pub fn insert_user_role(&mut self, auth_id: Authid, role: String, propagate: bool) {
+     let map = self.users.entry(auth_id).or_insert_with(|| HashMap::new());
    if role == ROLE_NAME_NO_ACCESS {
        map.clear();
        map.insert(role, propagate);
@@ -346,7 +346,9 @@ impl AclTreeNode {
impl AclTree {
pub fn new() -> Self {
-     Self { root: AclTreeNode::new() }
+     Self {
+         root: AclTreeNode::new(),
+     }
}
pub fn find_node(&mut self, path: &str) -> Option<&mut AclTreeNode> {
@@ -383,13 +385,13 @@ impl AclTree {
    node.delete_group_role(group, role);
}
- pub fn delete_user_role(&mut self, path: &str, userid: &Userid, role: &str) {
+ pub fn delete_user_role(&mut self, path: &str, auth_id: &Authid, role: &str) {
    let path = split_acl_path(path);
    let node = match self.get_node(&path) {
        Some(n) => n,
        None => return,
    };
-     node.delete_user_role(userid, role);
+     node.delete_user_role(auth_id, role);
}
pub fn insert_group_role(&mut self, path: &str, group: &str, role: &str, propagate: bool) {
@@ -398,10 +400,10 @@ impl AclTree {
    node.insert_group_role(group.to_string(), role.to_string(), propagate);
}
- pub fn insert_user_role(&mut self, path: &str, user: &Userid, role: &str, propagate: bool) {
+ pub fn insert_user_role(&mut self, path: &str, auth_id: &Authid, role: &str, propagate: bool) {
    let path = split_acl_path(path);
    let node = self.get_or_insert_node(&path);
-     node.insert_user_role(user.to_owned(), role.to_string(), propagate);
+     node.insert_user_role(auth_id.to_owned(), role.to_string(), propagate);
}
fn write_node_config(
@@ -413,18 +415,18 @@ impl AclTree {
    let mut role_ug_map0 = HashMap::new();
    let mut role_ug_map1 = HashMap::new();
-     for (user, roles) in &node.users {
+     for (auth_id, roles) in &node.users {
        // no need to save, because root is always 'Administrator'
-         if user == "root@pam" { continue; }
+         if !auth_id.is_token() && auth_id.user() == "root@pam" { continue; }
        for (role, propagate) in roles {
            let role = role.as_str();
-             let user = user.to_string();
+             let auth_id = auth_id.to_string();
            if *propagate {
                role_ug_map1.entry(role).or_insert_with(|| BTreeSet::new())
-                     .insert(user);
+                     .insert(auth_id);
            } else {
                role_ug_map0.entry(role).or_insert_with(|| BTreeSet::new())
-                     .insert(user);
+                     .insert(auth_id);
            }
        }
    }
@@ -512,7 +514,8 @@ impl AclTree {
        bail!("expected '0' or '1' for propagate flag.");
    };
-     let path = split_acl_path(items[2]);
+     let path_str = items[2];
+     let path = split_acl_path(path_str);
    let node = self.get_or_insert_node(&path);
    let uglist: Vec<&str> = items[3].split(',').map(|v| v.trim()).collect();
@@ -576,25 +579,26 @@ impl AclTree {
    Ok(tree)
}
- pub fn roles(&self, userid: &Userid, path: &[&str]) -> HashSet<String> {
+ pub fn roles(&self, auth_id: &Authid, path: &[&str]) -> HashMap<String, bool> {
    let mut node = &self.root;
-     let mut role_set = node.extract_roles(userid, path.is_empty());
+     let mut role_map = node.extract_roles(auth_id, path.is_empty());
    for (pos, comp) in path.iter().enumerate() {
        let last_comp = (pos + 1) == path.len();
        node = match node.children.get(*comp) {
            Some(n) => n,
-             None => return role_set, // path not found
+             None => return role_map, // path not found
        };
-         let new_set = node.extract_roles(userid, last_comp);
-         if !new_set.is_empty() {
-             // overwrite previous settings
-             role_set = new_set;
+         let new_map = node.extract_roles(auth_id, last_comp);
+         if !new_map.is_empty() {
+             // overwrite previous mappings
+             role_map = new_map;
        }
    }
-     role_set
+     role_map
}
}
@@ -675,22 +679,22 @@ mod test {
use anyhow::{Error};
use super::AclTree;
- use crate::api2::types::Userid;
+ use crate::api2::types::Authid;
fn check_roles(
    tree: &AclTree,
-     user: &Userid,
+     auth_id: &Authid,
    path: &str,
    expected_roles: &str,
) {
    let path_vec = super::split_acl_path(path);
-     let mut roles = tree.roles(user, &path_vec)
-         .iter().map(|v| v.clone()).collect::<Vec<String>>();
+     let mut roles = tree.roles(auth_id, &path_vec)
+         .iter().map(|(v, _)| v.clone()).collect::<Vec<String>>();
    roles.sort();
    let roles = roles.join(",");
-     assert_eq!(roles, expected_roles, "\nat check_roles for '{}' on '{}'", user, path);
+     assert_eq!(roles, expected_roles, "\nat check_roles for '{}' on '{}'", auth_id, path);
}
#[test]
@@ -721,13 +725,13 @@ acl:1:/storage:user1@pbs:Admin
acl:1:/storage/store1:user1@pbs:DatastoreBackup
acl:1:/storage/store2:user2@pbs:DatastoreBackup
"###)?;
- let user1: Userid = "user1@pbs".parse()?;
+ let user1: Authid = "user1@pbs".parse()?;
check_roles(&tree, &user1, "/", "");
check_roles(&tree, &user1, "/storage", "Admin");
check_roles(&tree, &user1, "/storage/store1", "DatastoreBackup");
check_roles(&tree, &user1, "/storage/store2", "Admin");
- let user2: Userid = "user2@pbs".parse()?;
+ let user2: Authid = "user2@pbs".parse()?;
check_roles(&tree, &user2, "/", "");
check_roles(&tree, &user2, "/storage", "");
check_roles(&tree, &user2, "/storage/store1", "");
@@ -744,7 +748,7 @@ acl:1:/:user1@pbs:Admin
acl:1:/storage:user1@pbs:NoAccess
acl:1:/storage/store1:user1@pbs:DatastoreBackup
"###)?;
- let user1: Userid = "user1@pbs".parse()?;
+ let user1: Authid = "user1@pbs".parse()?;
check_roles(&tree, &user1, "/", "Admin");
check_roles(&tree, &user1, "/storage", "NoAccess");
check_roles(&tree, &user1, "/storage/store1", "DatastoreBackup");
@@ -770,7 +774,7 @@ acl:1:/storage/store1:user1@pbs:DatastoreBackup
let mut tree = AclTree::new();
- let user1: Userid = "user1@pbs".parse()?;
+ let user1: Authid = "user1@pbs".parse()?;
tree.insert_user_role("/", &user1, "Admin", true);
tree.insert_user_role("/", &user1, "Audit", true);
@@ -794,7 +798,7 @@ acl:1:/storage/store1:user1@pbs:DatastoreBackup
let mut tree = AclTree::new();
- let user1: Userid = "user1@pbs".parse()?;
+ let user1: Authid = "user1@pbs".parse()?;
tree.insert_user_role("/storage", &user1, "NoAccess", true);


@@ -9,10 +9,10 @@ use lazy_static::lazy_static;
use proxmox::api::UserInformation;
use super::acl::{AclTree, ROLE_NAMES, ROLE_ADMIN};
- use super::user::User;
- use crate::api2::types::Userid;
+ use super::user::{ApiToken, User};
+ use crate::api2::types::{Authid, Userid};
- /// Cache User/Group/Acl configuration data for fast permission tests
+ /// Cache User/Group/Token/Acl configuration data for fast permission tests
pub struct CachedUserInfo {
user_cfg: Arc<SectionConfigData>,
acl_tree: Arc<AclTree>,
@@ -57,8 +57,10 @@ impl CachedUserInfo {
    Ok(config)
}
- /// Test if a user account is enabled and not expired
- pub fn is_active_user(&self, userid: &Userid) -> bool {
+ /// Test if an authentication id is enabled and not expired
+ pub fn is_active_auth_id(&self, auth_id: &Authid) -> bool {
+     let userid = auth_id.user();
    if let Ok(info) = self.user_cfg.lookup::<User>("user", userid.as_str()) {
        if !info.enable.unwrap_or(true) {
            return false;
@@ -68,24 +70,41 @@ impl CachedUserInfo {
                return false;
            }
        }
-         return true;
    } else {
        return false;
    }
+     if auth_id.is_token() {
+         if let Ok(info) = self.user_cfg.lookup::<ApiToken>("token", &auth_id.to_string()) {
+             if !info.enable.unwrap_or(true) {
+                 return false;
+             }
+             if let Some(expire) = info.expire {
+                 if expire > 0 && expire <= now() {
+                     return false;
+                 }
+             }
+             return true;
+         } else {
+             return false;
+         }
+     }
+     return true;
}
pub fn check_privs(
    &self,
-     userid: &Userid,
+     auth_id: &Authid,
    path: &[&str],
    required_privs: u64,
    partial: bool,
) -> Result<(), Error> {
-     let user_privs = self.lookup_privs(&userid, path);
+     let privs = self.lookup_privs(&auth_id, path);
    let allowed = if partial {
-         (user_privs & required_privs) != 0
+         (privs & required_privs) != 0
    } else {
-         (user_privs & required_privs) == required_privs
+         (privs & required_privs) == required_privs
    };
    if !allowed {
        // printing the path doesn't leak any information as long as we
@@ -95,29 +114,48 @@ impl CachedUserInfo {
    Ok(())
}
- pub fn is_superuser(&self, userid: &Userid) -> bool {
-     userid == "root@pam"
+ pub fn is_superuser(&self, auth_id: &Authid) -> bool {
+     !auth_id.is_token() && auth_id.user() == "root@pam"
}
pub fn is_group_member(&self, _userid: &Userid, _group: &str) -> bool {
    false
}
- pub fn lookup_privs(&self, userid: &Userid, path: &[&str]) -> u64 {
-     if self.is_superuser(userid) {
-         return ROLE_ADMIN;
+ pub fn lookup_privs(&self, auth_id: &Authid, path: &[&str]) -> u64 {
+     let (privs, _) = self.lookup_privs_details(auth_id, path);
+     privs
+ }
+ pub fn lookup_privs_details(&self, auth_id: &Authid, path: &[&str]) -> (u64, u64) {
+     if self.is_superuser(auth_id) {
+         return (ROLE_ADMIN, ROLE_ADMIN);
    }
-     let roles = self.acl_tree.roles(userid, path);
+     let roles = self.acl_tree.roles(auth_id, path);
    let mut privs: u64 = 0;
-     for role in roles {
+     let mut propagated_privs: u64 = 0;
+     for (role, propagate) in roles {
        if let Some((role_privs, _)) = ROLE_NAMES.get(role.as_str()) {
+             if propagate {
+                 propagated_privs |= role_privs;
+             }
            privs |= role_privs;
        }
    }
-     privs
+     if auth_id.is_token() {
+         // limit privs to that of owning user
+         let user_auth_id = Authid::from(auth_id.user().clone());
+         let (owner_privs, owner_propagated_privs) = self.lookup_privs_details(&user_auth_id, path);
+         privs &= owner_privs;
+         propagated_privs &= owner_propagated_privs;
+     }
+     (privs, propagated_privs)
}
}
impl UserInformation for CachedUserInfo {
@@ -129,9 +167,9 @@ impl UserInformation for CachedUserInfo {
    false
}
- fn lookup_privs(&self, userid: &str, path: &[&str]) -> u64 {
-     match userid.parse::<Userid>() {
-         Ok(userid) => Self::lookup_privs(self, &userid, path),
+ fn lookup_privs(&self, auth_id: &str, path: &[&str]) -> u64 {
+     match auth_id.parse::<Authid>() {
+         Ok(auth_id) => Self::lookup_privs(self, &auth_id, path),
        Err(_) => 0,
    }
}
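Since lookup_privs_details intersects a token's privileges with those of its owning user, a token can never exceed the user's access. A quick sketch of the bit arithmetic (illustrative values only, not taken from the code above):

let user_privs: u64 = 0b1111; // privileges of the owning user
let token_privs: u64 = 0b1010; // privileges granted to the token directly
assert_eq!(token_privs & user_privs, 0b1010); // unchanged while contained
let reduced_user_privs: u64 = 0b0011; // the user later loses privileges...
assert_eq!(token_privs & reduced_user_privs, 0b0010); // ...and the token shrinks with them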


@@ -45,7 +45,7 @@ pub const REMOTE_PASSWORD_SCHEMA: Schema = StringSchema::new("Password or auth t
    type: u16,
},
userid: {
-     type: Userid,
+     type: Authid,
},
password: {
    schema: REMOTE_PASSWORD_SCHEMA,
@@ -65,7 +65,7 @@ pub struct Remote {
pub host: String,
#[serde(skip_serializing_if="Option::is_none")]
pub port: Option<u16>,
- pub userid: Userid,
+ pub userid: Authid,
#[serde(skip_serializing_if="String::is_empty")]
#[serde(with = "proxmox::tools::serde::string_as_base64")]
pub password: String,


@@ -51,7 +51,7 @@ lazy_static! {
}
)]
#[serde(rename_all="kebab-case")]
- #[derive(Serialize,Deserialize)]
+ #[derive(Serialize,Deserialize,Clone)]
/// Sync Job
pub struct SyncJobConfig {
pub id: String,


@@ -0,0 +1,91 @@
use std::collections::HashMap;
use std::time::Duration;
use anyhow::{bail, format_err, Error};
use serde::{Serialize, Deserialize};
use serde_json::{from_value, Value};
use proxmox::tools::fs::{open_file_locked, CreateOptions};
use crate::api2::types::Authid;
use crate::auth;
const LOCK_FILE: &str = configdir!("/token.shadow.lock");
const CONF_FILE: &str = configdir!("/token.shadow");
const LOCK_TIMEOUT: Duration = Duration::from_secs(5);
#[serde(rename_all="kebab-case")]
#[derive(Serialize, Deserialize)]
/// ApiToken id / secret pair
pub struct ApiTokenSecret {
pub tokenid: Authid,
pub secret: String,
}
fn read_file() -> Result<HashMap<Authid, String>, Error> {
let json = proxmox::tools::fs::file_get_json(CONF_FILE, Some(Value::Null))?;
if json == Value::Null {
Ok(HashMap::new())
} else {
// swallow serde error which might contain sensitive data
from_value(json).map_err(|_err| format_err!("unable to parse '{}'", CONF_FILE))
}
}
fn write_file(data: HashMap<Authid, String>) -> Result<(), Error> {
let backup_user = crate::backup::backup_user()?;
let options = CreateOptions::new()
.perm(nix::sys::stat::Mode::from_bits_truncate(0o0640))
.owner(backup_user.uid)
.group(backup_user.gid);
let json = serde_json::to_vec(&data)?;
proxmox::tools::fs::replace_file(CONF_FILE, &json, options)
}
/// Verifies that an entry for given tokenid / API token secret exists
pub fn verify_secret(tokenid: &Authid, secret: &str) -> Result<(), Error> {
if !tokenid.is_token() {
bail!("not an API token ID");
}
let data = read_file()?;
match data.get(tokenid) {
Some(hashed_secret) => {
auth::verify_crypt_pw(secret, &hashed_secret)
},
None => bail!("invalid API token"),
}
}
/// Adds a new entry for the given tokenid / API token secret. The secret is stored as salted hash.
pub fn set_secret(tokenid: &Authid, secret: &str) -> Result<(), Error> {
if !tokenid.is_token() {
bail!("not an API token ID");
}
let _guard = open_file_locked(LOCK_FILE, LOCK_TIMEOUT, true)?;
let mut data = read_file()?;
let hashed_secret = auth::encrypt_pw(secret)?;
data.insert(tokenid.clone(), hashed_secret);
write_file(data)?;
Ok(())
}
/// Deletes the entry for the given tokenid.
pub fn delete_secret(tokenid: &Authid) -> Result<(), Error> {
if !tokenid.is_token() {
bail!("not an API token ID");
}
let _guard = open_file_locked(LOCK_FILE, LOCK_TIMEOUT, true)?;
let mut data = read_file()?;
data.remove(tokenid);
write_file(data)?;
Ok(())
}


@@ -52,6 +52,36 @@ pub const EMAIL_SCHEMA: Schema = StringSchema::new("E-Mail Address.")
.max_length(64)
.schema();
#[api(
properties: {
tokenid: {
schema: PROXMOX_TOKEN_ID_SCHEMA,
},
comment: {
optional: true,
schema: SINGLE_LINE_COMMENT_SCHEMA,
},
enable: {
optional: true,
schema: ENABLE_USER_SCHEMA,
},
expire: {
optional: true,
schema: EXPIRE_USER_SCHEMA,
},
}
)]
#[derive(Serialize,Deserialize)]
/// ApiToken properties.
pub struct ApiToken {
pub tokenid: Authid,
#[serde(skip_serializing_if="Option::is_none")]
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub enable: Option<bool>,
#[serde(skip_serializing_if="Option::is_none")]
pub expire: Option<i64>,
}
#[api(
properties: {
@@ -103,15 +133,21 @@ pub struct User {
}
fn init() -> SectionConfig {
- let obj_schema = match User::API_SCHEMA {
-     Schema::Object(ref obj_schema) => obj_schema,
+ let mut config = SectionConfig::new(&Authid::API_SCHEMA);
+ let user_schema = match User::API_SCHEMA {
+     Schema::Object(ref user_schema) => user_schema,
    _ => unreachable!(),
};
- let plugin = SectionConfigPlugin::new("user".to_string(), Some("userid".to_string()), obj_schema);
- let mut config = SectionConfig::new(&Userid::API_SCHEMA);
- config.register_plugin(plugin);
+ let user_plugin = SectionConfigPlugin::new("user".to_string(), Some("userid".to_string()), user_schema);
+ config.register_plugin(user_plugin);
+ let token_schema = match ApiToken::API_SCHEMA {
+     Schema::Object(ref token_schema) => token_schema,
+     _ => unreachable!(),
+ };
+ let token_plugin = SectionConfigPlugin::new("token".to_string(), Some("tokenid".to_string()), token_schema);
+ config.register_plugin(token_plugin);
config
}
@@ -206,9 +242,57 @@ pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
}
// shell completion helper
- pub fn complete_user_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+ pub fn complete_userid(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
-     Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
+     Ok((data, _digest)) => {
+         data.sections.iter()
+             .filter_map(|(id, (section_type, _))| {
+                 if section_type == "user" {
+                     Some(id.to_string())
+                 } else {
+                     None
+                 }
+             }).collect()
+     },
    Err(_) => return vec![],
}
}
// shell completion helper
pub fn complete_authid(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
match config() {
Ok((data, _digest)) => data.sections.iter().map(|(id, _)| id.to_string()).collect(),
Err(_) => vec![],
}
}
// shell completion helper
pub fn complete_token_name(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
let data = match config() {
Ok((data, _digest)) => data,
Err(_) => return Vec::new(),
};
match param.get("userid") {
Some(userid) => {
let user = data.lookup::<User>("user", userid);
let tokens = data.convert_to_typed_array("token");
match (user, tokens) {
(Ok(_), Ok(tokens)) => {
tokens
.into_iter()
.filter_map(|token: ApiToken| {
let tokenid = token.tokenid;
if tokenid.is_token() && tokenid.user() == userid {
Some(tokenid.tokenname().unwrap().as_str().to_string())
} else {
None
}
}).collect()
},
_ => vec![],
}
},
None => vec![],
}
}
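With both plugins registered, user.cfg can mix 'user' and 'token' sections side by side. A hypothetical sketch of the resulting layout (the exact on-disk format is whatever SectionConfig emits; shown only to illustrate the two section types):

user: user1@pbs
	enable true

token: user1@pbs!mytoken
	enable true
	expire 0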


@@ -9,7 +9,7 @@ use std::ffi::CStr;
pub trait BackupCatalogWriter {
fn start_directory(&mut self, name: &CStr) -> Result<(), Error>;
fn end_directory(&mut self) -> Result<(), Error>;
- fn add_file(&mut self, name: &CStr, size: u64, mtime: u64) -> Result<(), Error>;
+ fn add_file(&mut self, name: &CStr, size: u64, mtime: i64) -> Result<(), Error>;
fn add_symlink(&mut self, name: &CStr) -> Result<(), Error>;
fn add_hardlink(&mut self, name: &CStr) -> Result<(), Error>;
fn add_block_device(&mut self, name: &CStr) -> Result<(), Error>;


@@ -535,7 +535,7 @@ impl<'a, 'b> Archiver<'a, 'b> {
let file_size = stat.st_size as u64;
if let Some(ref mut catalog) = self.catalog {
-     catalog.add_file(c_file_name, file_size, stat.st_mtime as u64)?;
+     catalog.add_file(c_file_name, file_size, stat.st_mtime)?;
}
let offset: LinkOffset =


@@ -7,6 +7,7 @@ use proxmox::tools::email::sendmail;
use crate::{
config::verify::VerificationJobConfig,
+ config::sync::SyncJobConfig,
api2::types::{
    Userid,
    GarbageCollectionStatus,
@@ -16,19 +17,22 @@ use crate::{
const GC_OK_TEMPLATE: &str = r###"
Datastore: {{datastore}}
Task ID: {{status.upid}}
Index file count: {{status.index-file-count}}
Removed garbage: {{human-bytes status.removed-bytes}}
Removed chunks: {{status.removed-chunks}}
Remove bad files: {{status.removed-bad}}
+ Bad files: {{status.still-bad}}
Pending removals: {{human-bytes status.pending-bytes}} (in {{status.pending-chunks}} chunks)
Original Data usage: {{human-bytes status.index-data-bytes}}
On Disk usage: {{human-bytes status.disk-bytes}} ({{relative-percentage status.disk-bytes status.index-data-bytes}})
On Disk chunks: {{status.disk-chunks}}
+ Deduplication Factor: {{deduplication-factor}}
Garbage collection successful.
@@ -65,6 +69,28 @@ Verification failed on these snapshots:
"###;
const SYNC_OK_TEMPLATE: &str = r###"
Job ID: {{job.id}}
Datastore: {{job.store}}
Remote: {{job.remote}}
Remote Store: {{job.remote-store}}
Synchronization successful.
"###;
const SYNC_ERR_TEMPLATE: &str = r###"
Job ID: {{job.id}}
Datastore: {{job.store}}
Remote: {{job.remote}}
Remote Store: {{job.remote-store}}
Synchronization failed: {{error}}
"###;
lazy_static::lazy_static!{
static ref HANDLEBARS: Handlebars<'static> = {
@@ -81,6 +107,9 @@ lazy_static::lazy_static!{
hb.register_template_string("verify_ok_template", VERIFY_OK_TEMPLATE).unwrap();
hb.register_template_string("verify_err_template", VERIFY_ERR_TEMPLATE).unwrap();
+ hb.register_template_string("sync_ok_template", SYNC_OK_TEMPLATE).unwrap();
+ hb.register_template_string("sync_err_template", SYNC_ERR_TEMPLATE).unwrap();
hb
};
}
@@ -93,7 +122,7 @@ fn send_job_status_mail(
// Note: OX has serious problems displaying text mails,
// so we include html as well
- let html = format!("<html><body><pre>\n{}\n<pre>", text);
+ let html = format!("<html><body><pre>\n{}\n<pre>", handlebars::html_escape(text));
let nodename = proxmox::tools::nodename();
@@ -120,10 +149,18 @@ pub fn send_gc_status(
let text = match result {
    Ok(()) => {
+         let deduplication_factor = if status.disk_bytes > 0 {
+             (status.index_data_bytes as f64)/(status.disk_bytes as f64)
+         } else {
+             1.0
+         };
        let data = json!({
            "status": status,
            "datastore": datastore,
+             "deduplication-factor": format!("{:.2}", deduplication_factor),
        });
        HANDLEBARS.render("gc_ok_template", &data)?
    }
    Err(err) => {
@@ -189,6 +226,41 @@ pub fn send_verify_status(
    Ok(())
}
pub fn send_sync_status(
email: &str,
job: &SyncJobConfig,
result: &Result<(), Error>,
) -> Result<(), Error> {
let text = match result {
Ok(()) => {
let data = json!({ "job": job });
HANDLEBARS.render("sync_ok_template", &data)?
}
Err(err) => {
let data = json!({ "job": job, "error": err.to_string() });
HANDLEBARS.render("sync_err_template", &data)?
}
};
let subject = match result {
Ok(()) => format!(
"Sync remote '{}' datastore '{}' successful",
job.remote,
job.remote_store,
),
Err(_) => format!(
"Sync remote '{}' datastore '{}' failed",
job.remote,
job.remote_store,
),
};
send_job_status_mail(email, &subject, &text)?;
Ok(())
}
/// Lookup users email address
///
/// For "backup@pam", this returns the address from "root@pam".


@@ -6,7 +6,7 @@ use proxmox::api::{RpcEnvironment, RpcEnvironmentType};
pub struct RestEnvironment {
env_type: RpcEnvironmentType,
result_attributes: Value,
- user: Option<String>,
+ auth_id: Option<String>,
client_ip: Option<std::net::SocketAddr>,
}
@@ -14,7 +14,7 @@ impl RestEnvironment {
pub fn new(env_type: RpcEnvironmentType) -> Self {
    Self {
        result_attributes: json!({}),
-         user: None,
+         auth_id: None,
        client_ip: None,
        env_type,
    }
@@ -35,12 +35,12 @@ impl RpcEnvironment for RestEnvironment {
    self.env_type
}
- fn set_user(&mut self, user: Option<String>) {
-     self.user = user;
+ fn set_auth_id(&mut self, auth_id: Option<String>) {
+     self.auth_id = auth_id;
}
- fn get_user(&self) -> Option<String> {
-     self.user.clone()
+ fn get_auth_id(&self) -> Option<String> {
+     self.auth_id.clone()
}
fn set_client_ip(&mut self, client_ip: Option<std::net::SocketAddr>) {


@@ -42,7 +42,7 @@ use super::formatter::*;
use super::ApiConfig;
use crate::auth_helpers::*;
- use crate::api2::types::Userid;
+ use crate::api2::types::{Authid, Userid};
use crate::tools;
use crate::tools::FileLogger;
use crate::tools::ticket::Ticket;
@@ -138,9 +138,9 @@ fn log_response(
    log::error!("{} {}: {} {}: [client {}] {}", method.as_str(), path, status.as_str(), reason, peer, message);
}
if let Some(logfile) = logfile {
-     let user = match resp.extensions().get::<Userid>() {
-         Some(userid) => userid.as_str(),
-         None => "-",
+     let auth_id = match resp.extensions().get::<Authid>() {
+         Some(auth_id) => auth_id.to_string(),
+         None => "-".to_string(),
    };
    let now = proxmox::tools::time::epoch_i64();
    // time format which apache/nginx use (by default), copied from pve-http-server
@@ -153,7 +153,7 @@ fn log_response(
        .log(format!(
            "{} - {} [{}] \"{} {}\" {} {} {}",
            peer.ip(),
-             user,
+             auth_id,
            datetime,
            method.as_str(),
            path,
@@ -441,7 +441,7 @@ fn get_index(
    .unwrap();
if let Some(userid) = userid {
-     resp.extensions_mut().insert(userid);
+     resp.extensions_mut().insert(Authid::from((userid, None)));
}
resp
@@ -531,50 +531,89 @@ async fn handle_static_file_download(filename: PathBuf) -> Result<Response<Body
    }
}
- fn extract_auth_data(headers: &http::HeaderMap) -> (Option<String>, Option<String>, Option<String>) {
-     let mut ticket = None;
-     let mut language = None;
-     if let Some(raw_cookie) = headers.get("COOKIE") {
-         if let Ok(cookie) = raw_cookie.to_str() {
-             ticket = tools::extract_cookie(cookie, "PBSAuthCookie");
-             language = tools::extract_cookie(cookie, "PBSLangCookie");
-         }
-     }
-     let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
-         Some(Ok(v)) => Some(v.to_owned()),
-         _ => None,
-     };
-     (ticket, csrf_token, language)
+ fn extract_lang_header(headers: &http::HeaderMap) -> Option<String> {
+     if let Some(raw_cookie) = headers.get("COOKIE") {
+         if let Ok(cookie) = raw_cookie.to_str() {
+             return tools::extract_cookie(cookie, "PBSLangCookie");
+         }
+     }
+     None
+ }
+ struct UserAuthData{
+     ticket: String,
+     csrf_token: Option<String>,
+ }
+ enum AuthData {
+     User(UserAuthData),
+     ApiToken(String),
+ }
+ fn extract_auth_data(headers: &http::HeaderMap) -> Option<AuthData> {
+     if let Some(raw_cookie) = headers.get("COOKIE") {
+         if let Ok(cookie) = raw_cookie.to_str() {
+             if let Some(ticket) = tools::extract_cookie(cookie, "PBSAuthCookie") {
+                 let csrf_token = match headers.get("CSRFPreventionToken").map(|v| v.to_str()) {
+                     Some(Ok(v)) => Some(v.to_owned()),
+                     _ => None,
+                 };
+                 return Some(AuthData::User(UserAuthData {
+                     ticket,
+                     csrf_token,
+                 }));
+             }
+         }
+     }
+     match headers.get("AUTHORIZATION").map(|v| v.to_str()) {
+         Some(Ok(v)) => Some(AuthData::ApiToken(v.to_owned())),
+         _ => None,
+     }
}
fn check_auth(
method: &hyper::Method,
- ticket: &Option<String>,
- csrf_token: &Option<String>,
+ auth_data: &AuthData,
user_info: &CachedUserInfo,
- ) -> Result<Userid, Error> {
-     let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
-     let ticket = ticket.as_ref().map(String::as_str);
-     let userid: Userid = Ticket::parse(&ticket.ok_or_else(|| format_err!("missing ticket"))?)?
-         .verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?;
-     if !user_info.is_active_user(&userid) {
-         bail!("user account disabled or expired.");
-     }
-     if method != hyper::Method::GET {
-         if let Some(csrf_token) = csrf_token {
-             verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
-         } else {
-             bail!("missing CSRF prevention token");
-         }
-     }
-     Ok(userid)
+ ) -> Result<Authid, Error> {
+     match auth_data {
+         AuthData::User(user_auth_data) => {
+             let ticket = user_auth_data.ticket.clone();
+             let ticket_lifetime = tools::ticket::TICKET_LIFETIME;
+             let userid: Userid = Ticket::parse(&ticket)?
+                 .verify_with_time_frame(public_auth_key(), "PBS", None, -300..ticket_lifetime)?;
+             let auth_id = Authid::from(userid.clone());
+             if !user_info.is_active_auth_id(&auth_id) {
+                 bail!("user account disabled or expired.");
+             }
+             if method != hyper::Method::GET {
+                 if let Some(csrf_token) = &user_auth_data.csrf_token {
+                     verify_csrf_prevention_token(csrf_secret(), &userid, &csrf_token, -300, ticket_lifetime)?;
+                 } else {
+                     bail!("missing CSRF prevention token");
+                 }
+             }
+             Ok(auth_id)
+         },
+         AuthData::ApiToken(api_token) => {
+             let mut parts = api_token.splitn(2, ':');
+             let tokenid = parts.next()
+                 .ok_or_else(|| format_err!("failed to split API token header"))?;
+             let tokenid: Authid = tokenid.parse()?;
+             let tokensecret = parts.next()
+                 .ok_or_else(|| format_err!("failed to split API token header"))?;
+             crate::config::token_shadow::verify_secret(&tokenid, &tokensecret)?;
+             Ok(tokenid)
+         }
+     }
}
async fn handle_request(
@@ -630,9 +669,12 @@ async fn handle_request(
}
if auth_required {
-     let (ticket, csrf_token, _) = extract_auth_data(&parts.headers);
-     match check_auth(&method, &ticket, &csrf_token, &user_info) {
-         Ok(userid) => rpcenv.set_user(Some(userid.to_string())),
+     let auth_result = match extract_auth_data(&parts.headers) {
+         Some(auth_data) => check_auth(&method, &auth_data, &user_info),
+         None => Err(format_err!("no authentication credentials provided.")),
+     };
+     match auth_result {
+         Ok(authid) => rpcenv.set_auth_id(Some(authid.to_string())),
        Err(err) => {
            // always delay unauthorized calls by 3 seconds (from start of request)
            let err = http_err!(UNAUTHORIZED, "authentication failed - {}", err);
@@ -648,8 +690,8 @@ async fn handle_request(
    return Ok((formatter.format_error)(err));
}
Some(api_method) => {
-     let user = rpcenv.get_user();
-     if !check_api_permission(api_method.access.permission, user.as_deref(), &uri_param, user_info.as_ref()) {
+     let auth_id = rpcenv.get_auth_id();
+     if !check_api_permission(api_method.access.permission, auth_id.as_deref(), &uri_param, user_info.as_ref()) {
        let err = http_err!(FORBIDDEN, "permission check failed");
        tokio::time::delay_until(Instant::from_std(access_forbidden_time)).await;
        return Ok((formatter.format_error)(err));
@@ -666,9 +708,9 @@ async fn handle_request(
        Err(err) => (formatter.format_error)(err),
    };
-     if let Some(user) = user {
-         let userid: Userid = user.parse()?;
-         response.extensions_mut().insert(userid);
+     if let Some(auth_id) = auth_id {
+         let auth_id: Authid = auth_id.parse()?;
+         response.extensions_mut().insert(auth_id);
    }
    return Ok(response);
@@ -684,13 +726,14 @@ async fn handle_request(
}
if comp_len == 0 {
-     let (ticket, csrf_token, language) = extract_auth_data(&parts.headers);
-     if ticket != None {
-         match check_auth(&method, &ticket, &csrf_token, &user_info) {
-             Ok(userid) => {
-                 let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), &userid);
-                 return Ok(get_index(Some(userid), Some(new_csrf_token), language, &api, parts));
-             }
+     let language = extract_lang_header(&parts.headers);
+     if let Some(auth_data) = extract_auth_data(&parts.headers) {
+         match check_auth(&method, &auth_data, &user_info) {
+             Ok(auth_id) if !auth_id.is_token() => {
+                 let userid = auth_id.user();
+                 let new_csrf_token = assemble_csrf_prevention_token(csrf_secret(), userid);
+                 return Ok(get_index(Some(userid.clone()), Some(new_csrf_token), language, &api, parts));
+             },
            _ => {
                tokio::time::delay_until(Instant::from_std(delay_unauth_time)).await;
                return Ok(get_index(None, None, language, &api, parts));


@@ -6,7 +6,7 @@ use proxmox::api::schema::{ApiStringFormat, Schema, StringSchema};
use proxmox::const_regex;
use proxmox::sys::linux::procfs;
- use crate::api2::types::Userid;
+ use crate::api2::types::Authid;
/// Unique Process/Task Identifier
///
@@ -34,8 +34,8 @@ pub struct UPID {
pub worker_type: String,
/// Worker ID (arbitrary ASCII string)
pub worker_id: Option<String>,
- /// The user who started the task
- pub userid: Userid,
+ /// The authenticated entity who started the task
+ pub auth_id: Authid,
/// The node name.
pub node: String,
}
@@ -47,7 +47,7 @@ const_regex! {
pub PROXMOX_UPID_REGEX = concat!(
    r"^UPID:(?P<node>[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?):(?P<pid>[0-9A-Fa-f]{8}):",
    r"(?P<pstart>[0-9A-Fa-f]{8,9}):(?P<task_id>[0-9A-Fa-f]{8,16}):(?P<starttime>[0-9A-Fa-f]{8}):",
-     r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<userid>[^:\s]+):$"
+     r"(?P<wtype>[^:\s]+):(?P<wid>[^:\s]*):(?P<authid>[^:\s]+):$"
);
}
@@ -65,7 +65,7 @@ impl UPID {
pub fn new(
    worker_type: &str,
    worker_id: Option<String>,
-     userid: Userid,
+     auth_id: Authid,
) -> Result<Self, Error> {
let pid = unsafe { libc::getpid() };
@@ -87,7 +87,7 @@ impl UPID {
    task_id,
    worker_type: worker_type.to_owned(),
    worker_id,
-     userid,
+     auth_id,
    node: proxmox::tools::nodename().to_owned(),
})
}
@@ -122,7 +122,7 @@ impl std::str::FromStr for UPID {
    task_id: usize::from_str_radix(&cap["task_id"], 16).unwrap(),
    worker_type: cap["wtype"].to_string(),
    worker_id,
-     userid: cap["userid"].parse()?,
+     auth_id: cap["authid"].parse()?,
    node: cap["node"].to_string(),
})
} else {
@@ -146,6 +146,6 @@ impl std::fmt::Display for UPID {
// more than 8 characters for pstart
write!(f, "UPID:{}:{:08X}:{:08X}:{:08X}:{:08X}:{}:{}:{}:",
-     self.node, self.pid, self.pstart, self.task_id, self.starttime, self.worker_type, wid, self.userid)
+     self.node, self.pid, self.pstart, self.task_id, self.starttime, self.worker_type, wid, self.auth_id)
}
}
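Putting the Display implementation and the regex together, a task started by an API token would serialize to something like the following (illustrative values constructed to match the pattern, not a real task log entry):

UPID:node1:00001A2B:000F4240:00000001:5F9B2C3A:garbage_collection:store1:user1@pbs!mytoken: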


@@ -7,7 +7,7 @@ use crate::{
config::verify::VerificationJobConfig,
backup::{
    DataStore,
-     BackupInfo,
+     BackupManifest,
    verify_all_backups,
},
task_log,
@@ -17,25 +17,19 @@ use crate::{
pub fn do_verification_job(
mut job: Job,
verification_job: VerificationJobConfig,
- userid: &Userid,
+ auth_id: &Authid,
schedule: Option<String>,
) -> Result<String, Error> {
let datastore = DataStore::lookup_datastore(&verification_job.store)?;
- let datastore2 = datastore.clone();
let outdated_after = verification_job.outdated_after.clone();
let ignore_verified_snapshots = verification_job.ignore_verified.unwrap_or(true);
- let filter = move |backup_info: &BackupInfo| {
+ let filter = move |manifest: &BackupManifest| {
    if !ignore_verified_snapshots {
        return true;
    }
-     let manifest = match datastore2.load_manifest(&backup_info.backup_dir) {
-         Ok((manifest, _)) => manifest,
-         Err(_) => return true, // include, so task picks this up as error
-     };
    let raw_verify_state = manifest.unprotected["verify_state"].clone();
    match serde_json::from_value::<SnapshotVerifyState>(raw_verify_state) {
@@ -54,14 +48,14 @@ pub fn do_verification_job(
    }
};
- let email = crate::server::lookup_user_email(userid);
+ let email = crate::server::lookup_user_email(auth_id.user());
let job_id = job.jobname().to_string();
let worker_type = job.jobtype().to_string();
let upid_str = WorkerTask::new_thread(
    &worker_type,
    Some(job.jobname().to_string()),
-     userid.clone(),
+     auth_id.clone(),
    false,
    move |worker| {
        job.start(&worker.upid().to_string())?;
@@ -71,7 +65,7 @@ pub fn do_verification_job(
            task_log!(worker,"task triggered by schedule '{}'", event_str);
        }
-         let result = verify_all_backups(datastore, worker.clone(), worker.upid(), &filter);
+         let result = verify_all_backups(datastore, worker.clone(), worker.upid(), Some(&filter));
        let job_result = match result {
            Ok(ref errors) if errors.is_empty() => Ok(()),
            Ok(_) => Err(format_err!("verification failed - please check the log for details")),


@@ -21,7 +21,7 @@ use super::UPID;
use crate::tools::logrotate::{LogRotate, LogRotateFiles};
use crate::tools::{FileLogger, FileLogOptions};
- use crate::api2::types::Userid;
+ use crate::api2::types::Authid;
macro_rules! PROXMOX_BACKUP_VAR_RUN_DIR_M { () => ("/run/proxmox-backup") }
macro_rules! PROXMOX_BACKUP_LOG_DIR_M { () => ("/var/log/proxmox-backup") }
@@ -35,8 +35,6 @@ pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_
pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/index");
pub const PROXMOX_BACKUP_ARCHIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/archive");
- const MAX_INDEX_TASKS: usize = 1000;
lazy_static! {
static ref WORKER_TASK_LIST: Mutex<HashMap<usize, Arc<WorkerTask>>> = Mutex::new(HashMap::new());
@@ -363,7 +361,10 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
let lock = lock_task_list_files(true)?;
+ // TODO remove with 1.x
let mut finish_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN)?;
+ let had_index_file = !finish_list.is_empty();
let mut active_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?
    .into_iter()
    .filter_map(|info| {
@@ -374,7 +375,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
        }
        if !worker_is_active_local(&info.upid) {
-             println!("Detected stopped UPID {}", &info.upid_str);
+             println!("Detected stopped task '{}'", &info.upid_str);
            let now = proxmox::tools::time::epoch_i64();
            let status = upid_read_status(&info.upid)
                .unwrap_or_else(|_| TaskState::Unknown { endtime: now });
@@ -412,33 +413,10 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
    }
});
- if !finish_list.is_empty() {
-     let start = if finish_list.len() > MAX_INDEX_TASKS {
-         finish_list.len() - MAX_INDEX_TASKS
-     } else {
-         0
-     };
-     let end = (start+MAX_INDEX_TASKS).min(finish_list.len());
-     let index_raw = if end > start {
-         render_task_list(&finish_list[start..end])
-     } else {
-         "".to_string()
-     };
-     replace_file(
-         PROXMOX_BACKUP_INDEX_TASK_FN,
-         index_raw.as_bytes(),
-         CreateOptions::new()
-             .owner(backup_user.uid)
-             .group(backup_user.gid),
-     )?;
- if !finish_list.is_empty() && start > 0 {
+ if !finish_list.is_empty() {
    match std::fs::OpenOptions::new().append(true).create(true).open(PROXMOX_BACKUP_ARCHIVE_TASK_FN) {
        Ok(mut writer) => {
-             for info in &finish_list[0..start] {
+             for info in &finish_list {
                writer.write_all(render_task_line(&info).as_bytes())?;
            }
        },
@@ -448,6 +426,12 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
    nix::unistd::chown(PROXMOX_BACKUP_ARCHIVE_TASK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
}
+ // TODO Remove with 1.x
+ // for compatibility, if we had an INDEX file, we do not need it anymore
+ if had_index_file {
+     let _ = nix::unistd::unlink(PROXMOX_BACKUP_INDEX_TASK_FN);
+ }
drop(lock);
Ok(())
@@ -511,16 +495,9 @@
read_task_file(file)
}
- enum TaskFile {
-     Active,
-     Index,
-     Archive,
-     End,
- }
pub struct TaskListInfoIterator {
list: VecDeque<TaskListInfo>,
- file: TaskFile,
+ end: bool,
archive: Option<LogRotateFiles>,
lock: Option<File>,
}
@@ -535,7 +512,10 @@ impl TaskListInfoIterator {
    .iter()
    .any(|info| info.state.is_some() || !worker_is_active_local(&info.upid));
- if needs_update {
+ // TODO remove with 1.x
+ let index_exists = std::path::Path::new(PROXMOX_BACKUP_INDEX_TASK_FN).is_file();
+ if needs_update || index_exists {
    drop(lock);
    update_active_workers(None)?;
    let lock = lock_task_list_files(false)?;
@@ -554,12 +534,11 @@ impl TaskListInfoIterator {
    Some(logrotate.files())
};
- let file = if active_only { TaskFile::End } else { TaskFile::Active };
let lock = if active_only { None } else { Some(read_lock) };
Ok(Self {
    list: active_list.into(),
-     file,
+     end: active_only,
    archive,
    lock,
})
@@ -573,35 +552,23 @@ impl Iterator for TaskListInfoIterator {
loop {
    if let Some(element) = self.list.pop_back() {
        return Some(Ok(element));
+     } else if self.end {
+         return None;
    } else {
-         match self.file {
-             TaskFile::Active => {
-                 let index = match read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN) {
-                     Ok(index) => index,
-                     Err(err) => return Some(Err(err)),
-                 };
-                 self.list.append(&mut index.into());
-                 self.file = TaskFile::Index;
-             },
-             TaskFile::Index | TaskFile::Archive => {
-                 if let Some(mut archive) = self.archive.take() {
-                     if let Some(file) = archive.next() {
-                         let list = match read_task_file(file) {
-                             Ok(list) => list,
-                             Err(err) => return Some(Err(err)),
-                         };
-                         self.list.append(&mut list.into());
-                         self.archive = Some(archive);
-                         self.file = TaskFile::Archive;
-                         continue;
-                     }
-                 }
-                 self.file = TaskFile::End;
-                 self.lock.take();
-                 return None;
-             }
-             TaskFile::End => return None,
-         }
+         if let Some(mut archive) = self.archive.take() {
+             if let Some(file) = archive.next() {
+                 let list = match read_task_file(file) {
+                     Ok(list) => list,
+                     Err(err) => return Some(Err(err)),
+                 };
+                 self.list.append(&mut list.into());
+                 self.archive = Some(archive);
+                 continue;
+             }
+         }
+         self.end = true;
+         self.lock.take();
    }
}
}
@@ -644,15 +611,15 @@ impl Drop for WorkerTask {
impl WorkerTask {
- pub fn new(worker_type: &str, worker_id: Option<String>, userid: Userid, to_stdout: bool) -> Result<Arc<Self>, Error> {
+ pub fn new(worker_type: &str, worker_id: Option<String>, auth_id: Authid, to_stdout: bool) -> Result<Arc<Self>, Error> {
println!("register worker");
- let upid = UPID::new(worker_type, worker_id, userid)?;
+ let upid = UPID::new(worker_type, worker_id, auth_id)?;
let task_id = upid.task_id;
let mut path = std::path::PathBuf::from(PROXMOX_BACKUP_TASK_DIR);
- path.push(format!("{:02X}", upid.pstart % 256));
+ path.push(format!("{:02X}", upid.pstart & 255));
let backup_user = crate::backup::backup_user()?;
@@ -660,8 +627,6 @@ impl WorkerTask {
path.push(upid.to_string());
- println!("FILE: {:?}", path);
let logger_options = FileLogOptions {
    to_stdout: to_stdout,
    exclusive: true,
@@ -699,14 +664,14 @@ impl WorkerTask {
pub fn spawn<F, T>(
    worker_type: &str,
    worker_id: Option<String>,
-     userid: Userid,
+     auth_id: Authid,
    to_stdout: bool,
    f: F,
) -> Result<String, Error>
where F: Send + 'static + FnOnce(Arc<WorkerTask>) -> T,
    T: Send + 'static + Future<Output = Result<(), Error>>,
{
-     let worker = WorkerTask::new(worker_type, worker_id, userid, to_stdout)?;
+     let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
    let upid_str = worker.upid.to_string();
    let f = f(worker.clone());
    tokio::spawn(async move {
@@ -721,7 +686,7 @@ impl WorkerTask {
pub fn new_thread<F>(
    worker_type: &str,
    worker_id: Option<String>,
-     userid: Userid,
+     auth_id: Authid,
    to_stdout: bool,
    f: F,
) -> Result<String, Error>
@@ -729,7 +694,7 @@ impl WorkerTask {
{
    println!("register worker thread");
-     let worker = WorkerTask::new(worker_type, worker_id, userid, to_stdout)?;
+     let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
    let upid_str = worker.upid.to_string();
    let _child = std::thread::Builder::new().name(upid_str.clone()).spawn(move || {


@@ -57,7 +57,7 @@ fn worker_task_abort() -> Result<(), Error> {
let res = server::WorkerTask::new_thread(
    "garbage_collection",
    None,
-     proxmox_backup::api2::types::Userid::root_userid().clone(),
+     proxmox_backup::api2::types::Authid::root_auth_id().clone(),
    true,
    move |worker| {
        println!("WORKER {}", worker);

www/AccessControlPanel.js (new file)

@@ -0,0 +1,34 @@
Ext.define('PBS.AccessControlPanel', {
extend: 'Ext.tab.Panel',
alias: 'widget.pbsAccessControlPanel',
mixins: ['Proxmox.Mixin.CBind'],
title: gettext('Access Control'),
border: false,
defaults: {
border: false,
},
items: [
{
xtype: 'pbsUserView',
title: gettext('User Management'),
itemId: 'users',
iconCls: 'fa fa-user',
},
{
xtype: 'pbsTokenView',
title: gettext('API Token'),
itemId: 'apitokens',
iconCls: 'fa fa-user-o',
},
{
xtype: 'pbsACLView',
title: gettext('Permissions'),
itemId: 'permissions',
iconCls: 'fa fa-unlock',
},
],
});


@@ -542,7 +542,7 @@ Ext.define('PBS.DataStoreContent', {
    v = '';
}
v = Ext.String.htmlEncode(v);
- let icon = 'fa fa-fw fa-pencil';
+ let icon = 'fa fa-fw fa-pencil pointer';
return `<span class="snapshot-comment-column">${v}</span>
<i data-qtip="${gettext('Edit')}" style="float: right;" class="${icon}"></i>`;


@@ -48,21 +48,23 @@ Ext.define('PBS.DataStoreInfo', {
             let vm = me.getViewModel();
             let counts = store.getById('counts').data.value;
-            let storage = store.getById('storage').data.value;
+            let total = store.getById('total').data.value;
+            let used = store.getById('used').data.value;

-            let used = Proxmox.Utils.format_size(storage.used);
-            let total = Proxmox.Utils.format_size(storage.total);
-            let percent = 100*storage.used/storage.total;
-            if (storage.total === 0) {
+            let percent = 100*used/total;
+            if (total === 0) {
                 percent = 0;
             }
             let used_percent = `${percent.toFixed(2)}%`;

             let usage = used_percent + ' (' +
-                Ext.String.format(gettext('{0} of {1}'),
-                    used, total) + ')';
+                Ext.String.format(
+                    gettext('{0} of {1}'),
+                    Proxmox.Utils.format_size(used),
+                    Proxmox.Utils.format_size(total),
+                ) + ')';
             vm.set('usagetext', usage);
-            vm.set('usage', storage.used/storage.total);
+            vm.set('usage', used/total);

             let gcstatus = store.getById('gc-status').data.value;
@@ -70,12 +72,13 @@
             (gcstatus['disk-bytes'] || Infinity);

             let countstext = function(count) {
-                return `${count[0]} ${gettext('Groups')}, ${count[1]} ${gettext('Snapshots')}`;
+                count = count || {};
+                return `${count.groups || 0} ${gettext('Groups')}, ${count.snapshots || 0} ${gettext('Snapshots')}`;
             };

-            vm.set('ctcount', countstext(counts.ct || [0, 0]));
-            vm.set('vmcount', countstext(counts.vm || [0, 0]));
-            vm.set('hostcount', countstext(counts.host || [0, 0]));
+            vm.set('ctcount', countstext(counts.ct));
+            vm.set('vmcount', countstext(counts.vm));
+            vm.set('hostcount', countstext(counts.host));
             vm.set('deduplication', dedup.toFixed(2));
             vm.set('stillbad', gcstatus['still-bad']);
             vm.set('removedbytes', Proxmox.Utils.format_size(gcstatus['removed-bytes']));
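
Editor's note: the `counts` entries returned by the status call are now plain objects rather than `[groups, snapshots]` pairs, and a type with no snapshots may be missing entirely. A minimal standalone sketch of the fallback behaviour above (sample data hypothetical, not part of the diff):

// A missing type renders as "0 Groups, 0 Snapshots" instead of throwing on count[0].
let countstext = function(count) {
    count = count || {};
    return `${count.groups || 0} Groups, ${count.snapshots || 0} Snapshots`;
};

console.log(countstext({ groups: 3, snapshots: 12 })); // "3 Groups, 12 Snapshots"
console.log(countstext(undefined));                    // "0 Groups, 0 Snapshots"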


@@ -6,6 +6,7 @@ IMAGES := \
 JSSRC= \
 	form/UserSelector.js \
+	form/TokenSelector.js \
 	form/RemoteSelector.js \
 	form/DataStoreSelector.js \
 	form/CalendarEvent.js \
@@ -13,6 +14,7 @@ JSSRC= \
 	data/RunningTasksStore.js \
 	button/TaskButton.js \
 	config/UserView.js \
+	config/TokenView.js \
 	config/RemoteView.js \
 	config/ACLView.js \
 	config/SyncView.js \
@@ -27,6 +29,7 @@ JSSRC= \
 	window/SyncJobEdit.js \
 	window/UserEdit.js \
 	window/UserPassword.js \
+	window/TokenEdit.js \
 	window/VerifyJobEdit.js \
 	window/ZFSCreate.js \
 	dashboard/DataStoreStatistics.js \
@@ -34,6 +37,7 @@ JSSRC= \
 	dashboard/RunningTasks.js \
 	dashboard/TaskSummary.js \
 	Utils.js \
+	AccessControlPanel.js \
 	ZFSList.js \
 	DirectoryList.js \
 	LoginView.js \


@@ -29,15 +29,9 @@ Ext.define('PBS.store.NavigationStore', {
             expanded: true,
             children: [
                 {
-                    text: gettext('User Management'),
-                    iconCls: 'fa fa-user',
-                    path: 'pbsUserView',
-                    leaf: true,
-                },
-                {
-                    text: gettext('Permissions'),
-                    iconCls: 'fa fa-unlock',
-                    path: 'pbsACLView',
+                    text: gettext('Access Control'),
+                    iconCls: 'fa fa-key',
+                    path: 'pbsAccessControlPanel',
                     leaf: true,
                 },
                 {


@@ -84,6 +84,14 @@ Ext.define('PBS.Utils', {
         return `Datastore ${what} ${id}`;
     },

+    extractTokenUser: function(tokenid) {
+        return tokenid.match(/^(.+)!([^!]+)$/)[1];
+    },
+
+    extractTokenName: function(tokenid) {
+        return tokenid.match(/^(.+)!([^!]+)$/)[2];
+    },
+
     constructor: function() {
         var me = this;
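
Editor's note: an API token ID has the form `user@realm!tokenname`; the regex captures everything before the last `!` as the owning user and the remainder as the token name, so the token name itself can never contain `!`. A standalone sketch (example ID hypothetical, not part of the diff):

// Split "user@realm!tokenname" on its last '!'.
const extractTokenUser = (tokenid) => tokenid.match(/^(.+)!([^!]+)$/)[1];
const extractTokenName = (tokenid) => tokenid.match(/^(.+)!([^!]+)$/)[2];

console.log(extractTokenUser('alice@pbs!ci')); // "alice@pbs"
console.log(extractTokenName('alice@pbs!ci')); // "ci"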


@@ -31,19 +31,35 @@ Ext.define('PBS.config.ACLView', {
     controller: {
         xclass: 'Ext.app.ViewController',

-        addACL: function() {
+        addUserACL: function() {
             let me = this;
             let view = me.getView();
             Ext.create('PBS.window.ACLEdit', {
                 path: view.aclPath,
+                aclType: 'user',
                 listeners: {
                     destroy: function() {
                         me.reload();
                     },
                 },
             }).show();
         },

+        addTokenACL: function() {
+            let me = this;
+            let view = me.getView();
+            Ext.create('PBS.window.ACLEdit', {
+                path: view.aclPath,
+                aclType: 'token',
+                listeners: {
+                    destroy: function() {
+                        me.reload();
+                    },
+                },
+            }).show();
+        },
+
         removeACL: function(btn, event, rec) {
             let me = this;
             Proxmox.Utils.API2Request({
@@ -53,7 +69,7 @@ Ext.define('PBS.config.ACLView', {
                     'delete': 1,
                     path: rec.data.path,
                     role: rec.data.roleid,
-                    userid: rec.data.ugid,
+                    auth_id: rec.data.ugid,
                 },
                 callback: function() {
                     me.reload();
@@ -106,10 +122,22 @@ Ext.define('PBS.config.ACLView', {
     tbar: [
         {
+            xtype: 'proxmoxButton',
             text: gettext('Add'),
-            handler: 'addACL',
-            selModel: false,
+            menu: {
+                xtype: 'menu',
+                items: [
+                    {
+                        text: gettext('User Permission'),
+                        iconCls: 'fa fa-fw fa-user',
+                        handler: 'addUserACL',
+                    },
+                    {
+                        text: gettext('API Token Permission'),
+                        iconCls: 'fa fa-fw fa-user-o',
+                        handler: 'addTokenACL',
+                    },
+                ],
+            },
         },
         {
             xtype: 'proxmoxStdRemoveButton',
@@ -127,7 +155,7 @@ Ext.define('PBS.config.ACLView', {
             dataIndex: 'path',
         },
         {
-            header: gettext('User/Group'),
+            header: gettext('User/Group/API Token'),
             width: 100,
             sortable: true,
             renderer: Ext.String.htmlEncode,
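
Editor's note: the rename from `userid` to `auth_id` reflects that the same ACL field now carries either a user ID (`user@realm`) or a token ID (`user@realm!tokenname`). A hedged sketch of the resulting removal request; the `/access/acl` URL and the example values are assumptions, as the hunk does not show them:

// Sketch only: endpoint path and values are assumed, not shown in the hunk above.
Proxmox.Utils.API2Request({
    url: '/access/acl',
    method: 'PUT',
    params: {
        'delete': 1,
        path: '/datastore/store1',
        role: 'DatastoreBackup',
        auth_id: 'alice@pbs!ci', // a plain user ID like 'alice@pbs' works the same way
    },
});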


@@ -199,6 +199,7 @@ Ext.define('PBS.config.SyncJobView', {
         {
             xtype: 'proxmoxStdRemoveButton',
             baseurl: '/config/sync/',
+            confirmMsg: gettext('Remove entry?'),
             callback: 'reload',
         },
         '-',

www/config/TokenView.js (new file, +218 lines)

@@ -0,0 +1,218 @@
Ext.define('pbs-tokens', {
    extend: 'Ext.data.Model',
    fields: [
        'tokenid', 'tokenname', 'user', 'comment',
        { type: 'boolean', name: 'enable', defaultValue: true },
        { type: 'date', dateFormat: 'timestamp', name: 'expire' },
    ],
    idProperty: 'tokenid',
});

Ext.define('pbs-users-with-tokens', {
    extend: 'Ext.data.Model',
    fields: [
        'userid', 'firstname', 'lastname', 'email', 'comment',
        { type: 'boolean', name: 'enable', defaultValue: true },
        { type: 'date', dateFormat: 'timestamp', name: 'expire' },
        'tokens',
    ],
    idProperty: 'userid',
    proxy: {
        type: 'proxmox',
        url: '/api2/json/access/users/?include_tokens=1',
    },
});

Ext.define('PBS.config.TokenView', {
    extend: 'Ext.grid.GridPanel',
    alias: 'widget.pbsTokenView',

    stateful: true,
    stateId: 'grid-tokens',

    title: gettext('API Tokens'),

    controller: {
        xclass: 'Ext.app.ViewController',

        init: function(view) {
            view.userStore = Ext.create('Proxmox.data.UpdateStore', {
                autoStart: true,
                interval: 5 * 1000,
                storeId: 'pbs-users-with-tokens',
                storeid: 'pbs-users-with-tokens',
                model: 'pbs-users-with-tokens',
            });
            view.userStore.on('load', this.onLoad, this);
            view.on('destroy', view.userStore.stopUpdate);
            Proxmox.Utils.monStoreErrors(view, view.userStore);
        },

        reload: function() { this.getView().userStore.load(); },

        onLoad: function(store, data, success) {
            if (!success) return;

            let tokenStore = this.getView().store.rstore;

            let records = [];
            Ext.Array.each(data, function(user) {
                let tokens = user.data.tokens || [];
                Ext.Array.each(tokens, function(token) {
                    let r = {};
                    r.tokenid = token.tokenid;
                    r.comment = token.comment;
                    r.expire = token.expire;
                    r.enable = token.enable;
                    records.push(r);
                });
            });

            tokenStore.loadData(records);
            tokenStore.fireEvent('load', tokenStore, records, true);
        },

        addToken: function() {
            let me = this;
            Ext.create('PBS.window.TokenEdit', {
                isCreate: true,
                listeners: {
                    destroy: function() {
                        me.reload();
                    },
                },
            }).show();
        },

        editToken: function() {
            let me = this;
            let view = me.getView();
            let selection = view.getSelection();
            if (selection.length < 1) return;
            Ext.create('PBS.window.TokenEdit', {
                user: PBS.Utils.extractTokenUser(selection[0].data.tokenid),
                tokenname: PBS.Utils.extractTokenName(selection[0].data.tokenid),
                listeners: {
                    destroy: function() {
                        me.reload();
                    },
                },
            }).show();
        },

        showPermissions: function() {
            let me = this;
            let view = me.getView();
            let selection = view.getSelection();

            if (selection.length < 1) return;

            Ext.create('Proxmox.PermissionView', {
                auth_id: selection[0].data.tokenid,
                auth_id_name: 'auth_id',
            }).show();
        },

        renderUser: function(tokenid) {
            return Ext.String.htmlEncode(PBS.Utils.extractTokenUser(tokenid));
        },

        renderTokenname: function(tokenid) {
            return Ext.String.htmlEncode(PBS.Utils.extractTokenName(tokenid));
        },
    },

    listeners: {
        activate: 'reload',
        itemdblclick: 'editToken',
    },

    store: {
        type: 'diff',
        autoDestroy: true,
        autoDestroyRstore: true,
        sorters: 'tokenid',
        model: 'pbs-tokens',
        rstore: {
            type: 'store',
            proxy: 'memory',
            storeid: 'pbs-tokens',
            model: 'pbs-tokens',
        },
    },

    tbar: [
        {
            xtype: 'proxmoxButton',
            text: gettext('Add'),
            handler: 'addToken',
            selModel: false,
        },
        {
            xtype: 'proxmoxButton',
            text: gettext('Edit'),
            handler: 'editToken',
            disabled: true,
        },
        {
            xtype: 'proxmoxStdRemoveButton',
            baseurl: '/access/users/',
            callback: 'reload',
            getUrl: function(rec) {
                let tokenid = rec.getId();
                let user = PBS.Utils.extractTokenUser(tokenid);
                let tokenname = PBS.Utils.extractTokenName(tokenid);
                return '/access/users/' + encodeURIComponent(user) + '/token/' + encodeURIComponent(tokenname);
            },
        },
        {
            xtype: 'proxmoxButton',
            text: gettext('Permissions'),
            handler: 'showPermissions',
            disabled: true,
        },
    ],

    viewConfig: {
        trackOver: false,
    },

    columns: [
        {
            header: gettext('User'),
            width: 200,
            sortable: true,
            renderer: 'renderUser',
            dataIndex: 'tokenid',
        },
        {
            header: gettext('Token name'),
            width: 100,
            sortable: true,
            renderer: 'renderTokenname',
            dataIndex: 'tokenid',
        },
        {
            header: gettext('Enabled'),
            width: 80,
            sortable: true,
            renderer: Proxmox.Utils.format_boolean,
            dataIndex: 'enable',
        },
        {
            header: gettext('Expire'),
            width: 80,
            sortable: true,
            renderer: Proxmox.Utils.format_expire,
            dataIndex: 'expire',
        },
        {
            header: gettext('Comment'),
            sortable: false,
            renderer: Ext.String.htmlEncode,
            dataIndex: 'comment',
            flex: 1,
        },
    ],
});
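
Editor's note: the grid has no token endpoint of its own; it polls `/access/users/?include_tokens=1` and flattens the nested `tokens` arrays into one record per token before handing them to the in-memory diff store. A standalone sketch of that flattening (sample data hypothetical, not part of the diff):

// One flat record per token; users without tokens contribute nothing.
const users = [
    { userid: 'alice@pbs', tokens: [{ tokenid: 'alice@pbs!ci', enable: true }] },
    { userid: 'bob@pbs' }, // no tokens at all
];

const records = users.flatMap((u) => u.tokens || []);
console.log(records.map((t) => t.tokenid)); // [ 'alice@pbs!ci' ]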


@@ -63,6 +63,19 @@ Ext.define('PBS.config.UserView', {
             }).show();
         },

+        showPermissions: function() {
+            let me = this;
+            let view = me.getView();
+            let selection = view.getSelection();
+
+            if (selection.length < 1) return;
+
+            Ext.create('Proxmox.PermissionView', {
+                auth_id: selection[0].data.userid,
+                auth_id_name: 'auth_id',
+            }).show();
+        },
+
         renderUsername: function(userid) {
             return Ext.String.htmlEncode(userid.match(/^(.+)@([^@]+)$/)[1]);
         },
@@ -122,6 +135,12 @@ Ext.define('PBS.config.UserView', {
             enableFn: (rec) => rec.data.userid !== 'root@pam',
             callback: 'reload',
         },
+        {
+            xtype: 'proxmoxButton',
+            text: gettext('Permissions'),
+            handler: 'showPermissions',
+            disabled: true,
+        },
     ],

     viewConfig: {


@@ -199,6 +199,7 @@ Ext.define('PBS.config.VerifyJobView', {
         {
             xtype: 'proxmoxStdRemoveButton',
             baseurl: '/config/verify/',
+            confirmMsg: gettext('Remove entry?'),
             callback: 'reload',
         },
         '-',

www/form/TokenSelector.js (new file, +72 lines)

@@ -0,0 +1,72 @@
Ext.define('PBS.form.TokenSelector', {
    extend: 'Proxmox.form.ComboGrid',
    alias: 'widget.pbsTokenSelector',

    allowBlank: false,
    autoSelect: false,
    valueField: 'tokenid',
    displayField: 'tokenid',

    editable: true,
    anyMatch: true,
    forceSelection: true,

    store: {
        model: 'pbs-tokens',
        params: {
            enabled: 1,
        },
        sorters: 'tokenid',
    },

    initComponent: function() {
        let me = this;
        me.userStore = Ext.create('Ext.data.Store', {
            model: 'pbs-users-with-tokens',
        });
        me.userStore.on('load', this.onLoad, this);
        me.userStore.load();

        me.callParent();
    },

    onLoad: function(store, data, success) {
        if (!success) return;

        let tokenStore = this.store;

        let records = [];
        Ext.Array.each(data, function(user) {
            let tokens = user.data.tokens || [];
            Ext.Array.each(tokens, function(token) {
                let r = {};
                r.tokenid = token.tokenid;
                r.comment = token.comment;
                r.expire = token.expire;
                r.enable = token.enable;
                records.push(r);
            });
        });

        tokenStore.loadData(records);
    },

    listConfig: {
        columns: [
            {
                header: gettext('API Token'),
                sortable: true,
                dataIndex: 'tokenid',
                renderer: Ext.String.htmlEncode,
                flex: 1,
            },
            {
                header: gettext('Comment'),
                sortable: false,
                dataIndex: 'comment',
                renderer: Ext.String.htmlEncode,
                flex: 1,
            },
        ],
    },
});


@@ -14,7 +14,53 @@ Ext.define('PBS.window.ACLEdit', {
     // caller can give a static path
     path: undefined,

-    subject: gettext('User Permission'),
+    initComponent: function() {
+        let me = this;
+
+        me.items = [];
+
+        me.items.push({
+            xtype: 'pbsPermissionPathSelector',
+            fieldLabel: gettext('Path'),
+            editable: !me.path,
+            value: me.path,
+            name: 'path',
+            allowBlank: false,
+        });
+
+        if (me.aclType === 'user') {
+            me.subject = gettext('User Permission');
+            me.items.push({
+                xtype: 'pbsUserSelector',
+                fieldLabel: gettext('User'),
+                name: 'auth_id',
+                allowBlank: false,
+            });
+        } else if (me.aclType === 'token') {
+            me.subject = gettext('API Token Permission');
+            me.items.push({
+                xtype: 'pbsTokenSelector',
+                fieldLabel: gettext('API Token'),
+                name: 'auth_id',
+                allowBlank: false,
+            });
+        }
+
+        me.items.push({
+            xtype: 'pmxRoleSelector',
+            name: 'role',
+            value: 'NoAccess',
+            fieldLabel: gettext('Role'),
+        });
+
+        me.items.push({
+            xtype: 'proxmoxcheckbox',
+            name: 'propagate',
+            checked: true,
+            uncheckedValue: 0,
+            fieldLabel: gettext('Propagate'),
+        });
+
+        me.callParent();
+    },
+
     getValues: function(dirtyOnly) {
         let me = this;
@@ -26,35 +72,4 @@ Ext.define('PBS.window.ACLEdit', {

         return values;
     },
-
-    items: [
-        {
-            xtype: 'pbsPermissionPathSelector',
-            fieldLabel: gettext('Path'),
-            cbind: {
-                editable: '{!path}',
-                value: '{path}',
-            },
-            name: 'path',
-            allowBlank: false,
-        },
-        {
-            xtype: 'pbsUserSelector',
-            fieldLabel: gettext('User'),
-            name: 'userid',
-            allowBlank: false,
-        },
-        {
-            xtype: 'pmxRoleSelector',
-            name: 'role',
-            value: 'NoAccess',
-            fieldLabel: gettext('Role'),
-        },
-        {
-            xtype: 'proxmoxcheckbox',
-            name: 'propagate',
-            checked: true,
-            uncheckedValue: 0,
-            fieldLabel: gettext('Propagate'),
-        },
-    ],
 });
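
Editor's note: building the items in `initComponent` lets one window serve both ACL kinds; `aclType` picks the selector widget and the window subject, while both variants submit the selection under the same `auth_id` field. Hypothetical usage:

// Hypothetical usage; the path value is only an example.
Ext.create('PBS.window.ACLEdit', {
    path: '/datastore/store1',
    aclType: 'token', // or 'user'
}).show();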


@@ -60,20 +60,6 @@ Ext.define('PBS.window.SyncJobEdit', {
                 name: 'remote-store',
             },
         ],
-        advancedColumn1: [
-            {
-                xtype: 'pmxDisplayEditField',
-                name: 'id',
-                fieldLabel: gettext('Sync Job ID'),
-                emptyText: gettext('Automatic'),
-                renderer: Ext.htmlEncode,
-                allowBlank: true,
-                minLength: 4,
-                cbind: {
-                    editable: '{isCreate}',
-                },
-            },
-        ],

         column2: [
             {

www/window/TokenEdit.js (new file, +213 lines)

@@ -0,0 +1,213 @@
Ext.define('PBS.window.TokenEdit', {
    extend: 'Proxmox.window.Edit',
    alias: 'widget.pbsTokenEdit',
    mixins: ['Proxmox.Mixin.CBind'],

    onlineHelp: 'user_mgmt',

    user: undefined,
    tokenname: undefined,

    isAdd: true,
    isCreate: false,
    fixedUser: false,

    subject: gettext('API token'),

    fieldDefaults: { labelWidth: 120 },

    items: {
        xtype: 'inputpanel',
        column1: [
            {
                xtype: 'pmxDisplayEditField',
                cbind: {
                    editable: (get) => get('isCreate') && !get('fixedUser'),
                },
                editConfig: {
                    xtype: 'pbsUserSelector',
                    allowBlank: false,
                },
                name: 'user',
                value: Proxmox.UserName,
                renderer: Ext.String.htmlEncode,
                fieldLabel: gettext('User'),
            },
            {
                xtype: 'pmxDisplayEditField',
                cbind: {
                    editable: '{isCreate}',
                },
                name: 'tokenname',
                fieldLabel: gettext('Token Name'),
                minLength: 2,
                allowBlank: false,
            },
        ],
        column2: [
            {
                xtype: 'datefield',
                name: 'expire',
                emptyText: Proxmox.Utils.neverText,
                format: 'Y-m-d',
                submitFormat: 'U',
                fieldLabel: gettext('Expire'),
            },
            {
                xtype: 'proxmoxcheckbox',
                fieldLabel: gettext('Enabled'),
                name: 'enable',
                uncheckedValue: 0,
                defaultValue: 1,
                checked: true,
            },
        ],
        columnB: [
            {
                xtype: 'proxmoxtextfield',
                name: 'comment',
                fieldLabel: gettext('Comment'),
            },
        ],
    },

    getValues: function(dirtyOnly) {
        var me = this;

        var values = me.callParent(arguments);

        // hack: ExtJS datefield does not submit 0, so we need to set that
        if (!values.expire) {
            values.expire = 0;
        }

        if (me.isCreate) {
            me.url = '/api2/extjs/access/users/';
            let uid = encodeURIComponent(values.user);
            let tid = encodeURIComponent(values.tokenname);
            delete values.user;
            delete values.tokenname;
            me.url += `${uid}/token/${tid}`;
        }

        return values;
    },

    setValues: function(values) {
        var me = this;

        if (Ext.isDefined(values.expire)) {
            if (values.expire) {
                values.expire = new Date(values.expire * 1000);
            } else {
                // display 'never' instead of '1970-01-01'
                values.expire = null;
            }
        }

        me.callParent([values]);
    },

    initComponent: function() {
        let me = this;

        me.url = '/api2/extjs/access/users/';

        me.callParent();

        if (me.isCreate) {
            me.method = 'POST';
        } else {
            me.method = 'PUT';

            let uid = encodeURIComponent(me.user);
            let tid = encodeURIComponent(me.tokenname);
            me.url += `${uid}/token/${tid}`;

            me.load({
                success: function(response, options) {
                    let values = response.result.data;
                    values.user = me.user;
                    values.tokenname = me.tokenname;
                    me.setValues(values);
                },
            });
        }
    },

    apiCallDone: function(success, response, options) {
        let res = response.result.data;
        if (!success || !res || !res.value) {
            return;
        }

        Ext.create('PBS.window.TokenShow', {
            autoShow: true,
            tokenid: res.tokenid,
            secret: res.value,
        });
    },
});

Ext.define('PBS.window.TokenShow', {
    extend: 'Ext.window.Window',
    alias: ['widget.pbsTokenShow'],
    mixins: ['Proxmox.Mixin.CBind'],

    width: 600,
    modal: true,
    resizable: false,
    title: gettext('Token Secret'),

    items: [
        {
            xtype: 'container',
            layout: 'form',
            bodyPadding: 10,
            border: false,
            fieldDefaults: {
                labelWidth: 100,
                anchor: '100%',
            },
            padding: '0 10 10 10',
            items: [
                {
                    xtype: 'textfield',
                    fieldLabel: gettext('Token ID'),
                    cbind: {
                        value: '{tokenid}',
                    },
                    editable: false,
                },
                {
                    xtype: 'textfield',
                    fieldLabel: gettext('Secret'),
                    inputId: 'token-secret-value',
                    cbind: {
                        value: '{secret}',
                    },
                    editable: false,
                },
            ],
        },
        {
            xtype: 'component',
            border: false,
            padding: '10 10 10 10',
            userCls: 'pmx-hint',
            html: gettext('Please record the API token secret - it will only be displayed now'),
        },
    ],

    buttons: [
        {
            handler: function(b) {
                document.getElementById('token-secret-value').select();
                document.execCommand("copy");
            },
            text: gettext('Copy Secret Value'),
        },
    ],
});
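
Editor's note: create and update both target the per-user token path; only the HTTP method differs (POST on create, PUT on edit). A standalone sketch of the URL construction above, with hypothetical values:

// Values are hypothetical; mirrors the encodeURIComponent-based construction above.
const user = 'alice@pbs';
const tokenname = 'ci';
const url = `/api2/extjs/access/users/${encodeURIComponent(user)}/token/${encodeURIComponent(tokenname)}`;
console.log(url); // "/api2/extjs/access/users/alice%40pbs/token/ci"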


@@ -65,20 +65,6 @@ Ext.define('PBS.window.VerifyJobEdit', {
                 },
             },
         ],
-        advancedColumn1: [
-            {
-                xtype: 'pmxDisplayEditField',
-                name: 'id',
-                fieldLabel: gettext('Verify Job ID'),
-                emptyText: gettext('Automatic'),
-                renderer: Ext.htmlEncode,
-                allowBlank: true,
-                minLength: 4,
-                cbind: {
-                    editable: '{isCreate}',
-                },
-            },
-        ],

        column2: [
            {