Compare commits

...

23 Commits

Author SHA1 Message Date
355a41a763 bump version to 1.0.10-1
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 13:40:22 +01:00
5bd4825432 d/postinst: fixup tape permissions if existing with wrong permissions
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 13:40:22 +01:00
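The fixup this commit adds can be sketched as a guard that only rewrites the directory mode when it is wrong. The sketch below runs against a scratch directory instead of the real `/var/lib/proxmox-backup/tape`; the path and the `0750` mode mirror the postinst change shown further down this page.

```shell
# Sketch of the postinst permission fixup, against a scratch
# directory instead of the real /var/lib/proxmox-backup/tape.
dir=$(mktemp -d)
chmod 0755 "$dir"    # simulate a directory with the wrong mode

# only touch the mode when it is not already 750
if [ -d "$dir" ] && [ "$(stat --printf '%a' "$dir")" != "750" ]; then
    chmod 0750 "$dir" || true
fi

stat --printf '%a\n' "$dir"    # prints 750
```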
8f7e5b028a d/postinst: only check for broken task index on upgrade
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 13:40:22 +01:00
2a29d9a1ee d/postinst: tell user that we restart when updating from older version 2021-03-11 13:40:00 +01:00
e056966bc7 d/postinst: restart when updating from older version
Else one has quite a terrible UX when installing from 1.0 ISO and
then upgrading to latest release..

See commit 0ec79339f7 for the fix and some other details

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-11 09:56:12 +01:00
ef0ea4ba05 server/worker_task: improve endtime for unknown tasks
instead of always using the starttime, use the last timestamp from the log
this way, one can see when the task was aborted without having to read
the log

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-11 09:56:12 +01:00
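The idea of this commit can be sketched in shell: take the timestamp of the last log line as the end time, instead of reusing the start time. The log format below is invented purely for illustration; the real PBS task-log format is not shown in this diff.

```shell
# Hypothetical task log; the real PBS log format may differ.
logfile=$(mktemp)
printf '%s\n' \
    '2021-03-11 09:00:01: starting backup' \
    '2021-03-11 09:05:42: task aborted' > "$logfile"

# use the last logged timestamp as the end time, not the start time,
# so one can see when the task was aborted without reading the log
endtime=$(tail -n 1 "$logfile" | cut -d: -f1-3)
echo "$endtime"    # prints 2021-03-11 09:05:42
```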
2892624783 tape/send_load_media_email: move to server/email_notifications
and reuse 'send_job_status_mail' there so that we get consistent
formatted mails from pbs (e.g. html part and author)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-11 09:56:12 +01:00
2c10410b0d tape: improve backup task log 2021-03-11 08:43:13 +01:00
d1d74c4367 typo fixes all over the place
found and semi-manually replaced by using:
 codespell -L mut -L crate -i 3 -w

Mostly in comments, but also email notification and two occurrences
of misspelled 'reserved' struct member, which were not used and
cargo build did not complain about the change, soo ...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-10 16:39:57 +01:00
8b7f3b8f1d ui: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-10 15:24:39 +01:00
3f6c2efb8d ui: fix typo in options
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
2021-03-10 15:23:17 +01:00
227f36497a d/postinst: fix typo in comment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2021-03-10 15:22:52 +01:00
5ef4c7bcd3 tape: fix scsi volume_statistics and cartridge_memory for quantum drives 2021-03-10 14:13:48 +01:00
70d00e0149 tape: documentation language fixup
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
2021-03-10 11:05:30 +01:00
dcf155dac9 ui: tape: increase tapestore interval
from 2 to 60 seconds. To retain the response time of the gui
when adding/editing/removing, trigger a manual reload on these actions

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-10 11:00:10 +01:00
3c5b523631 ui: NavigationTree: do not modify list while iterating
iterating over a node's children while removing them
will lead to 'child' being undefined

instead collect the children to remove in a separate list
and iterate over them

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-10 10:59:59 +01:00
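The same collect-then-remove pattern can be sketched in shell (the actual commit touches ExtJS JavaScript; all names here are illustrative): removing elements from the list you are iterating over is the bug, so collect the doomed items first, then remove them in a second pass.

```shell
# children we iterate over; 'b' and 'd' should be removed
items=(a b c d)

# pass 1: collect the items to remove in a separate list
to_remove=()
for i in "${items[@]}"; do
    case "$i" in b|d) to_remove+=("$i") ;; esac
done

# pass 2: rebuild the array without the collected items
for r in "${to_remove[@]}"; do
    kept=()
    for i in "${items[@]}"; do
        [ "$i" = "$r" ] || kept+=("$i")
    done
    items=("${kept[@]}")
done

echo "${items[@]}"    # prints: a c
```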
6396bace3d tape: improve backup task log (show percentage) 2021-03-10 10:59:13 +01:00
713a128adf tape: improve backup task log format 2021-03-10 09:54:51 +01:00
affc224aca tape: read_tape_mam - pass correct allocation len 2021-03-10 09:24:38 +01:00
6f82d32977 tape: cleanup - remove wrong inline comment 2021-03-10 08:11:51 +01:00
2a06e08618 api2/tape/backup: continue on vanishing snapshots
when we do a prune during a tape backup, do not cancel the tape backup,
but continue with a warning

the task still fails and prompts the user to check the log

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2021-03-09 10:20:54 +01:00
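The behaviour described above amounts to "warn and continue, but still flag the job at the end". A generic shell sketch of that control flow (function and snapshot names are made up; the real code is Rust in api2/tape/backup):

```shell
# made-up stand-in for backing up one snapshot; "snap2" plays the
# role of a snapshot that vanished (was pruned) mid-job
backup_one() { [ "$1" != "snap2" ]; }

warnings=0
for snap in snap1 snap2 snap3; do
    if ! backup_one "$snap"; then
        echo "warning: snapshot $snap vanished, skipping"
        warnings=$((warnings + 1))
        continue    # do not cancel the whole tape backup
    fi
    echo "backed up $snap"
done

# the task still ends unsuccessfully, prompting the user to check the log
[ "$warnings" -eq 0 ] || echo "task finished with $warnings warning(s)"
```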
1057b1f5a5 tape: lock artificial "__UNASSIGNED__" pool to avoid races 2021-03-09 10:00:26 +01:00
af76234112 tape: improve MediaPool allocation by sorting tapes by ctime and label_text 2021-03-09 08:33:21 +01:00
59 changed files with 398 additions and 278 deletions

Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "proxmox-backup"
-version = "1.0.9"
+version = "1.0.10"
 authors = [
     "Dietmar Maurer <dietmar@proxmox.com>",
     "Dominik Csapak <d.csapak@proxmox.com>",

debian/changelog

@@ -1,3 +1,19 @@
+rust-proxmox-backup (1.0.10-1) unstable; urgency=medium
+
+  * tape: improve MediaPool allocation by sorting tapes by creation time and
+    label text
+
+  * api: tape backup: continue on vanishing snapshots, as a prune during long
+    running tape backup jobs is OK
+
+  * tape: fix scsi volume_statistics and cartridge_memory for quantum drives
+
+  * typo fixes all over the place
+
+  * d/postinst: restart, not reload, when updating from a to old version
+
+ -- Proxmox Support Team <support@proxmox.com>  Thu, 11 Mar 2021 08:24:31 +0100
+
 rust-proxmox-backup (1.0.9-1) unstable; urgency=medium

   * client: track key source, print when used

debian/postinst

@@ -6,13 +6,21 @@ set -e
 case "$1" in
     configure)
-        # need to have user backup in the tapoe group
+        # need to have user backup in the tape group
         usermod -a -G tape backup

         # modeled after dh_systemd_start output
         systemctl --system daemon-reload >/dev/null || true
         if [ -n "$2" ]; then
-            _dh_action=try-reload-or-restart
+            if dpkg --compare-versions "$2" 'lt' '1.0.7-1'; then
+                # there was an issue with reloading and systemd being confused in older daemon versions
+                # so restart instead of reload if upgrading from there, see commit 0ec79339f7aebf9
+                # FIXME: remove with PBS 2.1
+                echo "Upgrading from older proxmox-backup-server: restart (not reload) daemons"
+                _dh_action=try-restart
+            else
+                _dh_action=try-reload-or-restart
+            fi
         else
             _dh_action=start
         fi
@@ -40,11 +48,16 @@ case "$1" in
                     /etc/proxmox-backup/remote.cfg || true
             fi
         fi
-        fi
-        # FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
-        if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
-            echo "Fixing up termproxy user id in task log..."
-            flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
+            # FIXME: remove with 2.0
+            if [ -d "/var/lib/proxmox-backup/tape" ] &&
+                [ "$(stat --printf '%a' '/var/lib/proxmox-backup/tape')" != "750" ]; then
+                chmod 0750 /var/lib/proxmox-backup/tape || true
+            fi
+            # FIXME: Remove in future version once we're sure no broken entries remain in anyone's files
+            if grep -q -e ':termproxy::[^@]\+: ' /var/log/proxmox-backup/tasks/active; then
+                echo "Fixing up termproxy user id in task log..."
+                flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
+            fi
         fi
         ;;
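The version gate added in this postinst change can be sketched without `dpkg` by approximating `dpkg --compare-versions "$2" 'lt' '1.0.7-1'` with `sort -V`. This is only an approximation: `sort -V` agrees with dpkg ordering for simple versions like these, but dpkg has additional rules (epochs, tilde handling) not modeled here.

```shell
# returns success when $1 sorts strictly before $2; sort -V only
# approximates dpkg --compare-versions, which has extra rules
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

pick_action() {
    if version_lt "$1" "1.0.7-1"; then
        echo try-restart            # old daemon: reload was unreliable
    else
        echo try-reload-or-restart
    fi
}

pick_action 1.0.5-1    # prints try-restart
pick_action 1.0.9-1    # prints try-reload-or-restart
```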


@@ -1,11 +1,11 @@
-All command supports the following parameters to specify the tape device:
+All commands support the following parameters to specify the tape device:

   --device <path>  Path to the Linux tape device

   --drive <name>  Use drive from Proxmox Backup Server configuration.

-Commands generating output supports the ``--output-format``
+Commands which generate output support the ``--output-format``
 parameter. It accepts the following values:

 :``text``: Text format (default). Human readable.


@@ -4,7 +4,7 @@ Tape Backup
 ===========

 .. CAUTION:: Tape Backup is a technical preview feature, not meant for
-   production usage. To enable the GUI, you need to issue the
+   production use. To enable it in the GUI, you need to issue the
    following command (as root user on the console):

 .. code-block:: console
@@ -14,36 +14,36 @@ Tape Backup
 Proxmox tape backup provides an easy way to store datastore content
 onto magnetic tapes. This increases data safety because you get:

-- an additional copy of the data
-- to a different media type (tape)
+- an additional copy of the data,
+- on a different media type (tape),
 - to an additional location (you can move tapes off-site)

 In most restore jobs, only data from the last backup job is restored.
-Restore requests further decline the older the data
+Restore requests further decline, the older the data
 gets. Considering this, tape backup may also help to reduce disk
-usage, because you can safely remove data from disk once archived on
-tape. This is especially true if you need to keep data for several
+usage, because you can safely remove data from disk, once it's archived on
+tape. This is especially true if you need to retain data for several
 years.

 Tape backups do not provide random access to the stored data. Instead,
-you need to restore the data to disk before you can access it
+you need to restore the data to disk, before you can access it
 again. Also, if you store your tapes off-site (using some kind of tape
-vaulting service), you need to bring them on-site before you can do any
-restore. So please consider that restores from tapes can take much
-longer than restores from disk.
+vaulting service), you need to bring them back on-site, before you can do any
+restores. So please consider that restoring from tape can take much
+longer than restoring from disk.

 Tape Technology Primer
 ----------------------

-.. _Linear Tape Open: https://en.wikipedia.org/wiki/Linear_Tape-Open
+.. _Linear Tape-Open: https://en.wikipedia.org/wiki/Linear_Tape-Open

-As of 2021, the only broadly available tape technology standard is
-`Linear Tape Open`_, and different vendors offers LTO Ultrium tape
-drives, auto-loaders and LTO tape cartridges.
+As of 2021, the only widely available tape technology standard is
+`Linear Tape-Open`_ (LTO). Different vendors offer LTO Ultrium tape
+drives, auto-loaders, and LTO tape cartridges.

-There are a few vendors offering proprietary drives with
-slight advantages in performance and capacity, but they have
+There are a few vendors that offer proprietary drives with
+slight advantages in performance and capacity. Nevertheless, they have
 significant disadvantages:

 - proprietary (single vendor)
@@ -53,13 +53,13 @@ So we currently do not test such drives.

 In general, LTO tapes offer the following advantages:

-- Durable (30 years)
+- Durability (30 year lifespan)
 - High Capacity (12 TB)
 - Relatively low cost per TB
 - Cold Media
 - Movable (storable inside vault)
 - Multiple vendors (for both media and drives)
-- Build in AES-CGM Encryption engine
+- Build in AES-GCM Encryption engine

 Note that `Proxmox Backup Server` already stores compressed data, so using the
 tape compression feature has no advantage.
@@ -68,41 +68,40 @@ tape compression feature has no advantage.
 Supported Hardware
 ------------------

-Proxmox Backup Server supports `Linear Tape Open`_ generation 4 (LTO4)
-or later. In general, all SCSI2 tape drives supported by the Linux
-kernel should work, but feature like hardware encryptions needs LTO4
+Proxmox Backup Server supports `Linear Tape-Open`_ generation 4 (LTO-4)
+or later. In general, all SCSI-2 tape drives supported by the Linux
+kernel should work, but features like hardware encryption need LTO-4
 or later.

-Tape changer support is done using the Linux 'mtx' command line
-tool. So any changer device supported by that tool should work.
+Tape changing is carried out using the Linux 'mtx' command line
+tool, so any changer device supported by this tool should work.

 Drive Performance
 ~~~~~~~~~~~~~~~~~

-Current LTO-8 tapes provide read/write speeds up to 360 MB/s. This means,
+Current LTO-8 tapes provide read/write speeds of up to 360 MB/s. This means,
 that it still takes a minimum of 9 hours to completely write or
 read a single tape (even at maximum speed).

 The only way to speed up that data rate is to use more than one
-drive. That way you can run several backup jobs in parallel, or run
+drive. That way, you can run several backup jobs in parallel, or run
 restore jobs while the other dives are used for backups.

-Also consider that you need to read data first from your datastore
-(disk). But a single spinning disk is unable to deliver data at this
+Also consider that you first need to read data from your datastore
+(disk). However, a single spinning disk is unable to deliver data at this
 rate. We measured a maximum rate of about 60MB/s to 100MB/s in practice,
-so it takes 33 hours to read 12TB to fill up an LTO-8 tape. If you want
-to run your tape at full speed, please make sure that the source
+so it takes 33 hours to read the 12TB needed to fill up an LTO-8 tape. If you want
+to write to your tape at full speed, please make sure that the source
 datastore is able to deliver that performance (e.g, by using SSDs).

 Terminology
 -----------

-:Tape Labels: are used to uniquely identify a tape. You normally use
-  some sticky paper labels and apply them on the front of the
-  cartridge. We additionally store the label text magnetically on the
-  tape (first file on tape).
+:Tape Labels: are used to uniquely identify a tape. You would normally apply a
+  sticky paper label to the front of the cartridge. We additionally store the
+  label text magnetically on the tape (first file on tape).

 .. _Code 39: https://en.wikipedia.org/wiki/Code_39
@@ -116,10 +115,10 @@ Terminology
   Specification`_.

   You can either buy such barcode labels from your cartridge vendor,
-  or print them yourself. You can use our `LTO Barcode Generator`_ App
-  for that.
+  or print them yourself. You can use our `LTO Barcode Generator`_
+  app, if you would like to print them yourself.

-.. Note:: Physical labels and the associated adhesive shall have an
+.. Note:: Physical labels and the associated adhesive should have an
    environmental performance to match or exceed the environmental
    specifications of the cartridge to which it is applied.
@@ -133,7 +132,7 @@ Terminology
   media pool).

 :Tape drive: The device used to read and write data to the tape. There
-  are standalone drives, but drives often ship within tape libraries.
+  are standalone drives, but drives are usually shipped within tape libraries.

 :Tape changer: A device which can change the tapes inside a tape drive
   (tape robot). They are usually part of a tape library.
@@ -142,10 +141,10 @@ Terminology
 :`Tape library`_: A storage device that contains one or more tape drives,
   a number of slots to hold tape cartridges, a barcode reader to
-  identify tape cartridges and an automated method for loading tapes
+  identify tape cartridges, and an automated method for loading tapes
   (a robot).

-  This is also commonly known as 'autoloader', 'tape robot' or 'tape jukebox'.
+  This is also commonly known as an 'autoloader', 'tape robot' or 'tape jukebox'.

 :Inventory: The inventory stores the list of known tapes (with
   additional status information).
@@ -153,14 +152,14 @@ Terminology
 :Catalog: A media catalog stores information about the media content.

-Tape Quickstart
+Tape Quick Start
 ---------------

 1. Configure your tape hardware (drives and changers)

 2. Configure one or more media pools

-3. Label your tape cartridges.
+3. Label your tape cartridges

 4. Start your first tape backup job ...
@@ -169,7 +168,7 @@ Configuration
 -------------

 Please note that you can configure anything using the graphical user
-interface or the command line interface. Both methods results in the
+interface or the command line interface. Both methods result in the
 same configuration.

 .. _tape_changer_config:
@@ -180,7 +179,7 @@ Tape changers
 Tape changers (robots) are part of a `Tape Library`_. You can skip
 this step if you are using a standalone drive.

-Linux is able to auto detect those devices, and you can get a list
+Linux is able to auto detect these devices, and you can get a list
 of available devices using:

 .. code-block:: console
@@ -192,7 +191,7 @@ of available devices using:
 │ /dev/tape/by-id/scsi-CC2C52 │ Quantum │ Superloader3 │ CC2C52 │
 └─────────────────────────────┴─────────┴──────────────┴────────┘

-In order to use that device with Proxmox, you need to create a
+In order to use a device with Proxmox Backup Server, you need to create a
 configuration entry:

 .. code-block:: console
@@ -201,11 +200,11 @@ configuration entry:
 Where ``sl3`` is an arbitrary name you can choose.

-.. Note:: Please use stable device path names from inside
+.. Note:: Please use the persistent device path names from inside
    ``/dev/tape/by-id/``. Names like ``/dev/sg0`` may point to a
    different device after reboot, and that is not what you want.

-You can show the final configuration with:
+You can display the final configuration with:

 .. code-block:: console
@@ -255,12 +254,12 @@ Tape libraries usually provide some special import/export slots (also
 called "mail slots"). Tapes inside those slots are accessible from
 outside, making it easy to add/remove tapes to/from the library. Those
 tapes are considered to be "offline", so backup jobs will not use
-them. Those special slots are auto-detected and marked as
+them. Those special slots are auto-detected and marked as an
 ``import-export`` slot in the status command.

 It's worth noting that some of the smaller tape libraries don't have
-such slots. While they have something called "Mail Slot", that slot
-is just a way to grab the tape from the gripper. But they are unable
+such slots. While they have something called a "Mail Slot", that slot
+is just a way to grab the tape from the gripper. They are unable
 to hold media while the robot does other things. They also do not
 expose that "Mail Slot" over the SCSI interface, so you wont see them in
 the status output.
@@ -322,7 +321,7 @@ configuration entry:
 # proxmox-tape drive create mydrive --path /dev/tape/by-id/scsi-12345-nst

-.. Note:: Please use stable device path names from inside
+.. Note:: Please use the persistent device path names from inside
    ``/dev/tape/by-id/``. Names like ``/dev/nst0`` may point to a
    different device after reboot, and that is not what you want.
@@ -334,10 +333,10 @@ changer device:
 # proxmox-tape drive update mydrive --changer sl3 --changer-drivenum 0

 The ``--changer-drivenum`` is only necessary if the tape library
-includes more than one drive (The changer status command lists all
+includes more than one drive (the changer status command lists all
 drive numbers).

-You can show the final configuration with:
+You can display the final configuration with:

 .. code-block:: console
@@ -353,7 +352,7 @@ You can show the final configuration with:
 └─────────┴────────────────────────────────┘

 .. NOTE:: The ``changer-drivenum`` value 0 is not stored in the
-   configuration, because that is the default.
+   configuration, because it is the default.

 To list all configured drives use:
@@ -383,7 +382,7 @@ For testing, you can simply query the drive status with:
 └───────────┴────────────────────────┘

 .. NOTE:: Blocksize should always be 0 (variable block size
-   mode). This is the default anyways.
+   mode). This is the default anyway.

 .. _tape_media_pool_config:
@@ -399,11 +398,11 @@ one media pool, so a job only uses tapes from that pool.
   A media set is a group of continuously written tapes, used to split
   the larger pool into smaller, restorable units. One or more backup
   jobs write to a media set, producing an ordered group of
-  tapes. Media sets are identified by an unique ID. That ID and the
-  sequence number is stored on each tape of that set (tape label).
+  tapes. Media sets are identified by a unique ID. That ID and the
+  sequence number are stored on each tape of that set (tape label).

-  Media sets are the basic unit for restore tasks, i.e. you need all
-  tapes in the set to restore the media set content. Data is fully
+  Media sets are the basic unit for restore tasks. This means that you need
+  every tape in the set to restore the media set contents. Data is fully
   deduplicated inside a media set.
@@ -414,20 +413,20 @@ one media pool, so a job only uses tapes from that pool.
   - Try to use the current media set.

-    This setting produce one large media set. While this is very
+    This setting produces one large media set. While this is very
     space efficient (deduplication, no unused space), it can lead to
-    long restore times, because restore jobs needs to read all tapes in the
+    long restore times, because restore jobs need to read all tapes in the
     set.

-    .. NOTE:: Data is fully deduplicated inside a media set. That
+    .. NOTE:: Data is fully deduplicated inside a media set. This
        also means that data is randomly distributed over the tapes in
-       the set. So even if you restore a single VM, this may have to
-       read data from all tapes inside the media set.
+       the set. Thus, even if you restore a single VM, data may have to be
+       read from all tapes inside the media set.

-    Larger media sets are also more error prone, because a single
-    damaged media makes the restore fail.
+    Larger media sets are also more error-prone, because a single
+    damaged tape makes the restore fail.

-    Usage scenario: Mostly used with tape libraries, and you manually
+    Usage scenario: Mostly used with tape libraries. You manually
     trigger new set creation by running a backup job with the
     ``--export`` option.
@@ -436,13 +435,13 @@ one media pool, so a job only uses tapes from that pool.
   - Always create a new media set.

-    With this setting each backup job creates a new media set. This
-    is less space efficient, because the last media from the last set
+    With this setting, each backup job creates a new media set. This
+    is less space efficient, because the media from the last set
     may not be fully written, leaving the remaining space unused.

     The advantage is that this procudes media sets of minimal
-    size. Small set are easier to handle, you can move sets to an
-    off-site vault, and restore is much faster.
+    size. Small sets are easier to handle, can be moved more conveniently
+    to an off-site vault, and can be restored much faster.

     .. NOTE:: Retention period starts with the creation time of the
        media set.
@@ -468,11 +467,11 @@ one media pool, so a job only uses tapes from that pool.
   - Current set contains damaged or retired tapes.

-  - Media pool encryption changed
+  - Media pool encryption has changed

-  - Database consistency errors, e.g. if the inventory does not
-    contain required media info, or contain conflicting infos
-    (outdated data).
+  - Database consistency errors, for example, if the inventory does not
+    contain the required media information, or it contains conflicting
+    information (outdated data).

 .. topic:: Retention Policy
@@ -489,26 +488,27 @@ one media pool, so a job only uses tapes from that pool.
 .. topic:: Hardware Encryption

-   LTO4 (or later) tape drives support hardware encryption. If you
+   LTO-4 (or later) tape drives support hardware encryption. If you
    configure the media pool to use encryption, all data written to the
    tapes is encrypted using the configured key.

-   That way, unauthorized users cannot read data from the media,
-   e.g. if you loose a media while shipping to an offsite location.
+   This way, unauthorized users cannot read data from the media,
+   for example, if you loose a tape while shipping to an offsite location.

-   .. Note:: If the backup client also encrypts data, data on tape
+   .. Note:: If the backup client also encrypts data, data on the tape
       will be double encrypted.

-   The password protected key is stored on each media, so it is
-   possbible to `restore the key <tape_restore_encryption_key_>`_ using the password. Please make sure
-   you remember the password in case you need to restore the key.
+   The password protected key is stored on each medium, so that it is
+   possbible to `restore the key <tape_restore_encryption_key_>`_ using
+   the password. Please make sure to remember the password, in case
+   you need to restore the key.

-   .. NOTE:: We use global content namespace, i.e. we do not store the
-      source datastore, so it is impossible to distinguish store1:/vm/100
-      from store2:/vm/100. Please use different media pools if the
-      sources are from different name spaces with conflicting names
-      (E.g. if the sources are from different Proxmox VE clusters).
+   .. NOTE:: We use global content namespace, meaning we do not store the
+      source datastore name. Because of this, it is impossible to distinguish
+      store1:/vm/100 from store2:/vm/100. Please use different media pools
+      if the sources are from different namespaces with conflicting names
+      (for example, if the sources are from different Proxmox VE clusters).

 The following command creates a new media pool:
@@ -520,7 +520,7 @@ The following command creates a new media pool:
 # proxmox-tape pool create daily --drive mydrive

-Additional option can be set later using the update command:
+Additional option can be set later, using the update command:

 .. code-block:: console
@@ -544,8 +544,8 @@ Tape Backup Jobs
 ~~~~~~~~~~~~~~~~

 To automate tape backup, you can configure tape backup jobs which
-store datastore content to a media pool at a specific time
-schedule. Required settings are:
+write datastore content to a media pool, based on a specific time schedule.
+The required settings are:

 - ``store``: The datastore you want to backup
@@ -564,14 +564,14 @@ use:
 # proxmox-tape backup-job create job2 --store vmstore1 \
     --pool yourpool --drive yourdrive --schedule daily

-Backup includes all snapshot from a backup group by default. You can
+The backup includes all snapshots from a backup group by default. You can
 set the ``latest-only`` flag to include only the latest snapshots:

 .. code-block:: console

 # proxmox-tape backup-job update job2 --latest-only

-Backup jobs can use email to send tape requests notifications or
+Backup jobs can use email to send tape request notifications or
 report errors. You can set the notification user with:

 .. code-block:: console
@@ -581,7 +581,7 @@ report errors. You can set the notification user with:
 .. Note:: The email address is a property of the user (see :ref:`user_mgmt`).

 It is sometimes useful to eject the tape from the drive after a
-backup. For a standalone drive, the ``eject-media`` option eject the
+backup. For a standalone drive, the ``eject-media`` option ejects the
 tape, making sure that the following backup cannot use the tape
 (unless someone manually loads the tape again). For tape libraries,
 this option unloads the tape to a free slot, which provides better
@ -591,11 +591,11 @@ dust protection than inside a drive:
# proxmox-tape backup-job update job2 --eject-media # proxmox-tape backup-job update job2 --eject-media
.. Note:: For failed jobs, the tape remain in the drive. .. Note:: For failed jobs, the tape remains in the drive.
For tape libraries, the ``export-media`` options moves all tapes from For tape libraries, the ``export-media`` option moves all tapes from
the media set to an export slot, making sure that the following backup the media set to an export slot, making sure that the following backup
cannot use the tapes. An operator can pickup those tapes and move them cannot use the tapes. An operator can pick up those tapes and move them
to a vault. to a vault.
.. code-block:: console .. code-block:: console
@ -622,9 +622,9 @@ To remove a job, please use:
Administration Administration
-------------- --------------
Many sub-command of the ``proxmox-tape`` command line tools take a Many sub-commands of the ``proxmox-tape`` command line tools take a
parameter called ``--drive``, which specifies the tape drive you want parameter called ``--drive``, which specifies the tape drive you want
to work on. For convenience, you can set that in an environment to work on. For convenience, you can set this in an environment
variable: variable:
.. code-block:: console .. code-block:: console
@ -639,27 +639,27 @@ parameter from commands that needs a changer device, for example:
# proxmox-tape changer status # proxmox-tape changer status
Should displays the changer status of the changer device associated with should display the changer status of the changer device associated with
drive ``mydrive``. drive ``mydrive``.
Label Tapes Label Tapes
~~~~~~~~~~~ ~~~~~~~~~~~
By default, tape cartidges all looks the same, so you need to put a By default, tape cartridges all look the same, so you need to put a
label on them for unique identification. So first, put a sticky paper label on them for unique identification. First, put a sticky paper
label with some human readable text on the cartridge. label with some human readable text on the cartridge.
If you use a `Tape Library`_, you should use an 8 character string If you use a `Tape Library`_, you should use an 8 character string
encoded as `Code 39`_, as definded in the `LTO Ultrium Cartridge Label encoded as `Code 39`_, as defined in the `LTO Ultrium Cartridge Label
Specification`_. You can either bye such barcode labels from your Specification`_. You can either buy such barcode labels from your
cartidge vendor, or print them yourself. You can use our `LTO Barcode cartridge vendor, or print them yourself. You can use our `LTO Barcode
Generator`_ App for that. Generator`_ app to print them.
Next, you need to write that same label text to the tape, so that the Next, you need to write that same label text to the tape, so that the
software can uniquely identify the tape too. software can uniquely identify the tape too.
For a standalone drive, manually insert the new tape cartidge into the For a standalone drive, manually insert the new tape cartridge into the
drive and run: drive and run:
.. code-block:: console .. code-block:: console
@ -668,7 +668,7 @@ drive and run:
You may omit the ``--pool`` argument to allow the tape to be used by any pool. You may omit the ``--pool`` argument to allow the tape to be used by any pool.
.. Note:: For safety reasons, this command fails if the tape contain .. Note:: For safety reasons, this command fails if the tape contains
any data. If you want to overwrite it anyway, erase the tape first. any data. If you want to overwrite it anyway, erase the tape first.
You can verify success by reading back the label: You can verify success by reading back the label:
@ -718,7 +718,7 @@ The following options are available:
--eject-media Eject media upon job completion. --eject-media Eject media upon job completion.
It is normally good practice to eject the tape after use. This unmounts the It is normally good practice to eject the tape after use. This unmounts the
tape from the drive and prevents the tape from getting dirty with dust. tape from the drive and prevents the tape from getting dusty.
--export-media-set Export media set upon job completion. --export-media-set Export media set upon job completion.
@ -737,7 +737,7 @@ catalogs, you need to restore them first. Please note that you need
the catalog to find your data, but restoring a complete media-set does the catalog to find your data, but restoring a complete media-set does
not need media catalogs. not need media catalogs.
The following command shows the media content (from catalog): The following command lists the media content (from catalog):
.. code-block:: console .. code-block:: console
@ -841,7 +841,7 @@ database. Further restore jobs automatically use any available key.
Tape Cleaning Tape Cleaning
~~~~~~~~~~~~~ ~~~~~~~~~~~~~
LTO tape drives requires regular cleaning. This is done by loading a LTO tape drives require regular cleaning. This is done by loading a
cleaning cartridge into the drive, which is a manual task for cleaning cartridge into the drive, which is a manual task for
standalone drives. standalone drives.
@ -876,7 +876,7 @@ This command does the following:
- find the cleaning tape (in slot 3) - find the cleaning tape (in slot 3)
- unload the current media from the drive (back to slot1) - unload the current media from the drive (back to slot 1)
- load the cleaning tape into the drive - load the cleaning tape into the drive

View File

@@ -181,7 +181,7 @@ fn get_tfa_entry(userid: Userid, id: String) -> Result<TypedTfaInfo, Error> {
     if let Some(user_data) = crate::config::tfa::read()?.users.remove(&userid) {
         match {
-            // scope to prevent the temprary iter from borrowing across the whole match
+            // scope to prevent the temporary iter from borrowing across the whole match
             let entry = tfa_id_iter(&user_data).find(|(_ty, _index, entry_id)| id == *entry_id);
             entry.map(|(ty, index, _)| (ty, index))
         } {
@@ -259,7 +259,7 @@ fn delete_tfa(
         .ok_or_else(|| http_err!(NOT_FOUND, "no such entry: {}/{}", userid, id))?;

     match {
-        // scope to prevent the temprary iter from borrowing across the whole match
+        // scope to prevent the temporary iter from borrowing across the whole match
         let entry = tfa_id_iter(&user_data).find(|(_, _, entry_id)| id == *entry_id);
         entry.map(|(ty, index, _)| (ty, index))
     } {
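The `match { ... } { ... }` shape in this hunk is a borrow-scoping trick: the block expression ends the temporary iterator's borrow before the match arms run, so the arms may mutate the collection again. A minimal self-contained sketch of the same pattern (the names here are illustrative, not the PBS API):

```rust
// Sketch of the borrow-scoping pattern: the block expression ends the
// immutable borrow of `entries` before the match arms execute, so the
// arms are free to take a mutable borrow.
fn find_index(entries: &mut Vec<(&'static str, u32)>, wanted: &str) -> Option<u32> {
    match {
        // the iterator borrows `entries`; the block scopes that borrow
        let found = entries.iter().find(|(name, _)| *name == wanted);
        found.map(|(_, idx)| *idx)
    } {
        Some(idx) => {
            // safe to mutate `entries` here; the borrow ended with the block
            entries.retain(|(name, _)| *name != wanted);
            Some(idx)
        }
        None => None,
    }
}

fn main() {
    let mut entries = vec![("totp", 0), ("webauthn", 1)];
    assert_eq!(find_index(&mut entries, "webauthn"), Some(1));
    assert_eq!(entries, vec![("totp", 0)]);
    println!("ok");
}
```

Without the block, the scrutinee's temporary would live until the end of the whole `match`, keeping the borrow alive across the arms.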
View File
@@ -1,4 +1,4 @@
-//! Datastore Syncronization Job Management
+//! Datastore Synchronization Job Management

 use anyhow::{bail, format_err, Error};
 use serde_json::Value;

View File
@@ -119,7 +119,7 @@ pub fn change_passphrase(
     let kdf = kdf.unwrap_or_default();

     if let Kdf::None = kdf {
-        bail!("Please specify a key derivation funktion (none is not allowed here).");
+        bail!("Please specify a key derivation function (none is not allowed here).");
     }

     let _lock = open_file_locked(
@@ -187,7 +187,7 @@ pub fn create_key(
     let kdf = kdf.unwrap_or_default();

     if let Kdf::None = kdf {
-        bail!("Please specify a key derivation funktion (none is not allowed here).");
+        bail!("Please specify a key derivation function (none is not allowed here).");
     }

     let (key, mut key_config) = KeyConfig::new(password.as_bytes(), kdf)?;

View File
@@ -85,7 +85,7 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
     },
     notify: {
         type: bool,
-        description: r#"Send notification mail about new package updates availanle to the
+        description: r#"Send notification mail about new package updates available to the
             email address configured for 'root@pam')."#,
         default: false,
         optional: true,

View File
@@ -16,6 +16,7 @@ use proxmox::{
 use crate::{
     task_log,
+    task_warn,
     config::{
         self,
         cached_user_info::CachedUserInfo,
@@ -42,6 +43,7 @@ use crate::{
         DataStore,
         BackupDir,
         BackupInfo,
+        StoreProgress,
     },
     api2::types::{
         Authid,
@@ -389,32 +391,61 @@ fn backup_worker(
     group_list.sort_unstable();

+    let group_count = group_list.len();
+    task_log!(worker, "found {} groups", group_count);
+
+    let mut progress = StoreProgress::new(group_count as u64);
+
     let latest_only = setup.latest_only.unwrap_or(false);

     if latest_only {
         task_log!(worker, "latest-only: true (only considering latest snapshots)");
     }

-    for group in group_list {
+    let mut errors = false;
+
+    for (group_number, group) in group_list.into_iter().enumerate() {
+        progress.done_groups = group_number as u64;
+        progress.done_snapshots = 0;
+        progress.group_snapshots = 0;
+
         let mut snapshot_list = group.list_backups(&datastore.base_path())?;

         BackupInfo::sort_list(&mut snapshot_list, true); // oldest first

         if latest_only {
+            progress.group_snapshots = 1;
             if let Some(info) = snapshot_list.pop() {
                 if pool_writer.contains_snapshot(&info.backup_dir.to_string()) {
+                    task_log!(worker, "skip snapshot {}", info.backup_dir);
                     continue;
                 }
-                task_log!(worker, "backup snapshot {}", info.backup_dir);
-                backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)?;
+                if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
+                    errors = true;
+                }
+                progress.done_snapshots = 1;
+                task_log!(
+                    worker,
+                    "percentage done: {}",
+                    progress
+                );
             }
         } else {
-            for info in snapshot_list {
+            progress.group_snapshots = snapshot_list.len() as u64;
+            for (snapshot_number, info) in snapshot_list.into_iter().enumerate() {
                 if pool_writer.contains_snapshot(&info.backup_dir.to_string()) {
+                    task_log!(worker, "skip snapshot {}", info.backup_dir);
                     continue;
                 }
-                task_log!(worker, "backup snapshot {}", info.backup_dir);
-                backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)?;
+                if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
+                    errors = true;
+                }
+                progress.done_snapshots = snapshot_number as u64 + 1;
+                task_log!(
+                    worker,
+                    "percentage done: {}",
+                    progress
+                );
             }
         }
     }
@@ -427,6 +458,10 @@ fn backup_worker(
         pool_writer.eject_media(worker)?;
     }

+    if errors {
+        bail!("Tape backup finished with some errors. Please check the task log.");
+    }
+
     Ok(())
 }
@@ -460,11 +495,18 @@ pub fn backup_snapshot(
     pool_writer: &mut PoolWriter,
     datastore: Arc<DataStore>,
     snapshot: BackupDir,
-) -> Result<(), Error> {
+) -> Result<bool, Error> {

-    task_log!(worker, "start backup {}:{}", datastore.name(), snapshot);
+    task_log!(worker, "backup snapshot {}", snapshot);

-    let snapshot_reader = SnapshotReader::new(datastore.clone(), snapshot.clone())?;
+    let snapshot_reader = match SnapshotReader::new(datastore.clone(), snapshot.clone()) {
+        Ok(reader) => reader,
+        Err(err) => {
+            // ignore missing snapshots and continue
+            task_warn!(worker, "failed opening snapshot '{}': {}", snapshot, err);
+            return Ok(false);
+        }
+    };

     let mut chunk_iter = snapshot_reader.chunk_iterator()?.peekable();
@@ -511,5 +553,5 @@ pub fn backup_snapshot(
     task_log!(worker, "end backup {}:{}", datastore.name(), snapshot);

-    Ok(())
+    Ok(true)
 }
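The hunk above threads a `StoreProgress` through both loops so the task log can report completion after every snapshot. The real `StoreProgress` lives in the PBS source; the stand-in below only models the percentage arithmetic this usage relies on (finished groups plus the finished fraction of the current group, over the total group count) and is an assumption about its behavior, not the actual implementation:

```rust
// Simplified stand-in for a progress tracker over groups of snapshots:
// overall completion = (finished groups + fraction of current group) / total groups.
struct StoreProgress {
    total_groups: u64,
    done_groups: u64,
    group_snapshots: u64,
    done_snapshots: u64,
}

impl StoreProgress {
    fn new(total_groups: u64) -> Self {
        StoreProgress { total_groups, done_groups: 0, group_snapshots: 0, done_snapshots: 0 }
    }

    fn percentage(&self) -> f64 {
        if self.total_groups == 0 {
            return 100.0;
        }
        // fraction of the group currently being written
        let current = if self.group_snapshots == 0 {
            0.0
        } else {
            self.done_snapshots as f64 / self.group_snapshots as f64
        };
        (self.done_groups as f64 + current) / self.total_groups as f64 * 100.0
    }
}

fn main() {
    let mut progress = StoreProgress::new(4);
    progress.done_groups = 1;     // one group fully done
    progress.group_snapshots = 2; // current group has two snapshots
    progress.done_snapshots = 1;  // one of them written
    assert_eq!(progress.percentage(), 37.5); // (1 + 0.5) / 4
    println!("{:.1}%", progress.percentage());
}
```

Resetting `done_snapshots` and `group_snapshots` at the top of each group iteration, as the diff does, keeps this fraction from carrying over between groups.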
View File
@@ -220,7 +220,7 @@ pub async fn load_slot(drive: String, source_slot: u64) -> Result<(), Error> {
         },
     },
     returns: {
-        description: "The import-export slot number the media was transfered to.",
+        description: "The import-export slot number the media was transferred to.",
         type: u64,
         minimum: 1,
     },
@@ -782,7 +782,7 @@ pub fn clean_drive(
         }
     }

-    worker.log("Drive cleaned sucessfully");
+    worker.log("Drive cleaned successfully");

     Ok(())
 },
@@ -943,7 +943,7 @@ pub fn update_inventory(
             }
             Ok((Some(media_id), _key_config)) => {
                 if label_text != media_id.label.label_text {
-                    worker.warn(format!("label text missmatch ({} != {})", label_text, media_id.label.label_text));
+                    worker.warn(format!("label text mismatch ({} != {})", label_text, media_id.label.label_text));
                     continue;
                 }
                 worker.log(format!("inventorize media '{}' with uuid '{}'", label_text, media_id.label.uuid));
@@ -1012,7 +1012,10 @@ fn barcode_label_media_worker(
 ) -> Result<(), Error> {
     let (mut changer, changer_name) = required_media_changer(drive_config, &drive)?;

-    let label_text_list = changer.online_media_label_texts()?;
+    let mut label_text_list = changer.online_media_label_texts()?;
+    // make sure we label them in the right order
+    label_text_list.sort();

     let state_path = Path::new(TAPE_STATUS_DIR);
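The added `sort()` in the last hunk matters because a changer may report online barcodes in arbitrary order; sorting makes the barcode-labeling pass deterministic. A trivial sketch of the effect (tape label strings are made up for illustration):

```rust
// Sorting the reported barcodes gives a stable, predictable labeling order,
// regardless of the order the changer enumerates its slots in.
fn sorted_labels(mut labels: Vec<String>) -> Vec<String> {
    labels.sort();
    labels
}

fn main() {
    let reported = vec!["TAPE02L5".into(), "CLN001L1".into(), "TAPE01L5".into()];
    let ordered = sorted_labels(reported);
    assert_eq!(ordered, ["CLN001L1", "TAPE01L5", "TAPE02L5"]);
    println!("{:?}", ordered);
}
```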
View File
@@ -497,7 +497,7 @@ pub fn get_media_status(uuid: Uuid) -> Result<MediaStatus, Error> {
 /// Update media status (None, 'full', 'damaged' or 'retired')
 ///
 /// It is not allowed to set status to 'writable' or 'unknown' (those
-/// are internaly managed states).
+/// are internally managed states).
 pub fn update_media_status(uuid: Uuid, status: Option<MediaStatus>) -> Result<(), Error> {

     let status_path = Path::new(TAPE_STATUS_DIR);

View File
@@ -1272,7 +1272,7 @@ pub struct APTUpdateInfo {
 pub enum Notify {
     /// Never send notification
     Never,
-    /// Send notifications for failed and sucessful jobs
+    /// Send notifications for failed and successful jobs
     Always,
     /// Send notifications for failed jobs only
     Error,

View File
@@ -21,7 +21,7 @@ pub struct OptionalDeviceIdentification {
 #[api()]
 #[derive(Debug,Serialize,Deserialize)]
 #[serde(rename_all = "kebab-case")]
-/// Kind of devive
+/// Kind of device
 pub enum DeviceKind {
     /// Tape changer (Autoloader, Robot)
     Changer,

View File
@@ -75,7 +75,7 @@
 //!
 //! Since PBS allows multiple potentially interfering operations at the
 //! same time (e.g. garbage collect, prune, multiple backup creations
-//! (only in seperate groups), forget, ...), these need to lock against
+//! (only in separate groups), forget, ...), these need to lock against
 //! each other in certain scenarios. There is no overarching global lock
 //! though, instead always the finest grained lock possible is used,
 //! because running these operations concurrently is treated as a feature

View File
@@ -452,7 +452,7 @@ impl ChunkStore {
 #[test]
 fn test_chunk_store1() {
-    let mut path = std::fs::canonicalize(".").unwrap(); // we need absulute path
+    let mut path = std::fs::canonicalize(".").unwrap(); // we need absolute path
     path.push(".testdir");

     if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }

View File
@@ -448,7 +448,7 @@ impl DataStore {
             if !self.chunk_store.cond_touch_chunk(digest, false)? {
                 crate::task_warn!(
                     worker,
-                    "warning: unable to access non-existant chunk {}, required by {:?}",
+                    "warning: unable to access non-existent chunk {}, required by {:?}",
                     proxmox::tools::digest_to_hex(digest),
                     file_name,
                 );

View File
@@ -1453,7 +1453,7 @@ fn parse_archive_type(name: &str) -> (String, ArchiveType) {
     type: String,
     description: r###"Target directory path. Use '-' to write to standard output.

-We do not extraxt '.pxar' archives when writing to standard output.
+We do not extract '.pxar' archives when writing to standard output.

 "###
     },

View File
@@ -330,7 +330,7 @@ async fn get_versions(verbose: bool, param: Value) -> Result<Value, Error> {
     let options = default_table_format_options()
         .disable_sort()
-        .noborder(true) // just not helpfull for version info which gets copy pasted often
+        .noborder(true) // just not helpful for version info which gets copy pasted often
         .column(ColumnConfig::new("Package"))
         .column(ColumnConfig::new("Version"))
         .column(ColumnConfig::new("ExtraInfo").header("Extra Info"))

View File
@@ -527,7 +527,7 @@ fn show_master_pubkey(path: Option<String>, param: Value) -> Result<(), Error> {
             optional: true,
         },
         subject: {
-            description: "Include the specified subject as titel text.",
+            description: "Include the specified subject as title text.",
             optional: true,
         },
         "output-format": {

View File
@@ -140,7 +140,7 @@ fn mount(
         return proxmox_backup::tools::runtime::main(mount_do(param, None));
     }

-    // Process should be deamonized.
+    // Process should be daemonized.
     // Make sure to fork before the async runtime is instantiated to avoid troubles.
     let (pr, pw) = proxmox_backup::tools::pipe()?;
     match unsafe { fork() } {

View File
@@ -84,7 +84,7 @@ pub fn encryption_key_commands() -> CommandLineInterface {
             schema: TAPE_ENCRYPTION_KEY_FINGERPRINT_SCHEMA,
         },
         subject: {
-            description: "Include the specified subject as titel text.",
+            description: "Include the specified subject as title text.",
             optional: true,
         },
         "output-format": {
@@ -128,7 +128,7 @@ fn paper_key(
         },
     },
 )]
-/// Print tthe encryption key's metadata.
+/// Print the encryption key's metadata.
 fn show_key(
     param: Value,
     rpcenv: &mut dyn RpcEnvironment,

View File
@@ -1,6 +1,6 @@
 /// Tape command implemented using scsi-generic raw commands
 ///
-/// SCSI-generic command needs root priviledges, so this binary need
+/// SCSI-generic command needs root privileges, so this binary need
 /// to be setuid root.
 ///
 /// This command can use STDIN as tape device handle.

View File
@@ -16,11 +16,11 @@ pub const PROXMOX_BACKUP_RUN_DIR: &str = PROXMOX_BACKUP_RUN_DIR_M!();
 /// namespaced directory for persistent logging
 pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();

-/// logfile for all API reuests handled by the proxy and privileged API daemons. Note that not all
+/// logfile for all API requests handled by the proxy and privileged API daemons. Note that not all
 /// failed logins can be logged here with full information, use the auth log for that.
 pub const API_ACCESS_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/access.log");

-/// logfile for any failed authentication, via ticket or via token, and new successfull ticket
+/// logfile for any failed authentication, via ticket or via token, and new successful ticket
 /// creations. This file can be useful for fail2ban.
 pub const API_AUTH_LOG_FN: &str = concat!(PROXMOX_BACKUP_LOG_DIR_M!(), "/api/auth.log");

View File
@@ -509,7 +509,7 @@ impl BackupWriter {
     }

     // We have no `self` here for `h2` and `verbose`, the only other arg "common" with 1 other
-    // funciton in the same path is `wid`, so those 3 could be in a struct, but there's no real use
+    // function in the same path is `wid`, so those 3 could be in a struct, but there's no real use
     // since this is a private method.
     #[allow(clippy::too_many_arguments)]
     fn upload_chunk_info_stream(

View File
@@ -86,7 +86,7 @@ impl tower_service::Service<Uri> for VsockConnector {
                 Ok(connection)
             })
-            // unravel the thread JoinHandle to a useable future
+            // unravel the thread JoinHandle to a usable future
             .map(|res| match res {
                 Ok(res) => res,
                 Err(err) => Err(format_err!("thread join error on vsock connect: {}", err)),

View File
@@ -82,7 +82,7 @@ pub fn check_netmask(mask: u8, is_v6: bool) -> Result<(), Error> {
     Ok(())
 }

-// parse ip address with otional cidr mask
+// parse ip address with optional cidr mask
 pub fn parse_address_or_cidr(cidr: &str) -> Result<(String, Option<u8>, bool), Error> {
     lazy_static! {

View File
@@ -4,10 +4,10 @@
 //! indexed by key fingerprint.
 //!
 //! We store the plain key (unencrypted), as well as a encrypted
-//! version protected by passowrd (see struct `KeyConfig`)
+//! version protected by password (see struct `KeyConfig`)
 //!
 //! Tape backups store the password protected version on tape, so that
-//! it is possible to retore the key from tape if you know the
+//! it is possible to restore the key from tape if you know the
 //! password.

 use std::collections::HashMap;

View File
@@ -590,7 +590,7 @@ impl TfaUserChallengeData {
     }

     /// Save the current data. Note that we do not replace the file here since we lock the file
-    /// itself, as it is in `/run`, and the typicall error case for this particular situation
+    /// itself, as it is in `/run`, and the typical error case for this particular situation
     /// (machine loses power) simply prevents some login, but that'll probably fail anyway for
     /// other reasons then...
     ///

View File
@@ -43,7 +43,7 @@ Deduplication Factor: {{deduplication-factor}}

 Garbage collection successful.

-Please visit the web interface for futher details:
+Please visit the web interface for further details:

 <https://{{fqdn}}:{{port}}/#DataStore-{{datastore}}>

@@ -57,7 +57,7 @@ Datastore: {{datastore}}

 Garbage collection failed: {{error}}

-Please visit the web interface for futher details:
+Please visit the web interface for further details:

 <https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>

@@ -71,7 +71,7 @@ Datastore: {{job.store}}

 Verification successful.

-Please visit the web interface for futher details:
+Please visit the web interface for further details:

 <https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>

@@ -89,7 +89,7 @@ Verification failed on these snapshots/groups:

 {{/each}}

-Please visit the web interface for futher details:
+Please visit the web interface for further details:

 <https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>

@@ -105,7 +105,7 @@ Remote Store: {{job.remote-store}}

 Synchronization successful.

-Please visit the web interface for futher details:
+Please visit the web interface for further details:

 <https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>

@@ -121,7 +121,7 @@ Remote Store: {{job.remote-store}}

 Synchronization failed: {{error}}

-Please visit the web interface for futher details:
+Please visit the web interface for further details:

 <https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>

@@ -152,7 +152,7 @@ Tape Drive: {{job.drive}}

 Tape Backup successful.

-Please visit the web interface for futher details:
+Please visit the web interface for further details:

 <https://{{fqdn}}:{{port}}/#DataStore-{{job.store}}>

@@ -171,7 +171,7 @@ Tape Drive: {{job.drive}}

 Tape Backup failed: {{error}}

-Please visit the web interface for futher details:
+Please visit the web interface for further details:

 <https://{{fqdn}}:{{port}}/#pbsServerAdministration:tasks>

@@ -448,6 +448,30 @@ pub fn send_tape_backup_status(
     Ok(())
 }

+/// Send email to a person to request a manual media change
+pub fn send_load_media_email(
+    drive: &str,
+    label_text: &str,
+    to: &str,
+    reason: Option<String>,
+) -> Result<(), Error> {
+
+    let subject = format!("Load Media '{}' request for drive '{}'", label_text, drive);
+
+    let mut text = String::new();
+
+    if let Some(reason) = reason {
+        text.push_str(&format!("The drive has the wrong or no tape inserted. Error:\n{}\n\n", reason));
+    }
+
+    text.push_str("Please insert the requested media into the backup drive.\n\n");
+
+    text.push_str(&format!("Drive: {}\n", drive));
+    text.push_str(&format!("Media: {}\n", label_text));
+
+    send_job_status_mail(to, &subject, &text)
+}
+
 fn get_server_url() -> (String, usize) {

     // user will surely request that they can change this

View File
@@ -207,6 +207,8 @@ pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> {
     let mut iter = last_line.splitn(2, ": ");

     if let Some(time_str) = iter.next() {
         if let Ok(endtime) = proxmox::tools::time::parse_rfc3339(time_str) {
+            // set the endtime even if we cannot parse the state
+            status = TaskState::Unknown { endtime };
             if let Some(rest) = iter.next().and_then(|rest| rest.strip_prefix("TASK ")) {
                 if let Ok(state) = TaskState::from_endtime_and_message(endtime, rest) {
                     status = state;
@@ -749,7 +751,7 @@ impl WorkerTask {
             match data.abort_listeners.pop() {
                 None => { break; },
                 Some(ch) => {
-                    let _ = ch.send(()); // ignore erros here
+                    let _ = ch.send(()); // ignore errors here
                 },
             }
         }
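The two added lines in the first hunk make the parser fall back to an `Unknown` state that still carries the last log line's timestamp, so an aborted task shows a plausible end time instead of its start time. A reduced, self-contained sketch of that parsing flow (this `TaskState` and the fake time parser are hypothetical stand-ins for the PBS types):

```rust
// Reduced model of the status-line parsing: split "<timestamp>: TASK <state>",
// and keep the timestamp as the end time even when the state part is missing.
#[derive(Debug, PartialEq)]
enum TaskState {
    Ok { endtime: i64 },
    Unknown { endtime: i64 },
}

fn parse_status(last_line: &str, parse_time: impl Fn(&str) -> Option<i64>) -> TaskState {
    let mut iter = last_line.splitn(2, ": ");
    if let Some(time_str) = iter.next() {
        if let Some(endtime) = parse_time(time_str) {
            // set the endtime even if we cannot parse the state
            let mut status = TaskState::Unknown { endtime };
            if let Some(rest) = iter.next().and_then(|r| r.strip_prefix("TASK ")) {
                if rest == "OK" {
                    status = TaskState::Ok { endtime };
                }
            }
            return status;
        }
    }
    TaskState::Unknown { endtime: 0 }
}

fn main() {
    // fake time parser: treat the string as a plain unix timestamp
    let parse = |s: &str| s.parse::<i64>().ok();
    assert_eq!(parse_status("100: TASK OK", parse), TaskState::Ok { endtime: 100 });
    // aborted task: the last log line has a timestamp but no "TASK ..." marker
    assert_eq!(parse_status("100: some log message", parse), TaskState::Unknown { endtime: 100 });
    println!("ok");
}
```

The key detail mirrored from the diff is that the `Unknown` assignment happens as soon as the timestamp parses, before any attempt to parse the state marker.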
View File
@@ -1,36 +0,0 @@
-use anyhow::Error;
-
-use proxmox::tools::email::sendmail;
-
-/// Send email to a person to request a manual media change
-pub fn send_load_media_email(
-    drive: &str,
-    label_text: &str,
-    to: &str,
-    reason: Option<String>,
-) -> Result<(), Error> {
-
-    let subject = format!("Load Media '{}' request for drive '{}'", label_text, drive);
-
-    let mut text = String::new();
-
-    if let Some(reason) = reason {
-        text.push_str(&format!("The drive has the wrong or no tape inserted. Error:\n{}\n\n", reason));
-    }
-
-    text.push_str("Please insert the requested media into the backup drive.\n\n");
-
-    text.push_str(&format!("Drive: {}\n", drive));
-    text.push_str(&format!("Media: {}\n", label_text));
-
-    sendmail(
-        &[to],
-        &subject,
-        Some(&text),
-        None,
-        None,
-        None,
-    )?;
-
-    Ok(())
-}

View File
@@ -1,8 +1,5 @@
 //! Media changer implementation (SCSI media changer)

-mod email;
-pub use email::*;
-
 pub mod sg_pt_changer;

 pub mod mtx;
@@ -35,7 +32,7 @@ use crate::api2::types::{
 /// Changer element status.
 ///
 /// Drive and slots may be `Empty`, or contain some media, either
-/// with knwon volume tag `VolumeTag(String)`, or without (`Full`).
+/// with known volume tag `VolumeTag(String)`, or without (`Full`).
 #[derive(Serialize, Deserialize, Debug)]
 pub enum ElementStatus {
     Empty,
@@ -87,7 +84,7 @@ pub struct MtxStatus {
     pub drives: Vec<DriveStatus>,
     /// List of known storage slots
     pub slots: Vec<StorageElementStatus>,
-    /// Tranport elements
+    /// Transport elements
     ///
     /// Note: Some libraries do not report transport elements.
     pub transports: Vec<TransportElementStatus>,
@@ -261,7 +258,7 @@ pub trait MediaChange {
     /// List online media labels (label_text/barcodes)
     ///
-    /// List acessible (online) label texts. This does not include
+    /// List accessible (online) label texts. This does not include
     /// media inside import-export slots or cleaning media.
     fn online_media_label_texts(&mut self) -> Result<Vec<String>, Error> {
         let status = self.status()?;
@@ -378,7 +375,7 @@ pub trait MediaChange {
     /// Unload media to a free storage slot
     ///
-    /// If posible to the slot it was previously loaded from.
+    /// If possible to the slot it was previously loaded from.
     ///
     /// Note: This method consumes status - so please use returned status afterward.
     fn unload_to_free_slot(&mut self, status: MtxStatus) -> Result<MtxStatus, Error> {

View File
@@ -1,4 +1,4 @@
-//! Wrapper around expernal `mtx` command line tool
+//! Wrapper around external `mtx` command line tool
 mod parse_mtx_status;
 pub use parse_mtx_status::*;

View File

@@ -246,7 +246,7 @@ pub fn unload(
     Ok(())
 }
-/// Tranfer medium from one storage slot to another
+/// Transfer medium from one storage slot to another
 pub fn transfer_medium<F: AsRawFd>(
     file: &mut F,
     from_slot: u64,
@@ -362,7 +362,7 @@ pub fn read_element_status<F: AsRawFd>(file: &mut F) -> Result<MtxStatus, Error>
         bail!("got wrong number of import/export elements");
     }
     if (setup.transfer_element_count as usize) != drives.len() {
-        bail!("got wrong number of tranfer elements");
+        bail!("got wrong number of transfer elements");
     }
     // create same virtual slot order as mtx(1)
@@ -428,7 +428,7 @@ struct SubHeader {
     element_type_code: u8,
     flags: u8,
     descriptor_length: u16,
-    reseved: u8,
+    reserved: u8,
     byte_count_of_descriptor_data_available: [u8;3],
 }

View File

@@ -196,7 +196,7 @@ struct SspDataEncryptionCapabilityPage {
     page_code: u16,
     page_len: u16,
     extdecc_cfgp_byte: u8,
-    reserverd: [u8; 15],
+    reserved: [u8; 15],
 }
 #[derive(Endian)]
@@ -241,13 +241,13 @@ fn decode_spin_data_encryption_caps(data: &[u8]) -> Result<u8, Error> {
         let desc: SspDataEncryptionAlgorithmDescriptor =
             unsafe { reader.read_be_value()? };
         if desc.descriptor_len != 0x14 {
-            bail!("got wrong key descriptior len");
+            bail!("got wrong key descriptor len");
         }
         if (desc.control_byte_4 & 0b00000011) != 2 {
-            continue; // cant encrypt in hardware
+            continue; // can't encrypt in hardware
         }
         if ((desc.control_byte_4 & 0b00001100) >> 2) != 2 {
-            continue; // cant decrypt in hardware
+            continue; // can't decrypt in hardware
         }
         if desc.algorithm_code == 0x00010014 && desc.key_size == 32 {
             aes_cgm_index = Some(desc.algorythm_index);
@@ -276,7 +276,7 @@ struct SspDataEncryptionStatusPage {
     control_byte: u8,
     key_format: u8,
     key_len: u16,
-    reserverd: [u8; 8],
+    reserved: [u8; 8],
 }
 fn decode_spin_data_encryption_status(data: &[u8]) -> Result<DataEncryptionStatus, Error> {

View File

@@ -72,14 +72,14 @@ static MAM_ATTRIBUTES: &[ (u16, u16, MamFormat, &str) ] = &[
     (0x08_02, 8, MamFormat::ASCII, "Application Version"),
     (0x08_03, 160, MamFormat::ASCII, "User Medium Text Label"),
     (0x08_04, 12, MamFormat::ASCII, "Date And Time Last Written"),
-    (0x08_05, 1, MamFormat::BINARY, "Text Localization Identifer"),
+    (0x08_05, 1, MamFormat::BINARY, "Text Localization Identifier"),
     (0x08_06, 32, MamFormat::ASCII, "Barcode"),
     (0x08_07, 80, MamFormat::ASCII, "Owning Host Textual Name"),
     (0x08_08, 160, MamFormat::ASCII, "Media Pool"),
     (0x08_0B, 16, MamFormat::ASCII, "Application Format Version"),
     (0x08_0C, 50, MamFormat::ASCII, "Volume Coherency Information"),
-    (0x08_20, 36, MamFormat::ASCII, "Medium Globally Unique Identifer"),
+    (0x08_20, 36, MamFormat::ASCII, "Medium Globally Unique Identifier"),
-    (0x08_21, 36, MamFormat::ASCII, "Media Pool Globally Unique Identifer"),
+    (0x08_21, 36, MamFormat::ASCII, "Media Pool Globally Unique Identifier"),
     (0x10_00, 28, MamFormat::BINARY, "Unique Cartridge Identify (UCI)"),
     (0x10_01, 24, MamFormat::BINARY, "Alternate Unique Cartridge Identify (Alt-UCI)"),
@@ -101,12 +101,13 @@ lazy_static::lazy_static!{
 fn read_tape_mam<F: AsRawFd>(file: &mut F) -> Result<Vec<u8>, Error> {
-    let mut sg_raw = SgRaw::new(file, 32*1024)?;
+    let alloc_len: u32 = 32*1024;
+    let mut sg_raw = SgRaw::new(file, alloc_len as usize)?;
     let mut cmd = Vec::new();
     cmd.extend(&[0x8c, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8]);
     cmd.extend(&[0u8, 0u8]); // first attribute
-    cmd.extend(&[0u8, 0u8, 0x8f, 0xff]); // alloc len
+    cmd.extend(&alloc_len.to_be_bytes()); // alloc len
     cmd.extend(&[0u8, 0u8]);
     sg_raw.do_command(&cmd)
@@ -114,7 +115,7 @@ fn read_tape_mam<F: AsRawFd>(file: &mut F) -> Result<Vec<u8>, Error> {
         .map(|v| v.to_vec())
 }
 /// Read Medium auxiliary memory attributes (cartridge memory) using raw SCSI command.
 pub fn read_mam_attributes<F: AsRawFd>(file: &mut F) -> Result<Vec<MamAttribute>, Error> {
     let data = read_tape_mam(file)?;
@@ -130,8 +131,12 @@ fn decode_mam_attributes(data: &[u8]) -> Result<Vec<MamAttribute>, Error> {
     let expected_len = data_len as usize;
-    if reader.len() != expected_len {
+    if reader.len() < expected_len {
         bail!("read_mam_attributes: got unexpected data len ({} != {})", reader.len(), expected_len);
+    } else if reader.len() > expected_len {
+        // Note: Quantum hh7 returns the allocation_length instead of real data_len
+        reader = &data[4..expected_len+4];
     }
     let mut list = Vec::new();
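The two changes in this hunk — encoding the allocation length with `to_be_bytes()` instead of hard-coded bytes, and tolerating drives that report the allocation length rather than the real data length — can be sketched in isolation. A minimal sketch: the CDB layout and the lenient length check follow the diff, but the helper names (`build_read_attribute_cdb`, `payload`) are hypothetical, not PBS APIs:

```rust
// Hypothetical sketch of the patched length handling (helper names invented;
// only the CDB layout and the lenient check follow the diff above).

// Build a READ ATTRIBUTE CDB with the allocation length encoded big-endian,
// as in the patched read_tape_mam().
fn build_read_attribute_cdb(alloc_len: u32) -> Vec<u8> {
    let mut cmd = Vec::new();
    cmd.extend(&[0x8c, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8]);
    cmd.extend(&[0u8, 0u8]); // first attribute
    cmd.extend(&alloc_len.to_be_bytes()); // alloc len, big-endian (bytes 10..14)
    cmd.extend(&[0u8, 0u8]);
    cmd
}

// Trim the response to the payload length announced in its 4-byte header.
// Short data is an error, but extra trailing data is tolerated, because some
// drives (the Quantum hh7 noted in the diff) return the allocation length
// instead of the real data length.
fn payload(data: &[u8]) -> Result<&[u8], String> {
    let expected_len = u32::from_be_bytes([data[0], data[1], data[2], data[3]]) as usize;
    let reader = &data[4..];
    if reader.len() < expected_len {
        return Err(format!("got unexpected data len ({} < {})", reader.len(), expected_len));
    }
    Ok(&reader[..expected_len])
}

fn main() {
    let cdb = build_read_attribute_cdb(32 * 1024);
    assert_eq!(&cdb[10..14], &[0x00u8, 0x00, 0x80, 0x00][..]); // 32768 as BE bytes

    // 3-byte payload announced, one extra trailing byte returned by the drive:
    let data = [0u8, 0, 0, 3, 0xaa, 0xbb, 0xcc, 0xff];
    assert_eq!(payload(&data).unwrap(), &[0xaau8, 0xbb, 0xcc][..]);
}
```

Deriving both the buffer size and the CDB field from one `alloc_len` binding keeps the two values from drifting apart, which is what made the old hard-coded `0x8f, 0xff` fragile.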

View File

@@ -51,7 +51,10 @@ use crate::{
         VirtualTapeDrive,
         LinuxTapeDrive,
     },
-    server::WorkerTask,
+    server::{
+        send_load_media_email,
+        WorkerTask,
+    },
     tape::{
         TapeWrite,
         TapeRead,
@@ -66,7 +69,6 @@ use crate::{
         changer::{
             MediaChange,
             MtxMediaChanger,
-            send_load_media_email,
         },
     },
 };
@@ -209,7 +211,7 @@ pub trait TapeDriver {
     /// Set or clear encryption key
     ///
     /// We use the media_set_uuid to XOR the secret key with the
-    /// uuid (first 16 bytes), so that each media set uses an uique
+    /// uuid (first 16 bytes), so that each media set uses an unique
     /// key for encryption.
     fn set_encryption(
         &mut self,
@@ -465,7 +467,7 @@ pub fn request_and_load_media(
     }
 }
-/// Aquires an exclusive lock for the tape device
+/// Acquires an exclusive lock for the tape device
 ///
 /// Basically calls lock_device_path() using the configured drive path.
 pub fn lock_tape_device(
@@ -539,7 +541,7 @@ fn tape_device_path(
 pub struct DeviceLockGuard(std::fs::File);
-// Aquires an exclusive lock on `device_path`
+// Acquires an exclusive lock on `device_path`
 //
 // Uses systemd escape_unit to compute a file name from `device_path`, the try
 // to lock `/var/lock/<name>`.
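The doc comment fixed in this file describes the per-media-set key derivation: XOR the first 16 bytes of the secret key with the media-set uuid, so every media set encrypts with a distinct key. A toy illustration of that idea only — the function name is invented and this is not the actual `set_encryption()` implementation:

```rust
// Toy illustration of the key-derivation idea from the doc comment above:
// XOR the first 16 bytes of the secret with the media-set uuid, so each
// media set gets a distinct encryption key. Function name is invented.

fn derive_media_set_key(secret: &[u8; 32], media_set_uuid: &[u8; 16]) -> [u8; 32] {
    let mut key = *secret;
    // zip stops at the shorter iterator, so only the first 16 bytes change
    for (k, u) in key.iter_mut().zip(media_set_uuid.iter()) {
        *k ^= u;
    }
    key
}

fn main() {
    let secret = [0x42u8; 32];
    let key_a = derive_media_set_key(&secret, &[1u8; 16]);
    let key_b = derive_media_set_key(&secret, &[2u8; 16]);
    assert_ne!(key_a, key_b); // different media sets -> different keys
    assert_eq!(&key_a[16..], &secret[16..]); // tail bytes unchanged
}
```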

View File

@@ -429,7 +429,7 @@ impl MediaChange for VirtualTapeHandle {
     }
     fn transfer_media(&mut self, _from: u64, _to: u64) -> Result<MtxStatus, Error> {
-        bail!("media tranfer is not implemented!");
+        bail!("media transfer is not implemented!");
     }
     fn export_media(&mut self, _label_text: &str) -> Result<Option<u64>, Error> {

View File

@@ -27,11 +27,8 @@ pub fn read_volume_statistics<F: AsRawFd>(file: &mut F) -> Result<Lp17VolumeSta
 fn sg_read_volume_statistics<F: AsRawFd>(file: &mut F) -> Result<Vec<u8>, Error> {
-    let buffer_size = 8192;
-    let mut sg_raw = SgRaw::new(file, buffer_size)?;
+    let alloc_len: u16 = 8192;
+    let mut sg_raw = SgRaw::new(file, alloc_len as usize)?;
-    // Note: We cannjot use LP 2Eh TapeAlerts, because that clears flags on read.
-    // Instead, we use LP 12h TapeAlert Response. which does not clear the flags.
     let mut cmd = Vec::new();
     cmd.push(0x4D); // LOG SENSE
@@ -41,7 +38,7 @@ fn sg_read_volume_statistics<F: AsRawFd>(file: &mut F) -> Result<Vec<u8>, Error>
     cmd.push(0);
     cmd.push(0);
     cmd.push(0);
-    cmd.push((buffer_size >> 8) as u8); cmd.push(0); // alloc len
+    cmd.extend(&alloc_len.to_be_bytes()); // alloc len
     cmd.push(0u8); // control byte
     sg_raw.do_command(&cmd)
@@ -145,8 +142,13 @@ fn decode_volume_statistics(data: &[u8]) -> Result<Lp17VolumeStatistics, Error>
     let page_len: u16 = unsafe { reader.read_be_value()? };
-    if (page_len as usize + 4) != data.len() {
+    let page_len = page_len as usize;
+    if (page_len + 4) > data.len() {
         bail!("invalid page length");
+    } else {
+        // Note: Quantum hh7 returns the allocation_length instead of real data_len
+        reader = &data[4..page_len+4];
     }
     let mut stat = Lp17VolumeStatistics::default();

View File

@@ -77,7 +77,7 @@ impl <R: Read> BlockedReader<R> {
         if seq_nr != buffer.seq_nr() {
             proxmox::io_bail!(
-                "detected tape block with wrong seqence number ({} != {})",
+                "detected tape block with wrong sequence number ({} != {})",
                 seq_nr, buffer.seq_nr())
         }

View File

@@ -25,7 +25,7 @@ use crate::tape::{
 ///
 /// A chunk archive consists of a `MediaContentHeader` followed by a
 /// list of chunks entries. Each chunk entry consists of a
-/// `ChunkArchiveEntryHeader` folowed by the chunk data (`DataBlob`).
+/// `ChunkArchiveEntryHeader` followed by the chunk data (`DataBlob`).
 ///
 /// `| MediaContentHeader | ( ChunkArchiveEntryHeader | DataBlob )* |`
 pub struct ChunkArchiveWriter<'a> {
@@ -153,7 +153,7 @@ impl <R: Read> ChunkArchiveDecoder<R> {
         Self { reader }
     }
-    /// Allow access to the underyling reader
+    /// Allow access to the underlying reader
     pub fn reader(&self) -> &R {
         &self.reader
     }

View File

@@ -21,7 +21,7 @@ use crate::tape::{
 ///
 /// This ignores file attributes like ACLs and xattrs.
 ///
-/// Returns `Ok(Some(content_uuid))` on succees, and `Ok(None)` if
+/// Returns `Ok(Some(content_uuid))` on success, and `Ok(None)` if
 /// `LEOM` was detected before all data was written. The stream is
 /// marked inclomplete in that case and does not contain all data (The
 /// backup task must rewrite the whole file on the next media).

View File

@@ -85,7 +85,7 @@ impl SnapshotReader {
         Ok(file)
     }
-    /// Retunrs an iterator for all used chunks.
+    /// Returns an iterator for all used chunks.
     pub fn chunk_iterator(&self) -> Result<SnapshotChunkIterator, Error> {
         SnapshotChunkIterator::new(&self)
     }

View File

@@ -276,7 +276,7 @@ impl Inventory {
                 continue; // belong to another pool
             }
-            if set.uuid.as_ref() == [0u8;16] { // should we do this??
+            if set.uuid.as_ref() == [0u8;16] {
                 list.push(MediaId {
                     label: entry.id.label.clone(),
                     media_set_label: None,
@@ -561,7 +561,7 @@ impl Inventory {
     // Helpers to simplify testing
-    /// Genreate and insert a new free tape (test helper)
+    /// Generate and insert a new free tape (test helper)
     pub fn generate_free_tape(&mut self, label_text: &str, ctime: i64) -> Uuid {
         let label = MediaLabel {
@@ -576,7 +576,7 @@ impl Inventory {
         uuid
     }
-    /// Genreate and insert a new tape assigned to a specific pool
+    /// Generate and insert a new tape assigned to a specific pool
     /// (test helper)
     pub fn generate_assigned_tape(
         &mut self,
@@ -600,7 +600,7 @@ impl Inventory {
         uuid
     }
-    /// Genreate and insert a used tape (test helper)
+    /// Generate and insert a used tape (test helper)
     pub fn generate_used_tape(
         &mut self,
         label_text: &str,

View File

@@ -3,11 +3,11 @@
 //! A set of backup medias.
 //!
 //! This struct manages backup media state during backup. The main
-//! purpose is to allocate media sets and assing new tapes to it.
+//! purpose is to allocate media sets and assign new tapes to it.
 //!
 //!
-use std::path::Path;
+use std::path::{PathBuf, Path};
 use anyhow::{bail, Error};
 use ::serde::{Deserialize, Serialize};
@@ -41,6 +41,7 @@ pub struct MediaPoolLockGuard(std::fs::File);
 pub struct MediaPool {
     name: String,
+    state_path: PathBuf,
     media_set_policy: MediaSetPolicy,
     retention: RetentionPolicy,
@@ -82,6 +83,7 @@ impl MediaPool {
         Ok(MediaPool {
             name: String::from(name),
+            state_path: state_path.to_owned(),
             media_set_policy,
             retention,
             changer_name,
@@ -135,7 +137,7 @@ impl MediaPool {
         &self.name
     }
-    /// Retruns encryption settings
+    /// Returns encryption settings
     pub fn encrypt_fingerprint(&self) -> Option<Fingerprint> {
         self.encrypt_fingerprint.clone()
     }
@@ -284,7 +286,7 @@ impl MediaPool {
         Ok(list)
     }
-    // tests if the media data is considered as expired at sepcified time
+    // tests if the media data is considered as expired at specified time
     pub fn media_is_expired(&self, media: &BackupMedia, current_time: i64) -> bool {
         if media.status() != &MediaStatus::Full {
             return false;
@@ -386,7 +388,13 @@ impl MediaPool {
         }
         // sort empty_media, newest first -> oldest last
-        empty_media.sort_unstable_by(|a, b| b.label().ctime.cmp(&a.label().ctime));
+        empty_media.sort_unstable_by(|a, b| {
+            let mut res = b.label().ctime.cmp(&a.label().ctime);
+            if res == std::cmp::Ordering::Equal {
+                res = b.label().label_text.cmp(&a.label().label_text);
+            }
+            res
+        });
         if let Some(media) = empty_media.pop() {
             // found empty media, add to media set an use it
@@ -416,7 +424,11 @@ impl MediaPool {
         // sort expired_media, newest first -> oldest last
         expired_media.sort_unstable_by(|a, b| {
-            b.media_set_label().unwrap().ctime.cmp(&a.media_set_label().unwrap().ctime)
+            let mut res = b.media_set_label().unwrap().ctime.cmp(&a.media_set_label().unwrap().ctime);
+            if res == std::cmp::Ordering::Equal {
+                res = b.label().label_text.cmp(&a.label().label_text);
+            }
+            res
         });
         if let Some(media) = expired_media.pop() {
@@ -429,7 +441,12 @@ impl MediaPool {
         println!("no expired media in pool, try to find unassigned/free media");
         // try unassigned media
-        // fixme: lock free media pool to avoid races
+        // lock artificial "__UNASSIGNED__" pool to avoid races
+        let _lock = MediaPool::lock(&self.state_path, "__UNASSIGNED__")?;
+        self.inventory.reload()?;
         let mut free_media = Vec::new();
         for media_id in self.inventory.list_unassigned_media() {
@@ -447,6 +464,15 @@ impl MediaPool {
             free_media.push(media_id);
         }
+        // sort free_media, newest first -> oldest last
+        free_media.sort_unstable_by(|a, b| {
+            let mut res = b.label.ctime.cmp(&a.label.ctime);
+            if res == std::cmp::Ordering::Equal {
+                res = b.label.label_text.cmp(&a.label.label_text);
+            }
+            res
+        });
         if let Some(media_id) = free_media.pop() {
             println!("use free media '{}'", media_id.label.label_text);
             let uuid = media_id.label.uuid.clone();
@@ -538,7 +564,7 @@ impl MediaPool {
     }
     /// Lock the pool
     pub fn lock(base_path: &Path, name: &str) -> Result<MediaPoolLockGuard, Error> {
         let mut path = base_path.to_owned();
         path.push(format!(".{}", name));
         path.set_extension("lck");
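The three comparators added in this file share one shape: newest ctime first, with the label text as a tie-breaker so allocation order is deterministic, after which `pop()` takes the oldest entry. A standalone sketch with simplified types; the `Label` struct here is invented, standing in for the real `MediaLabel`:

```rust
// Standalone sketch of the tie-breaking comparator added in the diff above.
// `Label` is a stand-in for the real MediaLabel (same two fields).

#[derive(Debug)]
struct Label {
    ctime: i64,
    label_text: String,
}

// Sort newest first -> oldest last, breaking ctime ties on the label text
// so that allocation order is deterministic; pop() then yields the oldest.
fn sort_newest_first(media: &mut Vec<Label>) {
    media.sort_unstable_by(|a, b| {
        let mut res = b.ctime.cmp(&a.ctime);
        if res == std::cmp::Ordering::Equal {
            res = b.label_text.cmp(&a.label_text);
        }
        res
    });
}

fn main() {
    let mut media = vec![
        Label { ctime: 100, label_text: "tape-b".into() },
        Label { ctime: 100, label_text: "tape-a".into() },
        Label { ctime: 200, label_text: "tape-c".into() },
    ];
    sort_newest_first(&mut media);
    assert_eq!(media[0].label_text, "tape-c"); // newest first
    assert_eq!(media.pop().unwrap().label_text, "tape-a"); // oldest popped
}
```

Without the tie-breaker, equal ctimes left the order to `sort_unstable_by`'s whim, so repeated runs could pick different tapes; the label comparison pins it down.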

View File

@@ -48,7 +48,7 @@ impl MediaSet {
         let seq_nr = seq_nr as usize;
         if self.media_list.len() > seq_nr {
             if self.media_list[seq_nr].is_some() {
-                bail!("found duplicate squence number in media set '{}/{}'",
+                bail!("found duplicate sequence number in media set '{}/{}'",
                     self.uuid.to_string(), seq_nr);
             }
         } else {

View File

@@ -271,7 +271,7 @@ impl PoolWriter {
         }
     }
-    /// Move to EOM (if not aleady there), then creates a new snapshot
+    /// Move to EOM (if not already there), then creates a new snapshot
     /// archive writing specified files (as .pxar) into it. On
     /// success, this return 'Ok(true)' and the media catalog gets
     /// updated.
@@ -330,7 +330,7 @@ impl PoolWriter {
         Ok((done, bytes_written))
     }
-    /// Move to EOM (if not aleady there), then creates a new chunk
+    /// Move to EOM (if not already there), then creates a new chunk
     /// archive and writes chunks from 'chunk_iter'. This stops when
     /// it detect LEOM or when we reach max archive size
     /// (4GB). Written chunks are registered in the media catalog.
@@ -374,7 +375,8 @@ impl PoolWriter {
         let elapsed = start_time.elapsed()?.as_secs_f64();
         worker.log(format!(
-            "wrote {:.2} MB ({} MB/s)",
+            "wrote {} chunks ({:.2} MiB at {:.2} MiB/s)",
+            saved_chunks.len(),
             bytes_written as f64 / (1024.0*1024.0),
             (bytes_written as f64)/(1024.0*1024.0*elapsed),
         ));
@@ -398,7 +399,7 @@ impl PoolWriter {
 /// write up to <max_size> of chunks
 fn write_chunk_archive<'a>(
-    worker: &WorkerTask,
+    _worker: &WorkerTask,
     writer: Box<dyn 'a + TapeWrite>,
     datastore: &DataStore,
     chunk_iter: &mut std::iter::Peekable<SnapshotChunkIterator>,
@@ -444,7 +445,7 @@ fn write_chunk_archive<'a>(
         }
         if writer.bytes_written() > max_size {
-            worker.log("Chunk Archive max size reached, closing archive".to_string());
+            //worker.log("Chunk Archive max size reached, closing archive".to_string());
             break;
         }
     }

View File

@@ -67,7 +67,7 @@ pub trait TapeWrite {
 ///
 /// See: https://github.com/torvalds/linux/blob/master/Documentation/scsi/st.rst
 ///
-/// On sucess, this returns if we en countered a EOM condition.
+/// On success, this returns if we en countered a EOM condition.
 pub fn tape_device_write_block<W: Write>(
     writer: &mut W,
     data: &[u8],

View File

@@ -173,7 +173,7 @@ fn test_alloc_writable_media_4() -> Result<(), Error> {
     // next call fail because there is no free media
     assert!(pool.alloc_writable_media(start_time + 5).is_err());
-    // Create new nedia set, so that previous set can expire
+    // Create new media set, so that previous set can expire
     pool.start_write_session(start_time + 10)?;
     assert!(pool.alloc_writable_media(start_time + 10).is_err());

View File

@@ -302,7 +302,7 @@ impl<K, V> LinkedList<K, V> {
         }
     }
-    /// Remove the node referenced by `node_ptr` from the linke list and return it.
+    /// Remove the node referenced by `node_ptr` from the linked list and return it.
     fn remove(&mut self, node_ptr: *mut CacheNode<K, V>) -> Box<CacheNode<K, V>> {
         let node = unsafe { Box::from_raw(node_ptr) };

View File

@@ -138,10 +138,10 @@ impl<I: Send + 'static> ParallelHandler<I> {
             if let Err(panic) = handle.join() {
                 match panic.downcast::<&str>() {
                     Ok(panic_msg) => msg_list.push(
-                        format!("thread {} ({}) paniced: {}", self.name, i, panic_msg)
+                        format!("thread {} ({}) panicked: {}", self.name, i, panic_msg)
                     ),
                     Err(_) => msg_list.push(
-                        format!("thread {} ({}) paniced", self.name, i)
+                        format!("thread {} ({}) panicked", self.name, i)
                     ),
                 }
             }

View File

@@ -4,7 +4,7 @@
 //!
 //! See: `/usr/include/scsi/sg_pt.h`
 //!
-//! The SCSI Commands Reference Manual also contains some usefull information.
+//! The SCSI Commands Reference Manual also contains some useful information.
 use std::os::unix::io::AsRawFd;
 use std::ptr::NonNull;

View File

@@ -210,7 +210,7 @@ fn test_parse_register_response() -> Result<(), Error> {
     Ok(())
 }
-/// querys the up to date subscription status and parses the response
+/// queries the up to date subscription status and parses the response
 pub fn check_subscription(key: String, server_id: String) -> Result<SubscriptionInfo, Error> {
     let now = proxmox::tools::time::epoch_i64();
@@ -299,7 +299,7 @@ pub fn delete_subscription() -> Result<(), Error> {
     Ok(())
 }
-/// updates apt authenification for repo access
+/// updates apt authentication for repo access
 pub fn update_apt_auth(key: Option<String>, password: Option<String>) -> Result<(), Error> {
     let auth_conf = std::path::Path::new(APT_AUTH_FN);
     match (key, password) {

View File

@@ -122,7 +122,7 @@ Ext.define('PBS.view.main.NavigationTree', {
         if (view.tapestore === undefined) {
             view.tapestore = Ext.create('Proxmox.data.UpdateStore', {
                 autoStart: true,
-                interval: 2 * 1000,
+                interval: 60 * 1000,
                 storeid: 'pbs-tape-drive-list',
                 model: 'pbs-tape-drive-list',
             });
@@ -188,11 +188,13 @@ Ext.define('PBS.view.main.NavigationTree', {
                 }
             }
+            let toremove = [];
             list.eachChild((child) => {
                 if (!newSet[child.data.path]) {
-                    list.removeChild(child, true);
+                    toremove.push(child);
                 }
             });
+            toremove.forEach((child) => list.removeChild(child, true));
             if (view.pathToSelect !== undefined) {
                 let path = view.pathToSelect;
@@ -267,6 +269,15 @@ Ext.define('PBS.view.main.NavigationTree', {
         },
     },
+    reloadTapeStore: function() {
+        let me = this;
+        if (!PBS.enableTapeUI) {
+            return;
+        }
+        me.tapestore.load();
+    },
     select: function(path, silent) {
         var me = this;
         if (me.rstore.isLoaded() && (!PBS.enableTapeUI || me.tapestore.isLoaded())) {

View File

@@ -42,7 +42,7 @@ Ext.define('PBS.Datastore.Options', {
     rows: {
         "notify": {
             required: true,
-            header: gettext('Notfiy'),
+            header: gettext('Notify'),
             renderer: (value) => {
                 let notify = PBS.Utils.parsePropertyString(value);
                 let res = [];
@@ -59,7 +59,7 @@ Ext.define('PBS.Datastore.Options', {
         "notify-user": {
             required: true,
             defaultValue: 'root@pam',
-            header: gettext('Notfiy User'),
+            header: gettext('Notify User'),
             editor: {
                 xtype: 'pbsNotifyOptionEdit',
             },

View File

@@ -33,7 +33,7 @@ Ext.define('PBS.form.CalendarEvent', {
     config: {
         deleteEmpty: true,
     },
-    // overide framework function to implement deleteEmpty behaviour
+    // override framework function to implement deleteEmpty behaviour
     getSubmitData: function() {
         let me = this, data = null;
         if (!me.disabled && me.submitValue) {

View File

@@ -11,6 +11,11 @@ Ext.define('PBS.TapeManagement.ChangerPanel', {
     controller: {
         xclass: 'Ext.app.ViewController',
+        reloadTapeStore: function() {
+            let navtree = Ext.ComponentQuery.query('navigationtree')[0];
+            navtree.reloadTapeStore();
+        },
         onAdd: function() {
             let me = this;
             Ext.create('PBS.TapeManagement.ChangerEditWindow', {
@@ -40,6 +45,7 @@ Ext.define('PBS.TapeManagement.ChangerPanel', {
         reload: function() {
             this.getView().getStore().rstore.load();
+            this.reloadTapeStore();
         },
         stopStore: function() {

View File

@@ -19,6 +19,11 @@ Ext.define('PBS.TapeManagement.DrivePanel', {
     controller: {
         xclass: 'Ext.app.ViewController',
+        reloadTapeStore: function() {
+            let navtree = Ext.ComponentQuery.query('navigationtree')[0];
+            navtree.reloadTapeStore();
+        },
         onAdd: function() {
             let me = this;
             Ext.create('PBS.TapeManagement.DriveEditWindow', {
@@ -57,6 +62,7 @@ Ext.define('PBS.TapeManagement.DrivePanel', {
         reload: function() {
             this.getView().getStore().rstore.load();
+            this.reloadTapeStore();
         },
         stopStore: function() {