docs: move host sysadmin out to own chapter, fix ZFS one

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht 2020-07-08 18:15:33 +02:00
parent 1f24d9114c
commit 24406ebc0c
5 changed files with 64 additions and 43 deletions

docs/administration-guide.rst

@@ -1,9 +1,8 @@
-Administration Guide
-====================
+Backup Management
+=================

-The administration guide.
+.. The administration guide.

.. todo:: either add a bit more explanation or remove the previous sentence

Terminology
-----------
@@ -182,6 +181,7 @@ File Layout
After creating a datastore, the following default layout will appear:

.. code-block:: console
+
# ls -arilh /backup/disk1/store1
276493 -rw-r--r-- 1 backup backup 0 Jul 8 12:35 .lock
276490 drwxr-x--- 1 backup backup 1064960 Jul 8 12:35 .chunks
@@ -192,6 +192,7 @@ The `.chunks` directory contains folders, starting from `0000` and taking hexade
directories will store the chunked data after a backup operation has been executed.

.. code-block:: console
+
# ls -arilh /backup/disk1/store1/.chunks
545824 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 ffff
545823 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffe
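For reference, the prefix directories use four hexadecimal digits, so they run from `0000` to `ffff`, i.e. 16^4 = 65536 chunk directories in total.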
@@ -933,7 +934,3 @@ After that you should be able to see storage status with:
.. include:: command-line-tools.rst
.. include:: services.rst
-
-.. include host system admin at the end
-.. include:: sysadmin.rst

docs/conf.py

@ -112,7 +112,7 @@ exclude_patterns = [
'pxar/man1.rst', 'pxar/man1.rst',
'epilog.rst', 'epilog.rst',
'pbs-copyright.rst', 'pbs-copyright.rst',
'sysadmin.rst', 'local-zfs.rst'
'package-repositories.rst', 'package-repositories.rst',
] ]

docs/index.rst

@@ -19,6 +19,7 @@ in the section entitled "GNU Free Documentation License".
introduction.rst
installation.rst
administration-guide.rst
+sysadmin.rst

.. raw:: latex

docs/local-zfs.rst

@@ -1,6 +1,5 @@
ZFS on Linux
-=============
+------------
-.. code-block:: console

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. There is no need for manually compile ZFS modules - all
@@ -31,7 +30,7 @@ General ZFS advantages
* Encryption

Hardware
----------
+~~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much you can get for your hardware/budget. To prevent
@@ -47,10 +46,8 @@ HBA adapter is the way to go, or something like LSI controller flashed
in ``IT`` mode.

ZFS Administration
-------------------
+~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
@@ -58,61 +55,68 @@ to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

.. code-block:: console
+
# man zpool
# man zfs

Create a new zpool
-~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^

To create a new pool, at least one disk is needed. The `ashift` should
have the same sector-size (2 power of `ashift`) or larger as the
underlying disk.

.. code-block:: console
+
# zpool create -f -o ashift=12 <pool> <device>
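For reference, the sector size implied by `ashift` is 2^ashift bytes, so the `ashift=12` used throughout these examples corresponds to 4096-byte (4K) sectors, which suits most current disks; an explicitly set value can later be read back with `zpool get ashift <pool>`.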
Create a new pool with RAID-0
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 1 disk

.. code-block:: console
+
# zpool create -f -o ashift=12 <pool> <device1> <device2>

Create a new pool with RAID-1
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 2 disks

.. code-block:: console
+
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>

Create a new pool with RAID-10
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

.. code-block:: console
+
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>

Create a new pool with RAIDZ-1
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 3 disks

.. code-block:: console
+
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>

Create a new pool with RAIDZ-2
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

.. code-block:: console
+
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
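As an aside (not part of this change), after creating any of these layouts the resulting vdev structure and capacity can be verified with:

 # zpool status <pool>
 # zpool list <pool>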
Create a new pool with cache (L2ARC)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).
@@ -121,10 +125,11 @@ As `<device>` it is possible to use more devices, like it's shown in
"Create a new pool with RAID*".

.. code-block:: console
+
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>

Create a new pool with log (ZIL)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated cache drive partition to increase
the performance (SSD).
@@ -133,10 +138,11 @@ As `<device>` it is possible to use more devices, like it's shown in
"Create a new pool with RAID*".

.. code-block:: console
+
# zpool create -f -o ashift=12 <pool> <device> log <log_device>

Add cache and log to an existing pool
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have a pool without cache and log. First partition the SSD in
2 partition with `parted` or `gdisk`
@@ -148,18 +154,20 @@ physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

.. code-block:: console
+
# zpool add -f <pool> log <device-part1> cache <device-part2>
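As a rough illustration of the partitioning step described above (not part of this change; `/dev/sdX` and the 16G log size are placeholders, size the log partition to about half the physical memory):

 # sgdisk -n1:0:+16G -t1:BF01 /dev/sdX   # partition 1: log device (ZIL)
 # sgdisk -n2:0:0 -t2:BF01 /dev/sdX      # partition 2: remaining space as cache (L2ARC)
 # zpool add -f <pool> log /dev/sdX1 cache /dev/sdX2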
Changing a failed device
-~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console
+
# zpool replace -f <pool> <old device> <new device>

Changing a failed bootable device
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Depending on how Proxmox Backup was installed it is either using `grub` or `systemd-boot`
as bootloader.
@@ -169,6 +177,7 @@ the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.

.. code-block:: console
+
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
@@ -178,6 +187,7 @@ different steps are needed which depend on the bootloader in use.
With `systemd-boot`:

.. code-block:: console
+
# pve-efiboot-tool format <new disk's ESP>
# pve-efiboot-tool init <new disk's ESP>
@@ -190,12 +200,13 @@ With `grub`:
Usually `grub.cfg` is located in `/boot/grub/grub.cfg`

.. code-block:: console
+
# grub-install <new disk>
# grub-mkconfig -o /path/to/grub.cfg
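As an aside (not from this change), a rough way to tell the two cases apart is to check whether the system booted via UEFI, e.g. with `ls /sys/firmware/efi`; if that directory exists on an installation with root on ZFS, the bootloader is usually `systemd-boot`, otherwise legacy BIOS and `grub`.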
Activate E-Mail Notification
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
@@ -203,12 +214,14 @@ pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:

.. code-block:: console
+
# apt-get install zfs-zed

To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

.. code-block:: console
+
ZED_EMAIL_ADDR="root"

Please note Proxmox Backup forwards mails to `root` to the email address
@@ -218,7 +231,7 @@ IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
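Not part of this change, but to confirm the event daemon is actually running after installation, `systemctl status zfs-zed` can be used (assuming a systemd-based setup, which is the case on Proxmox Backup).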
Limit ZFS Memory Usage
-~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^

It is good to use at most 50 percent (which is the default) of the
system memory for ZFS ARC to prevent performance shortage of the
@@ -226,6 +239,7 @@ host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

.. code-block:: console
+
options zfs zfs_arc_max=8589934592

This example setting limits the usage to 8GB.
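For reference, `zfs_arc_max` is given in bytes: 8 GiB = 8 * 1024^3 = 8589934592. A 4 GiB limit would accordingly be `zfs_arc_max=4294967296`.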
@@ -233,11 +247,12 @@ This example setting limits the usage to 8GB.
.. IMPORTANT:: If your root file system is ZFS you must update your initramfs every time this value changes:

.. code-block:: console
+
# update-initramfs -u

SWAP on ZFS
-~~~~~~~~~~~
+^^^^^^^^^^^

Swap-space created on a zvol may generate some troubles, like blocking the
server or generating a high IO load, often seen when starting a Backup
@@ -251,31 +266,35 @@ installer. Additionally, you can lower the `swappiness` value.
A good value for servers is 10:

.. code-block:: console
+
# sysctl -w vm.swappiness=10

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

.. code-block:: console
+
vm.swappiness = 10

.. table:: Linux kernel `swappiness` parameter values
:widths:auto

-========= ============
+==================== ===============================================================
Value Strategy
-========= ============
+==================== ===============================================================
vm.swappiness = 0 The kernel will swap only to avoid an 'out of memory' condition
vm.swappiness = 1 Minimum amount of swapping without disabling it entirely.
-vm.swappiness = 10 This value is sometimes recommended to improve performance when sufficient memory exists in a system.
+vm.swappiness = 10 Sometimes recommended to improve performance when sufficient memory exists in a system.
vm.swappiness = 60 The default value.
vm.swappiness = 100 The kernel will swap aggressively.
-========= ============
+==================== ===============================================================
ZFS Compression
-~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^

To activate compression:

.. code-block:: console
+
# zpool set compression=lz4 <pool>

We recommend using the `lz4` algorithm, since it adds very little CPU overhead.
@@ -286,12 +305,13 @@ I/O performance.
You can disable compression at any time with:

.. code-block:: console
+
# zfs set compression=off <dataset>

Only new blocks will be affected by this change.
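As an aside (not part of this change), the achieved ratio on a pool or dataset can be inspected via the `compressratio` property, for example with `zfs get compression,compressratio <pool>`.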
ZFS Special Device
-~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^

Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
@@ -312,11 +332,13 @@ performance. Use fast SSDs for the `special` device.
Create a pool with `special` device and RAID-1:

.. code-block:: console
+
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>

Adding a `special` device to an existing pool with RAID-1:

.. code-block:: console
+
# zpool add <pool> special mirror <device1> <device2>

ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
@@ -335,20 +357,23 @@ in the pool will opt in for small file blocks).
Opt in for all file smaller than 4K-blocks pool-wide:

.. code-block:: console
+
# zfs set special_small_blocks=4K <pool>

Opt in for small file blocks for a single dataset:

.. code-block:: console
+
# zfs set special_small_blocks=4K <pool>/<filesystem>

Opt out from small file blocks for a single dataset:

.. code-block:: console
+
# zfs set special_small_blocks=0 <pool>/<filesystem>
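Not part of this change, but the effective value can be checked afterwards with `zfs get special_small_blocks <pool>`, or for a specific dataset with `<pool>/<filesystem>`.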
Troubleshooting
-~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^

Corrupted cachefile
@@ -358,11 +383,13 @@ boot until mounted manually later.
For each pool, run:

.. code-block:: console
+
# zpool set cachefile=/etc/zfs/zpool.cache POOLNAME

and afterwards update the `initramfs` by running:

.. code-block:: console
+
# update-initramfs -u -k all

and finally reboot your node.

docs/sysadmin.rst

@@ -1,5 +1,5 @@
Host System Administration
---------------------------
+==========================

`Proxmox Backup`_ is based on the famous Debian_ Linux
distribution. That means that you have access to the whole world of
@@ -23,8 +23,4 @@ either explain things which are different on `Proxmox Backup`_, or
tasks which are commonly used on `Proxmox Backup`_. For other topics,
please refer to the standard Debian documentation.

-ZFS
-~~~
-
-.. todo:: Add local ZFS admin guide (local.zfs.adoc)
+.. include:: local-zfs.rst