ZFS on Linux
=============
		
	
		
			
ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. There is no need to manually compile ZFS modules - all
packages are included.

General ZFS advantages
~~~~~~~~~~~~~~~~~~~~~~

* Encryption
		
	
		
			
Hardware
~~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.

An HBA adapter is the way to go, or something like an LSI controller flashed
in ``IT`` mode.
		
	
		
			
ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

.. code-block:: console

  # man zpool
  # man zfs
		
	
		
			
Create a new zpool
^^^^^^^^^^^^^^^^^^

To create a new pool, at least one disk is needed. The `ashift` value
should be chosen so that the resulting sector size (2 to the power of
`ashift`) is equal to or larger than the sector size of the underlying
disk.

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> <device>
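
Before picking `ashift`, it can help to check what sector size the disks
actually report. One way, with `/dev/sdX` as a placeholder device name, is:

.. code-block:: console

  # lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdX

A physical sector size of 4096 bytes corresponds to `ashift=12`, since
2^12 = 4096.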
		
	
		
			
Create a new pool with RAID-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 1 disk

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> <device1> <device2>
		
	
		
			
Create a new pool with RAID-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 2 disks

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
		
	
		
			
Create a new pool with RAID-10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
		
	
		
			
Create a new pool with RAIDZ-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 3 disks

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
		
	
		
			
Create a new pool with RAIDZ-2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
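
Regardless of the chosen layout, it can be worth verifying the resulting
vdev structure and capacity afterwards, for example:

.. code-block:: console

  # zpool status <pool>
  # zpool list <pool>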
		
	
		
			
Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).

As `<device>`, it is possible to use more devices, as shown in
"Create a new pool with RAID*".

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
		
	
		
			
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated drive partition as a log device to
increase the performance (use SSD).

As `<device>`, it is possible to use more devices, as shown in
"Create a new pool with RAID*".

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> <device> log <log_device>
		
	
		
			
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have a pool without cache and log, first partition the SSD into
2 partitions with `parted` or `gdisk`, for example as sketched below.
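
A minimal sketch of such a split with `parted`; the device name `/dev/sdX`
and the 16GiB log-partition size are placeholders, adjust them to your disk
and memory size:

.. code-block:: console

  # parted -s /dev/sdX mklabel gpt
  # parted -s /dev/sdX mkpart log 1MiB 16GiB
  # parted -s /dev/sdX mkpart cache 16GiB 100%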
		
	
	
		
			
				
					
The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

.. code-block:: console

  # zpool add -f <pool> log <device-part1> cache <device-part2>
		
	
		
			
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  # zpool replace -f <pool> <old device> <new device>
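
After replacing, ZFS resilvers the data onto the new device. You can
follow the progress, and see any remaining errors, with:

.. code-block:: console

  # zpool status -v <pool>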
		
	
		
			
Changing a failed bootable device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Depending on how Proxmox Backup was installed, it uses either `grub` or
`systemd-boot` as bootloader.

The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.

.. code-block:: console

  # sgdisk <healthy bootable device> -R <new device>
  # sgdisk -G <new device>
  # zpool replace -f <pool> <old zfs partition> <new zfs partition>
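
Note that `sgdisk` replicates the partition table from the device given
first onto the device given after `-R`. To double-check the result, you can
print both partition tables:

.. code-block:: console

  # sgdisk -p <healthy bootable device>
  # sgdisk -p <new device>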
		
	
	
		
			
				
					
With `systemd-boot`:

.. code-block:: console

  # pve-efiboot-tool format <new disk's ESP>
  # pve-efiboot-tool init <new disk's ESP>
		
	
	
		
			
				
					
With `grub`:

Usually `grub.cfg` is located in `/boot/grub/grub.cfg`

.. code-block:: console

  # grub-install <new disk>
  # grub-mkconfig -o /path/to/grub.cfg
		
	
		
			
Activate E-Mail Notification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:

.. code-block:: console

  # apt-get install zfs-zed

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

.. code-block:: console

  ZED_EMAIL_ADDR="root"
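
Assuming the package ships the usual systemd service (on Debian-based
systems it is called `zfs-zed`), you can verify that the daemon is running
with:

.. code-block:: console

  # systemctl status zfs-zed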
		
	
		
			
Please note Proxmox Backup forwards mails to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
		
	
		
			
Limit ZFS Memory Usage
^^^^^^^^^^^^^^^^^^^^^^

It is good to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

.. code-block:: console

  options zfs zfs_arc_max=8589934592

This example setting limits the usage to 8GB.
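
The value is given in bytes; 8589934592 is 8 * 2^30 (8 GiB). If you want to
derive such a value in the shell:

.. code-block:: console

  # echo $((8 * 1024 * 1024 * 1024))
  8589934592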
		
	
	
		
			
				
					
.. IMPORTANT:: If your root file system is ZFS, you must update your initramfs every time this value changes:

.. code-block:: console

  # update-initramfs -u
		
	
		
			
SWAP on ZFS
^^^^^^^^^^^

Swap space created on a zvol may cause problems, such as blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.

If you need swap, it is preferred to create a partition on a physical
disk and use it as a swap device. You can leave some space free for this
purpose in the advanced options of the installer. Additionally, you can
lower the `swappiness` value. A good value for servers is 10:

.. code-block:: console

  # sysctl -w vm.swappiness=10
		
	
		
			
To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

.. code-block:: console

  vm.swappiness = 10
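
To apply the setting from the file without a reboot, and to verify the
currently active value, you can run:

.. code-block:: console

  # sysctl -p /etc/sysctl.conf
  # cat /proc/sys/vm/swappiness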
		
	
		
			
.. table:: Linux kernel `swappiness` parameter values
  :widths: auto

  ====================  ===============================================================
   Value                Strategy
  ====================  ===============================================================
   vm.swappiness = 0    The kernel will swap only to avoid an 'out of memory' condition
   vm.swappiness = 1    Minimum amount of swapping without disabling it entirely.
   vm.swappiness = 10   Sometimes recommended to improve performance when sufficient memory exists in a system.
   vm.swappiness = 60   The default value.
   vm.swappiness = 100  The kernel will swap aggressively.
  ====================  ===============================================================
		
	
		
			
ZFS Compression
^^^^^^^^^^^^^^^

To activate compression:

.. code-block:: console

  # zfs set compression=lz4 <pool>

We recommend using the `lz4` algorithm, since it adds very little CPU overhead.
		
	
	
		
			
				
					
Depending on how compressible the data is, having compression enabled can
even increase I/O performance.

You can disable compression at any time with:

.. code-block:: console

  # zfs set compression=off <dataset>

Only new blocks will be affected by this change.
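
To judge how effective compression is on existing data, each dataset
exposes a read-only `compressratio` property:

.. code-block:: console

  # zfs get compressratio <pool>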
		
	
		
			
ZFS Special Device
^^^^^^^^^^^^^^^^^^

Since version 0.8.0, ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks. Use fast SSDs for the `special` device.

Create a pool with `special` device and RAID-1:

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>

Adding a `special` device to an existing pool with RAID-1:

.. code-block:: console

  # zpool add <pool> special mirror <device1> <device2>

ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power
of two in the range between `512B` to `128K`.
		
	
	
		
			
				
					
Setting the `special_small_blocks` property on a pool changes the default
value of that property for all child ZFS datasets (for example, all datasets
in the pool will opt in for small file blocks).

Opt in for all files smaller than 4K blocks pool-wide:

.. code-block:: console

  # zfs set special_small_blocks=4K <pool>

Opt in for small file blocks for a single dataset:

.. code-block:: console

  # zfs set special_small_blocks=4K <pool>/<filesystem>

Opt out from small file blocks for a single dataset:

.. code-block:: console

  # zfs set special_small_blocks=0 <pool>/<filesystem>
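
To check which value is currently in effect for a dataset, and whether it
was set locally or inherited from the pool, you can query the property:

.. code-block:: console

  # zfs get special_small_blocks <pool>/<filesystem>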
		
	
		
			
Troubleshooting
^^^^^^^^^^^^^^^

Corrupted cachefile

In case of a corrupted ZFS cachefile, some volumes may not be mounted during
boot until mounted manually later.

For each pool, run:

.. code-block:: console

  # zpool set cachefile=/etc/zfs/zpool.cache POOLNAME

and afterwards update the `initramfs` by running:

.. code-block:: console

  # update-initramfs -u -k all

and finally reboot your node.