7. Storage¶
The Storage section of the graphical interface allows configuration of these options:
- Volumes creates and manages storage volumes.
- Periodic Snapshot Tasks schedules automatic creation of filesystem snapshots.
- Replication Tasks automates the replication of snapshots to a remote system.
- Resilver Priority controls the priority of resilvers.
- Scrubs schedules scrubs as part of ongoing disk maintenance.
- Snapshots manages local snapshots.
- VMware-Snapshot coordinates ZFS snapshots with a VMware datastore.
Note
When using an HA (High Availability) TrueNAS® system, connecting to the graphical interface on the passive node only shows a screen indicating that it is the passive node. All of the options discussed in this chapter can only be configured on the active node.
7.1. Volumes¶
The Volumes section of the TrueNAS® graphical interface can be used to format volumes, attach a disk to copy data onto an existing volume, or import a ZFS volume. It can also be used to create ZFS datasets and zvols and to manage their permissions.
Note
In ZFS terminology, groups of storage devices managed by ZFS are referred to as a pool. The TrueNAS® graphical interface uses the term volume to refer to a ZFS pool.
Proper storage design is important for any NAS. Please read through this entire chapter before configuring storage disks. Features are described to help make it clear which are beneficial for particular uses, along with caveats or hardware restrictions that limit their usefulness.
7.1.1. Volume Manager¶
The Volume Manager is used to add disks to a ZFS pool. Any old data on added disks is overwritten, so save it elsewhere before reusing a disk. Please see the ZFS Primer for information on ZFS redundancy with multiple disks before using Volume Manager. It is important to realize that different layouts of virtual devices (vdevs) affect which operations can be performed on that volume later. For example, drives can be added to a mirror to increase redundancy, but that is not possible with RAIDZ arrays.
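This difference can also be seen from the Shell. A disk can be attached to an existing disk or mirror vdev with a single command, but there is no equivalent for adding a disk to a RAIDZ vdev. The following is a minimal sketch with example pool and device names; Volume Manager remains the recommended way to manage pools:
zpool attach volume1 ada1 ada3
Here ada1 is a disk already in the pool and ada3 becomes its mirror, increasing redundancy without changing capacity.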
Selecting
Storage → Volumes → Volume Manager opens
a screen like the example shown in
Figure 7.1.1.
Fig. 7.1.1 Creating a ZFS Pool Using Volume Manager
Table 7.1.1 summarizes the configuration options of this screen.
| Setting | Value | Description |
|---|---|---|
| Volume name | string | ZFS volumes must conform to these naming conventions; choosing a name that will stick out in the logs (e.g. not a generic term like data or freenas) is recommended |
| Volume to extend | drop-down menu | extend an existing ZFS pool; see Extending a ZFS Volume for more details |
| Encryption | checkbox | see the warnings in Encryption before enabling encryption |
| Available disks | display | display the number and size of available disks; hover over show to list the available device names; click the + to add all of the disks to the pool |
| Volume layout | drag and drop | click and drag the icon to select the desired number of disks for a vdev; when at least one disk is selected, the layouts supported by the selected number of disks are added to the drop-down menu |
| Add Extra Device | button | configure multiple vdevs or add log or cache devices during pool creation |
| Manual setup | button | create a pool manually (not recommended); see Manual Setup for details |
Drag the slider to select the desired number of disks. Volume Manager displays the resulting storage capacity, taking reserved swap space into account. To change the number of disks, drag the slider again. The Volume layout drop-down menu can also be clicked if a different level of redundancy is required.
Note
For performance and capacity reasons, this screen does not allow creating a volume from disks of differing sizes. While it is not recommended, it is possible to create a volume of differently-sized disks with the Manual setup button. Follow the instructions in Manual Setup.
Volume Manager only allows choosing a configuration if enough disks have been selected to create that configuration. These layouts are supported:
- Stripe: requires at least one disk
- Mirror: requires at least two disks
- RAIDZ1: requires at least three disks
- RAIDZ2: requires at least four disks
- RAIDZ3: requires at least five disks
- log device: requires at least one dedicated device; a fast, low-latency, power-protected SSD is recommended
- cache device: requires at least one dedicated device; an SSD is recommended
When more than five disks are used, consideration must be given to the optimal layout for the best performance and scalability. An overview of the recommended disk group sizes as well as more information about log and cache devices can be found in the ZFS Primer.
The Add Volume button warns that existing data will be cleared. In other words, creating a new volume reformats the selected disks. To preserve existing data, click the Cancel button and refer to Import Disk and Import Volume to see if the existing format is supported. If so, perform that action instead. If the current storage format is not supported, it is necessary to back up the data to external media, format the disks, then restore the data to the new volume.
Depending on the size and number of disks, the type of controller, and
whether encryption is selected, creating the volume may take some
time. After the volume is created, the screen refreshes and the new
volume is listed in the tree under
Storage → Volumes.
Click the + next to the volume name to access
Change Permissions, Create Dataset, and
Create zvol options for that volume.
7.1.1.1. Encryption¶
TrueNAS® supports GELI full disk encryption for ZFS volumes. It is important to understand the details when considering whether encryption is right for your TrueNAS® system:
- This is not the encryption method used by Oracle’s version of ZFS. That version is not open source and is the property of Oracle.
- This is full disk encryption and not per-filesystem encryption. The underlying drives are first encrypted, then the pool is created on top of the encrypted devices.
- This type of encryption is primarily targeted at users who store sensitive data and want to retain the ability to remove disks from the pool without having to first wipe the disk’s contents.
- This design is only suitable for safe disposal of disks independent of the encryption key. As long as the key and the disks are intact, the system is vulnerable to being decrypted. The key should be protected by a strong passphrase and any backups of the key should be securely stored.
- On the other hand, if the key is lost, the data on the disks is inaccessible. Always back up the key!
- The encryption key is per ZFS volume (pool). Multiple pools each have their own encryption key.
- Data in the ARC cache and the contents of RAM are unencrypted.
- Swap is always encrypted, even on unencrypted volumes.
- There is no way to convert an existing, unencrypted volume. Instead, the data must be backed up, the existing pool destroyed, a new encrypted volume created, and the backup restored to the new volume.
- Hybrid pools are not supported. In other words, newly created vdevs must match the existing encryption scheme. When extending a volume, Volume Manager automatically encrypts the new vdev being added to the existing encrypted pool.
- The more drives in an encrypted volume, the more encryption and decryption overhead. Encrypted volumes composed of more than eight drives can suffer severe performance penalties, even with AES-NI encryption acceleration. If encryption is desired, please benchmark such volumes before using them in production.
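Whether the CPU provides AES-NI acceleration can be checked from the Shell. This is a minimal sketch; on FreeBSD-based systems the boot messages list the CPU feature flags and the aesni(4) driver if it attached:
grep -i aesni /var/run/dmesg.boot
If no output appears, the CPU probably does not advertise AES-NI and encryption overhead will be higher.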
Note
The encryption facility used by TrueNAS® is designed to protect against physical theft of the disks. It is not designed to protect against unauthorized software access. Ensure that only authorized users have access to the administrative GUI and that proper permissions are set on shares if sensitive data is stored on the system.
To create an encrypted volume, check the Encryption box shown in Figure 7.1.1. A pop-up message shows a reminder that it is extremely important to make a backup of the key. Without the key, the data on the disks is inaccessible. Refer to Managing Encrypted Volumes for instructions.
7.1.1.2. Manual Setup¶
The Manual Setup button shown in Figure 7.1.1 can be used to create a ZFS volume manually. While this is not recommended, it can, for example, be used to create a non-optimal volume containing disks of different sizes.
Note
The usable space of each disk in a volume is limited to the size of the smallest disk in the volume. Because of this, creating volumes with disks of the same size through the Volume Manager is recommended.
Figure 7.1.2 shows the Manual Setup screen. Table 7.1.2 shows the available options.
Fig. 7.1.2 Manually Creating a ZFS Volume
Note
Because of the disadvantages of creating volumes with disks of different sizes, the displayed list of disks is sorted by size.
| Setting | Value | Description |
|---|---|---|
| Volume name | string | ZFS volumes must conform to these naming conventions; choose a name that will stand out in the logs (e.g. not data or freenas) |
| Encryption | checkbox | see the warnings in Encryption before using encryption |
| Member disks | list | highlight desired number of disks from list of available disks |
| Deduplication | drop-down menu | do not change this setting unless instructed to do so by an iXsystems support engineer |
| ZFS Extra | bullet selection | specify disk usage: storage (None), a log device, a cache device, or a spare |
7.1.1.3. Extending a ZFS Volume¶
The Volume to extend drop-down menu in
Storage → Volumes → Volume Manager,
shown in
Figure 7.1.1,
is used to add disks to an existing ZFS volume to increase capacity.
This menu is empty if there are no ZFS volumes yet.
If more than one disk is added, the arrangement of the new disks into stripes, mirrors, or RAIDZ vdevs can be specified. Mirrors and RAIDZ arrays provide redundancy for data protection if an individual drive fails.
Note
If the existing volume is encrypted, a warning message shows a reminder that extending a volume resets the passphrase and recovery key. After extending the volume, immediately recreate both using the instructions in Managing Encrypted Volumes.
After an existing volume has been selected from the drop-down menu, drag and drop the desired disks and select the desired volume layout. For example, disks can be added to increase the capacity of the volume.
When adding disks to increase the capacity of a volume, ZFS supports the addition of virtual devices, or vdevs, to an existing ZFS pool. A vdev can be a single disk, a stripe, a mirror, a RAIDZ1, RAIDZ2, or a RAIDZ3. After a vdev is created, more drives cannot be added to that vdev. However, a new vdev can be striped with another of the same type of existing vdev to increase the overall size of the volume. Extending a volume often involves striping similar vdevs. Here are some examples:
- to extend a ZFS stripe, add one or more disks. Since there is no redundancy, disks do not have to be added in the same quantity as the existing stripe.
- to extend a ZFS mirror, add the same number of drives. The resulting striped mirror is a RAID 10. For example, if ten new drives are available, a mirror of two drives could be created initially, then extended by creating another mirror of two drives, and repeating three more times until all ten drives have been added.
- to extend a three drive RAIDZ1, add three additional drives. The result is a RAIDZ+0, similar to RAID 50 on a hardware controller.
- to extend a RAIDZ2 requires a minimum of four additional drives. The result is a RAIDZ2+0, similar to RAID 60 on a hardware controller.
If an attempt is made to add a non-matching number of disks to the existing vdev, an error message appears, indicating the number of disks that are required. Select the correct number of disks to continue.
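For context, extending a pool through Volume Manager is conceptually the same as adding a vdev with zpool add. This minimal sketch uses example pool and device names and is shown only to illustrate what the extension does; Volume Manager is recommended because it also handles swap partitioning and encryption of the new disks:
zpool add volume1 mirror ada4 ada5
This stripes a new two-disk mirror vdev with the existing vdevs, increasing the capacity of volume1.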
7.1.1.3.1. Adding L2ARC or SLOG Devices¶
Storage → Volumes → Volume Manager
(see Figure 7.1.1)
is also used to add L2ARC or SLOG SSDs to improve specific types of
volume performance. This is described in more detail in the
ZFS Primer.
After the SSDs have been physically installed, click the Volume Manager button and choose the volume from the Volume to extend drop-down menu. Click the + next to the SSD in the Available disks list. In the Volume layout drop-down menu, select Cache (L2ARC) to add a cache device, or Log (ZIL) to add a log device. Finally, click Extend Volume to add the SSD.
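These operations are equivalent to adding log and cache vdevs to the pool. A minimal sketch from the Shell, with example pool and device names (the GUI method above is recommended):
zpool add volume1 log ada6
zpool add volume1 cache ada7
The log device holds the ZFS intent log (SLOG) and the cache device provides L2ARC read caching.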
7.1.2. Change Permissions¶
Setting permissions is an important aspect of configuring volumes. The graphical administrative interface is meant to set the initial permissions for a volume or dataset in order to make it available as a share. Once a share is available, the client operating system should be used to fine-tune the permissions of the files and directories that are created by the client.
The chapter on Sharing contains configuration examples for several types of permission scenarios. This section provides an overview of the screen that is used to set permissions.
Note
For users and groups to be available, they must either be first created using the instructions in Account or imported from a directory service using the instructions in Directory Services. If more than 50 users or groups are available, the drop-down menus described in this section will automatically truncate their display to 50 for performance reasons. In this case, start to type in the desired user or group name so that the display narrows its search to matching results.
After a volume or dataset is created, it is listed by its mount point
name in
Storage → Volumes.
Clicking the Change Permissions icon for a specific
volume/dataset displays the screen shown in
Figure 7.1.3.
Table 7.1.3
summarizes the options in this screen.
Fig. 7.1.3 Changing Permissions on a Volume or Dataset
| Setting | Value | Description |
|---|---|---|
| Apply Owner (user) | checkbox | uncheck to prevent new permission change from being applied to Owner (user), see Note below |
| Owner (user) | drop-down menu | user to control the volume/dataset; users which were manually created or imported from a directory service will appear in the drop-down menu |
| Apply Owner (group) | checkbox | uncheck to prevent new permission change from being applied to Owner (group), see Note below |
| Owner (group) | drop-down menu | group to control the volume/dataset; groups which were manually created or imported from a directory service will appear in the drop-down menu |
| Apply Mode | checkbox | uncheck to prevent new permission change from being applied to Mode, see Note below |
| Mode | checkboxes | only applies to the Unix or Mac “Permission Type”, so it will be grayed out if Windows is selected |
| Permission Type | bullet selection | choices are Unix, Mac or Windows; select the type which matches the type of client accessing the volume/dataset |
| Set permission recursively | checkbox | if checked, permissions will also apply to subdirectories of the volume/dataset; if data already exists on the volume/dataset, change the permissions on the client side to prevent a performance lag |
Note
The Apply Owner (user), Apply Owner (group), and Apply Mode checkboxes allow fine-tuning of the change permissions behavior. By default, all boxes are checked and TrueNAS® resets the owner, group, and mode when the Change button is clicked. These checkboxes allow choosing which settings to change. For example, to change just the Owner (group) setting, uncheck the boxes Apply Owner (user) and Apply Mode.
The Windows Permission Type is used for SMB shares or when the TrueNAS® system is a member of an Active Directory domain. This adds ACLs to traditional Unix permissions. When the Windows Permission Type is set, ACLs are set to Windows defaults for new files and directories. A Windows client can be used to further fine-tune permissions as needed.
The Unix Permission Type is usually used with NFS shares. These permissions are compatible with most network clients and generally work well with a mix of operating systems or clients. However, Unix permissions do not support Windows ACLs and should not be used with SMB shares.
The Mac Permission Type is used with AFP shares.
After a volume or dataset has been set to Windows, it cannot be changed to Unix permissions because that would remove extended permissions provided by Windows ACLs.
7.1.3. Create Dataset¶
An existing ZFS volume can be divided into datasets. Permissions, compression, deduplication, and quotas can be set on a per-dataset basis, allowing more granular control over access to storage data. Like a folder or directory, a dataset can have permissions set on it. Datasets are also similar to filesystems in that properties such as quotas and compression can be set, and snapshots created.
Note
ZFS provides thick provisioning using quotas and thin provisioning using reserved space.
Selecting an existing ZFS volume in the tree and clicking Create Dataset shows the screen in Figure 7.1.4.
Fig. 7.1.4 Creating a ZFS Dataset
Table 7.1.4
summarizes the options available when creating a ZFS
dataset. Some settings are only available in
Advanced Mode. To see these settings, either click the
Advanced Mode button, or configure the system to always
display these settings by checking the box
Show advanced fields by default in
System → Advanced.
Most attributes, except for the Dataset Name,
Case Sensitivity, and Record Size, can be
changed after dataset creation by highlighting the dataset name and
clicking its Edit Options button in
Storage → Volumes.
| Setting | Value | Description |
|---|---|---|
| Dataset Name | string | mandatory; enter a unique name for the dataset |
| Comments | string | short comments or user notes about this dataset |
| Compression Level | drop-down menu | see the section on Compression for a description of the available algorithms |
| Share type | drop-down menu | select the type of share that will be used on the dataset; choices are UNIX for an NFS share, Windows for a SMB share, or Mac for an AFP share |
| Enable atime | Inherit, On, or Off | controls whether the access time for files is updated when they are read; setting this property to Off avoids producing write traffic when reading files and can result in significant performance gains |
| Quota for this dataset | integer | only available in Advanced Mode; default of 0 disables quotas; specifying a value means to use no more than the specified size and is suitable for user datasets to prevent users from hogging available space |
| Quota for this dataset and all children | integer | only available in Advanced Mode; a specified value applies to both this dataset and any child datasets |
| Reserved space for this dataset | integer | only available in Advanced Mode; default of 0 is unlimited; specifying a value means to keep at least this much space free and is suitable for datasets containing logs which could take up all available free space |
| Reserved space for this dataset and all children | integer | only available in Advanced Mode; a specified value applies to both this dataset and any child datasets |
| ZFS Deduplication | drop-down menu | do not change this setting unless instructed to do so by your iXsystems support engineer |
| Read-Only | drop-down menu | only available in Advanced Mode; choices are Inherit (off), On, or Off |
| Record Size | drop-down menu | only available in Advanced Mode; while ZFS automatically adjusts the record size dynamically to suit the data, if the data has a fixed size (e.g. a database), matching that size may result in better performance |
| Case Sensitivity | drop-down menu | choices are sensitive (default, assumes filenames are case sensitive), insensitive (assumes filenames are not case sensitive), or mixed (understands both types of filenames) |
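The quota and reserved space options in this table correspond to ZFS dataset properties, which can also be inspected or set from the Shell. A minimal sketch with an example dataset name:
zfs set quota=20G volume1/dataset1
zfs set reservation=10G volume1/dataset1
zfs get quota,reservation volume1/dataset1
A quota caps how much space the dataset can consume, while a reservation guarantees that at least that much space remains available to it.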
After a dataset is created, you can click on that dataset and select Create Dataset, thus creating a nested dataset, or a dataset within a dataset. A zvol can also be created within a dataset. When creating datasets, double-check that you are using the Create Dataset option for the intended volume or dataset. If you get confused when creating a dataset on a volume, click all existing datasets to close them; the remaining Create Dataset option applies to the volume itself.
Tip
Deduplication is often considered when using a group of very similar virtual machine images. However, other features of ZFS can provide dedup-like functionality more efficiently. For example, create a dataset for a standard VM, then clone that dataset for other VMs. Only the differences between each created VM and the main dataset are saved, giving the effect of deduplication without the overhead.
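A minimal sketch of this approach from the Shell, using hypothetical dataset and snapshot names:
zfs snapshot volume1/vm-gold@base
zfs clone volume1/vm-gold@base volume1/vm01
zfs clone volume1/vm-gold@base volume1/vm02
Each clone initially consumes almost no additional space and only stores the blocks that later diverge from the vm-gold@base snapshot.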
7.1.3.1. Compression¶
When selecting a compression type, you need to balance performance with the amount of disk space saved by compression. Compression is transparent to the client and applications as ZFS automatically compresses data as it is written to a compressed dataset or zvol and automatically decompresses that data as it is read. These compression algorithms are supported:
- lz4: recommended compression method as it allows compressed datasets to operate at near real-time speed. This algorithm only compresses the files that will benefit from compression. By default, ZFS pools made using TrueNAS® 9.2.1 or higher use this compression method, meaning that this algorithm is used if the Compression level is left at Inherit when creating a dataset or zvol.
- gzip: varies from levels 1 to 9 where gzip fastest (level 1) gives the least compression and gzip maximum (level 9) provides the best compression but is discouraged due to its performance impact.
- zle: fast but simple algorithm to eliminate runs of zeroes.
- lzjb: provides decent data compression, but is considered deprecated as lz4 provides much better performance.
If you select Off as the Compression level when creating a dataset or zvol, compression will not be used on the dataset/zvol. This is not recommended as using lz4 has a negligible performance impact and allows for more storage capacity.
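The compression setting and its effect can be confirmed from the Shell. A minimal sketch with an example dataset name:
zfs set compression=lz4 volume1/dataset1
zfs get compression,compressratio volume1/dataset1
The compressratio property reports the overall space savings achieved on data already written to the dataset.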
7.1.4. Create zvol¶
A zvol is a feature of ZFS that creates a raw block device over ZFS. This allows you to use a zvol as an iSCSI device extent.
To create a zvol, select an existing ZFS volume or dataset from the tree then click Create zvol to open the screen shown in Figure 7.1.5.
Fig. 7.1.5 Creating a Zvol
The configuration options are described in
Table 7.1.5.
Some settings are only available in Advanced Mode. To see
these settings, either click the Advanced Mode button or
configure the system to always display these settings by checking
Show advanced fields by default in
System → Advanced.
| Setting | Value | Description |
|---|---|---|
| zvol Name | string | mandatory; enter a name for the zvol; note that there is a 63-character limit on device path names in devfs, so using long zvol names can prevent accessing zvols as devices; for example, a zvol with a 70-character filename or path cannot be used as an iSCSI extent |
| Comments | string | short comments or user notes about this zvol |
| Size for this zvol | integer | specify size and value such as 10GiB; if the size is more than 80% of the available capacity, the creation will fail with an “out of space” error unless Force size is checked |
| Force size | checkbox | by default, the system will not let you create a zvol if that operation will bring the pool to over 80% capacity; while NOT recommended, checking this box will force the creation of the zvol in this situation |
| Compression level | drop-down menu | see the section on Compression for a description of the available algorithms |
| Sparse volume | checkbox | used to provide thin provisioning; use with caution, because when this option is selected, writes will fail when the pool is low on space |
| Block size | drop-down menu | only available in Advanced Mode and by default is based on the number of disks in the pool; can be set to match the block size of the filesystem which will be formatted onto the iSCSI target |
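Once created, the zvol appears as a block device under /dev/zvol/ on the TrueNAS® system. A minimal sketch of roughly equivalent Shell commands, using example names and the sparse (-s) option:
zfs create -s -V 10G volume1/zvol1
ls /dev/zvol/volume1/
The resulting device, /dev/zvol/volume1/zvol1, is what an iSCSI device extent points at.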
7.1.5. Import Disk¶
The
Volume → Import Disk
screen, shown in
Figure 7.1.6,
is used to import a single disk that has been formatted with the
UFS, NTFS, MSDOS, or EXT2 filesystem. The import is meant to be a
temporary measure to copy the data from a disk to an existing ZFS
dataset. Only one disk can be imported at a time.
Note
Imports of EXT3 or EXT4 filesystems are possible in some cases, although neither is fully supported. EXT3 journaling is not supported, so those filesystems must have an external fsck utility, like the one provided by E2fsprogs utilities, run on them before import. EXT4 filesystems with extended attributes or inodes greater than 128 bytes are not supported. EXT4 filesystems with EXT3 journaling must have an fsck run on them before import, as described above.
Fig. 7.1.6 Importing a Disk
Use the drop-down menu to select the disk to import, select the type of filesystem on the disk, and browse to the ZFS dataset that will hold the copied data. When you click Import Volume, the disk is mounted, its contents are copied to the specified ZFS dataset, and the disk is unmounted after the copy operation completes.
7.1.6. Import Volume¶
If you click
Storage → Volumes → Import Volume,
you can configure TrueNAS® to use an existing ZFS pool. This
action is typically performed when an existing TrueNAS® system is
re-installed. Since the operating system is separate from the storage
disks, a new installation does not affect the data on the disks.
However, the new operating system needs to be configured to use the
existing volume.
Figure 7.1.7 shows the initial pop-up window that appears when you import a volume.
Fig. 7.1.7 Initial Import Volume Screen
If you are importing an unencrypted ZFS pool, select No: Skip to import to open the screen shown in Figure 7.1.8.
Fig. 7.1.8 Importing a Non-Encrypted Volume
Existing volumes should be available for selection from the drop-down menu. In the example shown in Figure 7.1.8, the TrueNAS® system has an existing, unencrypted ZFS pool. Once the volume is selected, click the OK button to import the volume.
If an existing ZFS pool does not show in the drop-down menu, run zpool import from Shell to import the pool.
If you plan to physically install ZFS formatted disks from another system, be sure to export the drives on that system to prevent an “in use by another machine” error during the import.
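A minimal sketch of the export and import commands from the Shell, using an example pool name:
zpool export volume1   # run on the system the disks are being removed from
zpool import           # on the new system, lists pools that are available for import
zpool import volume1   # imports the pool by name
Exporting first flushes any unwritten data and marks the pool as cleanly exported, avoiding the “in use by another machine” error.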
7.1.6.1. Importing an Encrypted Pool¶
If you are importing an existing GELI-encrypted ZFS pool, you must decrypt the disks before importing the pool. In Figure 7.1.7, select Yes: Decrypt disks to access the screen shown in Figure 7.1.9.
Fig. 7.1.9 Decrypting Disks Before Importing a ZFS Pool
Select the disks in the encrypted pool, browse to the location of the saved encryption key, input the passphrase associated with the key, then click OK to decrypt the disks.
Note
The encryption key is required to decrypt the pool. If the pool cannot be decrypted, it cannot be re-imported after a failed upgrade or lost configuration. This means that it is very important to save a copy of the key and to remember the passphrase that was configured for the key. Refer to Managing Encrypted Volumes for instructions on how to manage the keys for encrypted volumes.
Once the pool is decrypted, it will appear in the drop-down menu of Figure 7.1.8. Click the OK button to finish the volume import.
7.1.7. View Disks¶
Storage → Volumes → View Disks
shows all of the disks recognized by the TrueNAS® system. An example is
shown in
Figure 7.1.10.
Fig. 7.1.10 Viewing Disks
The current configuration of each device is displayed. Click a disk entry and the Edit button to change its configuration. The configurable options are described in Table 7.1.6.
| Setting | Value | Description |
|---|---|---|
| Name | string | read-only value showing FreeBSD device name for disk |
| Serial | string | read-only value showing the disk’s serial number |
| Description | string | optional |
| HDD Standby | drop-down menu | indicates the time of inactivity (in minutes) before the drive enters standby mode in order to conserve energy; this forum post demonstrates how to determine if a drive has spun down |
| Advanced Power Management | drop-down menu | default is Disabled, can select a power management profile from the menu |
| Acoustic Level | drop-down menu | default is Disabled; can be modified for disks that understand AAM |
| Enable S.M.A.R.T. | checkbox | enabled by default if the disk supports S.M.A.R.T.; unchecking this box will disable any configured S.M.A.R.T. Tests for the disk |
| S.M.A.R.T. extra options | string | additional smartctl(8) options |
Note
If a disk’s serial number is not displayed in this screen, use the smartctl command from Shell. For example, to determine the serial number of disk ada0, type smartctl -a /dev/ada0 | grep Serial.
The Wipe function is provided for when an unused disk is to be discarded.
Warning
Make certain that all data has been backed up and that the disk is no longer in use. Triple-check that the correct disk is being selected to be wiped, as recovering data from a wiped disk is usually impossible. If there is any doubt, physically remove the disk, verify that all data is still present on the TrueNAS® system, and wipe the disk in a separate computer.
Clicking Wipe offers several choices. Quick erases only the partitioning information on a disk, making it easy to reuse but without clearing other old data. For more security, Full with zeros overwrites the entire disk with zeros, while Full with random data overwrites the entire disk with random binary data.
Quick wipes take only a few seconds. A Full with zeros wipe of a large disk can take several hours, and a Full with random data takes longer. A progress bar is displayed during the wipe to track status.
7.1.8. View Enclosure¶
Click Storage → Volumes → View Enclosure to
receive a status summary of the appliance’s disks and hardware. An
example is shown in
Figure 7.1.11.
Fig. 7.1.11 View Enclosure
This screen is divided into the following sections:
Array Device Slot: has an entry for each slot in the storage array, indicating the disk’s current status and FreeBSD device name. To blink the status light for that disk as a visual indicator, click its Identify button.
Cooling: has an entry for each fan, its status, and its RPM.
Enclosure: shows the status of the enclosure.
Power Supply: shows the status of each power supply.
SAS Expander: shows the status of the expander.
Temperature Sensor: shows the current temperature of each expander and the disk chassis.
Voltage Sensor: shows the current voltage for each sensor, VCCP, and VCC.
7.1.9. Volumes¶
Storage → Volumes
is used to view and further configure existing ZFS pools, datasets,
and zvols. The example shown in
Figure 7.1.12
shows one ZFS pool (volume1) with two datasets (the one
automatically created with the pool, volume1, and dataset1) and
one zvol (zvol1).
Note that in this example, there are two entries named volume1. The first represents the ZFS pool, and its Used and Available entries reflect the total size of the pool, including disk parity. The second represents the implicit or root dataset, and its Used and Available entries indicate the amount of disk space available for storage.
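The same distinction is visible from the Shell: zpool list reports the raw size of the pool, including redundancy overhead, while zfs list reports the space that is actually usable for storage. A minimal sketch with an example pool name:
zpool list volume1
zfs list volume1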
Buttons are provided for quick access to Volume Manager, Import Disk, Import Volume, and View Disks. If the system has multipath-capable hardware, an extra button will be added, View Multipaths. For each entry, the columns indicate the Name, how much disk space is Used, how much disk space is Available, the type of Compression, the Compression Ratio, the Status, whether it is mounted as read-only, and any Comments entered for the volume.
Fig. 7.1.12 Viewing Volumes
Clicking the entry for a pool causes several buttons to appear at the bottom of the screen. The buttons perform these actions:
Detach Volume: allows either exporting the pool or deleting the contents of the pool, depending upon the choice made in the screen shown in Figure 7.1.13. The Detach Volume screen displays the current used space and indicates if there are any shares. It provides checkboxes to Mark the disks as new (destroy data) and to Also delete the share’s configuration, and asks for confirmation. The browser turns red as a warning that this action can make the data inaccessible. If the box to mark the disks as new is left unchecked, the volume is exported. This means that the data is not destroyed and the volume can be re-imported at a later time. When moving a ZFS pool from one system to another, perform this export action first, as it flushes any unwritten data to disk, writes data to the disk indicating that the export was done, and removes all knowledge of the pool from the system. If the box to mark the disks as new is checked, the pool and all the data in its datasets, zvols, and shares are destroyed and the underlying disks are returned to their raw state.
Fig. 7.1.13 Detach or Delete a Volume
Scrub Volume: scrubs and their scheduling are described in more detail in Scrubs. This button allows manually initiating a scrub. Scrubs are I/O intensive and can negatively impact performance. Avoid initiating a scrub when the system is busy.
A Cancel button is provided to cancel a scrub. When a scrub is cancelled, it is abandoned. The next scrub to run starts from the beginning, not where the cancelled scrub left off.
The status of a running scrub or the statistics from the last completed scrub can be seen by clicking the Volume Status button.
Volume Status: as shown in the example in Figure 7.1.14, this screen shows the device name and status of each disk in the ZFS pool as well as any read, write, or checksum errors. It also indicates the status of the latest ZFS scrub. Clicking the entry for a device causes buttons to appear to edit the device’s options (shown in Figure 7.1.15), offline or online the device, or replace the device (as described in Replacing a Failed Drive).
Upgrade: used to upgrade the pool to the latest ZFS features, as described in Upgrading a ZFS Pool. This button does not appear if the pool is running the latest version of feature flags.
Fig. 7.1.14 Volume Status
Selecting a disk in Volume Status and clicking its Edit Disk button shows the screen in Figure 7.1.15. Table 7.1.6 summarizes the configurable options.
Fig. 7.1.15 Editing a Disk
Clicking a dataset in
Storage → Volumes
causes buttons to appear at the bottom of the screen, providing these
options:
Change Permissions: edit the dataset’s permissions as described in Change Permissions.
Create Snapshot: create a one-time snapshot. To schedule the regular creation of snapshots, instead use Periodic Snapshot Tasks.
Promote Dataset: only applies to clones. When a clone is promoted, the origin filesystem becomes a clone of the clone, making it possible to destroy the filesystem that the clone was created from. Otherwise, a clone cannot be destroyed while its origin filesystem exists.
Destroy Dataset: clicking the Destroy Dataset button causes the browser window to turn red to indicate that this is a destructive action. The Destroy Dataset screen forces you to check the box I’m aware this will destroy all child datasets and snapshots within this dataset before it will perform this action.
Edit Options: edit the dataset’s properties described in Table 7.1.4. Note that it will not allow changing the dataset’s name.
Create Dataset: used to create a child dataset within this dataset.
Create zvol: create a child zvol within this dataset.
Clicking a zvol in
Storage → Volumes causes
icons to appear at the bottom of the screen:
Create Snapshot, Edit zvol, and
Destroy zvol. Similar to datasets, a zvol’s name cannot be
changed, and destroying a zvol requires confirmation.
7.1.9.1. Managing Encrypted Volumes¶
If the Encryption box is checked during the creation of a
pool, additional buttons appear in the entry for the volume in
Storage → Volumes.
An example is shown in
Figure 7.1.16.
Fig. 7.1.16 Encryption Icons Associated with an Encrypted Volume
These additional encryption buttons are used to:
Create/Change Passphrase: set and confirm a passphrase associated with the GELI encryption key. The desired passphrase is entered and repeated for verification. A red warning is a reminder to Remember to add a new recovery key as this action invalidates the previous recovery key. Unlike a password, a passphrase can contain spaces and is typically a series of words. A good passphrase is easy to remember (like the line to a song or piece of literature) but hard to guess (people who know you should not be able to guess the passphrase). Remember this passphrase. An encrypted volume cannot be reimported without it. In other words, if the passphrase is forgotten, the data on the volume can become inaccessible if it becomes necessary to reimport the pool. Protect this passphrase, as anyone who knows it could reimport the encrypted volume, thwarting the reason for encrypting the disks in the first place.
Fig. 7.1.17 Add or Change a Passphrase to an Encrypted Volume
After the passphrase is set, the name of this button changes to Change Passphrase. After setting or changing the passphrase, it is important to immediately create a new recovery key by clicking the Add recovery key button. This way, if the passphrase is forgotten, the associated recovery key can be used instead.
Encrypted volumes with a passphrase display an additional lock button:
Fig. 7.1.18 Lock Button
These encrypted volumes can be locked. The data is not accessible until the volume is unlocked by supplying the passphrase or encryption key, and the button changes to an unlock button:
Fig. 7.1.19 Unlock Button
To unlock the volume, click the unlock button to display the Unlock dialog:
Fig. 7.1.20 Unlock Locked Volume
Unlock the volume by entering a passphrase or using the Browse button to load the recovery key. If both a passphrase and a recovery key are entered, only the passphrase is used. By default, the services listed will restart when the volume is unlocked. This allows them to see the new volume and share or access data on it. Individual services can be prevented from restarting by unchecking them. However, a service that is not restarted might not be able to access the unlocked volume.
Download Key: download a backup copy of the GELI encryption key. The encryption key is saved to the client system, not on the TrueNAS® system. The TrueNAS® administrative password must be entered, then the directory in which to store the key is chosen. Since the GELI encryption key is separate from the TrueNAS® configuration database, it is highly recommended to make a backup of the key. If the key is ever lost or destroyed and there is no backup key, the data on the disks is inaccessible.
Encryption Re-key: generate a new GELI encryption key. Typically this is only performed when the administrator suspects that the current key may be compromised. This action also removes the current passphrase.
Note
A re-key is not allowed if Failover (High Availability) has been enabled and the standby node is down.
Add recovery key: generate a new recovery key. This screen prompts for the TrueNAS® administrative password and then the directory in which to save the key. Note that the recovery key is saved to the client system, not on the TrueNAS® system. This recovery key can be used if the passphrase is forgotten. Always immediately add a recovery key whenever the passphrase is changed.
Remove recovery key: Typically this is only performed when the administrator suspects that the current recovery key may be compromised. Immediately create a new passphrase and recovery key.
Note
The passphrase, recovery key, and encryption key must be protected. Do not reveal the passphrase to others. On the system containing the downloaded keys, take care that the system and its backups are protected. Anyone who has the keys has the ability to re-import the disks if they are discarded or stolen.
Warning
If a re-key fails on a multi-disk system, an alert is generated. Do not ignore this alert as doing so may result in the loss of data.
7.1.10. View Multipaths¶
TrueNAS® uses gmultipath(8) to provide multipath I/O support on systems containing hardware that is capable of multipath. An example would be a dual SAS expander backplane in the chassis or an external JBOD.
Multipath hardware adds fault tolerance to a NAS as the data is still available even if one disk I/O path has a failure.
TrueNAS® automatically detects active/active and active/passive
multipath-capable hardware. Any multipath-capable devices that are
detected will be placed in multipath units with the parent devices
hidden. The configuration will be displayed in
Storage → Volumes → View Multipaths.
Note that this option is not displayed in the
Storage → Volumes
tree on systems that do not contain multipath-capable hardware.
7.1.11. Replacing a Failed Drive¶
Replace failed drives as soon as possible to repair the degraded state of the RAID.
Note
Striping (RAID0) does not provide redundancy. If a disk in a stripe fails, the volume will be destroyed and must be recreated and the data restored from backup.
Note
If the volume is encrypted with GELI, refer to Replacing an Encrypted Drive before proceeding.
Before physically removing the failed device, go to
Storage → Volumes.
Select the volume’s name. At the bottom of the interface are
several icons, one of which is Volume Status. Click the
Volume Status icon and locate the failed disk. Then
perform these steps:
1. Click the disk’s entry, then its Offline button, to change that disk’s status to OFFLINE. This step is needed to properly remove the device from the ZFS pool and to prevent swap issues. If there is no Offline button but only a Replace button, the disk is already offlined and this step can be skipped.
Note
If the process of changing the disk’s status to OFFLINE fails with a “disk offline failed - no valid replicas” message, the ZFS volume must be scrubbed first with the Scrub Volume button in
Storage → Volumes. After the scrub completes, try to Offline the disk again before proceeding.
2. After the disk shows as OFFLINE, pull it and physically install the replacement disk.
3. Click the disk again and then click its Replace button. Select the replacement disk from the drop-down menu and click the Replace Disk button. After clicking the Replace Disk button, the ZFS pool begins resilvering.
4. After the drive replacement process is complete, re-add the replaced disk in the S.M.A.R.T. Tests screen.
In the example shown in
Figure 7.1.21,
a failed disk is being replaced by disk ada5 in the volume named
volume1.
Fig. 7.1.21 Replacing a Failed Disk
After the resilver is complete, Volume Status shows a Completed resilver status and indicates any errors. Figure 7.1.22 indicates that the disk replacement was successful in this example.
Note
A disk that is failing but has not completely failed can be replaced in place, without first removing it. Whether this is a good idea depends on the overall condition of the failing disk. A disk with a few newly-bad blocks that is otherwise functional can be left in place during the replacement to provide data redundancy. A drive that is experiencing continuous errors can actually slow down the replacement. In extreme cases, a disk with serious problems might spend so much time retrying failures that it could prevent the replacement resilvering from completing before another drive fails.
Fig. 7.1.22 Disk Replacement is Complete
7.1.11.1. Replacing an Encrypted Drive¶
If the ZFS pool is encrypted, additional steps are needed when replacing a failed drive.
First, make sure that a passphrase has been set using the instructions in Encryption before attempting to replace the failed drive. Then, follow steps 1 and 2 as described above. During step 3, a prompt appears to input and confirm the passphrase for the pool. Enter this information, then click the Replace Disk button. Wait until the resilvering is complete.
Next, restore the encryption keys to the pool. If the following additional steps are not performed before the next reboot, access to the pool might be permanently lost.
Highlight the pool that contains the disk that was just replaced and click the Encryption Re-key button in the GUI. Entry of the root password will be required.
Note
A re-key is not allowed if Failover (High Availability) has been enabled and the standby node is down.
Highlight the pool that contains the disk you just replaced and click Create Passphrase and enter the new passphrase. The old passphrase can be reused if desired.
Highlight the pool that contains the disk you just replaced and click the Download Key button to save the new encryption key. Since the old key will no longer function, any old keys can be safely discarded.
Highlight the pool that contains the disk that was just replaced and click the Add Recovery Key button to save the new recovery key. The old recovery key will no longer function, so it can be safely discarded.
7.1.11.2. Removing a Log or Cache Device¶
Added log or cache devices appear in
Storage → Volumes → Volume Status.
Clicking the device enables its Replace and
Remove buttons.
Log and cache devices can be safely removed or replaced with these buttons. Both types of devices improve performance, and throughput can be impacted by their removal.
7.1.12. Replacing Drives to Grow a ZFS Pool¶
The recommended method for expanding the size of a ZFS pool is to pre-plan the number of disks in a vdev and to stripe additional vdevs using Volume Manager as additional capacity is needed.
However, this is not an option if there are no open drive ports and a SAS/SATA HBA card cannot be added. In this case, one disk at a time can be replaced with a larger disk, waiting for the resilvering process to incorporate the new disk into the pool, then repeating with another disk until all of the original disks have been replaced.
The safest way to perform this is to use a spare drive port or an eSATA port and a hard drive dock. The process follows these steps:
- Shut down the system.
- Install one new disk.
- Start up the system.
- Go to Storage → Volumes, select the pool to expand, and click the Volume Status button. Select a disk and click the Replace button. Choose the new disk as the replacement.
- The status of the resilver process can be viewed by running zpool status. When the new disk has resilvered, the old one is automatically offlined. The system is then shut down to physically remove the replaced disk. One advantage of this approach is that there is no loss of redundancy during the resilver.
If a spare drive port is not available, a drive can be replaced with a larger one using the instructions in Replacing a Failed Drive. This process is slow and places the system in a degraded state. Since a failure at this point could be disastrous, do not attempt this method unless the system has a reliable backup. Replace one drive at a time and wait for the resilver process to complete on the replaced drive before replacing the next drive. After all the drives are replaced and the final resilver completes, the added space will appear in the pool.
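The progress of a resilver can be watched from the Shell with a command like the following, where the pool name is an example:
zpool status -v volume1
The scan line of the output shows the percentage completed and an estimate of the time remaining.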
7.1.13. Hot Spares¶
ZFS provides the ability to have “hot” spares. These are drives that are connected to a volume, but not in use. If the volume experiences the failure of a data drive, the system uses the hot spare as a temporary replacement. If the failed drive is replaced with a new drive, the hot spare drive is no longer needed and reverts to being a hot spare. If the failed drive is instead removed from the volume, the spare is promoted to a full member of the volume.
Hot spares can be added to a volume during or after creation. On TrueNAS®, hot spare actions are implemented by zfsd(8).
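From the Shell, a spare is simply another vdev type. A minimal sketch with example pool and device names, shown for illustration only:
zpool add volume1 spare ada8
zpool status volume1
After the spare is added, zpool status lists ada8 in a spares section with an AVAIL state until it is needed.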
7.2. Periodic Snapshot Tasks¶
A periodic snapshot task allows scheduling the creation of read-only versions of ZFS volumes and datasets at a given point in time. Snapshots can be created quickly and, if little data changes, new snapshots take up very little space. For example, a snapshot where no files have changed takes 0 MB of storage, but as changes are made to files, the snapshot size changes to reflect the size of the changes.
Snapshots provide a clever way of keeping a history of files, providing a way to recover an older copy or even a deleted file. For this reason, many administrators take snapshots often (perhaps every fifteen minutes), store them for a period of time (possibly a month), and store them on another system (typically using Replication Tasks). Such a strategy allows the administrator to roll the system back to a specific point in time. If there is a catastrophic loss, an off-site snapshot can be used to restore the system up to the time of the last snapshot.
An existing ZFS volume is required before creating a snapshot. Creating a volume is described in Volume Manager.
To create a periodic snapshot task, click
Storage → Periodic Snapshot Tasks
→ Add Periodic Snapshot
which opens the screen shown in
Figure 7.2.1.
Table 7.2.1
summarizes the fields in this screen.
Note
If only a one-time snapshot is needed, instead use
Storage → Volumes
and click the Create Snapshot button for the volume or
dataset to snapshot.
Fig. 7.2.1 Creating a Periodic Snapshot
| Setting | Value | Description |
|---|---|---|
| Volume/Dataset | drop-down menu | select an existing ZFS volume, dataset, or zvol |
| Recursive | checkbox | select this box to take separate snapshots of the volume/dataset and each of its child datasets; if unchecked, a single snapshot is taken of only the specified volume/dataset, but not any child datasets |
| Snapshot Lifetime | integer and drop-down menu | length of time to retain the snapshot on this system; if the snapshot is replicated, it is not removed from the receiving system when the lifetime expires |
| Begin | drop-down menu | do not create snapshots before this time of day |
| End | drop-down menu | do not create snapshots after this time of day |
| Interval | drop-down menu | how often to take snapshot between Begin and End times |
| Weekday | checkboxes | which days of the week to take snapshots |
| Enabled | checkbox | uncheck to disable the scheduled snapshot task without deleting it |
If the Recursive box is checked, child datasets of this dataset are included in the snapshot and there is no need to create snapshots for each child dataset. The downside is that there is no way to exclude particular child datasets from a recursive snapshot.
When the OK button is clicked, a snapshot is taken and the task will be repeated according to your settings.
After creating a periodic snapshot task, an entry for the snapshot task will be added to View Periodic Snapshot Tasks. Click an entry to access its Edit and Delete buttons.
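For comparison, a one-time snapshot can also be taken from the Shell, where the -r flag gives the same recursive behavior described above. The dataset and snapshot names here are examples:
zfs snapshot -r volume1/dataset1@manual-20180101
zfs list -t snapshot -r volume1/dataset1
The first command snapshots the dataset and all of its children; the second lists the snapshots that now exist under it.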
7.3. Replication Tasks¶
Replication is the duplication of snapshots from one TrueNAS® system to another computer. When a new snapshot is created on the source computer, it is automatically replicated to the destination computer. Replication is typically used to keep a copy of files on a separate system, with that system sometimes being at a different physical location.
The basic configuration requires a source system with the original data and a destination system where the data will be replicated. The destination system is prepared to receive replicated data, a periodic snapshot of the data on the source system is created, and then a replication task is created. As snapshots are automatically created on the source computer, they are automatically replicated to the destination computer.
Note
Replicated data is not visible on the receiving system until the replication task completes.
Note
The target dataset on the receiving system is automatically created in read-only mode to protect the data. To mount or browse the data on the receiving system, create a clone of the snapshot and use the clone. Clones are created in read/write mode, making it possible to browse or mount them. See Snapshots for more information on creating clones.
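A minimal sketch of creating such a clone on the receiving system from the Shell; the dataset and snapshot names are examples that depend on the replication setup described below:
zfs list -t snapshot -r betavol/alphadata
zfs clone betavol/alphadata@auto-20180101.0900-2w betavol/alphadata-clone
The clone is writable and can be browsed or shared without modifying the replicated snapshot itself.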
7.3.1. Examples: Common Configuration¶
The examples shown here use the same setup of source and destination computers.
7.3.1.1. Alpha (Source)¶
Alpha is the source computer with the data to be replicated. It is at IP address 10.0.0.102. A volume named alphavol has already been created, and a dataset named alphadata has been created on that volume. This dataset contains the files which will be snapshotted and replicated onto Beta.
This new dataset has been created for this example, but a new dataset is not required. Most users will already have datasets containing the data they wish to replicate.
Create a periodic snapshot of the source dataset by selecting
Storage → Periodic Snapshot Tasks.
Click the alphavol/alphadata dataset to highlight it. Create a
periodic snapshot of it by clicking
Periodic Snapshot Tasks, then
Add Periodic Snapshot as shown in
Figure 7.3.1.
This example creates a snapshot of the alphavol/alphadata dataset every two hours from Monday through Friday between the hours of 9:00 and 18:00 (6:00 PM). Snapshots are automatically deleted after their chosen lifetime of two weeks expires.
Fig. 7.3.1 Create a Periodic Snapshot for Replication
7.3.1.2. Beta (Destination)¶
Beta is the destination computer where the replicated data will be copied. It is at IP address 10.0.0.118. A volume named betavol has already been created.
Snapshots are transferred with SSH. To allow incoming connections, this service is enabled on Beta. The service is not required for outgoing connections, and so does not need to be enabled on Alpha.
7.3.2. Example: TrueNAS® to TrueNAS® Semi-Automatic Setup¶
TrueNAS® offers a special semi-automatic setup mode that simplifies setting up replication. Create the replication task on Alpha by clicking Replication Tasks and Add Replication. alphavol/alphadata is selected as the dataset to replicate. betavol is the destination volume where alphadata snapshots are replicated. The Setup mode dropdown is set to Semi-automatic as shown in Figure 7.3.2. The IP address of Beta is entered in the Remote hostname field. A hostname can be entered here if local DNS resolves for that hostname.
Note
If WebGUI HTTP → HTTPS Redirect has been
enabled in
System → General
on the destination computer,
Remote HTTP/HTTPS Port must be set to the HTTPS port
(usually 443) and Remote HTTPS must be enabled when
creating the replication on the source computer.
Fig. 7.3.2 Add Replication Dialog, Semi-Automatic
The Remote Auth Token field expects a special token from
the Beta computer. On Beta, choose
Storage → Replication Tasks,
then click Temporary Auth Token. A dialog showing the
temporary authorization token is shown as in
Figure 7.3.3.
Highlight the temporary authorization token string with the mouse and copy it.
Fig. 7.3.3 Temporary Authentication Token on Destination
On the Alpha system, paste the copied temporary authorization token string into the Remote Auth Token field as shown in Figure 7.3.4.
Fig. 7.3.4 Temporary Authentication Token Pasted to Source
Finally, click the OK button to create the replication task. After each periodic snapshot is created, a replication task will copy it to the destination system. See Limiting Replication Times for information about restricting when replication is allowed to run.
Note
The temporary authorization token is only valid for a few minutes. If a Token is invalid message is shown, get a new temporary authorization token from the destination system, clear the Remote Auth Token field, and paste in the new one.
7.3.3. Example: TrueNAS® to TrueNAS® Dedicated User Replication¶
A dedicated user can be used for replications rather than the root user. This example shows the process using the semi-automatic replication setup between two TrueNAS® systems with a dedicated user named repluser. SSH key authentication is used to allow the user to log in remotely without a password.
In this example, the periodic snapshot task has not been created yet.
If the periodic snapshot shown in the
example configuration has already
been created, go to
Storage → Periodic Snapshot Tasks,
click on the task to select it, and click Delete to remove
it before continuing.
On Alpha, select
Account → Users.
Click the Add User button. Enter repluser for
Username, enter /mnt/alphavol/repluser in the
Create Home Directory In field, enter
Replication Dedicated User for the Full Name, and set
the Disable password login checkbox. Leave the other
fields at their default values, but note the User ID
number. Click OK to create the user.
On Beta, the same dedicated user must be created as was created on
the sending computer. Select
Account → Users.
Click the Add User button. Enter the User ID number from
Alpha, repluser for Username, enter
/mnt/betavol/repluser in the Create Home Directory In
field, enter Replication Dedicated User for the
Full Name, and set the Disable password login
checkbox. Leave the other fields at their default values. Click
OK to create the user.
A dataset with the same name as the original must be created on the
destination computer, Beta. Select
Storage → Volumes,
click on betavol, then click the Create Dataset icon at
the bottom. Enter alphadata as the Dataset Name, then
click Add Dataset.
The replication user must be given permissions to the destination dataset. Still on Beta, open a Shell and enter this command:
zfs allow -ldu repluser create,destroy,diff,mount,readonly,receive,release,send,userprop betavol/alphadata
The destination dataset must also be set to read-only. Enter this command in the Shell:
zfs set readonly=on betavol/alphadata
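Both settings can be verified from the same Shell session before it is closed. This optional check is not part of the original procedure: running zfs allow with only the dataset name lists the delegated permissions, and zfs get shows the current value of the readonly property.
zfs allow betavol/alphadata
zfs get readonly betavol/alphadata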
Close the Shell by typing exit and pressing
Enter.
The replication user must also be able to mount datasets. Still on
Beta, go to
System → Tunables.
Click Add Tunable. Enter vfs.usermount for the
Variable, 1 for the Value, and choose
Sysctl from the Type drop-down. Click OK to
save the tunable settings.
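To confirm that the tunable is active, an optional check from a Shell on Beta is to query the sysctl directly; it is expected to report a value of 1:
sysctl vfs.usermount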
Back on Alpha, create a periodic snapshot of the source dataset by
selecting
Storage → Periodic Snapshot Tasks,
then clicking Add Periodic Snapshot and selecting the
alphavol/alphadata dataset, as shown in
Figure 7.3.1.
Still on Alpha, create the replication task by clicking Replication Tasks and Add Replication. alphavol/alphadata is selected as the dataset to replicate. betavol/alphadata is the destination volume and dataset where alphadata snapshots are replicated.
The Setup mode dropdown is set to Semi-automatic as shown in Figure 7.3.2. The IP address of Beta is entered in the Remote hostname field. A hostname can be entered here if local DNS resolves for that hostname.
Note
If WebGUI HTTP -> HTTPS Redirect has been
enabled in
System → General
on the destination computer,
Remote HTTP/HTTPS Port must be set to the HTTPS port
(usually 443) and Remote HTTPS must be enabled when
creating the replication on the source computer.
The Remote Auth Token field expects a special token from
the Beta computer. On Beta, choose
Storage → Replication Tasks,
then click Temporary Auth Token. A dialog showing the
temporary authorization token is shown as in
Figure 7.3.3.
Highlight the temporary authorization token string with the mouse and copy it.
On the Alpha system, paste the copied temporary authorization token string into the Remote Auth Token field as shown in Figure 7.3.4.
Set the Dedicated User Enabled checkbox. Choose repluser from the Dedicated User drop-down.
Click the OK button to create the replication task.
Note
The temporary authorization token is only valid for a few minutes. If a Token is invalid message is shown, get a new temporary authorization token from the destination system, clear the Remote Auth Token field, and paste in the new one.
Replication will begin when the periodic snapshot task runs.
Additional replications can use the same dedicated user that has already been set up. The permissions and read-only settings made through the Shell must be set on each new destination dataset, as in the sketch below.
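For example, replicating a second dataset to a hypothetical destination dataset named betavol/otherdata would require repeating the same two commands on Beta (the dataset name here is only an illustration):
zfs allow -ldu repluser create,destroy,diff,mount,readonly,receive,release,send,userprop betavol/otherdata  # otherdata is a hypothetical dataset name
zfs set readonly=on betavol/otherdata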
7.3.4. Example: TrueNAS® to TrueNAS® or Other Systems, Manual Setup¶
This example uses the same basic configuration of source and destination computers shown above, but the destination computer is not required to be a TrueNAS® system. Other operating systems can receive the replication if they support SSH, ZFS, and the same features that are in use on the source system. The details of creating volumes and datasets, enabling SSH, and copying encryption keys will vary when the destination computer is not a TrueNAS® system.
7.3.4.1. Encryption Keys¶
A public encryption key must be copied from Alpha to Beta to
allow a secure connection without a password prompt. On Alpha,
select
Storage → Replication Tasks → View Public Key,
producing the window shown in
Figure 7.3.5.
Use the mouse to highlight the key data shown in the window, then copy
it.
Fig. 7.3.5 Copy the Replication Key
On Beta, select
Account → Users → View Users. Click the root
account to select it, then click Modify User. Paste the
copied key into the SSH Public Key field and click
OK as shown in
Figure 7.3.6.
Fig. 7.3.6 Paste the Replication Key
Back on Alpha, create the replication task by clicking Replication Tasks and Add Replication. alphavol/alphadata is selected as the dataset to replicate. The destination volume is betavol. The alphadata dataset and snapshots are replicated there. The IP address of Beta is entered in the Remote hostname field as shown in Figure 7.3.7. A hostname can be entered here if local DNS resolves for that hostname.
Click the SSH Key Scan button to retrieve the SSH host keys from Beta and fill the Remote hostkey field. Finally, click OK to create the replication task. After each periodic snapshot is created, a replication task will copy it to the destination system. See Limiting Replication Times for information about restricting when replication is allowed to run.
7.3.5. Replication Options¶
Table 7.3.1 describes the options in the replication task dialog.
| Setting | Value | Description |
|---|---|---|
| Volume/Dataset | drop-down menu | ZFS volume or dataset on the source computer containing the snapshots to be replicated; the drop-down menu is empty if a snapshot does not already exist |
| Remote ZFS Volume/Dataset | string | ZFS volume on the remote or destination computer which will store the snapshots; if the destination dataset is not present, it will be created; /mnt/ is assumed, do not include it in the path |
| Recursively replicate child dataset’s snapshots | checkbox | when checked, also replicate snapshots of datasets that are children of the main dataset |
| Delete stale snapshots | checkbox | when checked, delete previous snapshots on the remote or destination computer which are no longer present on the source computer |
| Replication Stream Compression | drop-down menu | choices are lz4 (fastest), pigz (all rounder), plzip (best compression), or Off (no compression); selecting a compression algorithm can reduce the size of the data being replicated |
| Limit (kB/s) | integer | limit replication speed to the specified value in kilobytes/second; default of 0 is unlimited |
| Begin | drop-down menu | replication is not allowed to start before this time; times entered in the Begin and End fields set when replication can occur |
| End | drop-down menu | replication must start by this time; once started, replication will continue until it is finished |
| Enabled | checkbox | uncheck to disable the scheduled replication task without deleting it |
| Setup mode | drop-down menu | Manual or Semi-automatic |
| Remote hostname | string | IP address or DNS name of remote computer where replication is sent |
| Remote port | string | the port used by the SSH server on the remote or destination computer |
| Dedicated User Enabled | checkbox | allow a user account other than root to be used for replication |
| Dedicated User | drop-down menu | only available if Dedicated User Enabled is checked; select the user account to be used for replication |
| Encryption Cipher | drop-down menu | Standard, Fast, or Disabled |
| Remote hostkey | string | use the SSH Key Scan button to retrieve the public host key of the remote or destination computer and populate this field with that key |
The replication task runs after a new periodic snapshot is created. The periodic snapshot and any new manual snapshots of the same dataset are replicated onto the destination computer.
When multiple replications have been created, replication tasks run serially, one after another. Completion time depends on the number and size of snapshots and the bandwidth available between the source and destination computers.
The first time a replication runs, it must duplicate data structures from the source to the destination computer. This can take much longer to complete than subsequent replications, which only send differences in data.
Warning
Snapshots record incremental changes in data. If the receiving system does not have at least one snapshot that can be used as a basis for the incremental changes in the snapshots from the sending system, there is no way to identify only the data that has changed. In this situation, the snapshots in the receiving system target dataset are removed so a complete initial copy of the new replicated data can be created.
Selecting
Storage → Replication Tasks displays
Figure 7.3.8, the list of
replication tasks. The Last snapshot sent to remote side
column shows the name of the last snapshot that was successfully
replicated, and Status shows the current status of each
replication task. The display is updated every five seconds, always
showing the latest status.
Note
The encryption key that was copied from the source computer
(Alpha) to the destination computer (Beta) is an RSA public
key located in the /data/ssh/replication.pub file on the
source computer. The host public key used to identify the
destination computer (Beta) is from the
/etc/ssh/ssh_host_rsa_key.pub file on the destination
computer.
7.3.6. Replication Encryption¶
The default Encryption Cipher Standard setting provides good security. Fast is less secure than Standard but can give reasonable transfer rates for devices with limited cryptographic speed. For networks where the entire path between source and destination computers is trusted, the Disabled option can be chosen to send replicated data without encryption.
7.3.7. Limiting Replication Times¶
The Begin and End times in a replication task make it possible to restrict when replication is allowed. These times can be set to only allow replication after business hours, or at other times when disk or network activity will not slow down other operations like snapshots or Scrubs. The default settings allow replication to occur at any time.
These times control when replication tasks are allowed to start, but will not stop a replication task that is already running. Once a replication task has begun, it will run until finished.
7.3.8. Replication Topologies and Scenarios¶
The replication examples shown above are known as simple or A to B replication, where one machine replicates data to one other machine. Replication can also be set up in more sophisticated topologies to suit various purposes and needs.
7.3.8.1. Star Replication¶
In a star topology, a single TrueNAS® computer replicates data to multiple destination computers. This can provide data redundancy with the multiple copies of data, and geographical redundancy if the destination computers are located at different sites.
For example, an Alpha computer with three separate replication tasks could replicate data to the Beta, Gamma, and Delta computers. A to B replication is really just a star arrangement with only one target computer.
The star topology is simple to configure and manage, but it can place relatively high I/O and network loads on the source computer, which must run an individual replication task for each target computer.
7.3.8.2. Tiered Replication¶
In tiered replication, the data is replicated from the source computer onto one or a few destination computers. The destination computers then replicate the same data onto other computers. This allows much of the network and I/O load to be shifted away from the source computer.
For example, consider both Alpha and Beta computers to be located inside the same data center. Replicating data from Alpha to Beta does not protect that data from events that would involve the whole data center, like flood, fire, or earthquake. Two more computers, called Gamma and Delta, are set up. To provide geographic redundancy, Gamma is in a data center on the other side of the country, and Delta is in a data center on another continent. A single periodic snapshot replicates data from Alpha to Beta. Beta then replicates the data onto Gamma, and again onto Delta.
Tiered replication shifts most of the network and I/O overhead of repeated replication off the source computer onto the target computers. The source computer only replicates to the second-tier computers, which then handle replication to the third tier, and so on. In this example, Alpha only replicates data onto Beta. The I/O and network load of repeated replications is shifted onto Beta.
7.3.8.3. N-way Replication¶
N-way replication topologies recognize that hardware is sometimes idle, and computers can be used for more than a single dedicated purpose. An individual computer can be used as both a source and destination for replication. For example, the Alpha system can replicate a dataset to Beta, while Beta can replicate datasets to both Alpha and Gamma.
With careful setup, this topology can efficiently use I/O, network bandwidth, and computers, but can quickly become complex to manage.
7.3.8.4. Disaster Recovery¶
Disaster recovery is the ability to recover complete datasets from a replication destination computer. After an incident that causes the source computer to fail, the replicated dataset is copied back onto replacement hardware.
Recovering data onto a replacement computer can be done manually with the zfs send and zfs recv commands, or a replication task can be defined on the target computer containing the backup data. This replication task would normally be disabled. If a disaster damages the source computer, the target computer’s replication task is temporarily enabled, replicating the data onto the replacement source computer. After the disaster recovery replication completes, the replication task on the target computer is disabled again.
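As a rough sketch only, the manual approach could look like this when run from a Shell on Beta, the computer holding the backup. The snapshot name is taken from the earlier examples, and 10.0.0.108 is a hypothetical address for the replacement source computer; authentication from Beta to that system must already be configured:
zfs send betavol/alphadata@auto-20161206.1110-2w | ssh root@10.0.0.108 zfs recv alphavol/alphadata  # 10.0.0.108 is a hypothetical address
The receiving pool (alphavol in this sketch) must already exist on the replacement computer.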
7.3.9. Troubleshooting Replication¶
Replication depends on SSH, disks, network, compression, and encryption to work. A failure or misconfiguration of any of these can prevent successful replication.
7.3.9.1. SSH¶
SSH must be able to connect from the source system to the destination system with an encryption key. This can be tested from Shell by making an SSH connection from the source system to the destination system. From the previous example, this is a connection from Alpha to Beta at 10.0.0.118. Start the Shell on the source machine (Alpha), then enter this command:
ssh -vv -i /data/ssh/replication 10.0.0.118
On the first connection, the system might say
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)?
Verify that this is the correct destination computer from the
preceding information on the screen and type yes. At this
point, an SSH shell connection is open to the destination
system, Beta.
If a password is requested, SSH authentication is not working. See
Figure 7.3.5 above. This key
value must be present in the /root/.ssh/authorized_keys file
on Beta, the destination computer. The /var/log/auth.log
file on the destination computer can also show diagnostic errors
for login problems.
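On Beta, a quick way to check both items from a Shell (run as root) is to display the authorized keys file and the most recent authentication log entries:
cat /root/.ssh/authorized_keys
tail /var/log/auth.log
The replication key from Figure 7.3.5 should appear in the output of the first command.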
7.3.9.2. Compression¶
Matching compression and decompression programs must be available on
both the source and destination computers. This is not a problem when
both computers are running TrueNAS®, but other operating systems might
not have lz4, pigz, or plzip compression programs installed by
default. An easy way to diagnose the problem is to set
Replication Stream Compression to Off. If the
replication runs, select the preferred compression method and check
/var/log/debug.log on the TrueNAS® system for errors.
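One way to check which compression programs are available on the destination is to query it over the same SSH connection used for replication. This is an optional check that assumes the replication key from the earlier examples:
ssh -i /data/ssh/replication 10.0.0.118 which lz4 pigz plzip
Programs that are installed are listed with their paths; missing programs are simply not listed.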
7.3.9.3. Manual Testing¶
On Alpha, the source computer, the /var/log/messages file
can also show helpful messages to locate the problem.
On the source computer, Alpha, open a Shell and manually send
a single snapshot to the destination computer, Beta. The snapshot
used in this example is named auto-20161206.1110-2w. As
before, it is located in the alphavol/alphadata dataset. A
@ symbol separates the name of the dataset from the name of
the snapshot in the command.
zfs send alphavol/alphadata@auto-20161206.1110-2w | ssh -i /data/ssh/replication 10.0.0.118 zfs recv betavol
If a snapshot of that name already exists on the destination computer, the system will refuse to overwrite it with the new snapshot. The existing snapshot on the destination computer can be deleted by opening a Shell on Beta and running this command:
zfs destroy -R betavol/alphadata@auto-20161206.1110-2w
Then send the snapshot manually again. Snapshots on the destination
system, Beta, can be listed from the Shell with
zfs list -t snapshot or by going to
Storage → Snapshots.
Error messages here can indicate any remaining problems.
7.4. Resilver Priority¶
Resilvering, or the process of copying data to a replacement disk, is
best completed as quickly as possible. Increasing the priority of
resilvers can help them to complete more quickly. The
Resilver Priority tab makes it possible to increase the
priority of resilvering at times where the additional I/O or CPU usage
will not affect normal usage. Select
Storage → Resilver Priority
to display the screen shown in
Figure 7.4.1.
Table 7.4.1
describes the fields on this screen.
Fig. 7.4.1 Resilver Priority
| Setting | Value | Description |
|---|---|---|
| Enabled | checkbox | check to enable higher-priority resilvering |
| Begin higher priority resilvering at this time | drop-down | time to begin higher-priority resilvering |
| End higher priority resilvering at this time | drop-down | time to stop higher-priority resilvering |
| Weekday | checkboxes | use higher-priority resilvering on these days of the week |
7.5. Scrubs¶
A scrub is the process of ZFS scanning through the data on a volume. Scrubs help to identify data integrity problems, detect silent data corruptions caused by transient hardware issues, and provide early alerts of impending disk failures. TrueNAS® makes it easy to schedule periodic automatic scrubs.
Each volume should be scrubbed at least once a month. Bit errors in critical data can be detected by ZFS, but only when that data is read. Scheduled scrubs can find bit errors in rarely-read data. The amount of time needed for a scrub is proportional to the quantity of data on the volume. Typical scrubs take several hours or longer.
The scrub process is I/O intensive and can negatively impact performance. Schedule scrubs for evenings or weekends to minimize impact to users. Make certain that scrubs and other disk-intensive activity like S.M.A.R.T. Tests are scheduled to run on different days to avoid disk contention and extreme performance impacts.
Scrubs only check used disk space. To check unused disk space, schedule S.M.A.R.T. Tests of Type Long Self-Test to run once or twice a month.
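The progress and results of a scrub can also be checked from the Shell with zpool status. In this optional example, volume1 is the pool name used later in this section:
zpool status volume1
The scan line of the output shows whether a scrub is in progress, its completion percentage, and when the last scrub finished.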
Scrubs are scheduled and managed with
Storage → Scrubs.
When a volume is created, a ZFS scrub is automatically scheduled. An
entry with the same volume name is added to
Storage → Scrubs.
A summary of this entry can be viewed with
Storage → Scrubs → View Scrubs.
Figure 7.5.1
displays the default settings for the volume named volume1. In
this example, the entry has been highlighted and the Edit
button clicked to display the Edit screen.
Table 7.5.1
summarizes the options in this screen.
Fig. 7.5.1 Viewing a Volume’s Default Scrub Settings
| Setting | Value | Description |
|---|---|---|
| Volume | drop-down menu | volume to be scrubbed |
| Threshold days | integer | prevent scrub from running for this number of days after a scrub has completed, regardless of the calendar schedule; the default is a multiple of 7 to ensure that the scrub always occurs on the same day of the week |
| Description | string | optional text description of scrub |
| Minute | slider or minute selections | if the slider is used, a scrub occurs every N minutes; if specific minutes are chosen, a scrub runs only at the selected minute values |
| Hour | slider or hour selections | if the slider is used, a scrub occurs every N hours; if specific hours are chosen, a scrub runs only at the selected hour values |
| Day of Month | slider or month selections | if the slider is used, a scrub occurs every N days; if specific days of the month are chosen, a scrub runs only on the selected days of the selected months |
| Month | checkboxes | a scrub occurs on the selected months |
| Day of week | checkboxes | a scrub occurs on the selected days; the default is Sunday to minimize impact on users; note that this field and the Day of Month field are ORed together: setting Day of Month to 01,15 and Day of week to Thursday will cause scrubs to run on the 1st and 15th days of the month, but also on any Thursday |
| Enabled | checkbox | uncheck to disable the scheduled scrub without deleting it |
Review the default selections and, if necessary, modify them to meet the needs of the environment. Note that the Threshold field is used to prevent scrubs from running too often, and overrides the schedule chosen in the other fields.
Scheduled scrubs can be deleted with the Delete button, but this is not recommended. Scrubs can provide an early indication of disk issues before a disk failure. If a scrub is too intensive for the hardware, consider temporarily unchecking the Enabled checkbox for the scrub until the hardware can be upgraded.
7.6. Snapshots¶
The Snapshots tab is used to review the listing of available snapshots. An example is shown in Figure 7.6.1.
Note
If snapshots do not appear, check that the current time
configured in Periodic Snapshot Tasks does not conflict with
the Begin, End, and Interval
settings. If the snapshot was attempted but failed, an entry is
added to /var/log/messages. This log file can be viewed in
Shell.
Fig. 7.6.1 Viewing Available Snapshots
The listing includes the name of the volume or dataset, the name of each snapshot, and the amount of used and referenced data.
Used is the amount of space consumed by this dataset and all of its descendants. This value is checked against the dataset’s quota and reservation. The space used does not include the dataset’s reservation, but does take into account the reservations of any descendant datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space that is freed if this dataset is recursively destroyed, is the greater of its space used and its reservation. When a snapshot is created, the space is initially shared between the snapshot and the filesystem, and possibly with previous snapshots. As the filesystem changes, space that was previously shared becomes unique to the snapshot, and is counted in the snapshot’s space used. Additionally, deleting snapshots can increase the amount of space unique to (and used by) other snapshots. The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds, but committing a change to disk does not necessarily guarantee that the space usage information is updated immediately.
Tip
Space used by individual snapshots can be seen by running
zfs list -t snapshot from Shell.
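For example, to show only the snapshots of the alphavol/alphadata dataset from the earlier examples, along with the used and referenced columns, a command like this can be run from Shell:
zfs list -t snapshot -r -o name,used,referenced alphavol/alphadata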
Refer indicates the amount of data accessible by this dataset, which may or may not be shared with other datasets in the pool. When a snapshot or clone is created, it initially references the same amount of space as the file system or snapshot it was created from, since its contents are identical.
Snapshots have icons on the right side for several actions.
Clone Snapshot prompts for the name of the clone to create. A
clone is a writable copy of the snapshot. Since a clone is actually a
dataset which can be mounted, it appears in the Volumes
tab rather than the Snapshots tab. By default,
-clone is added to the name of a snapshot when a clone is
created.
Destroy Snapshot shows a pop-up message asking for confirmation. Child clones must be destroyed before their parent snapshot can be destroyed. While creating a snapshot is instantaneous, deleting a snapshot can be I/O intensive and can take a long time, especially when deduplication is enabled. In order to delete a block in a snapshot, ZFS has to walk all the allocated blocks to see if that block is used anywhere else; if it is not, it can be freed.
The most recent snapshot also has a Rollback Snapshot icon. Clicking the icon asks for confirmation before rolling back to this snapshot state. Confirming by clicking Yes causes any files that have changed since the snapshot was taken to be reverted back to their state at the time of the snapshot.
Note
Rollback is a potentially dangerous operation and causes any configured replication tasks to fail as the replication system uses the existing snapshot when doing an incremental backup. To restore the data within a snapshot, the recommended steps are:
- Clone the desired snapshot.
- Share the clone with the share type or service running on the TrueNAS® system.
- After users have recovered the needed data, destroy the clone in the Active Volumes tab.
This approach does not destroy any on-disk data and has no impact on replication.
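For reference, a command-line sketch of the first and last steps, using the snapshot name from the earlier examples and a hypothetical clone name, looks like this; the graphical steps above remain the recommended method:
zfs clone alphavol/alphadata@auto-20161206.1110-2w alphavol/alphadata-clone  # clone name is hypothetical
zfs destroy alphavol/alphadata-clone
The clone is destroyed only after the needed files have been recovered from it.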
A range of snapshots can be selected with the mouse. Click on the
checkbox in the left column of the first snapshot, then press and hold
Shift and click on the checkbox for the end snapshot. This can
be used to select a range of obsolete snapshots to be deleted with the
Destroy icon at the bottom. Be cautious and careful when
deleting ranges of snapshots.
Periodic snapshots can be configured to appear as shadow copies in newer versions of Windows Explorer, as described in Configuring Shadow Copies. Users can access the files in the shadow copy using Explorer without requiring any interaction with the TrueNAS® graphical administrative interface.
The ZFS Snapshots screen allows the creation of filters to view snapshots by selected criteria. To create a filter, click the Define filter icon (near the text No filter applied). When creating a filter:
- Select the column or leave the default of Any Column.
- Select the condition. Possible conditions are: contains (default), is, starts with, ends with, does not contain, is not, does not start with, does not end with, and is empty.
- Enter a value that meets your view criteria.
- Click the Filter button to save the filter and exit the define filter screen. Alternately, click the + button to add another filter.
When creating multiple filters, select the filter to use before leaving the define filter screen. After a filter is selected, the No filter applied text changes to Clear filter. Clicking Clear filter produces a pop-up message indicating that this removes the filter and lists all available snapshots.
7.7. VMware-Snapshot¶
Storage → VMware-Snapshot
is used to coordinate ZFS snapshots when using TrueNAS® as a VMware
datastore. Once this type of snapshot is created, TrueNAS® will
automatically snapshot any running VMware virtual machines before
taking a scheduled or manual ZFS snapshot of the dataset or zvol
backing that VMware datastore. The temporary VMware snapshots are then
deleted on the VMware side but still exist in the ZFS snapshot and can
be used as stable resurrection points in that snapshot. These
coordinated snapshots will be listed in Snapshots.
Figure 7.7.1 shows the menu for adding a VMware snapshot and Table 7.7.1 summarizes the available options.
Fig. 7.7.1 Adding a VMware Snapshot
| Setting | Value | Description |
|---|---|---|
| Hostname | string | IP address or hostname of VMware host; when clustering, this is the vCenter server for the cluster |
| Username | string | user on VMware host with enough permission to snapshot virtual machines |
| Password | string | password associated with Username |
| ZFS Filesystem | drop-down menu | the filesystem to snapshot |
| Datastore | drop-down menu | after entering the Hostname, Username, and Password, click Fetch Datastores to populate the menu and select the datastore with which to synchronize |