Part 3: Building Your Own FreeBSD-based NAS with ZFS
Part 3: NAS Sharing Using NFS, Samba and iSCSI Shares
Today, we’ll concentrate on exposing the data on your NAS to the network using NFS, Samba, and iSCSI shares. We’ll provide an overview of each type of share to help guide you in deciding which is most suited to the clients that will be accessing the NAS. We’ll also point out configuration parameters which are unique to FreeBSD or OpenZFS, as well as any resources for more information.
Network File System (NFS) is a commonly used protocol for accessing NAS systems, especially with Linux or FreeBSD clients. Most client operating systems either ship with an NFS client or have NFS clients readily available for download. Once a client has mounted an exported NFS share, it can be used like any other directory on the client system.
v3 or v4?
FreeBSD supports both NFSv3 and NFSv4. The differences in these two protocols can have a big impact on both configuration and performance. Some of the major differences include:
- NFSv3 uses separate RPC services to implement the NFS daemon, status monitoring, mounting, locking, and port mapping. By contrast, NFSv4 uses a single RPC service with integrated binding, mounting, and locking, and thus no need for port mapping. In particular, the locking operations of leasing, timeouts, and client-server negotiation are built into the NFSv4 protocol. NFSv4 clients are aware of the server’s state and vice versa, which allows for a more graceful recovery after a lost connection.
- NFSv3 uses both TCP and UDP. NFSv4 is TCP-only.
- NFSv3 supports POSIX ACLs. NFSv4 supports ACLs based on the Microsoft Windows NT model, which provide a richer permission set and are implemented on more clients than POSIX ACLs.
- In NFSv3, all exports are mounted separately. In NFSv4, exports can be mounted together in a directory tree structure as part of a pseudo-filesystem.
- NFSv3 security is limited to file permissions on the exported data and allowed host addresses defined in the exports file. NFSv4 implements the RPCSEC_GSS protocol standard to provide authentication, integrity, and privacy over the network between the NFS server and client.
- FreeBSD supports pNFS (parallel NFS) for high-performance storage using NFSv4.1 and later. Refer to pnfsserver(4) and pnfs(4) for configuration information.
Not surprisingly, NFSv4’s slew of new features increases its configuration complexity. For example, it requires that a configured KDC (Key Distribution Center) and directory service are available in the network to provide user authentication.
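As a starting point, the NFS server daemons can be enabled in /etc/rc.conf. This is a minimal sketch; a Kerberized NFSv4 deployment will need gssd(8) and the Kerberos configuration described below on top of it:

```shell
# /etc/rc.conf -- minimal NFSv4 server sketch
nfs_server_enable="YES"     # start nfsd(8)
nfsv4_server_enable="YES"   # accept NFSv4 mounts
nfsuserd_enable="YES"       # map NFSv4 user/group strings to uids/gids
mountd_enable="YES"         # parses /etc/exports
rpcbind_enable="YES"        # only needed if NFSv3 clients will also connect
gssd_enable="YES"           # only needed for Kerberized (sec=krb5*) mounts
```

After editing, `service nfsd start` brings the server up without a reboot.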
Sync or Async?
Our article on Understanding OpenZFS SLOGs provides a good overview of the differences between async and sync writes, and how that difference impacts an OpenZFS system. By default, NFS writes are always sync, though some NFS clients can override that default when mounting the NFS share (even though mount(8) warns that doing so is a bad idea).
The speed of the storage disks and the amount of NFS writes factor into whether or not the NAS system would benefit from the addition of a mirrored SLOG. When monitoring the system’s performance, be on the lookout for disk contention.
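Tools like zpool-iostat(8) and gstat(8) make that contention visible; the pool name below is an example:

```shell
# Per-vdev throughput and latency, refreshed every five seconds
zpool iostat -v tank 5

# Per-disk load; a disk pinned near 100 %busy indicates contention
gstat -p
```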
Central or distributed authentication?
In order to configure NFSv4, you’ll need a Kerberos KDC, which in turn refers to a listing of users and their permissions. While you could create users locally by adding them to the storage system’s passwd database, a more typical and scalable solution is to integrate Kerberos with a directory service such as LDAP or Active Directory.
If your network contains a Windows domain controller, it probably already has a configured KDC and Active Directory.
For networks without a Windows domain controller, a FreeBSD system can be configured with Kerberos and one of the directory services. The FreeBSD Handbook sections on Kerberos and LDAP are good starting points.
The Kerberized NFS section of the FreeBSD Wiki contains some useful tips and troubleshooting suggestions.
FreeBSD/OpenZFS Considerations and Resources
- When configuring NFS, it’s important to remember that each shared pool or dataset is considered to be a unique filesystem. This can make exports tricky on NFSv3 as individual NFS shares cannot cross filesystem boundaries. Adding paths to share more directories only works if those directories are within the same filesystem.
- While NFSv4 doesn’t have this restriction, not all clients support a mount which spans multiple filesystems. All OpenZFS datasets mounted below the NFSv4 root will be exported as well, unless sharenfs is explicitly set to “off.”
- FreeBSD uses -maproot or -mapall rather than root_squash to control the remapping of root permissions to those of a non-privileged user. See exports(5) for details.
- Our article on NFS Shares with ZFS describes how to leverage the OpenZFS sharenfs property.
- nfsd(8) provides some FreeBSD-specific tuning parameters.
- This forum thread discusses some configuration examples for automounting NFS shares on FreeBSD.
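Tying those points together, a sketch of /etc/exports might look like the following; the pool, dataset, user, and network names are all examples:

```shell
# /etc/exports
# NFSv3: one line per dataset, since a share cannot cross
# filesystem boundaries
/tank/media   -maproot=nobody -network 192.168.1.0/24
/tank/backups -mapall=backup  192.168.1.50

# NFSv4: declare the root of the pseudo-filesystem once
V4: /tank -sec=krb5:krb5i:krb5p -network 192.168.1.0/24
```

Run `service mountd reload` after editing the file, or set the sharenfs property instead (e.g. `zfs set sharenfs="-maproot=nobody" tank/media`) and let OpenZFS manage the exports for you.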
SMB Using Samba
Server Message Block (SMB) is a sharing protocol created by Microsoft. It is included with Windows systems, and most operating systems either include an SMB client or have readily available SMB clients for download.
Non-Windows systems—including FreeBSD—can provide SMB sharing using an SMB server such as Samba.
SMB Multi-Channel improves performance by distributing SMB traffic over multiple network connections and multiple CPU cores through the use of RSS (receive-side scaling). Code Insecurity has an excellent how-to for configuring SMB multi-channel on FreeBSD.
OpenZFS and Windows ACEs
An important part of the SMB configuration is making sure ACLs map properly to Windows ACEs (Access Control Entries). OpenZFS provides the aclmode and aclinherit properties to configure how ACLs are handled. Their default values are aclmode=discard and aclinherit=restricted and their possible settings are listed in zfsprops(8).
Since the default settings are more suited to POSIX systems—and some POSIX commands such as chmod(8) can result in loss of extended ACL information—we recommend changing the properties for each SMB-shared dataset as follows:
- change aclmode to restricted. This will cause chmod to error out rather than clobber the ACEs.
- change aclinherit to passthrough. The default behavior is to remove the write and write_owner permissions; setting it to passthrough tells OpenZFS to inherit all ACEs without any modifications.
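Applied to a hypothetical SMB-shared dataset named tank/smb, the recommended settings look like this:

```shell
# Recommended ACL properties for an SMB-shared dataset
zfs set aclmode=restricted tank/smb
zfs set aclinherit=passthrough tank/smb

# Confirm both properties took effect
zfs get aclmode,aclinherit tank/smb
```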
When configuring Samba, several choices are available for share-based access control:
- Configuring a Standalone Samba Server: this is the simplest configuration but is only suited for small environments with few users. Rather than using a directory service for user authentication, valid/invalid users/groups or allowed/denied hosts are explicitly defined in the Samba configuration file (smb.conf).
- Configuring Samba as an Active Directory Domain Controller: suited for most environments without a Windows domain. Requires configuration of Active Directory, DNS, Kerberos, and time synchronization.
- Joining an Existing Windows Domain: Samba can be configured either as a Domain Controller or as a domain member.
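For the standalone case, a minimal smb4.conf sketch might look like the following; the share path, user names, and network are examples:

```
# /usr/local/etc/smb4.conf -- standalone server sketch
[global]
    workgroup = WORKGROUP
    server role = standalone server
    hosts allow = 192.168.1.0/24

[media]
    path = /tank/smb/media
    read only = no
    valid users = alice bob
```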
OpenZFS provides a sharesmb property; if it is set on a file system, the zfs share and zfs unshare commands can be used to toggle the share’s availability. Before setting this property, be aware of its default settings which are described in zfsprops(8):
“The share is created with the ACL (Access Control List) “Everyone:F” (“F” stands for “full permissions”, i.e. read and write permissions) and no guest access (which means Samba must be able to authenticate a real user, system passwd/shadow, LDAP or smbpasswd based) by default. This means that any additional access control (disallow specific user specific access, etc.) must be done on the underlying file system.”
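With those defaults in mind, enabling and toggling the share on a hypothetical dataset looks like this:

```shell
# Let OpenZFS hand the dataset to Samba
zfs set sharesmb=on tank/smb

zfs unshare tank/smb   # temporarily take the share offline
zfs share tank/smb     # bring it back
```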
The Samba Section of the FreeBSD Handbook can get you started on installing and starting Samba. From there you’ll want to follow the Samba links in the Authentication section above for the desired Samba configuration.
Although the Internet is full of performance tuning suggestions for SMB/Samba, the majority are badly outdated and frequently decrease performance significantly when applied to modern Samba servers. Modern Samba servers typically don’t require tuning at all; for environments which do need tuning, we recommend consulting the Performance Tuning section of the Samba Wiki first.
iSCSI
Internet Small Computer Systems Interface (iSCSI) provides authorized clients (called initiators) with block-level access to storage devices (called targets) over a network.
This standard uses the following terminology, which appears in iSCSI man pages and configuration examples.
- IQN (iSCSI Qualified Name): unique for each target.
- Extent: the iSCSI share which appears as an unformatted disk to iSCSI initiators. In OpenZFS, it is typically a zvol (a volume which is exported as a block device). See zfs-create(8) for more information on creating a zvol.
- Portal: indicates which IP addresses and ports to listen on for connection requests.
- LUN (Logical Unit Number): represents a logical SCSI device. Rather than mounting remote directories, initiators format and directly manage filesystems on iSCSI LUNs.
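Creating the zvol that will back an extent is a single command; the pool, path, and size below are examples:

```shell
# Sparse (-s) 500 GB volume, exposed only as a raw device node
zfs create -s -V 500G -o volmode=dev tank/iscsi/target0
# The device appears as /dev/zvol/tank/iscsi/target0
```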
iSCSI supports two types of authentication for the connection between the target and initiator: NONE (the default) and CHAP (Challenge-Handshake Authentication Protocol), which requires that the same username and secret (password) be set on both the target and the initiator. Additional configuration should explicitly map initiators to just the specific target LUNs they should access.
iSCSI is a cleartext protocol. Environments with sensitive data should consider using IPsec or VPN protocols to protect data as it crosses the network.
Due to these security and performance considerations, iSCSI traffic is generally transmitted over dedicated network segments or VLANs (virtual LANs). A logically isolated storage network ensures that only valid initiators can connect to the storage arrays and that unauthorized users are kept off the iSCSI network, while isolating iSCSI traffic from other traffic also prevents network congestion and minimizes storage latency.
Network design can also impact iSCSI performance. In addition to preferring dedicated network segments, consider these best practice considerations for iSCSI networks:
- Configure jumbo frames, if all the targets, initiators, and intervening Layer 2 and 3 devices support it.
- Keep routing hops between targets and initiators to a minimum in order to minimize latency. Ideally, they should live on the same subnet, with no routers in between.
- Multipathing can be used to distribute workload and provide reliability. A Review of Storage Multipathing provides an overview of this feature on FreeBSD with some iSCSI configuration examples.
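On FreeBSD, jumbo frames are enabled per interface; assuming a hypothetical ix0 interface dedicated to iSCSI traffic:

```shell
# Persist a 9000-byte MTU on the storage-facing interface
sysrc ifconfig_ix0="inet 10.0.10.5/24 mtu 9000"
service netif restart ix0
```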
FreeBSD Native iSCSI Target or istgt?
FreeBSD provides a native, high-performance, in-kernel target daemon. This iSCSI target is known as the CAM Target Layer, so all the associated commands and man pages begin with “ctl”:
- The iSCSI target daemon: ctld(8)
- Target configuration file: ctl.conf(5)
- Administrative utility for checking status and health of target: ctladm(8)
- Command for gathering target statistics: ctlstat(8)
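Putting those pieces together, a sketch of /etc/ctl.conf serving one CHAP-authenticated LUN might look like this; the IQN, credentials, listen address, and zvol path are all examples:

```
# /etc/ctl.conf -- CHAP-authenticated target sketch
auth-group ag0 {
    chap "initiator-user" "long-chap-secret-1234"
}

portal-group pg0 {
    listen 10.0.10.5
}

target iqn.2024-01.com.example:target0 {
    auth-group ag0
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/iscsi/target0
    }
}
```

Enable and start the daemon with `sysrc ctld_enable=YES` and `service ctld start`; `ctladm lunlist` should then show the LUN.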
FreeBSD 10.0 and higher also provide a native iSCSI initiator. Its associated commands all begin with “iscsi”:
- The iSCSI daemon must be running in order to establish new connections or recover after a connection error: iscsid(8)
- The kernel component of the initiator: iscsi(4). This man page contains the available initiator tunables.
- Initiator configuration file: iscsi.conf(5)
- Initiator management utility: iscsictl(8)
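As a sketch, an /etc/iscsi.conf entry for a CHAP-protected target could look like the following; the nickname, address, IQN, and credentials are examples:

```
# /etc/iscsi.conf -- initiator-side sketch
t0 {
    TargetAddress = 10.0.10.5
    TargetName    = iqn.2024-01.com.example:target0
    AuthMethod    = CHAP
    chapIName     = initiator-user
    chapSecret    = long-chap-secret-1234
}
```

After `sysrc iscsid_enable=YES` and `service iscsid start`, `iscsictl -An t0` attaches the target and the new disk appears as a da(4) device.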
In the last article of this series, we’ll discuss maintenance and upkeep of the NAS system.