FreeBSD vs. Linux – Which Operating System to use for OpenZFS

In this article, we’re not going to set out to tell you which operating system you should use—they’re both excellent!—but we’ll lay out their remaining OpenZFS differences, to help anyone on the fence decide which OS to use beneath our favorite filesystem.

In December 2020, the OpenZFS project finally unified the OpenZFS codebase between the FreeBSD and Linux platforms.

This helps ensure cross-compatibility between the two—but there are still some implementation and even a few feature differences worth paying attention to.


Setting up the Environment with Ubuntu Linux and FreeBSD

Although you can install OpenZFS on the vast majority of Linux distributions, for reasons of space and clarity, the only one we’re covering directly today is Ubuntu—specifically, the latest LTS version, Ubuntu 22.04.

Similarly, on the BSD side of things, it’s worth noting that we’re specifically talking about FreeBSD (and its children, such as the desktop-focused GhostBSD). Nothing we cover today should be interpreted to apply to either NetBSD or OpenBSD.

Installation

There are two major ways to install OpenZFS: on top of an existing operating system, or during the process of installing the OS itself.

Adding OpenZFS support to an existing system

FreeBSD has native OpenZFS support built right into every installation, so there are no packages to add to an existing FreeBSD system in order to get OpenZFS support. Ubuntu 22.04 has pieces of OpenZFS built in—specifically, the default kernel ships with the OpenZFS kernel module, whether you’ve installed the userland tools or not—but you will need to add a package in order to use it:

root@ubu2204:~# apt update ; apt install zfsutils-linux

That’s all it takes.
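To sanity-check the new tooling without dedicating a real disk, you can build a throwaway pool on a sparse file (the file name and size here are arbitrary):

```shell
# Create a 1GiB sparse file to act as a disposable vdev
truncate -s 1G /tmp/zfs-test.img

# Build, inspect, and tear down a test pool on it
zpool create testpool /tmp/zfs-test.img
zpool status testpool
zpool destroy testpool
rm /tmp/zfs-test.img
```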

Installing FreeBSD 13 on a ZFS root

If you’re looking to install a fresh operating system on OpenZFS, FreeBSD currently has a significant advantage—built-in, comprehensive support in the OS installer itself. While running FreeBSD 13’s installer, you’re directly presented with the option to use an OpenZFS root—in fact, it’s the default option! There really are no “gotchas” here; FreeBSD’s installer defaults to a single drive, but natively supports more complex pool options as well—and if you don’t see the right options for exactly the pool topology you’d like to end up with, it’s unlikely to matter. You can always install the OS to a single vdev, and add more vdevs later if necessary.
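As a sketch of that last point: assuming the installer created a pool named zroot and two new disks show up as ada1 and ada2 (names will vary on your system), adding a second vdev later might look like this:

```shell
# Add a mirrored pair of disks as a new top-level vdev
# (zroot, ada1, and ada2 are assumed names; check yours with zpool status)
zpool add zroot mirror ada1 ada2
zpool status zroot
```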

Installing Ubuntu 22.04 on a ZFS root

Ubuntu 22.04 doesn’t make life anywhere near so easy, for those who would like an OpenZFS root filesystem. If you’re using the “desktop” ISO, there is an option for an OpenZFS root—but it comes with a fairly significant “gotcha” in the form of zsys, a project Canonical (the company behind Ubuntu) left no more than half-finished.

Zsys was, and remains, a very ambitious project: it splits your ZFS root into a bewildering and unnecessarily large array of individual datasets. This makes the operating system root difficult and awkward to manage—which the zsys devs didn’t consider a problem, since they envisioned zsys itself abstracting that management away from the user entirely.

Zsys also automatically takes snapshots prior to installation of new software using apt—a laudable goal, but did we mention the project was never properly finished? Although zsys automatically takes snapshots, it doesn’t automatically destroy them later, which means many users learn about snapshot management for the first time when their OS stops working properly due to a lack of free disk space.

Canonical’s desktop ZFS installer also, unfortunately, lacks support for installing to multiple drives. This can be worked around to some degree by using zpool attach to add another drive later, turning the single-disk vdev your new OS’s root is on into a mirror vdev—but there’s no such workaround if you prefer, for example, a RAIDz2 root.
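A sketch of that workaround, assuming the installer put the root pool rpool on /dev/sda3 and the new disk has a matching partition at /dev/sdb3 (verify your own names with zpool status and lsblk):

```shell
# Attach a second device to the existing single-disk vdev,
# converting it into a two-way mirror (device names are assumptions)
zpool attach rpool /dev/sda3 /dev/sdb3

# Watch the resilver finish before trusting the new mirror
zpool status rpool
```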

The best thing about zsys, in our opinion, is that it can be safely removed once the OS installation is over. If you used Ubuntu’s ZFS-on-root desktop installer but don’t want zsys managing it anymore, you can simply run apt remove --purge zsys and it will trouble you no more… although you’ll still be saddled with far too many datasets in your OS root.

The final zsys gotcha is one we don’t much mind, given the rest of its drawbacks: it’s only available on the desktop edition, not the server edition.

If you need an OpenZFS root on Ubuntu 22.04, you’re likely better off with a third-party solution such as zfsbootmenu. Although the installation process itself is a bit clunky—requiring the admin to boot to a live desktop environment and run commands from a shell—the end result avoids all the pitfalls listed above, resulting in a ZFS root as flexible as the one FreeBSD’s native installer provides.

Boot Environments

Proper support for ZFS boot environments means allowing you to—for example—take a snapshot of your filesystem root prior to a potentially dangerous operation (like an in-place major operating system upgrade), then be able to boot the system cleanly to the old snapshot if something went wrong.

This barely scratches the surface of what can be done with boot environments, but it should be enough to give you an idea of what they’re about.

FreeBSD fully supports ZFS boot environments right out of the box, just like it fully supports ZFS on root in the first place. 
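FreeBSD ships the bectl tool for managing boot environments; a typical pre-upgrade workflow might look like the following (the environment name is arbitrary):

```shell
# Snapshot the current root as a new boot environment
bectl create pre-upgrade

# List available boot environments
bectl list

# If the upgrade goes badly, activate the old environment and reboot
bectl activate pre-upgrade
shutdown -r now
```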

On Ubuntu, the zsys desktop installer gives you some support for boot environments—but we find it unnecessarily complicated. The third-party zfsbootmenu solution we discussed earlier offers more robust support. If you need proper ZFS boot environments under Ubuntu (or other Linux distributions), we recommend zfsbootmenu.

Management of Kernel Tunables

The vast majority of ZFS tuning is done from the command line, using the zfs set command to change properties like recordsize, or passing -o ashift to zpool create (or zpool add) when creating pools and vdevs. But there are some broader tunables that must be set at the operating system kernel level.

These kernel-level tunables are handled differently under FreeBSD and Linux, in accordance with the way those operating systems manage all kernel tunables.

Under FreeBSD, ZFS kernel tunables and variables are set or read using the sysctl command. For example, one might check L2ARC cache stats with the command sysctl kstat.zfs.misc.arcstats:

root@freebsd:~# sysctl kstat.zfs.misc.arcstats | egrep 'l2_(hits|misses)'
kstat.zfs.misc.arcstats.l2_misses: 29549340042
kstat.zfs.misc.arcstats.l2_hits: 1893762538

Under Linux, the same stats would be viewed using the special /proc directory, which exposes kernel variables as a filesystem:

root@banshee:~# egrep 'l2_(hits|misses)' /proc/spl/kstat/zfs/arcstats
l2_hits 4 490701
l2_misses 4 3366016

When the goal is to set kernel variables rather than simply read them, FreeBSD users add config lines to /boot/loader.conf, and Ubuntu users add them to /etc/modprobe.d/zfs.conf. In either case, changes are only read when the ZFS kernel module is first loaded—which most commonly means rebooting the system in order to get them to take effect.
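For example, to cap the ARC at 32GiB persistently (the value, and the decision to cap the ARC at all, are purely illustrative), the equivalent one-line configurations are:

```
# FreeBSD: append to /boot/loader.conf
vfs.zfs.arc.max="34359738368"

# Ubuntu: append to /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=34359738368
```

Reboot afterward for either change to take effect.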

Some kernel tunables can be changed dynamically, using the sysctl command under FreeBSD, or by changing the values in “files” exposed under Linux’s /sys directory. Changes made this way will disappear on next boot—assuming they take effect at all; the majority of ZFS kernel-level tunables simply cannot be changed dynamically on a running system in the first place.

root@freebsd:~# sysctl vfs.zfs.arc.max=34359738368
vfs.zfs.arc.max: 51539607552 -> 34359738368
root@banshee:~# echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max

Storage Device Naming

Although OpenZFS is generally happy to use “raw” device names on either FreeBSD or Linux, this isn’t considered best practice. Which disk is /dev/sda and which disk is /dev/sdb on a Linux system can change from one boot to the next—and FreeBSD is no different, when it comes to raw device names like /dev/ada0.

Instead, it’s best practice to use either custom labels or WWN IDs (factory-assigned, universally unique identifiers) when feeding drives or partitions into a pool. Shortened versions of the same labels can and should be affixed to the easily-visible portion of the disks themselves, to add clarity and reduce confusion when it comes time to replace a failed disk.

Under Linux, the /dev/disk/by-id directory is enabled by default, and will show both manufacturer serial numbers and WWN IDs alike. These values can be used directly when creating pools: for example, zpool create mypool /dev/disk/by-id/wwn-0x[bunchofnumbers] rather than zpool create mypool /dev/sda.

Under FreeBSD, we can similarly use partition labels or WWN IDs for drives—but /dev/diskid is disabled by the installer; we must first enable it. Once we enable /dev/diskid, we can use it to add disks by serial number in the same way that we would under Linux.
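Re-enabling it is a one-line change, assuming the stock installer behavior of shipping this tunable disabled in /boot/loader.conf; after a reboot, entries should appear under /dev/diskid:

```
# In /boot/loader.conf: change (or add) this line, then reboot
kern.geom.label.disk_ident.enable="1"
```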

NFS and SMB integration

OpenZFS has built-in support for managing NFS shares across both operating systems. Managing NFS shares with ZFS commands allows for easy ZFS property inheritance, which is much simpler than sharing a large number of filesystems using an exports file.

A simple zfs set sharenfs=on is sufficient to expose that dataset via NFS, regardless of which host operating system ZFS is running on. ZFS can also manage additional configuration—for example, a common NFSv3 configuration on FreeBSD might be:

root@freebsd:~# zfs set sharenfs=-alldirs,-network,192.168.0.0,-mask,255.255.255.0,-maproot=nobody pool/dataset

This allows passing any supported options to the NFS server. You can use commas in place of spaces to ensure the property value doesn’t need to be quoted. Any child dataset of pool/dataset will inherit these settings and also be shared, unless you specifically override the child with sharenfs=off or its own different settings. 
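That inheritance is easy to verify; in this sketch, pool/dataset/child is a hypothetical child dataset:

```shell
# Children inherit sharenfs from pool/dataset automatically
zfs create pool/dataset/child
zfs get -o name,value,source sharenfs pool/dataset/child

# Opt a single child out of sharing
zfs set sharenfs=off pool/dataset/child
```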

However, this does pose one small problem—the syntax of the options you pass to the Linux NFS server is different. This generally isn’t an issue, but if you move a pool between the two operating systems, you’ll need to update the sharenfs properties to use the new host operating system’s supported syntax.

There have been discussions in the OpenZFS project about trying to standardize these settings—or split the property into a per-operating system property so that incompatible configurations will not be applied—but that’s still a work in progress.

With NFSv4, things are generally much simpler. Under NFSv4, you set up an export at a single point in the filesystem hierarchy, and everything below that point is shared via NFS. You can then simply set ZFS’s sharenfs property to on or off to control which datasets are accessible.

In /etc/exports:

FreeBSD:

V4: /mypool -network 192.168.100.0 -mask 255.255.255.0

Linux: 

/mypool 192.168.100.0/24(rw,fsid=0,no_subtree_check)

The original Solaris version of ZFS also included a sharesmb property which controlled the in-kernel SMB server. On FreeBSD this setting is unsupported. On Linux it can be configured to work with Samba’s USERSHARES feature—but this requires setting up special permissions to the directory where the configuration is stored, and just configuring Samba directly is usually better practice. 
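If you do configure Samba directly, a minimal share definition for a dataset mounted at /mypool/data might look like the following (the share name, path, and user are assumptions for illustration):

```
# In /etc/samba/smb.conf
[data]
    path = /mypool/data
    read only = no
    valid users = alice
```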

Extended File Attributes

Although extended attributes—perhaps best known as the mechanism behind Access Control Lists (ACLs)—are available under OpenZFS on either FreeBSD or Linux, they are unfortunately handled a bit differently on each.

On either FreeBSD or Linux, extended attributes may be enabled with zfs set xattr=on or disabled with zfs set xattr=off. Enabling ACLs means creating a very small metadata object for each file stored within the dataset where xattr=on; that additional object contains the access control list values.

So far, so good—but Linux supports an additional setting, xattr=sa, which FreeBSD does not. On a Linux system, zfs set xattr=sa enables ACLs—but it stores the ACL in the file’s existing dnode, rather than in a separate metadata object.

In filesystems with very large numbers of files, particularly small files, xattr=sa is a significant performance and storage efficiency improvement. But care must be taken not to rely on this setting in environments where a dataset may be migrated from Linux to FreeBSD, since FreeBSD does not support storing extended attributes inside the dnode.
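As a quick sketch (the dataset name here is hypothetical):

```shell
# Store extended attributes in the file's dnode instead of a separate object
zfs set xattr=sa pool/maildir

# Confirm the property and where it was set
zfs get -o name,value,source xattr pool/maildir
```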

If you plan to host datasets which need extended attributes and will expose tens of thousands of files, xattr=sa support may be an excellent reason to consider Linux for your host operating system.

Conclusion

Ultimately, the differences between FreeBSD and Linux as OpenZFS host operating systems are quite minor. Your pool will perform about as well beneath one OS as it does another, and the vast majority of its day-to-day maintenance won’t be different either.

For most users and admins, we’d recommend not overthinking this—if you’re primarily a FreeBSD admin and more comfortable with that operating system, that’s the ZFS OS for you; if you’re primarily a Linux admin and more comfortable with it, then feel free to stick with it as well.

For the rare few admins who are equally familiar with both operating systems and genuinely have no existing preference, we’d give FreeBSD the nod for easier root installation and better boot environment support—or, if your data comes in the form of tens of thousands of small files and you need extended attributes, to Linux for its xattr=sa option.

Meet the Author: Jim Salter


Jim Salter (@jrssnet) is an author, public speaker, mercenary sysadmin, and father of three—not necessarily in that order. He got his first real taste of open source by running Apache on his very own dedicated FreeBSD 3.1 server back in 1999, and he’s been a fierce advocate of FOSS ever since. He’s the author of the Sanoid hyperconverged infrastructure project, and co-host of the 2.5 Admins podcast.

