
Webinar Overview

ON DEMAND

Benchmarking ZFS performance is more than just running a few I/O tests—without the right methodology, results can be misleading or even completely inaccurate. Many traditional benchmarking tools and approaches fail to capture how ZFS actually works, leading to flawed comparisons and incorrect conclusions.

Join us as we cut through the noise and show you how to properly measure ZFS performance, with insights from industry experts like Allan Jude and the FreeBSD/ZFS community. Whether you’re tuning ZFS for production, evaluating hardware choices, or just curious about performance, this session will help you separate real insights from misleading metrics.

Your takeaways after the webinar:

  • Common mistakes in ZFS benchmarking (e.g., not accounting for the ZFS ARC, improper dataset configurations, using inappropriate tools).
  • How to design benchmarks that reflect real-world performance, including database workloads, virtualization, and high-throughput applications.
  • Why ZFS caching, compression, and copy-on-write impact benchmarks, and how to interpret results correctly.
  • Recommended tools and methodologies for meaningful performance analysis.

Top Questions from the Session—Answered!

We handpicked two of the most thought-provoking questions from the session, with expert answers from the panel:

🗨️ At work, they want to compare ZFS RAIDZ vs XFS on top of RAID6. They disabled file system cache due to the workload. I disabled ARC for fairness — is that right? What’s a better way to compare the two?          

Disabling ARC entirely (primarycache=none) is too extreme — instead, set primarycache=metadata.
This preserves essential metadata caching, which ZFS needs more than XFS due to its more complex structure (copy-on-write, indirect blocks, etc.), while still avoiding caching of user data.
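As a concrete sketch of the suggested setup (the pool and dataset names here are illustrative, not from the session — adjust them for your environment):

```shell
# Cache only metadata in ARC, leaving user data uncached for the comparison.
# "tank/bench" is a hypothetical dataset name.
zfs set primarycache=metadata tank/bench

# Verify the setting took effect:
zfs get primarycache tank/bench

# For reference, the three accepted values of primarycache are:
#   all      - cache both data and metadata (the default)
#   none     - cache nothing (too extreme for a fair comparison)
#   metadata - cache metadata only
```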

Fair comparison tips:

  • Use iostat (hardware-level) to monitor actual disk activity — compare how much work the hardware is doing for each filesystem.
  • A fair test means equal caching conditions, aligned block sizes, and matching workloads.
  • If ZFS is doing significantly more work (e.g. 4× as many writes), something's misconfigured — possibly misaligned blocks, incorrect record sizes, or extra metadata operations.
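The tips above might look like the following in practice. This is a hedged sketch: the device log name, mount point, and the fio job parameters are assumptions for illustration, not values given in the session (on FreeBSD, use `iostat -x -w 5` instead of `iostat -x 5`):

```shell
# Capture per-device statistics every 5 seconds in the background:
iostat -x 5 > iostat-zfs.log &

# Run the identical fio workload on both filesystems (here: 4K random
# writes, sized well past any cache so the disks actually do the work):
fio --name=randwrite --directory=/tank/bench --rw=randwrite \
    --bs=4k --size=16g --iodepth=32 --ioengine=posixaio --runtime=300

# Afterwards, compare the logs: if ZFS issued several times the writes
# that XFS did for the same job, check recordsize and block alignment
# before drawing conclusions about the filesystems themselves.
```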

🗨️ Can you share best practices for benchmarking and profiling specific ZFS components — like individual VDEVs or time spent on checksumming and allocation?

Benchmarking the different vdev types is just a different version of what we have been discussing throughout the webinar: making sure your comparisons are realistic for your workload.
For looking at the time spent on a particular type of operation, like checksumming or compression, OS profiling tools such as DTrace on FreeBSD/macOS and perf on Linux can be used to measure how much time is spent on these operations and determine which parts of the code consume the most time.
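As a starting point, profiling might look something like this. The symbol names and probe specifications below are assumptions — they vary by platform and OpenZFS version, so expect to adjust them:

```shell
# Linux: sample kernel stacks system-wide for 60 seconds, then look for
# checksum/compression symbols (e.g. fletcher, lz4, zio) in the report:
perf record -a -g -- sleep 60
perf report --stdio | grep -iE 'fletcher|lz4|zio'

# FreeBSD: aggregate time spent in ZFS kernel functions with DTrace,
# using the fbt (function boundary tracing) provider:
dtrace -n 'fbt:zfs::entry { self->ts = timestamp }
           fbt:zfs::return /self->ts/ {
               @[probefunc] = sum(timestamp - self->ts); self->ts = 0;
           }' -c 'sleep 60'
```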

To avoid misleading results:

  • When comparing different ZFS vdev types, remember to factor in their characteristics and limitations, such as parity and padding.
  • When profiling, eliminate as many other processes as possible to avoid cluttering the results.
  • Select a sample rate that is high enough to catch many small operations, but not so high that it overloads the system.

Date: April 30th, 2025
Time: 12:00 PM EST
Duration: 45 minutes

Meet the Hosts

Principal Solutions Architect and co-founder of Klara Inc., Allan Jude has been on the team since the beginning. Shepherding an amazing team of developers and sysadmins, he is the technical heart of our team. A community go-to person for ZFS and open source through and through, Allan enjoys spending his time improving ZFS and FreeBSD and making open source code better.


JT Pennington is a ZFS Solutions Engineer at Klara Inc., an avid hardware geek, photographer, and podcast producer. JT is involved in many open source projects, including the Lumina desktop environment and Fedora.
