The Challenge
When a leading animation studio was looking to reduce render pipeline delays and accelerate handoff to post-production, they came to Klara for advice.
The studio needed as much throughput as possible to shorten the window between runs when the render farm sat idle while final render files were copied off to the storage system.
Transferring those massive files, often tens of gigabytes each, to post-production or archive had become a bottleneck, and any new solution needed to eliminate it while remaining future-proof for the studio’s growing throughput demands.

The Solution
Klara engineered a performance-tuned OpenZFS solution optimized for the studio’s workflow, covering everything from the hardware architecture and pool layout down to individual ZFS dataset properties.
Optimized ZFS Pool Layout
To maximize throughput, Klara engineered a pool layout that set aside the conventional wisdom of creating as many VDEVs as possible. The design paired two RAID-Z2 VDEVs with a NUMA-aware architecture to balance performance across all NVMe drives and network interfaces, tuned specifically for large sequential file handling.
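As a minimal sketch of such a layout (the pool name, device paths, and drive counts are hypothetical, not the studio’s actual configuration), a two-VDEV RAID-Z2 pool could be created like this:

    # Hypothetical layout: one RAID-Z2 VDEV per NUMA node so that I/O
    # stays local to the CPU socket that owns those NVMe drives and NICs.
    # Pool name, ashift, and device paths are illustrative only.
    zpool create -o ashift=12 renderpool \
      raidz2 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
             /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 \
      raidz2 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 \
             /dev/nvme9n1 /dev/nvme10n1 /dev/nvme11n1

    # Verify both VDEVs (raidz2-0 and raidz2-1) came up as expected.
    zpool status renderpool

Keeping the VDEV count low trades some small-block IOPS for wider stripes per VDEV, which suits a workload dominated by multi-gigabyte sequential files.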
ZFS Tuning and Configuration
ZFS’s default tuning works well in many situations, but Klara fine-tuned the OpenZFS parameters to align with the studio’s specific real-world workload: many concurrent streaming reads and writes. This included adjusting record sizes, disabling compression, and tuning the prefetcher to reduce overhead and unlock raw throughput.
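A minimal sketch of the kinds of settings involved (the dataset name and every value shown are illustrative assumptions, not the studio’s production figures):

    # Hypothetical dataset properties for large sequential render files.
    zfs set recordsize=1M renderpool/renders    # large records match big sequential I/O
    zfs set compression=off renderpool/renders  # skip compression on incompressible frames
    zfs set atime=off renderpool/renders        # avoid metadata writes on every read

    # Example prefetcher tuning via an OpenZFS module parameter (Linux);
    # the value is an assumption and would be sized to the measured workload.
    echo 67108864 > /sys/module/zfs/parameters/zfetch_max_distance  # 64 MiB read-ahead per stream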
Validation & Testing
Extensive benchmarks confirmed the gains: a 57% throughput increase over the default configuration, peaking at 55 GB/s in sustained reads. The system now scales cleanly with concurrency—faster transfers, more concurrent renders, and fewer idle render nodes.
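The case study does not detail the benchmark harness, but sustained sequential-read tests at increasing concurrency are commonly run with fio; the parameters below are assumptions for illustration:

    # Illustrative fio job: 16 concurrent 1 MiB sequential readers with
    # direct I/O. Paths, sizes, and job counts are assumptions, not the
    # studio's actual test plan.
    fio --name=seqread --directory=/renderpool/renders \
        --rw=read --bs=1M --direct=1 --ioengine=libaio \
        --numjobs=16 --iodepth=32 --size=32G --group_reporting

Scaling --numjobs up while watching aggregate bandwidth is one way to confirm the clean scaling with concurrency described above.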

Business Impact
An optimized ZFS storage solution delivered measurable impact—reducing delays, accelerating handoffs, and freeing the team to focus on creating, not waiting.
57% Faster Render Transfers
Tuning ZFS for high throughput eliminated transfer bottlenecks: final frames now move off the render cluster 57% faster, reducing idle time and keeping artists productive.
Faster Post-Production
Editors and other teams receive completed frames in minutes instead of hours—enabling more efficient collaboration and faster project delivery.
Long-Term ROI & Scalability
By unlocking additional performance from the existing NVMe and 100 GbE hardware, the studio was able to increase storage capacity while securing up to 55 GB/s of sustained throughput as headroom for future growth.