
Klara

The market trends impacting server hardware through 2025 show no signs of relenting and will continue to have an outsized impact on the availability and price of system components. Compensating for these price increases will become increasingly important to continue meeting the storage demands of the most critical workloads.

#1 Memory Price Increases Will Change Buying Behaviour

The market for memory is primarily being driven by demand for AI servers, causing fundamental shifts in the industry while drastically impacting price and availability. Counterpoint Research predicts DRAM prices will continue climbing, building on the 80% increase seen across 2025, with a further 20% expected in early 2026. That could mean $700 USD per 64 GB DDR5 ECC RDIMM by the spring, and as much as $1,000 USD by the end of the year. Beyond the price increases, this has also shifted production priorities, with Micron announcing that it will focus solely on the data center market.

The ZFS Adaptive Replacement Cache (ARC) uses any RAM not consumed by applications to maximize the storage performance of your workload. ZFS can also compensate for less available memory using its second-level cache, the L2ARC, which can leverage high-speed media such as NVMe to extend the cache at a significantly lower cost than additional RAM. Enterprise NVMe tends to cost in the $250-350 USD per TB range, offering a reasonable trade-off against RAM that is generally over $7,000 USD per TB.
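As a sketch of how little effort this trade-off takes, attaching an NVMe device as L2ARC is a single command (the pool and device names here are illustrative):

```shell
# Add an NVMe device as a second-level ARC (L2ARC) cache device
# to an existing pool named "tank" (names are illustrative).
zpool add tank cache /dev/nvme0n1

# Confirm the cache device is attached and watch its fill level
# and activity over time.
zpool list -v tank
zpool iostat -v tank 5
```

Unlike other vdev types, an L2ARC device holds only cached copies of data already on the pool, so the failure of the cache device never endangers the pool itself.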

#2 NVMe Design Changes: Larger Sectors for Less DRAM

NVMe devices need an amount of internal DRAM that scales linearly with the capacity of the drive to store the Flash Translation Layer (FTL). The only way to reduce the amount of DRAM required is to shrink the FTL by increasing the sector size.

Some models of low-endurance, high-capacity NVMe have already started using larger sector sizes, from 16 KiB to as much as 128 KiB, with some vendors discussing 512 KiB. These larger sector sizes present new problems for filesystems. Many traditional filesystems, including EXT4 and XFS, have limits on their maximum sector size. EXT4 can support a block size of up to 64 KiB, but only on systems where the kernel page size is also 64 KiB; on x86_64 systems, it is often limited to just 4 KiB.
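You can see which sector sizes a given drive supports, and switch between them, with the `nvme-cli` utility (the device name and LBA format index below are illustrative and vary by drive model):

```shell
# List the LBA formats the namespace supports; the "Relative
# Performance" annotation indicates the drive's preferred size.
nvme id-ns /dev/nvme0n1 --human-readable

# Reformat the namespace to a different LBA format, e.g. index 1.
# WARNING: this destroys all data on the namespace, and the index
# corresponding to a given sector size differs between models.
nvme format /dev/nvme0n1 --lbaf=1
```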

For the most price-sensitive workloads, such as bulk storage and archival, ZFS offers the ability to leverage a modest amount of flash to accelerate HDDs to meet performance demands, easing the pain of higher NAND prices. Moving metadata and small blocks to flash and letting the HDDs play to their strong suit of streaming bulk data avoids the expense of all-flash arrays while still meeting throughput requirements.
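In ZFS terms, this is the "special" allocation class. A minimal sketch, with illustrative pool and device names:

```shell
# Attach a mirrored pair of NVMe devices as a "special" vdev that
# will hold the pool's metadata (names are illustrative).
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Additionally route blocks of 64 KiB or smaller to the special
# vdev, leaving the HDDs to stream large sequential blocks.
zfs set special_small_blocks=64K tank
```

Note that unlike an L2ARC cache device, a special vdev holds the only copy of the pool's metadata, so it should always be redundant; losing it means losing the pool.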

#3 Media Regeneration Will Reduce Drive Replacements

Improved component failure reporting will allow worn or partially failed devices to remain in service rather than being replaced. The NVMe 1.4 specification introduced the new “Rebuild Assist” feature, which allows devices to report the status of LBAs that the firmware identifies as potentially unrecoverable. This allows the host or filesystem to react, rebuilding those LBAs from parity pre-emptively.
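On Linux, this reporting is exposed through `nvme-cli`'s `get-lba-status` subcommand. A hedged sketch, since the exact flags and action-type values vary by nvme-cli version and drive support:

```shell
# Query the drive for LBAs the firmware has flagged as potentially
# unrecoverable (NVMe 1.4 Get LBA Status). The action type and
# range values here are illustrative; consult nvme-get-lba-status(1)
# and the drive's supported action types before use.
nvme get-lba-status /dev/nvme0n1 --start-lba=0 --action=0x11 \
    --range-len=1 --max-dw=512
```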

On the HDD side, the SAS and SATA specifications have been extended with the “Storage Element Depopulation” feature. A typical 20 TB HDD is made up of 10 platters and 20 heads, and if one of those heads fails, the entire drive would traditionally be replaced. With this new feature, the drive can communicate the range of LBAs that are no longer usable due to the damaged head, and with support from the volume manager or filesystem, continue in service as a 19 TB drive, effectively regenerating itself in place.
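The sg3_utils package (version 1.45 or newer) provides tooling for these commands. The following is an illustrative sketch only; the device name is hypothetical and the exact flags differ between sg3_utils releases, so check the man pages before running anything:

```shell
# Report the health of the drive's physical storage elements
# (heads) via the SCSI GET PHYSICAL ELEMENT STATUS command.
sg_get_elem_status /dev/sg2

# Depopulate a failing element, truncating the drive's capacity
# so the remaining heads can stay in service (illustrative flags;
# see sg_rem_rep_elem(8) for the exact invocation).
sg_rem_rep_elem --remove --element=2 /dev/sg2
```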

#4 Disaster Recovery Extends to Cyber Resilience

As the threat landscape evolves to include ransomware, advanced persistent threat (APT) actors, novel hardware failure modes, and insider threats, traditional disaster recovery plans fall well short of business requirements. Simply recovering after an incident still involves downtime, lost revenue, compliance costs, and reputational damage. Organizations must build storage systems that are resilient against attacks and failures, and must be able to respond swiftly to prevent or minimize damage and accelerate recovery.

“72% of CISOs said they are now responsible for leading the organization’s recovery after incidents that interrupt business.”

New research from Absolute Security, cited by the National CIO Review

Core to cyber resilience is preparedness. Beyond evaluating internal infrastructure, what external services does the organization depend on? What happens when those third parties suffer an outage or breach? Source code escrow agreements are gaining popularity, but with an increasing focus on sovereignty, organizations should consider what role open-source software and infrastructure can play in reducing these external risks.

OpenZFS 2.4 features a new delegation permission that allows an automated backup user to access only encrypted copies of snapshots for replication. This prevents accidental leaking of unencrypted data, and ensures that even if the automation is compromised, the attacker doesn’t gain access to the encryption keys or plaintext data.
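A sketch of how such a workflow fits together, assuming the new permission is exposed to `zfs allow` as `send:raw` (the user and dataset names are illustrative):

```shell
# Delegate only raw-send permission to the backup user, distinct
# from the full "send" permission (assumes the OpenZFS 2.4
# permission is named "send:raw"; names are illustrative).
zfs allow -u backup send:raw tank/secure

# The automation can now replicate snapshots as encrypted streams
# without ever holding the encryption key.
zfs send --raw tank/secure@monday | ssh vault zfs receive backup/secure
```

Because the stream produced by `zfs send --raw` stays encrypted end to end, the receiving system stores the data without ever being able to read it.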

What 2026 Has in Stor(ag)e

We expect to see continued growth in the capacities of both HDD and NVMe devices, but the decrease in cost per TB looks like it may take longer due to AI demand. New capabilities like autonomous regeneration for HDDs offer new ways to lower TCO even if the cost of individual disks remains firm. We also expect continued expansion of high-capacity QLC flash, with the lower endurance and larger sector sizes those devices entail. It will be interesting to watch, and participate, as the software and filesystem ecosystems adapt while the fundamental realities of storage continue to evolve.
