Could the flash industry fully replace the hard drive industry’s capacity output by 2028? Explore the common myths about hard drives and SSDs and the reasons why both technologies will remain central to data storage architectures for the foreseeable future.
“Hard drives will soon be a thing of the past.”
“The data centre of the future is all-flash.”
Such predictions foretelling hard drives’ demise, perennially uttered by a few vocal proponents of flash-only technology, have not aged well.
Without question, flash storage is well-suited to applications that demand high performance and speed. And flash revenue is growing, as is all-flash-array (AFA) revenue. But not at the expense of hard drives.
We are living in an era where the ubiquity of the cloud and the emergence of AI use cases have driven up the value of massive data sets. Hard drives, which today store by far the majority of the world’s exabytes (EB), are more indispensable to data centre operators than ever.
Industry analysts expect hard drives to be the primary beneficiary of continued EB growth, especially in enterprise and large cloud data centres—where the vast majority of the world’s data sets reside.
Let’s take a closer look at three common myths about hard drives and SSDs and three reasons why both technologies will remain central to data storage architectures for the foreseeable future.
1st Myth: SSD pricing will soon match the pricing of hard drives.
Truth: SSD and hard drive pricing will not converge at any point in the next decade.
Hard drives hold a firm cost-per-terabyte (TB) advantage over SSDs, which positions them as the unquestionable cornerstone of data centre storage infrastructure.
Seagate’s analysis of research by IDC, TRENDFOCUS, and Forward Insights confirms that hard drives will remain the most cost-effective option for most enterprise tasks. The price-per-TB difference between enterprise SSDs and enterprise hard drives is projected to remain at or above a 6 to 1 premium through at least 2027.
This differential is particularly evident in the data centre, where device acquisition cost is by far the dominant component in total cost of ownership (TCO). Taking all storage system costs into consideration—including device acquisition, power, networking, and compute costs—hard drive-based systems deliver a far superior TCO on a per-TB basis.
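To illustrate why device acquisition cost dominates the per-TB TCO comparison, here is a minimal sketch. The per-TB figures below are hypothetical placeholders, not vendor pricing; only the roughly 6:1 device-cost premium is taken from the projection cited above.

```python
# Illustrative only: hypothetical per-TB cost inputs, not real pricing.
def tco_per_tb(device_cost, power_cost, network_cost, compute_cost):
    """Sum the major per-TB cost components discussed above (USD/TB)."""
    return device_cost + power_cost + network_cost + compute_cost

# Hypothetical inputs assuming a ~6:1 device-cost premium for enterprise SSDs.
hdd = tco_per_tb(device_cost=15, power_cost=4, network_cost=2, compute_cost=2)
ssd = tco_per_tb(device_cost=90, power_cost=2, network_cost=2, compute_cost=2)

print(f"HDD ~ ${hdd}/TB, SSD ~ ${ssd}/TB")
```

Even granting flash an edge on power, the device-cost term swamps the other components, which is why the per-TB TCO gap tracks the acquisition-price gap so closely.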
2nd Myth: Supply of NAND can ramp to replace all hard drive capacity.
Truth: Entirely replacing hard drives with NAND would require untenable CapEx investments.
The notion that the NAND industry would or could rapidly increase its supply to replace all hard drive capacity isn’t just optimistic—such an attempt would lead to financial ruin.
According to the Q4 2023 NAND Market Monitor report from industry analyst Yole Intelligence, the entire NAND industry shipped 3.1 zettabytes (ZB) from 2015 to 2023, while having to invest a staggering $208 billion in CapEx—approximately 47% of their combined revenue.
In contrast, the hard drive industry addresses the vast majority—almost 90%—of large-scale data centre storage needs in a highly capital-efficient manner. To help crystallise this, the chart below compares the byte production efficiency of the NAND and hard drive industries, using Seagate Technology as a proxy for the hard drive industry. Simply put, the hard drive industry is far more efficient at delivering ZBs to the data centre.
Could the flash industry fully replace the entire hard drive industry’s capacity output by 2028?
The Yole Intelligence report cited above indicates that from 2025 to 2027, the NAND industry will invest about $73 billion, estimated to yield 963 EB of output across enterprise SSDs and other NAND products for tablets and phones. That translates to an investment of about $76 per TB of flash storage output. Applying the same capital cost per TB, it would require a staggering $206 billion in additional investment to support the 2.723 ZB of hard drive capacity forecast to ship in 2027. In total, that is nearly $279 billion of investment chasing a total addressable market of approximately $25 billion: roughly a 10:1 loss.
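The arithmetic behind those figures can be checked directly from the source numbers (the $73B CapEx, 963 EB NAND output, 2.723 ZB hard drive forecast, and ~$25B addressable market quoted above):

```python
# Back-of-the-envelope check of the CapEx figures cited above.
NAND_CAPEX_USD = 73e9    # projected 2025-2027 NAND industry CapEx
NAND_OUTPUT_TB = 963e6   # 963 EB of output, in TB (1 EB = 1e6 TB)
HDD_SHIP_TB = 2.723e9    # 2.723 ZB of 2027 hard drive shipments, in TB
TAM_USD = 25e9           # approximate total addressable market

capex_per_tb = NAND_CAPEX_USD / NAND_OUTPUT_TB  # ~ $76 per TB of flash
extra_capex = capex_per_tb * HDD_SHIP_TB        # ~ $206B additional
total_capex = NAND_CAPEX_USD + extra_capex      # ~ $279B in total

print(f"${capex_per_tb:.0f}/TB -> ${extra_capex/1e9:.0f}B extra, "
      f"${total_capex/1e9:.0f}B total vs ${TAM_USD/1e9:.0f}B TAM")
```

The exact ratio comes out slightly above 10:1; the article's "10:1 loss" rounds this down conservatively.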
This level of investment is unlikely for an industry facing uncertain returns, especially after losing money throughout 2023.
3rd Myth: Only AFAs can meet the performance requirements of modern enterprise workloads.
Truth: Enterprise storage architecture usually mixes media types to optimise for the cost, capacity, and performance needs of specific workloads.
At issue here is a false dichotomy. All-flash vendors advise enterprises to “simplify” and “future-proof” by going all-in on flash for high performance. Otherwise, they posit, enterprises risk finding themselves unable to keep pace with the performance demands of modern workloads. This zero-sum logic fails because:
- Most modern workloads do not require the performance advantage offered by flash.
- Enterprises must balance capacity and cost, as well as performance.
- The purported simplicity of a single-tier storage architecture is a solution in search of a problem.
Let’s address these one by one.
First, most of the world’s data resides in the cloud and in large data centres, where only a small percentage of workloads require a significant share of the performance. This is why, according to IDC, hard drives have accounted for almost 90% of the storage installed base at cloud service providers and hyperscale data centres over the last five years.
In some cases, all-flash systems are not required even for the highest-performance solutions: some hybrid storage systems perform as well as, or faster than, all-flash arrays.
Second, TCO considerations are key to most data centre infrastructure decisions. This forces a balance of cost, capacity, and performance. Optimal TCO is achieved by aligning the most cost-effective media—hard drive, flash, or tape—to the workload requirement. Hard drives and hybrid arrays (built from hard drives and SSDs) are a great fit for most enterprise and cloud storage and application use cases.
While flash storage excels in read-intensive scenarios, its endurance diminishes with increased write activity. Manufacturers address this with error correction and overprovisioning: extra, unseen storage that replaces worn cells. However, overprovisioning significantly increases the embedded product cost, and constant power is needed to avoid data loss, posing cost challenges in data centres.
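Overprovisioning is conventionally quantified as the spare capacity held back from the user, expressed as a fraction of usable capacity. A minimal sketch (the 1.28 TB raw figure is illustrative, not a specific product):

```python
def overprovision_ratio(raw_tb, usable_tb):
    """Standard overprovisioning ratio: spare NAND as a fraction of
    the usable (advertised) capacity."""
    return (raw_tb - usable_tb) / usable_tb

# Illustrative: a drive built with 1.28 TB of raw NAND but sold as
# 1.0 TB usable carries 28% overprovisioning -- capacity the buyer
# pays for in silicon but never sees.
print(f"{overprovision_ratio(1.28, 1.0):.0%}")
```

That hidden silicon is part of why write-heavy, capacity-oriented workloads erode flash's economics relative to hard drives.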
Additionally, while technologies like triple-level cell (TLC) and quad-level cell (QLC) allow flash to handle data-heavy workloads as hard drives do, the economic rationale weakens for larger data sets or long-term retention. In those cases, hard drives, with their growing areal density, offer a more cost-effective solution.
Third, the claim that using an AFA is “simpler” than adopting a mix of media types in a tiered architecture is a solution in search of a problem.
Many hybrid storage systems employ a well-proven, finely tuned software-defined architecture that seamlessly integrates the strengths of diverse media types into a single system. In scale-out private or public cloud data centre architectures, file systems or software-defined storage manage the data storage workloads across data centre locations and regions. AFAs and SSDs are a great fit for high-performance, read-intensive workloads. But it is a mistake to extrapolate from niche use cases or small-scale deployments to the mass market and hyperscale, where AFAs provide an unnecessarily expensive way to do what hard drives already deliver at a much lower TCO.
The data bears this out. Seagate analysis of data from IDC and TRENDFOCUS predicts an almost 250% increase in hard drive EB shipments by 2028. Extrapolating further out in time, that ratio holds well into the next decade.
Hard drives, indeed, are here to stay—in synergy with flash storage.