The future of flash: the all-flash datacenter, the flash technology curve, and the prospects for the next new solid-state memory

It’s been around 20 years since flash memory – in its hugely dominant NAND variant – first entered the enterprise datacenter. Since then, it has transformed data storage and greatly increased the performance of a wide range of applications by replacing far slower spinning disk as the default medium for primary data storage. With those two decades of flash history behind us, we thought it was time to ask some basic questions about the future of the technology. We assembled a panel of experts and asked them when the all-flash datacenter might become commonplace, where flash is on its technology curve, and when the next ground-breaking solid-state memory might emerge as a complement or successor to flash.


Tape didn’t die and neither will disk 

When flash first entered datacenters in the late 90s, it was used only to store a subset of data, for a subset of performance-sensitive applications. But as flash prices continued to fall and flash was used to store data for an ever-widening range of applications, industry watchers began asking how long it would be before flash completely displaced disk to create so-called all-flash datacenters. Two decades later, we put this question to our panel. All the experts agreed that flash will not fully displace disk for many years yet, and the majority said the all-flash datacenter will remain a rarity for the foreseeable future. Beyond that consensus, however, their answers diverged in surprising ways.

Alfred Chase Hui, vice president of international business at DapuStor, a vendor of flash drives, system-on-a-chip processors, and other edge-related products, identified the major factors involved in answering this question: “It’s reasonable to expect that all-flash datacenters will become more common in the future. However, the transition to all-flash datacenters may take some time due to factors such as cost, compatibility, and performance requirements,” he said.

Other experts on the panel pointed to two reasons why disk’s future will be indefinitely prolonged. The first was what they estimated to be a five-to-seven-fold difference in price per TB of capacity between flash and disk. The second was enterprises’ need to store large and ever-growing volumes of data that is not in active use, for purposes such as AI/ML and analytics training, archiving, compliance, and backups.

Shawn Meyers, field CTO at Tintri, a maker of storage systems tailored for virtualized environments, said: “The need for lower-cost archival storage, which still includes tape in many places, will remain. The amount of disk purchased by hyperscalers exceeds the amount of flash drives today. You only need fast storage for things you are actively working on, not for things you are just storing for later use.”

Meyers’ comment was very similar to statements made by others on our panel, which collectively might be labeled the ‘tape didn’t die’ argument. Despite predictions of the death of tape over the last two decades, tape usage – at least in terms of the sheer volume of data stored on it – has grown rather than contracted, because of the need to store ever-growing volumes of cold, infrequently-accessed data.

Peter Donnelly, director of products at storage networking vendor ATTO, shared the majority view that all-flash datacenters “do not make sense,” and that there will always be a need for multiple tiers of storage. He added that, counter-intuitively, emerging technologies such as AI/ML are strengthening this argument: “Companies need access to massive storage pools for machine learning training, but once that is complete that data needs to be offloaded to more cost-effective storage technologies. So, while it may be counterintuitive, a strong argument can be made that emerging AI applications actually increase the need for second and third-tier storage systems like disk and even tape.”

Coby Hanoch, CEO and founder of Weebit Nano, a developer of next-generation memories, shared the view of others about tape, and added another reason why he thinks all-flash datacenters will never be widespread: alternative solid-state technologies will take over parts of the datacenter.

“I doubt there will ever be all-flash datacenters, for several reasons. There will always be a need for tapes or disks simply because they can store huge amounts of data in a cheap way off-line, and there will be a growing amount of data that is rarely accessed but still needs to be kept. And by the time flash takes over the datacenters, the newer NVMs [non-volatile memories], like ReRAM and MRAM, will start taking parts of the datacenters,” said Hanoch.

Steven Umbehocker, founder and CEO at OSNexus, a vendor of scale-out, software-defined storage systems, pointed out that disk is entrenched in object-based storage systems, and that disk-making giant Seagate has predicted continued technology development. “Today the 5:1 cost difference between flash and disk is making a larger home for disk: object storage built on disk is a stronger alternative to tape. And with Seagate delivering its long-awaited 30, 40, and 50TB disk drives over the next couple of years, that will extend the runway for disk,” said Umbehocker.

However, IT teams consider more than just upfront purchase costs when choosing between disk and flash. Randy Kerns, senior strategist at analyst firm the Futurum Group, said: “There is one aspect of moving to flash technology that is often overlooked as to its value for customers: with the acceleration in performance from flash, there is a simplicity value. By simplicity, I mean removing the need to manage data placement and distribution around device characteristics for performance. It is just simpler when there is more performance from storage. This is a factor that will move organizations to higher-performing technology.”

However, some datacenters are already all-flash

Tintri’s Meyers said that all-flash datacenters already exist. “The answer to this [question about all-flash datacenters] will be based upon the size, scale, and scope of the datacenter. There are many smaller to mid-sized datacenters which are already all-flash, but these tend to be more single-customer datacenters. The large enterprises, regional hosting [service providers], and the massive hyperscalers will have spinning rust for any time period you want to forecast.”

Dennis Hahn, principal analyst at research firm Omdia, agreed, and predicted that one class of enterprise datacenters will be all-flash by 2028: those operated by enterprises that increasingly use public infrastructure clouds to host their less performance-sensitive or less critical applications that do not require flash storage, and that use the same clouds to store their cold, infrequently-accessed data.

“On-premises datacenters that are largely focused on running mission-critical applications are swiftly transitioning to all flash storage. In the following three to five years, Omdia predicts that the majority of these on-premises datacenters will switch entirely to flash technology,” said Hahn.

In other words, disk is moving out of these enterprise datacenters and into hyperscale cloud datacenters – where Hahn, like other members of our panel, said it will persist for many years yet, because of its low cost and the lack of need for performance when storing cold data. Hahn gave another reason for not using flash to store this type of data: “Since these [bulk, archive, and backup stores] often interface with the relatively slow internet, throughput, rather than low-latency retrieval, is more crucial.” For the same reason – throughput mattering more than random access – he added: “Major use-cases like video and rich media will efficiently be able to use HDD for a long time, as well as other technologies such as IoT and ELT [Extract, Load, Transform] data collection pipelines.”
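Hahn’s throughput point is easy to quantify. The back-of-the-envelope sketch below is ours, not Hahn’s, and all its numbers are illustrative assumptions; it models retrieval time as first-byte latency plus transfer time, and shows that for bulk objects pulled over a network, the storage medium’s latency all but vanishes into the transfer time:

```python
# Illustrative sketch (not from the article): why throughput, not latency,
# dominates bulk retrieval over a network. All figures are assumptions.

def retrieval_time_seconds(object_gb: float,
                           first_byte_latency_ms: float,
                           throughput_mbps: float) -> float:
    """Total time = time to first byte + transfer time."""
    transfer_s = (object_gb * 8000) / throughput_mbps  # GB -> megabits
    return first_byte_latency_ms / 1000 + transfer_s

# A 100GB backup object pulled over a 1Gbps link:
hdd = retrieval_time_seconds(100, first_byte_latency_ms=10, throughput_mbps=1000)
ssd = retrieval_time_seconds(100, first_byte_latency_ms=0.1, throughput_mbps=1000)
print(f"HDD-backed: {hdd:.2f}s, flash-backed: {ssd:.2f}s")
# Both print ~800s: the device's first-byte latency is noise next to the
# network-bound transfer time, so flash buys nothing for this workload.
```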

Roy Illsley, chief analyst at Omdia, added: “Another consideration is the trend to extend the life of IT equipment as part of an environmental sustainability and cost saving strategy. Therefore, customers are now less willing to rip a perceived old technology out and replace it with a shiny new one. The impact on datacenters will be the running of a mixture of technologies that could be as old as seven years in some cases, which means the arrival of all flash datacenters is not an immediate prospect.”

Curtis Anderson, software architect at Panasas, a supplier of storage software for performance-hungry workloads, holds a similar view about the prevalence of all-flash datacenters, but his is based on the size of an enterprise datacenter rather than the workloads it hosts: “We believe that there is a line where deployments below a given capacity make sense as all-flash and deployments above that line may not. That line will slowly move upward, but in our opinion it will mostly keep pace with the growth in capacity needs, so the all-flash datacenter will be forever ‘two years from now’,” he said.

To illustrate his argument, Anderson said a company might be happy to store relatively small 200TB filesystems in flash because that would cost only around $150,000 more than storing them on disk and would perform much better. But for a 100-times larger capacity of 20PB, the extra cost would be around $12m, which would be hard to justify. “Unless you’ve got some very special requirements, that money could better be applied to purchasing CPUs and GPUs,” he said.
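As a rough check on Anderson’s arithmetic, the sketch below assumes flat, system-level prices of about $150/TB for disk and $900/TB for flash – a 6x ratio, inside the panel’s five-to-seven-fold range. These prices are our assumptions, chosen to reproduce the roughly $150,000 premium at 200TB; a flat per-TB model then gives $15m at 20PB rather than Anderson’s $12m, which suggests he expects the per-TB premium to shrink somewhat at scale:

```python
# Hedged sketch of the flash-vs-disk cost premium. Prices are illustrative
# assumptions, not quotes from any vendor.

DISK_PER_TB = 150.0   # assumed $/TB for a disk-based system
FLASH_PER_TB = 900.0  # assumed $/TB for an all-flash system (~6x disk)

def flash_premium_usd(capacity_tb: float) -> float:
    """Extra cost of provisioning a given capacity as all-flash vs all-disk."""
    return capacity_tb * (FLASH_PER_TB - DISK_PER_TB)

print(f"200TB premium: ${flash_premium_usd(200):,.0f}")     # $150,000
print(f"20PB premium:  ${flash_premium_usd(20_000):,.0f}")  # $15,000,000
```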

Plenty of steam left in the flash technology curve 

Technology development often follows a curve in which the rate of improvement in cost or performance slowly declines, flattening over time as technical advances become harder to achieve. Indeed, by around 2010, many observers were predicting that the technical development of NAND flash was about to hit a brick wall in terms of the number of memory cells that could be packed into a single flash chip. By then, flash was a well-established and growing feature of the enterprise IT landscape, thanks not only to its performance and other advantages over disk, but also because its price had been tumbling for the previous decade. If flash chip-makers hit that predicted technology wall, prices would start falling far more slowly in terms of dollars per unit of storage capacity.
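One common way to picture such a curve – our illustration, not one the panel used – is the logistic S-curve, in which a metric P(t), such as bits per chip, grows near-exponentially at first and then flattens as it approaches a practical ceiling K:

```latex
% Logistic S-curve: near-exponential growth while P << K, flattening as P -> K.
\[
  \frac{dP}{dt} = r\,P\left(1 - \frac{P}{K}\right)
  \quad\Longrightarrow\quad
  P(t) = \frac{K}{1 + e^{-r(t - t_0)}}
\]
```

On this picture, the wall predicted around 2010 amounted to a claim that planar NAND’s ceiling K was in sight; the 3D stacking described next effectively raised K.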

However, in 2013, Samsung side-stepped the predicted limitation by shipping the first so-called 3D flash chips, which consisted of multiple layers of memory cells rather than the single layer used previously. This meant more memory cells per chip and, as an extremely valuable side-effect, the ability to store more data bits in each memory cell, again reducing per-TB prices. All major flash makers soon followed Samsung’s lead, and since then the number of layers per chip has grown rapidly. But that was ten years ago. Is flash now approaching the end, or at least the flatter part, of its technology curve?

“People who say ‘Moore’s Law is dead’ are ignoring 3D NAND. This technology has given NAND flash a new engine to continue to add bits to the chip, and every year process engineers find ingenious ways to push it farther than anyone would have thought possible. That’s a long way to say ‘No’ to this question. Expect to see at least another couple of orders of magnitude of cost decreases over the next several years as chip densities continue to increase,” said Jim Handy, general director of analyst firm Objective Analysis.

Announcements at the latest Flash Memory Summit confirmed that outlook, according to Leander Yu, president and CEO of Graid Technology, a vendor of GPU-powered software-defined storage systems. “Flash memory manufacturers such as Samsung, SK Hynix, Kioxia, Western Digital, and Micron will continue to innovate with roadmaps for greater density with more layers using stacking techniques, architecture and design innovations, and more bits per cell (e.g., penta-level cell or PLC),” he said. The first multilayer flash chips, shipped in 2013, comprised 24 layers of memory cells and stored 128Gbits. Yu pointed to SK Hynix’s demonstration this year of a 321-layer chip storing 1Tbit, and to Samsung’s prediction, made last year, that it will ship 1,000-layer chips by 2030.
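Those figures imply a simple multiplicative model of die density: capacity is roughly layers × bits-per-cell × cells-per-layer. The sketch below back-solves the implied cells-per-layer from the article’s numbers; the bits-per-cell values are our assumptions, based on how those generations were widely reported (2 bits/cell for the 2013 chip, 3 bits/cell for the 321-layer demo):

```python
# Hedged sketch: a multiplicative model of 3D NAND die density.
# capacity (Gbit) = layers * bits_per_cell * cells_per_layer (in billions)
# Bits-per-cell values are assumptions based on published reports.

def implied_gcells_per_layer(capacity_gbit: int, layers: int,
                             bits_per_cell: int) -> float:
    """Back-solve billions of memory cells per layer from die capacity."""
    return capacity_gbit / (layers * bits_per_cell)

print(implied_gcells_per_layer(128, 24, 2))    # 2013 chip: ~2.7 Gcells/layer
print(implied_gcells_per_layer(1024, 321, 3))  # 2023 demo: ~1.1 Gcells/layer
# Layer count (24 -> 321) and bits per cell (2 -> 3) carry the density
# gains; under this model the lateral cell count per layer has, if
# anything, relaxed rather than grown.
```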

Anderson added important context to this outlook by highlighting the fact that disk technology is also still developing, and that disk prices will therefore continue to fall at around the same rate as flash prices. “Flash technology will continue its inexorable improvement curve, but we don’t see that curve accelerating to gain ground on disk (i.e., lowering that 5x-7x multiplier on $/TB) or decelerating compared to disk,” he said.

Amos Ankrah, solutions specialist at Boston, a provider of high-performance servers and storage systems, agreed that flash is still developing: “There are a few factors, some outlined in previous answers, which indicate that flash is still on the rise in terms of its technology curve. There is an argument to be had about where current flash technology transitions into new technologies; however, the level of development still being undertaken by companies that develop flash storage suggests there is more upward trajectory to travel before the plateau is reached,” said Ankrah.

It’s not just inside the chips that flash has plenty of technology curve to ride

Donnelly at ATTO gave a more holistic answer to the question of whether flash is at the end of its technology curve. Referring to the development of the network and storage access protocols that connect flash drives and storage systems to servers, he said: “Not by a long shot. We’re really just beginning to see how flash can be employed in datacenters. The value of NVMe communication protocols is just starting to be recognized in datacenters, and it will take at least a decade for it to replace the massive SCSI-based infrastructure. Also, the evolution of the PCIe interface and emerging technologies like CXL will bring new possibilities for implementing flash storage. Additionally, transport protocols such as NVMe-oF, typically via RDMA over Ethernet, are just starting to come together as a viable alternative. So, rather than peaking, I believe that we’re just starting to see the first steps of a flash technology revolution.”

When will we see the next new discrete solid-state memory?  

Flash has transformed enterprise data storage and was a major force behind the mobile computing revolution. It is now a major industry in its own right: quite separately from the manufacture of flash-powered products such as storage drives and complete storage systems, the manufacture of NAND flash chips alone now generates around $80bn in annual revenue, and that number continues to grow.

This raises an obvious question: when will the next new solid-state data storage technology emerge with the same mass-market impact? Billions of dollars have been spent in research laboratories over the last several decades attempting to find another such technology. Intel and Micron’s jointly-developed Optane memory was the fruit of such research, and first shipped in solid-state drives in 2017. Faster but more expensive than flash, Optane was heralded as the first of a coming class of so-called storage-class memories (SCMs) that would either complement or replace flash, and would have a similarly large overall impact on IT. However, Optane sold poorly, and in 2021 Intel announced its plan to end manufacturing of the memory, only four years after it first shipped.

Meanwhile, research into other potential SCMs continues. Handy, whose firm Objective Analysis has a major focus on emerging memories, said that over this decade and the next there will probably be no new memory technology with the potential to make the same impact as flash. He drew a distinction between two types of memory: those that are embedded in processors or other chips, and those, like NAND flash and Optane, that are or were sold as discrete memory-only chips in far greater quantities, and which therefore have far greater market potential.

“Optane failed because of its cost. We warned about that as soon as it was announced. But other technologies are likely to thrive in certain markets, especially as embedded memory in microcontrollers, ASICs, and other SoCs [systems-on-a-chip]. Discrete memory chips, though, are highly unlikely to convert en masse to an emerging SCM in the 2020s, and probably not in the 2030s,” he said.

Hanoch of Weebit Nano agreed with Handy’s assessment that Optane – also known as 3D XPoint – failed for economic reasons, but said the creation of an alternative to flash is inevitable:

“Several NVM technologies including ReRAM, MRAM, PCM, and FRAM are emerging as potential alternatives to flash. Intel’s 3D XPoint was an initial attempt to address the issue but didn’t succeed, largely for economic reasons. The key to a successful flash alternative is development of a memory that can scale to large enough densities at a low enough price; Intel was only able to meet the density part of this challenge. It’s only a matter of time until we see a technology that can meet both criteria, and we believe that ReRAM will be the answer, since it has fundamental technical advantages including speed, power efficiency, and cost. Development is underway to take the technology to ever higher densities,” said Hanoch.

David Norfolk, practice leader for development and government at analyst firm Bloor Research, summed up the difficulty of predicting a schedule for the arrival of the next new mass-market memory: “A lot can happen in 10 years. Saying what will happen (for example, atomic memory, as being researched by IBM) is easy; saying when is much harder.”

Technology momentum and the SLC variant of flash will be hurdles for any new memory

Omdia’s Hahn agreed with Handy’s view that Optane was cancelled because it could not reach the sales needed to justify production volumes that would allow viably low prices. Illsley added that the relatively new high-speed variants of SLC [single-level cell] flash are handling the tasks Optane was intended for: “It is easy to visualize where Storage-Class Memory might fit into a storage-memory pyramid, but it has been difficult to deliver the right combination of performance, latency, and cost characteristics while providing data persistence in the real world. Optane SCM seemed to be a good effort, but its production is being spun down for lack of proper volume economics. There are clearly a few good use-cases for Optane SCM but, honestly, those are being addressed using recently-released NAND flash SLC technology. SLC NAND SSD offerings are on the rise for their fantastic durability and good write performance, especially for hot data array tiering and data caching usage.”

SLC flash builds on existing NAND flash technology, and Hahn’s comment about its value resonated with a statement made by Boyan Krosnov, CTO and co-founder at StorPool, a vendor of software-defined, distributed storage systems: “Any new technology has to overcome the existing technology, which benefits from decades of optimization and large-scale manufacturing. So, the new technology will be at a significant disadvantage.” 

Anderson said he also did not expect an SCM to emerge in the next ten years, and pointed to another hurdle that Optane needed to cross: “SCMs had/have huge promise, but the change in software architecture required for applications to gain all the advantages that SCM can offer was too high, so applications never adopted them.” Flash has not faced this hurdle, because its specific combination of cost and performance never required or justified its use as an adjunct to DRAM memory. Furthering his argument about the IT industry’s reluctance to modify software, Anderson pointed to NVDIMMs, devices that combine DRAM with flash to offer DRAM’s performance with flash’s persistence (the ability to retain data safely after a power interruption): “Intel and AMD have both now fully backed away from vanilla NVDIMMs, let alone the more exotic Optane.”
