In five to ten years, we will look back with awe on the storage-media transition from spinning, rust-coated platters and electro-mechanical recording arms to solid-state, silicon-based media. We will look back on the magnetic hard disk era of data storage with glee and relief. Archaeologists will unearth these things and ponder how they ever worked in the first place.
Make no mistake about it – the magnetic hard disk is being replaced by solid-state storage, or flash. It’s happening now, and it’s just a matter of time until the last hard disk is given a ceremonial “so long and thanks for all the IOs.” Already in Asia, companies in developing markets, unburdened by legacy infrastructure, are upgrading directly to all-flash to leap-frog the competition. In more developed markets like Singapore and Australia, public- and private-sector organisations are embracing – and investing in – hyperconvergence and flash technology to gain a competitive edge through digital transformation.
How will we look back on this transition? How will the flash storage story pan out?
They say the best way to predict the future is to look at the past. If anyone has doubts that the end of the spinning disk is on the horizon, ask yourself one question: when was the last time you met anyone running their workloads from magnetic tape? I would imagine the answer to be around 35 to 45 years ago.
There will be three distinct phases to the flash storage story: Vertical, Horizontal, and Tardis.
By my reckoning, the first phase – the vertical phase – ended a year or two ago. Flash started its three-phase journey by accelerating niche, individual workloads. Flash was expensive, but no one cared. These workloads, mostly databases and online transaction processing (OLTP), needed to go fast at any cost. Spending money on all-flash hardware to accelerate them was a better option than spending millions on hard disks.
We typically saw direct-connect flash appliances providing all-flash storage to one or maybe two servers to accelerate mission critical applications. Think financial services, stock trading, high-volume online auctions, and so on. Application managers were prepared to trade off data service richness and simplicity for the performance gains.
The flash technology was “SLC media,” packaged in some proprietary manner, and the big topics of discussion were endurance and wear-out: how many page erases and writes could the media sustain? Those in charge of performance-critical applications didn’t really care, because flash took the monkey off their backs and bought them time.
This vertical era of single (or few) flash-workload acceleration is over. It was, however, a very important time for learning about and getting ready for what came next.
Flash has gone mainstream, and the all-flash data centre for primary workloads is no longer just a possibility – it is a reality. The economics and mainstream availability of flash in mature, proven array solutions have combined to form the perfect storm against the hard disk. Start preparing for that ceremonial “so long and thanks for all the IOs” farewell event!
The horizontal phase, mainstreaming, began around 2014-2015. The NAND media technology had evolved from SLC to MLC to TLC, and most commonly now 3D-TLC (or “3D-NAND”). We have already seen flash shipments surpass 15K RPM HDD shipments. The tipping point here was probably the culmination of data-reduction services, higher capacity and lower cost 3D-NAND, and platforms offering the trifecta of requirements in an all-flash data centre platform. More on that trifecta later.
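The capacity gains behind that media evolution follow directly from bits per cell: each additional bit doubles the number of voltage states a cell must distinguish, which raises density but lowers write endurance. A minimal sketch in Python – the endurance figures are ballpark, publicly cited orders of magnitude, not any vendor’s specification:

```python
# Each NAND generation stores one more bit per cell. A cell holding
# n bits must resolve 2**n distinct voltage states, which is why
# capacity per cell rises and write endurance falls with each step.
nand_generations = {
    # name: (bits per cell, rough P/E-cycle endurance -- illustrative only)
    "SLC": (1, 100_000),
    "MLC": (2, 10_000),
    "TLC": (3, 3_000),
}

for name, (bits, endurance) in nand_generations.items():
    states = 2 ** bits
    print(f"{name}: {bits} bit(s)/cell, {states} voltage states, "
          f"~{endurance:,} P/E cycles (order of magnitude)")
```

The same arithmetic explains why 3D stacking mattered: layering cells vertically recovers density without forcing yet more voltage states into each cell.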
Within the horizontal phase, concerns over flash endurance and wear-out have all but ceased; every popular platform from the mainstream vendors now has this issue well under control. ROI periods for a migration from a legacy HDD array to an all-flash array are typically less than 18 months. As those HDD arrays reach the end of their lives, customers are skipping the hybrid approach and moving directly to all-flash. This simplifies life for the storage admins and returns free floor tiles to the data centre manager.
Upon replacement of the all-HDD array with an all-flash array, the storage admin becomes a hero. Application managers wonder why they waited so long, and the “what if” discussions commence. Enter phase three.
Once the benefits of the migration to all-flash were absorbed and understood, clever people who had experienced the sheer performance of the technology started to ponder. They realised that flash has much more to offer: it removes the data-processing bottlenecks and constraints that ultimately limit application capability.
Software vendors and application innovators combined to create a new breed of app. I’ll call them “flash apps.” These are apps that couldn’t possibly run on traditional infrastructure. They fully leverage flash (or other solid-state storage) technology to deliver a new breed of application that is dependent on near-instantaneous, high-volume data processing.
The first of these “flash apps” was the combination of data warehouses and their associated transaction-processing workloads into one stack; these had originally been separated because storage couldn’t cope with multiple workloads referencing the same dataset. Soon we will witness an avalanche of data-analytics apps that bring the notion of real-time Big Data to the stage. Couple what is happening today in flash storage and compute interconnects with the Moore’s Law continuum of compute capacity, and it is easy to see that the “flash apps” era is just around the corner. What will soon be possible is unimaginable from where we are now.
In the horizontal phase, the single biggest use-case is the replacement of shared all-HDD arrays with shared all-flash arrays. Storage bottlenecks disappear overnight, and ROI is just 18 months away. It’s a quick win.
However, there is no need to compromise on what will turn that “quick win” into a “long-term win.” The long-term success of the all-flash data centre, and the innovation offered by the Tardis phase, will be governed by a no-compromise migration from HDD to all-flash.
This no-compromise migration will require attention to three key evaluation dimensions for all-flash storage solutions:
Many vendors focus their discussions on just one of these dimensions, but be wary: this is often a ploy to emphasise a single strength and avoid discussing weaknesses. What will determine long-term success is achieving balance across all three dimensions.
There could actually be a fourth phase in the flash story. The all-flash data centre for primary workloads is an absolute given, as mentioned earlier. What is less clear is the obsolescence of the low-performance, bulk-capacity HDDs – the 7,200 RPM units. I believe the fourth phase, which I will call “Bye-Bye HDD,” will be driven by the next flash technology evolution: 3D-QLC (quad-level cell). These will be high-capacity, high-density, 4-bits-per-cell flash. 3D-QLC has the potential to be the replacement technology for archive-grade HDD storage.
With each major transition in storage media, capacity has increased, performance has increased, and costs have decreased, allowing bigger workloads and bigger ideas to become reality. History is on our side here. So it’s not a matter of if, but of when and how fast – making the flash story an archetype of “Speed and Change,” the two defining attributes of IT today.
Article by Paul Haverfield, Chief Technology Officer, Storage, HPE Asia Pacific and Japan