From cloud to instant failover - 5 industry trends disrupting data storage
The traditional methodology of simply adding more physical storage to resolve storage issues no longer works.
It's too expensive, inefficient, unmanageable and slow.
Organisations are faced with an ever-increasing volume of data that needs to be moved, processed, stored and managed. But data is only as good as the company's ability to use it.
Traditional storage systems are not capable of handling the data-intensive virtualized workloads of the new enterprise. These new workloads are driving the need for new technologies, solutions and business models.
The emphasis now is on data management at today's scale. Intelligent, software-led solutions are required to provide scalable, fast and reliable data management.
This enables organisations to focus on extracting value from data and having an always-on data infrastructure rather than managing storage products.
Manage data, not storage: A shift from managing storage to managing data is essential to deliver value and enable digital transformation. Data should be managed by a software-led data fabric that is agnostic to the underlying storage hardware.
Outcome: Traditional storage products are simply ‘big buckets' that provide little insight into the real value of data. Big data, IoT, CAD, GIS, VDI, analytics and digital transformation all need faster access to relevant information.
Data must flow freely and ubiquitously across all storage. Storage needs to become invisible to an organisation's data and its business.
Software-led storage solutions: A shift from hardware-based (proprietary) approaches to intelligent software-led data fabric solutions based on commodity hardware.
Outcome: Deploy a software-led data fabric solution that creates a single storage pool across all existing and future storage infrastructure, with enterprise-class features that use artificial intelligence and orchestration to manage data. Existing storage simply becomes a bucket or repository for data.
The intelligence is built into the software.
Eliminates storage vendor lock-in, storage silos and forklift upgrades. Delivers substantially improved reliability and recoverability of data.
Instant failover: A shift from point-in-time disaster recovery to an always-on, always-available data fabric that eliminates downtime and business disruption after a storage failure.
Outcome: Data needs to be available in multiple locations, including multiple clouds, to deliver instant failover with no downtime or business disruption after a storage hardware failure.
Data must be always-on and always available.
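The failover behaviour described above can be sketched in a few lines: reads are attempted against each replica location in turn, and the first healthy copy serves the request. This is a minimal illustration only; the replica locations and fetch functions below are hypothetical stand-ins, not any particular product's API.

```python
from typing import Callable, List

def read_with_failover(key: str,
                       replicas: List[Callable[[str], bytes]]) -> bytes:
    """Return data for `key` from the first replica that responds."""
    last_error = None
    for fetch in replicas:
        try:
            return fetch(key)
        except IOError as err:
            last_error = err  # this replica is down: fail over to the next location
    raise IOError(f"all replicas failed for {key!r}") from last_error

# Example: the primary site is offline, so the secondary (cloud) copy serves the read.
def primary(key: str) -> bytes:
    raise IOError("primary storage offline")

def secondary(key: str) -> bytes:
    return b"payload-for-" + key.encode()

data = read_with_failover("report.pdf", [primary, secondary])
```

Because the fail-over happens inside the read path, applications see no downtime, only a slightly slower response while the faulty location is skipped.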
Flash to replace spinning disk for active data: A shift from storage systems based entirely on hard disk drives to high-performance, flash-based hybrid storage systems for active (hot and warm) data is now a reality.
Outcome: Implement flash to deliver exceptional performance and lower latency for applications' active data only. Typically, only 20 – 40% of all data is active and should reside in flash.
1PB of data therefore requires just 200TB – 400TB of flash. Inactive data should be stored on inexpensive, high-capacity (10TB+) commodity spinning disk in a private or public cloud, not on expensive proprietary storage silos, with no changes to users or to how applications access or use data. For ‘insane' performance, add RAM to the performance layer; the solution must allow any application to use any RAM across all hosts. Once an application finishes with the RAM/flash, its data is flushed out to slower storage so that these premium resources are available for other applications to use.
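The sizing arithmetic above reduces to a single rule of thumb: flash capacity equals total data multiplied by the active fraction. A minimal sketch, assuming 1PB = 1,000TB:

```python
def flash_tier_size_tb(total_data_tb: float, active_fraction: float) -> float:
    """Return the flash capacity (TB) needed to hold the active data set."""
    if not 0 < active_fraction <= 1:
        raise ValueError("active_fraction must be between 0 and 1")
    return total_data_tb * active_fraction

# 1PB of data with 20-40% active, as in the text:
low  = flash_tier_size_tb(1000, 0.20)  # 200.0 TB of flash
high = flash_tier_size_tb(1000, 0.40)  # 400.0 TB of flash
```

The same function can be run against an organisation's own measured active-data fraction to budget a flash tier.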
The Cloud: A shift from storing inactive data in (proprietary) on-premise storage silos to a cloud-first approach, replacing on-premise spinning disk for inactive data to reduce storage costs, electricity, cooling and rack space.
Outcome: Data should be able to move freely from on-premise to multiple sites and multiple clouds including public, private and hybrid to deliver absolute data availability and reliability without manual intervention.
In its most basic implementation, data fabric software will move all inactive data to the preferred cloud, immediately freeing up 60 – 80% of on-premise storage capacity.
Once on-premise storage issues are resolved, and data is in multiple locations for reliability, it is easier to look at how to further leverage the cloud across an organisation.
In the new enterprise, all storage hardware should be a single pool of storage regardless of vendor, make or model.
Storage should be invisible to data. The focus should be on ensuring that the correct data is in the correct storage tier: RAM and SSD/flash for performance, and less expensive private or public cloud (spinning disk) for redundancy.
As the saying often attributed to Charles Darwin goes: “It is not the strongest of the species that survives, nor the most intelligent. It is the one most adaptable to change.”