
How storage virtualization delivers data: Always on, always available, always fast

03 Aug 17

Virtualization has dramatically changed how companies manage their servers and applications. Software-defined networking (SDN) is removing the complexity from networking infrastructure. The remaining challenge is storage.

In the beginning everyone was reluctant, or even scared, to be the first to move to server virtualization – however, those companies immediately received strong competitive advantages and substantial reductions in cost and complexity of managing their environment.

Traditional SAN and NAS storage products were developed for physical servers and quite simply can’t deliver the scalability, reliability, performance or value that the modern enterprise requires.

Consider this: 50 to 80% of all data is inactive, cold or stale – yet it sits on expensive storage. Why?

Because storage vendors lock us into their storage silos and make it hard to move or migrate data.

Ask a SAN, NAS, all-flash or hyper-converged vendor whether you can buy the same commodity SSDs or spinning disks they use and add them to the array or frame yourself.

Of course not.

That is called vendor lock-in and a storage silo.

If you need more performance, doesn’t it make sense to add more RAM and flash/SSD – or, for extra capacity, simply leverage AWS, Azure or Google to deliver web-scale storage on a low monthly subscription?

Inevitably, companies will transition to storage virtualization solutions that remove storage silos and create an open, single storage pool utilising any available storage – RAM, SSD, DAS, SAN, NAS, JBOD, or even any public or private cloud provider.

Active data is moved to RAM and SSD, warm data to SSD and the fastest spinning disks, and cold data is eventually migrated to the slowest storage or the cloud, regardless of storage vendor, make or model – all without any user intervention.

The latest storage virtualization solutions are totally hardware-agnostic and tier data in real time according to application requirements or objectives.
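To make the tiering logic concrete, here is a minimal sketch in Python of how such an engine might classify data by access recency and pick a tier. The thresholds, tier names and functions are illustrative assumptions, not any vendor’s actual implementation:

```python
from datetime import datetime, timedelta

def temperature(last_access: datetime, now: datetime) -> str:
    """Classify data by access recency: active, warm or cold."""
    age = now - last_access
    if age < timedelta(hours=1):
        return "active"
    if age < timedelta(days=7):
        return "warm"
    return "cold"

# Temperature -> candidate tiers, mirroring the tiering described above:
# active data on RAM/SSD, warm on SSD/fast disk, cold on slow disk or cloud.
PLACEMENT = {
    "active": ["ram", "ssd"],
    "warm": ["ssd", "fast_disk"],
    "cold": ["slow_disk", "cloud"],
}

def preferred_tiers(last_access: datetime) -> list[str]:
    # A real engine would also weigh capacity, cost and per-application
    # objectives before migrating anything - all without user intervention.
    return PLACEMENT[temperature(last_access, datetime.now())]
```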

An example of why this is so valuable: if you use VDI (Virtual Desktop Infrastructure), you will almost certainly have an issue with boot storms – the period when everyone logs in at once. To solve that, you have probably purchased an all-flash array and dedicated, say, 2TB of that expensive flash resource to the VDI boot storm 24x7.

But the boot storm only lasts from 07:30 to 09:30 – so doesn’t it make sense to have a storage virtualization solution that moves the VDI images into RAM and flash during the boot storm and, once it is over, releases all that expensive performance storage for another application to use – for example, running an Oracle report – all without any user intervention? Again, once the report has finished running, that expensive storage is freed up for the next application.
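In code, that schedule-driven idea might look like this minimal sketch – the volume name and window are hypothetical, and a real platform would drive this from policy rather than hard-coded times:

```python
from datetime import datetime, time

# Hypothetical boot-storm window from the example above: 07:30-09:30.
BOOT_STORM = (time(7, 30), time(9, 30))

def tier_for(volume: str, now: datetime) -> str:
    """Pin VDI boot images to RAM/flash only during the boot storm;
    outside the window, release the fast tier to other workloads
    such as an Oracle reporting job."""
    in_storm = BOOT_STORM[0] <= now.time() <= BOOT_STORM[1]
    if volume == "vdi-boot-images":
        return "ram+flash" if in_storm else "capacity"
    return "capacity" if in_storm else "ram+flash"
```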

Storage virtualization eradicates expensive storage silos and eliminates complexity by providing one dashboard from which you can view and manage all storage assets, making capacity planning vastly simpler. If you need more performance, it is as easy as moving a slider – without downtime or disruption to users or applications.

If there is not enough RAM or flash, the system will tell you; you can then invest in commodity RAM or flash and add it to the network, where it is instantly available to all applications that need it.

The latest storage virtualization solutions are built on an Objective-Defined Storage platform: data from all existing storage hardware is absorbed into a single virtual storage pool.

For each application or group of applications, simply set the performance (IOPS or latency), capacity and protection (number of live instances and snapshot copies) objectives – everything else is fully orchestrated, driven by the latest in artificial intelligence and swarm technologies.
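Such an objective might be declared as a small policy record, as in the following Python sketch – the field names and values are illustrative assumptions, not a real API:

```python
from dataclasses import dataclass

@dataclass
class StorageObjective:
    """Per-application objectives; the platform orchestrates the rest."""
    app: str
    min_iops: int          # performance floor
    max_latency_ms: float  # performance ceiling
    capacity_gb: int       # capacity objective
    live_instances: int    # synchronous copies kept for protection
    snap_copies: int       # point-in-time snapshot copies retained

# Example: an OLTP database that must stay fast and well protected.
oltp = StorageObjective(
    app="oracle-oltp",
    min_iops=50_000,
    max_latency_ms=2.0,
    capacity_gb=4_096,
    live_instances=2,
    snap_copies=24,
)
```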

The case for migrating to an open storage virtualization platform is simple: if data is important – required to be always on, always available and always fast – it is time to consider Objective-Defined Storage.

Deployment is simple – typically less than one hour – and can proceed one application at a time, without any changes to the application or to how users access their data.

Note: when evaluating storage virtualization solutions, it is critical to ensure the platform is open and can manage any storage from any vendor; otherwise you may find yourself locked into a storage hardware platform and a storage silo. Just remember your server virtualization deployments – it didn’t matter whether you had HP, Dell, Lenovo or IBM servers; the virtualization platform worked across all of them.

Finally, storage virtualization and Objective-Defined Storage deliver data freedom from storage hardware and enable digital transformation.

Article by Greg Wyman, VP Asia Pacific, ioFabric
