How data centers are evolving to meet Information Age demands

14 Jun 2016

Article by Andrew Foot, general manager, Australia and New Zealand at VCE, The Converged Platforms Division of EMC

It doesn’t take an Einstein to recognise the truism that if you do what you’ve always done, you’re going to get what you’ve always got. That applies in the data center as much as it does anywhere else; the answer to the familiar refrains of spiralling complexity, ever-expanding volumes of data and mind-boggling scale is to do something else.

It is time for next-generation data center infrastructure to help organisations take better control of information, drive down complexity and equip themselves for the inevitable scale that comes with Moore's Law-style exponential growth and the imperative of digital transformation.

These problems aren’t just faced by a data center in a galaxy far, far away (or across the oceans in the United States and elsewhere). They are faced by the operators of data centers everywhere – including right here in Australia and New Zealand. And, with data centers already playing a fundamental role in every one of our lives every day, that means a set of challenges which will continue to grow (there are, according to DatacenterMap, 25 colocation data centers in New Zealand and 98 in Australia).

Aware of the realities of information-driven business and society, and the concerted push towards digital transformation which can only further accelerate data growth, the providers of data center infrastructure haven’t been resting on their laurels. Instead, they’ve come up with an answer which embodies the by-now familiar concepts of convergence and integration: hyperconverged infrastructure. It is this which is playing a central role in addressing the challenges of the data center today and into the future.

Hyperconverged infrastructure is just what it sounds like: it incorporates all the elements required by the data center into a preconfigured box. With a software-centric architecture, hyperconverged infrastructure tightly integrates compute, storage, networking and virtualisation resources (and management software) into a (literal) single device. Those devices can be added to the data center rapidly and easily, allowing for expansion without further driving up complexity.

There is a little more to it than simply chucking in more hyperconverged boxes (there would be, wouldn’t there). Some of the secrets are in the hardware itself; for example, speeds and feeds remain essential measures of ultimate system performance, so all-flash arrays are becoming more commonplace.

But most of the magic is in the software, and specifically, in the architecture. Software-defined data centers which scale out (rather than scale up) are becoming the norm – where additional nodes can be added ad infinitum, theoretically at least. Scale out is a requirement because it enables IT to efficiently manage massive capacities with very few resources in a way scale-up systems cannot.
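Scale-out placement is commonly built on techniques such as consistent hashing, which let a cluster grow node by node without reshuffling most existing data. The following is a toy sketch of that idea only, not any vendor's implementation; the class, node names and workload keys are all hypothetical:

```python
import bisect
import hashlib

class ScaleOutCluster:
    """Toy model of scale-out placement: data is spread across nodes with
    consistent hashing, so adding a node grows capacity without remapping
    most existing data. Illustrative only."""

    def __init__(self, replicas=100):
        self.replicas = replicas  # virtual nodes per physical node
        self.ring = []            # sorted hash positions on the ring
        self.owner = {}           # hash position -> node name

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, name):
        # Scaling out is just this: register another node on the ring.
        for i in range(self.replicas):
            h = self._hash(f"{name}:{i}")
            bisect.insort(self.ring, h)
            self.owner[h] = name

    def node_for(self, key):
        # A key belongs to the first node clockwise from its hash.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, h) % len(self.ring)
        return self.owner[self.ring[idx]]

cluster = ScaleOutCluster()
for n in ("node-1", "node-2", "node-3"):
    cluster.add_node(n)

before = {f"vm-{i}": cluster.node_for(f"vm-{i}") for i in range(1000)}
cluster.add_node("node-4")  # scale out: add a node, nothing else changes
after = {k: cluster.node_for(k) for k in before}
moved = sum(1 for k in before if before[k] != after[k])
print(f"{moved} of 1000 placements moved")
```

The point of the sketch: with four nodes, only roughly a quarter of placements move when the fourth is added, which is why scale-out systems can absorb new capacity without the wholesale migrations a scale-up upgrade can force.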

The software-defined data center which uses commodity hardware delivers the economics needed to manage and maintain the massive data volumes associated with how the world works today. In other words, it will help manage 1000x more data, but without 1000x more budget. The software-defined model also automates the configuration and deployment of IT services, delivering greater business agility and a more flexible, programmable approach to managing data services.
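The "programmable" part of the software-defined model usually means declaring a desired state as data and letting an automation loop converge the infrastructure toward it. Here is a minimal sketch of that pattern; the service names, fields and `reconcile` helper are hypothetical, not any product's API:

```python
# Desired state, declared as data (hypothetical services and fields).
desired = {
    "web-tier":  {"instances": 4, "flash_storage_gb": 200},
    "analytics": {"instances": 8, "flash_storage_gb": 1000},
}

# What is currently deployed.
actual = {
    "web-tier": {"instances": 2, "flash_storage_gb": 200},
}

def reconcile(desired, actual):
    """Return the actions needed to move actual state to desired state."""
    actions = []
    for svc, spec in desired.items():
        current = actual.get(svc)
        if current is None:
            actions.append(("create", svc, spec))
        elif current != spec:
            actions.append(("update", svc, spec))
    for svc in actual:
        if svc not in desired:
            actions.append(("delete", svc, None))
    return actions

for action in reconcile(desired, actual):
    print(action)
```

Because the desired state is plain data, scaling a service or rolling out a change is an edit to the declaration rather than a sequence of manual configuration steps, which is where the agility gain comes from.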

It should go without saying, but we're going to say it anyway: data center infrastructure has to be cloud-enabled, whether it is traditional (no, legacy equipment doesn't just disappear) or brand-new hyperconverged infrastructure. Cloud enablement matters because it lets even an on-premises data center take advantage of public cloud economics non-disruptively.

Capacity, complexity and scale: these are the familiar buzzwords of the data center environment purely because they are very real problems faced in an increasingly data-driven world. We know data centers are complex and need to be simplified and streamlined. We know business users expect more, at a faster rate and with increased reliability. And we know budgets are not growing to meet either demand.

And we know how these issues can be addressed with a new breed of hyperconverged technology.
