How data centers are evolving to meet Information Age demands

14 Jun 16

Article by Andrew Foot, general manager, Australia and New Zealand at VCE, The Converged Platforms Division of EMC

It doesn’t take an Einstein to recognise the truism that if you do what you’ve always done, you’re going to get what you’ve always got. That applies in the data center as much as it does anywhere else; the answer to the familiar refrains of spiralling complexity, ever-expanding volumes of data and mind-boggling scale is to do something else.

It is time for next-generation data center infrastructure to help organisations take better control of information, drive down complexity and equip themselves for the inevitable scale that comes with the exponential growth predicted by Moore's Law and the imperative of digital transformation.

These problems aren’t just faced by a data center in a galaxy far, far away (or across the oceans in the United States and elsewhere). They are faced by the operators of data centers everywhere – including right here in Australia and New Zealand. And, with data centers already playing a fundamental role in every one of our lives every day, that means a set of challenges which will continue to grow (there are, according to DatacenterMap, 25 colocation data centers in New Zealand and 98 in Australia).

Aware of the realities of information-driven business and society, and the concerted push towards digital transformation which can only further accelerate data growth, the providers of data center infrastructure haven't been resting on their laurels. Instead, they've come up with an answer that embodies the now-familiar concepts of convergence and integration: hyperconverged infrastructure. It is this approach that is playing a central role in addressing the challenges of the data center, today and into the future.

Hyperconverged infrastructure is just what it sounds like: it packages all the elements the data center requires into a preconfigured box. Built on a software-centric architecture, hyperconverged infrastructure tightly integrates compute, storage, networking and virtualisation resources, along with management software, into a single appliance. Those appliances can be added to the data center quickly and easily, allowing for expansion without further driving up complexity.

There is a little more to it than simply chucking in more hyperconverged boxes (there would be, wouldn’t there). Some of the secrets are in the hardware itself; for example, speeds and feeds remain essential measures of ultimate system performance, so all-flash arrays are becoming more commonplace.
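To put rough numbers on "speeds and feeds" (the figures below are illustrative, order-of-magnitude assumptions rather than benchmarks of any particular array), the per-device gap between spinning disk and flash makes the trend easy to see:

    # Illustrative, order-of-magnitude figures only; real devices vary widely.
    hdd_random_iops = 200        # roughly what a single 10K RPM spindle sustains
    ssd_random_iops = 100_000    # a mainstream enterprise flash drive

    speedup = ssd_random_iops / hdd_random_iops
    print(f"Flash delivers on the order of {speedup:.0f}x the random IOPS per device")
    # -> Flash delivers on the order of 500x the random IOPS per device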

But most of the magic is in the software and, specifically, in the architecture. Software-defined data centers that scale out (rather than scale up) are becoming the norm: additional nodes can be added ad infinitum, theoretically at least. Scale-out matters because it enables IT to manage massive capacities efficiently with very few resources, in a way scale-up systems cannot.
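To make the distinction concrete, here is a minimal sketch, in Python, of the scale-out idea; the Node and Cluster names are our own illustration, not any vendor's API:

    from dataclasses import dataclass

    @dataclass
    class Node:
        # One hyperconverged appliance: compute, storage and networking in a box.
        cpu_cores: int = 32
        storage_tb: float = 20.0

    class Cluster:
        # A scale-out pool: aggregate capacity grows linearly with node count.
        def __init__(self) -> None:
            self.nodes: list[Node] = []

        def add_node(self, node: Node) -> None:
            # Expansion is additive; existing nodes are untouched (no forklift upgrade).
            self.nodes.append(node)

        @property
        def storage_tb(self) -> float:
            return sum(n.storage_tb for n in self.nodes)

    cluster = Cluster()
    for _ in range(4):              # rack four appliances
        cluster.add_node(Node())
    print(cluster.storage_tb)       # 80.0 TB, managed as a single pool

Scaling up, by contrast, would mean replacing one box with a bigger one; here, growth is just another add_node call.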

A software-defined data center built on commodity hardware delivers the economics needed to manage and maintain the massive data volumes associated with how the world works today. In other words, it can help manage 1000x more data, but without 1000x more budget. The software-defined model also automates the configuration and deployment of IT services, delivering greater business agility and a more flexible, programmable approach to managing data services.
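As a hedged sketch of what that programmable approach looks like (the service definition and reconcile helper below are hypothetical, not any particular product's interface), the model boils down to declaring a desired state and letting software compute the steps to converge on it:

    # Hypothetical example of declarative, software-defined provisioning.
    desired_state = {
        "service": "order-api",
        "replicas": 3,
        "storage": {"class": "all-flash", "size_gb": 500},
    }

    def reconcile(current: dict, desired: dict) -> list[str]:
        # Compare what is running against what was declared, and emit the
        # actions needed to converge; automation then executes them.
        actions = []
        running = current.get("replicas", 0)
        if running < desired["replicas"]:
            actions.append(f"deploy {desired['replicas'] - running} replica(s)")
        if current.get("storage") != desired["storage"]:
            actions.append("provision all-flash volume")
        return actions

    print(reconcile({"replicas": 1}, desired_state))
    # -> ['deploy 2 replica(s)', 'provision all-flash volume']

The design point is that the declaration, not a sequence of manual steps, becomes the source of truth; adding capacity or services is then just an edit to the declared state.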

It should go without saying, but we're going to say it anyway: data center infrastructure has to be cloud-enabled, whether it is traditional (no, legacy equipment doesn't just disappear) or brand-new hyperconverged infrastructure. Cloud enablement matters because it lets even an on-premises data center take advantage of public cloud economics, and do so non-disruptively.

Capacity, complexity and scale: these are the familiar buzzwords of the data center environment precisely because they are very real problems in an increasingly data-driven world. We know data centers are complex and need to be simplified and streamlined. We know business users expect more, delivered faster and with greater reliability. We know budgets are not increasing to address either pressure.

And we know how these issues can be addressed with a new breed of hyperconverged technology.
