
Using converged data center infrastructure to compete at the speed of disruption

10 Mar 2016

Disruption, disruption, disruption. That's all you hear about these days, and with good reason. From the sharing economy to wearables to industrialised crowdsourcing, the business world changes every time you blink.

This is the idea economy, where the lifecycle from idea to product has never been shorter. Take Uber, for example – in just a few years since its launch in Australia, it has forced established taxi companies to innovate at an accelerated pace or risk getting left behind.

In this economy, all companies – large or small – must possess the visionary mindset and technological agility to rapidly turn ideas into reality, or risk becoming obsolete.

Recent HPE research revealed that 88% of respondents from leading enterprises believe changes and improvements to their IT infrastructure are needed to stay competitive.

In addition, 70% of these firms say that making such changes stands to deliver an extremely significant impact, such as an improved customer experience.

How do you keep up? How do you pull out your IT team’s legacy roots and set them free to be as agile as a startup? It all begins with building your data center on the right infrastructure.

Businesses can no longer afford to have computing, storage, and networking operating separately, siphoning the time and resources that your IT team needs for innovation and progress.

What you need to compete at the speed of disruption is a data center infrastructure architecture that accelerates application and workload delivery to meet business needs. To succeed, that infrastructure must be software-defined, modular, secure, open, and fluid, and be managed through a common management application.

Many organisations have already begun taking steps to converge their data center infrastructure. Convergence removes silos and combines the data center's disparate components into something that can be centrally managed. The goal of a converged infrastructure is to minimise compatibility issues and simplify infrastructure management while reducing costs for cabling, cooling, power, and floor space. It breaks down silos by flattening data center architectures, combining servers, data storage, networking fabric, and software into a single optimised system that can run a wide variety of workloads. It also abstracts the operating systems from the CPUs and lets users manage all of this from a single management console.

The next step is to create a new class of infrastructure, such as HPE's Composable Infrastructure, where independent software vendors and developers can programmatically control infrastructure and build out new workloads using "infrastructure as code" capabilities through a unified API.

The API, native in HPE OneView, enables fast integration and automation of compute, storage, and fabric resources. Composable Infrastructure is the next evolution of convergence, enabling customers to accelerate DevOps and bring new applications to market faster. Through the open API, DevOps tools like Chef, Puppet, and Docker work hand in hand with HPE OneView to simplify provisioning and speed application deployment.
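To make the "infrastructure as code" idea concrete, here is a minimal Python sketch of what programmatic provisioning through a unified REST API looks like: a compute resource is described as plain data, and that description is turned into an HTTP request a tool like Chef or Puppet could issue. The endpoint path, field names, and template URI below are illustrative assumptions for this sketch, not the documented HPE OneView schema.

```python
import json

def build_server_profile(name, template_uri, description=""):
    """Describe a compute resource as data, so it can be versioned,
    reviewed, and provisioned like any other code artifact.
    (Field names here are illustrative, not the real OneView schema.)"""
    return {
        "type": "ServerProfile",
        "name": name,
        "serverProfileTemplateUri": template_uri,
        "description": description,
    }

def provision_request(base_url, profile):
    """Return the (method, url, body) an HTTP client would send to
    create the resource via a hypothetical unified REST API."""
    return ("POST", base_url + "/rest/server-profiles", json.dumps(profile))

# Declare a web node once; any automation tool can replay this definition.
profile = build_server_profile(
    "web-01",
    "/rest/server-profile-templates/abc123",  # hypothetical template URI
    "Front-end web node",
)
method, url, body = provision_request("https://oneview.example.com", profile)
print(method, url)
```

Because the desired state lives in data rather than in manual console clicks, the same definition can be applied repeatedly and idempotently, which is what lets configuration-management tools treat servers, storage, and fabric as interchangeable, programmable resources.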

A converged data center infrastructure provides the foundation for a composable infrastructure and the agility you need to meet current and future business demands. It provides a secure, future-ready IT foundation that supports everything from virtualisation to cloud computing. It enables IT to support the traditional business as it starts the transition to a new breed of applications powered by the data-driven enterprise, where the speed of DevOps becomes more critical for responding to business needs.

The changes in technology and the disruptions they cause will continue at a rapid pace, so your IT infrastructure must adapt as well. To be successful in the idea economy, enterprises with legacy IT systems need to find a balance between operating and evolving if they want to meet the business needs of today and be prepared for what is coming tomorrow.

By Raj Thakur, Vice President and General Manager, Hybrid IT, Asia Pacific and Japan, Hewlett Packard Enterprise
