Redefining efficiencies for downsized, on-premise enterprise data centers

15 Jun 17

Over the last five years, three major transformations have rocked the data center industry.

These transformations have occurred so quickly that many data center owners have been caught flat-footed, uncertain of the best way to shape their data center modernization strategies. The first transformational phase involved a rapid, virtualization-driven consolidation of on-premise data centers.

In a typical consolidation example, five medium-sized data centers would consolidate into one larger data center. This approach was replicated across many companies to varying degrees.

In the midst of all of this, the second transformational phase occurred: the mass exodus of applications to the cloud. This left behind thousands of downsized on-premise data centers that were mere shells of their former selves.

Today, the industry is undergoing a third transformational phase: a retrenchment, and even a renewed growth, of enterprise on-premise data centers.

What is driving this unanticipated third phase of transformation? Major market and technology trends such as the Internet of Things (IoT) have driven exponential growth in the amount of data that needs to be captured, stored, analyzed and connected. That data is being gathered and analyzed to create both business value and competitive advantage.

As a result, analysts are forecasting growth for both cloud/colo and on-premise data centers (in the case of on-premise, the growth forecast is 5% over the next five years).

On the enterprise side a second driver is emerging: entrenched applications that integrate with many on-premise business systems (Lotus Notes, for example) are staying put. These applications are difficult to break apart cost-effectively and push off to the cloud, so stakeholders deem them simpler and less costly to maintain on-premise. And these applications are growing.

The downsized data centers that have been left behind are quite different from their pre-cloud predecessors. As a result, the approach to managing and operating them has to be different. If not, stakeholders will be forced to sustain high OPEX costs as they preside over what are essentially very inefficient data centers.

Consider the example of power and cooling systems. Even though 50-75% of the servers may have been displaced and their applications moved to the cloud, the oversized power and cooling systems remain. When IT is downsized, the utilization of those power and cooling systems can drop as low as 10%.

Power and cooling systems are not proportionally reduced in an IT downsizing scenario, and oversized gear is neither energy efficient nor cheap to maintain.

Therefore, the challenge within downsized data centers is to determine which pieces of equipment are inefficient (and how inefficient they are) and to measure how much these inefficiencies are inflating operational costs. Then, once reliable data is gathered and analyzed, decisions can be made about changes that render the downsized data center more efficient.
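To make the utilization problem concrete, here is a minimal sketch (not from the article; the function name and the kW figures are hypothetical) showing how a downsized IT load under-utilizes legacy power and cooling capacity:

```python
# Illustrative sketch: estimating how far a downsized IT load
# under-utilizes the power/cooling capacity it was built for.
# All names and numbers are hypothetical examples.

def utilization(it_load_kw: float, capacity_kw: float) -> float:
    """Fraction of installed capacity actually used by the IT load."""
    return it_load_kw / capacity_kw

# A facility sized for 500 kW of IT load, now hosting 50 kW
# after 75%+ of its servers moved to the cloud:
print(f"{utilization(50, 500):.0%}")  # prints "10%" -- the low end cited above
```

Even at 10% utilization, the facility still pays to maintain (and partially energize) equipment sized for the original load, which is exactly the OPEX inflation the text describes.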

Monitoring and analytics are the keys to improvement

Both on-premise and cloud-based data center infrastructure management (DCIM) tools can assist in fixing inefficiencies in downsized data centers. On-premise tools can record the power draw from all components of the data center physical infrastructure.

Then, through benchmarking, opportunities for improvement are identified. These tools are also effective in planning capacity (forecasting how much power and cooling is really needed to address the current data center requirements).
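The benchmarking step these tools perform can be sketched as follows. This is a hypothetical illustration, not any vendor's actual DCIM logic: the equipment names, recorded loads, and the 40% efficiency threshold are all invented for demonstration.

```python
# Hypothetical sketch of a DCIM benchmarking pass: given recorded
# average power draws, flag infrastructure running well below an
# assumed efficient-load threshold. All values are illustrative.

readings_kw = {
    "UPS-A":  {"capacity": 200, "avg_load": 22},
    "CRAC-1": {"capacity": 120, "avg_load": 95},
    "PDU-3":  {"capacity": 80,  "avg_load": 12},
}

EFFICIENT_MIN = 0.40  # assumption: gear loaded below 40% runs inefficiently

oversized = sorted(
    name for name, r in readings_kw.items()
    if r["avg_load"] / r["capacity"] < EFFICIENT_MIN
)
print(oversized)  # prints "['PDU-3', 'UPS-A']" -- candidates for rightsizing
```

The same recorded data feeds capacity planning: summing actual loads against installed capacity shows how much power and cooling the downsized facility really needs.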

New, cloud-based tools are also emerging that are capable of capturing data center physical infrastructure asset performance data. These systems not only remotely monitor equipment performance, but they also perform predictive diagnostics that can leverage data from multiple similar data centers to create more precise performance benchmarks.

By recording factors such as operating temperature and the number of battery discharge operations sustained (in the case of a UPS, for example), predictive analytics can estimate the probability of failure within a given window of time.

Article by Steven Carlini, Schneider Electric Data Center Blog Network
