How Big Data is slowly destroying our environment
Wed, 6th Jun 2018

I started my career around 30 years ago at one of the largest investment banks on Wall Street.

Their data center was huge.

We had mainframes, midrange computers and various types of minicomputers. The cost of running the data center was enormous, in terms of energy, maintenance, real estate and resources to keep the data center operational.

In 1991 I joined Information Builders in the group that released EDA/SQL to market.

You could say this was the beginning of the client/server era as microcomputers decreased in price and increased in power.

Many enterprises moved from centralized data centers to a distributed client/server architecture with departmental servers and fat clients. The mantra was that the mainframe and centralized data centers were dead - long live client/server.

Well, not exactly.

While client/server provided greater flexibility for individual users and more control over specific computing resources, it also complicated the management of IT resources, servers and standalone databases.

In addition, managing these distributed computing networks became a huge challenge, with the multitude of servers taking up significant space and energy and requiring many more resources to manage.

Most large companies ended up with a hybrid architecture of the centralized data center working in tandem with the distributed or departmental computing servers and resources.

Fast forward to the 2000s, when applications running on the web had matured enough to take the place of applications developed for PC-based systems, minicomputers and even midrange computers.

This advancement, together with affordable storage, was among the influences that paved the way for the initial move into cloud computing.

Now organizations were beginning to change their computing architecture more drastically, with many evolving their client/server applications to the cloud.

While there has been a lot of talk about the move to the cloud, and many organizations have already taken steps to move their applications or data there, many enterprises are not planning on moving to full cloud operations.

Enterprises often create hybrid cloud architectures, integrating existing or new data centers with the cloud. This is true of many of the large financial and healthcare organizations that are still running mainframes, albeit smaller ones that require less energy and space.

Whether organizations are using the hybrid model as a first step toward a longer-term, fuller cloud implementation is still not clear.

The hybrid model works well for many enterprises, especially for those wishing to keep certain data stores close to the data's origin or where it will be analyzed.

Regardless of whether an organization decides to move to the cloud, a move to more innovative processing methods can help significantly downsize computing resources, whether within an existing data center or in the cloud.

By embracing new GPU computing technology from NVIDIA, with its massively parallel processing, enterprises are able to significantly reduce their number of servers, which in turn reduces the costs associated with energy, space, maintenance and more.

NVIDIA's GPUs break tasks into many small problems and solve them in parallel. This has proven much more efficient than the traditional CPU, which works sequentially, processing one task at a time.
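
To make the contrast concrete, here is a minimal CUDA sketch (the names and sizes are illustrative, not drawn from any particular product) of the same element-wise computation done both ways: a single CPU core walking an array one element at a time, versus thousands of GPU threads each handling one element at once.

```c
// Minimal CUDA sketch: the same element-wise scaling done two ways.
// All names and sizes here are illustrative.
#include <cuda_runtime.h>
#include <stdio.h>

// GPU: every thread handles one element; thousands run in parallel.
__global__ void scale_gpu(const float *in, float *out, float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * k;
}

// CPU: a single core walks the array one element at a time.
void scale_cpu(const float *in, float *out, float k, int n) {
    for (int i = 0; i < n; i++) out[i] = in[i] * k;
}

int main(void) {
    const int n = 1 << 20;               // ~1M elements
    size_t bytes = n * sizeof(float);
    float *h_in = (float *)malloc(bytes), *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    scale_gpu<<<(n + 255) / 256, 256>>>(d_in, d_out, 2.0f, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[42] = %f\n", h_out[42]);  // expect 84.0
    cudaFree(d_in); cudaFree(d_out); free(h_in); free(h_out);
    return 0;
}
```

The speedup comes entirely from the fact that the per-element work is independent, so it can be spread across all available cores.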

Not only do GPUs perform up to 100x faster than CPUs on many tasks, they also take up far less space: a 16-GPU server is akin to a 1,000-CPU cluster.

A GPU can enable near real-time data exploration and very fast data ingest, which is simply not possible with a traditional CPU. That said, the CPU is still a very important piece of the puzzle for creating high-performance applications.

For example, the architecture of the GPU makes it less suitable for random text operations.

Because the thousands of GPU cores process data in strict lockstep for high throughput, input whose contents send different threads down different code paths (known as branch divergence) will harm performance.
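
A short CUDA sketch of why that happens (a hypothetical kernel, launched with the same host-side pattern as the earlier example): NVIDIA GPUs execute threads in 32-thread warps in lockstep, so when the characters each thread reads send it down a different branch, the hardware runs each path one after the other rather than simultaneously.

```c
// Illustrative CUDA kernel showing branch divergence (hypothetical example).
// Threads in a 32-thread warp execute in lockstep; when the data sends them
// down different branches, the two paths run serially, with the threads on
// the other path idle, so irregular input erodes the parallel speedup.
__global__ void classify(const char *text, int *flags, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    char c = text[i];
    if (c >= 'a' && c <= 'z') {
        // Path A: threads that read a lowercase letter take this branch...
        flags[i] = 1;
    } else {
        // ...while threads that read punctuation or digits take this one.
        // If both paths occur within one warp, the warp executes both.
        flags[i] = 0;
    }
}
```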

We are seeing more and more organizations taking a green approach to computing these days. While it would be nice to say that these organizations act out of pure environmental altruism, that may be true of only a small minority. Regardless of their reasons, organizations can benefit greatly from going green.

The responsibility for data center costs is often not in the hands of IT; it falls instead to the department that manages the organization's facilities.

IT management is often not involved in budgeting data center energy costs, real estate, and other facility-oriented costs. Data centers realize great benefits from going green, including significant energy cost reductions, much lower real estate costs and more.

So, looking back thirty years, anyone who ever predicted the death of the mainframe has been proved wrong. As it appears now, those conglomerates that use mainframes will probably continue to do so, especially as the machines become smaller and more energy efficient.

Seemingly, the days of client/server are coming to an end, giving way to a hybrid of more cost-effective, less resource-intensive servers built on the efficient parallel processing of GPUs, together with the continuing movement into the cloud.

While applications continue to evolve, data is increasing and will continue to grow at unparalleled rates. Enterprises need to adopt innovative technologies that will enable them to store and analyze these growing data stores rapidly and comprehensively.

Expenditure on managing and analyzing big data will grow to billions of dollars annually, and it will only provide enterprises with a competitive advantage if they can effectively and rapidly analyze the data and extract actionable intelligence from it.

By using the most advanced processors available, such as GPU-based servers and databases, enterprises need much less hardware, making maintenance significantly easier and less time-consuming.

In addition, the ability to respond to business needs with advanced analytics is much greater. So not only are IT investments reduced and overall costs decreased, but the business as a whole benefits.

In fact, many enterprises are already adopting GPU servers and databases to realize dramatic gains in energy efficiency while increasing the performance of their applications and data analytics.

This adoption of GPU technology, either stand-alone or in a hybrid model, appears to be the next stage in the ever-evolving paradigm of computing.