The 2016 data center: On-demand, on the edge and hyperscale
The data center, as we have come to know it, has changed. With bandwidth needs driven by trends such as wearable technology and big data, we see a shift in how organisations are viewing, building and planning their data centers.
We see many organisations moving new data center capacity to leased co-location facilities and the public cloud. When organisations choose to build their own data centers, the facilities need to be more efficient and achieve higher density.
With changes taking place in how technologies are used and valued within the enterprise, I believe several shifts will happen in the near future.
Shift from storage to compute and on-demand access
Previous generations of data centers focused primarily on storage of information and disaster recovery. Geographic diversity was required for backup and data was retrieved on a periodic basis.
Now the focus has shifted to analysing and processing data for on-demand access. The rise of mobility and wearable technology creates latency requirements that have never been seen before.
Consumers and business users alike expect on-demand access to data from the cloud with the same user experience as accessing data that resides on the device. This results in data centers that are far more distributed. The most efficient way for most businesses to achieve this is with cloud computing.
Where the growth is happening
Data centers will need to be more efficient and achieve higher density. From a service provider and co-location perspective, there will be large growth in distributed computing. The biggest wave of growth will be in point of presence (PoP) data centers, supporting content delivery networks for service providers as well as promoting network virtualisation and software-defined networking.
A combination of growth within PoP and co-location will increase the need for interconnecting or peering between service providers.
Bringing compute power to the edge
A big area of expansion in the coming year will be moving computing power to the edge of the network.
We are seeing service providers push as much computing capacity to the edge of the network as possible, reducing latency by cutting the number of 'hops' the data has to take in order to reach the end user.
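To make the hop argument concrete, here is a minimal sketch that models one-way latency as the sum of per-hop delays. The hop counts and delay figures are hypothetical illustrations, not measurements from any real network.

```python
# Minimal sketch: end-to-end latency modelled as the sum of per-hop delays.
# The hop counts and per-hop delays below are hypothetical illustrations,
# not measurements from any real network.

def end_to_end_latency_ms(per_hop_delays_ms):
    """Total one-way latency is the sum of the delay at each hop."""
    return sum(per_hop_delays_ms)

# A request served from a distant, centralised data center: many hops.
central_path = [0.5, 2.0, 8.0, 15.0, 15.0, 8.0]  # access, metro, core, etc.
# The same request served from a nearby edge PoP: far fewer hops.
edge_path = [0.5, 2.0, 3.0]

print(f"central: {end_to_end_latency_ms(central_path):.1f} ms")
print(f"edge:    {end_to_end_latency_ms(edge_path):.1f} ms")
```

Fewer hops means fewer queuing and propagation delays to add up, which is the whole appeal of pushing compute outward.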
The focus is shifting from simply storing data to running algorithms that manipulate and analyse it. As we use data this way, we need to reduce latency.
Ten years ago, in the era of desktop programs, we would pull up a program on our laptops that took some time to load, and we would look at it for long periods a couple of times a day. Now we have shifted towards an app-driven world where we look at data hundreds of times a day in shorter bursts. Users are starting to expect data to be predictive, with information served up instantly from the cloud.
As an example, look at how social networks launched in the early 2000s. One factor that limited growth during the first few years was the need to increase the number of servers available.
Today, a new social network can have instant access to nearly unlimited compute resources on every continent through cloud services. This provides instant scalability, especially for start-ups and tech companies, so naturally small and medium-sized businesses are moving in the same direction.
One aspect playing into the growth of computing power at the edge of the network is the modular data center. We are seeing a trend of hyperscale operators and service providers deploying modular data centers at the base of cell sites to bring compute as close to the consumer's point of use as possible. They are deploying an appropriately sized data center at a location geographically close to its users, which cuts down latency.
How DCIM and ITSM play in the mix
As you build out these data centers, it becomes a game of how efficiently you can run these facilities. You can't afford to have an inefficient data center. You need to know exactly where everything is, how it is being used and how it is powered.
Any form of inefficiency in the data center can be costly, and data center infrastructure management (DCIM) will be paramount in keeping these data centers running smoothly.
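One concrete efficiency metric that DCIM tools commonly track is power usage effectiveness (PUE), the ratio of total facility power to the power consumed by the IT equipment alone. A minimal sketch, using hypothetical meter readings:

```python
# Minimal sketch of one efficiency metric DCIM tools commonly track:
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt goes to IT gear; readings are hypothetical.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings: 1,200 kW at the utility feed, 800 kW at the racks.
print(f"PUE: {pue(1200.0, 800.0):.2f}")  # 1.50 -> 400 kW goes to cooling/overhead
```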
The hype around hyperscale
The scale of demand from the sheer number of consumers shopping online has caused several high-profile outages.
These events will drive organisations to move some of their operations to the cloud, into hyperscale data centers. This will give them the ability to flex into cloud capacity when their own infrastructure becomes stressed.
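A minimal sketch of that flex idea, often called cloud bursting: overflow work is routed to cloud capacity once on-premises utilisation crosses a threshold. The threshold and names here are hypothetical.

```python
# Minimal sketch of "flexing into the cloud" (often called cloud bursting):
# when utilisation of on-premises capacity crosses a threshold, overflow work
# is routed to hyperscale cloud capacity. Threshold and names are hypothetical.

BURST_THRESHOLD = 0.80  # burst once on-prem utilisation exceeds 80%

def route_request(on_prem_utilisation: float) -> str:
    """Decide where to serve a request based on current on-prem load."""
    if on_prem_utilisation > BURST_THRESHOLD:
        return "cloud"    # flex into hyperscale capacity under stress
    return "on_prem"      # normal operation stays in-house

for load in (0.55, 0.78, 0.92):
    print(f"utilisation {load:.0%} -> serve from {route_request(load)}")
```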
Some phenomena happening within the hyperscale arena include:
- Uninterrupted, low-latency streaming services for music, video and information are spurring the growth of hyperscale. Users want streaming information without delay.
- More people are moving their compute services to the edge of the network, where copies of static content can exist in multiple places and be served without added latency (see the sketch below).
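A minimal sketch of that second point, assuming a simple read-through edge cache: an edge node keeps a local copy of static content and only returns to the origin on a miss. The paths, content and node name are hypothetical, and real CDNs add expiry, invalidation and consistency on top.

```python
# Minimal sketch of serving static content from the edge: an edge node keeps a
# local copy and only goes back to the origin on a miss. Paths, content and the
# node name are hypothetical; real CDNs add expiry, invalidation, consistency.

ORIGIN = {"/logo.png": b"...image bytes...", "/app.js": b"...script..."}

class EdgeNode:
    def __init__(self, name: str):
        self.name = name
        self.cache: dict[str, bytes] = {}

    def get(self, path: str) -> bytes:
        if path in self.cache:   # hit: served locally, no long round trip
            return self.cache[path]
        body = ORIGIN[path]      # miss: fetch once from the origin
        self.cache[path] = body  # keep a copy for subsequent users
        return body

node = EdgeNode("pop-london")
node.get("/logo.png")  # first request pays the trip to the origin
node.get("/logo.png")  # later requests are served from the edge copy
```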
By John Schmidt, CommScope data center solutions lead