More servers and more CPU power used to be the answer to boosting data center performance, but it appears this is no longer the case.
A new research report conducted by Futuriom and sponsored by Mellanox Technologies, 'Untold Secrets of the Efficient Data Center', asserts that with the deceleration of Moore's Law (the observation that the number of transistors in a dense integrated circuit doubles about every two years) and the drive towards domain-specific processors and edge computing, there is a growing focus on solutions that optimise network efficiency.
The report is based on responses from more than 200 data center professionals at director level or higher in the US, the UK, and China. By examining their actual working practices and the key trends they identify, it digs deeper into how they are addressing the challenge of supporting high-power applications such as artificial intelligence and big data analytics across public, private, and hybrid clouds.
According to the study, there is an increasing interest in software-defined virtualisation and network optimisation strategies. What's more, processor offload and SmartNICs (more advanced network interface cards) are now the favoured solutions for improving data center performance, while deploying more servers is least favoured.
Above all, it concludes that the network, a key engine of cloud performance, needs specific adaptations to keep pace with data centers that have ambitions to operate at cloud scale.
“If you want to cut through hype and rumour to find out what is really happening, you ask the people at the coal face,” says Mellanox Technologies VP of marketing Kevin Deierling.
“There was a lot of interest in SmartNICs – only 10 percent of respondents did not know what they were. Their applications included improving the efficiency of VMs and/or containers (56 percent), virtualising and sharing flash storage more efficiently (55 percent), isolating and stopping security threats (47 percent), accelerating hyperconverged infrastructure (50 percent), and enabling SDN (54 percent).”
Futuriom chief analyst Scott Raynovich says there is a clear recognition among data center professionals that network optimisation technologies are a key way to improve data center performance. Potential benefits in upgrading the network identified by respondents include faster application performance (64 percent), stronger security (59 percent), greater flexibility (57 percent), and application reliability (57 percent).
Backing up Raynovich's claims, 84 percent of respondents deemed network infrastructure to be either ‘very important' or ‘important' in delivering artificial intelligence and machine learning.
When respondents were asked which aspect of hyperscale cloud operations they would most like to emulate, highly efficient utilisation of servers and storage topped the list. The next tier of results included the use of flexible converged 25/50/100Gb Ethernet networking for everything (19 percent), automated infrastructure deployment, management, and monitoring (17 percent), and simplified resource provisioning, reporting, and billing (15 percent).
Raynovich says data center operators now see the adoption of network optimisation and SmartNIC technologies as a key step towards building low-latency, high-performance data centers.
“The data center is being reinvented. It's a real challenge to build a cloud infrastructure that can scale to support demanding applications such as big data, analytics, self-driving cars, and artificial intelligence,” says Raynovich.
“The very techniques developed by hyperscale cloud giants are now migrating to the enterprise, where distributed applications now rule. There's more pressure than ever for networks to perform, and new technologies are beginning to be deployed to make sure that networks don't become the bottleneck for the cloud.”