The future of data centres and how they will look and behave is a hotly contested topic.
The on-premises data centre, once famously declared ‘dead’ in a popular Gartner article by David Cappuccio, has proved steadfast, digging its nails in with hybrid cloud solutions.
To get a better idea of the situation, we spoke with James Leavers, chief technology officer at Cloudhelix, a specialist in high-performance cloud hosting, managed services and expert support.
So really, what are the main benefits of an on-premise cloud solution?
"Public cloud is not yet available in every country, so any project with strict data residency requirements can still require on-premise private cloud. Completely bespoke hardware configuration is also possible with an on-premise cloud solution,” says Leavers.
“While it is true that workloads with huge datasets, such as business intelligence (BI) or machine learning (ML), are suited to public cloud, it can be an expensive move. So if an organisation has access to existing infrastructure, it is always useful to get as much life out of that hardware as possible, especially for analysis and data processing workloads that have statically high resource requirements.”
In terms of what the future cloud solution will look like, Leavers says there are a number of external factors that will determine its evolution.
“Cloud providers would like to gobble up the high-value, large on-premise data sets mentioned above. In the short term, this may mean more rollout of hybrid cloud, both by larger public cloud operators that have traditionally avoided these deployments, and by smaller, more bespoke companies,” says Leavers.
“However, deploying equipment on-premise is only the first step: what consumers of these products need are seamless links to complementary data centre-based cloud services. Connectivity to the cloud and seamless extension of on-premise networks will become ever more important.”
Leavers says the much talked about ‘edge’ will have a part to play too.
“Some workloads will move in the opposite direction, from the data centre out to the edge. As software development transitions further and further from monolithic applications to smaller microservices, I expect the use of serverless functions to become more widespread,” Leavers says.
“This, in turn, will promote the use of edge computing services in which functions can run outside the data centre, at the closest possible point to the user. Naturally, CDN providers are poised to take advantage of this, as shown by the AWS Lambda@Edge service or Cloudflare Workers.”
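To make the edge-function pattern Leavers describes concrete, here is a minimal sketch in the style of an AWS Lambda@Edge viewer-request handler. The event shape follows CloudFront’s format, but the `/ping` route and the handler itself are hypothetical examples, not part of any product described above:

```python
# Hypothetical Lambda@Edge viewer-request handler. It runs at the CDN edge
# location nearest the user, so simple requests never travel to the origin
# data centre at all.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # Answer lightweight requests directly at the edge
    if request["uri"] == "/ping":
        return {
            "status": "200",
            "statusDescription": "OK",
            "body": "pong",
        }

    # Everything else passes through to the origin unchanged
    return request
```

Returning the request object unmodified forwards it to the origin; returning a response object short-circuits the round trip entirely, which is the latency win edge computing is after.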
Leavers says change is certainly afoot, but the on-premises data centre isn’t going anywhere just yet.
“Regulatory issues combined with the spread of public cloud to all countries will play a big part here. If a legal firm is analysing and processing large volumes of case data, and if, legally, this case data is not permitted to leave the country, then public cloud cannot be used. Either on-premises infrastructure or private cloud in a local data centre will be required,” says Leavers.
“Compliance standards have evolved over time to suit the changing cloud landscape. For example, the evolution in PCI DSS v3 in 2018, or the removal of the requirement for HIPAA-compliant customers on AWS to have dedicated instances in 2017. But remember, even where regulation permits you to use an external provider, the concept of shared responsibility is key. Even if a cloud infrastructure is compliant, the customer must make sure that they are using it in a compliant way, as they are still ultimately responsible for the security of their data - for example, cardholder data (CHD) under PCI DSS or Protected Health Information (PHI) under HIPAA.”
Another popular talking point is the role of AI in modern data centre operations.
“To focus on machine learning, as a branch of artificial intelligence, it is becoming more and more prevalent; and not just for complex quantitative trading strategies at hedge funds, but also for things that affect our day-to-day lives, like the Netflix recommendation system (for more information on this interesting topic, search for the ‘Netflix at Spark+AI Summit 2018’ blog),” says Leavers.
“So while there are case-studies featuring big names - for example, Uber using Microsoft’s Face API to verify drivers via selfies - there are also many other ML use cases that can be used by MSPs to provide next-generation monitoring and anomaly detection. As software moves towards a distributed model with far more components, it becomes ever more important to cut down on storms of alerts and get straight to the root cause.”
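The anomaly-detection idea Leavers mentions can be sketched very simply. The toy below is illustrative only, not any MSP’s actual tooling: it flags metric samples that deviate sharply from a rolling baseline, which is the basic building block behind cutting down storms of alerts to a root cause:

```python
import statistics

def find_anomalies(samples, window=20, threshold=3.0):
    """Return indices of samples that lie more than `threshold`
    standard deviations from the mean of the preceding `window`
    samples (a simple rolling z-score detector)."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        # Skip flat histories (stdev of 0) to avoid division noise
        if stdev and abs(samples[i] - mean) > threshold * stdev:
            anomalies.append(i)
    return anomalies

# Example: a latency series hovering around 100 ms, then a 500 ms spike
latencies = [100.0, 102.0] * 10 + [500.0]
print(find_anomalies(latencies))
```

Production systems replace the rolling z-score with learned models that account for seasonality and correlate alerts across components, but the shape of the problem is the same: separate the one meaningful deviation from the noise.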
There are many interesting trends emerging in data centre circles, with the aforementioned machine learning a prominent one.
“Increased use of machine learning has driven the uptake of liquid cooling at large scale. For example, Google's announcement of version 3.0 of their Tensor Processing Units in May 2018 divulged that this generation of their custom hardware requires liquid cooling. Air is no longer good enough. This is a good example of the general trend of the hyperscale providers to move more and more towards completely custom hardware stacks,” says Leavers. “In the long term, Microsoft's interest in using DNA as a storage medium for archive data is an interesting concept - while it might sound strange initially, if and when it becomes practical the density (approximately 1 exabyte per cubic millimetre) and durability (a half-life of 500 years or more) could be incredibly useful.”
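Taking the density figure Leavers quotes at face value, a quick back-of-envelope calculation shows why DNA is attractive for archives. The 10-zettabyte archive below is an arbitrary example size chosen for illustration, not a cited figure:

```python
# Back-of-envelope check using the ~1 EB per cubic millimetre figure
# quoted above. The archive size is a made-up example.
EXABYTE = 10**18   # bytes
ZETTABYTE = 10**21 # bytes

density_bytes_per_mm3 = 1 * EXABYTE     # quoted density of DNA storage
archive_bytes = 10 * ZETTABYTE          # hypothetical 10 ZB archive

volume_mm3 = archive_bytes / density_bytes_per_mm3
volume_cm3 = volume_mm3 / 1000          # 1 cm^3 = 1,000 mm^3
print(volume_cm3)  # 10.0 — the whole archive fits in ~10 cubic centimetres
```

At that density, an archive that would otherwise fill racks of tape libraries condenses into something that fits in the palm of a hand, which is the point Leavers is making.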
And finally, I asked what sets Cloudhelix’s offering apart from others on the market.
“One word: people. Whether it be the 24x7 bespoke monitoring and support from our NOC, or the attentive natures of our friendly Client teams, we’ll always go the extra mile. Our flexibility to build and support completely bespoke systems is what gives us the edge - agility is key,” concludes Leavers.