Testing, testing 1-2-3: Three reasons why cloud testing matters

24 Nov 17

It has been nearly three years since an Amazon Web Services senior executive said “Cloud is the new normal”.  

Since that time, the momentum behind cloud migrations has become unstoppable, as enterprises look to take advantage of the agility, scalability and cost benefits of the cloud. 
  
In its 2017 State of the Hybrid Cloud report, Microsoft found that 63% of large and midsized enterprises have already implemented a hybrid cloud environment, consisting of on-premises and public cloud infrastructures.  

Cisco’s latest Global Cloud Index predicted that 92% of enterprise workloads will be processed in public and private cloud data centers, and just 8% in traditional data centers, by 2020.   

So the future is cloudy, with enterprises adopting hybrid cloud strategies using services from a mix of providers.  

But irrespective of the cloud services they use, or the sector in which they operate, all enterprises share common goals: they want their business applications to deliver a quality user experience under all conditions; they want those applications to be secure and resilient; and they want them to run as efficiently as possible.

Shared responsibility 

However, achieving those goals is not always straightforward.  

To paraphrase computer security analyst Graham Cluley, the public cloud is simply somebody else’s computers.

While the provider should offer a strong foundation for high-performance, secure applications, the enterprise cannot abdicate responsibility for the security, availability, performance and management of the processes associated with those applications.  

More importantly, the enterprise is responsible for properly configuring and managing the security controls provided by the cloud provider. 

Let’s examine the challenges enterprises face in ensuring their cloud applications are secure, deliver a quality user experience and are cost efficient. 

Challenge #1: Cloud Security 

Achieving robust security in the cloud is challenging for three reasons.  

First, understanding an organization’s current security levels, where additional protection is needed and where potential vulnerabilities may lie, is difficult regardless of whether the environment is on-premises or in the cloud.

With more and more security products and platforms to manage across complex hybrid environments, maintaining a single, comprehensive view of the security posture becomes increasingly difficult. 

Second, the highly dynamic nature of cloud environments, coupled with an ever-widening cyber threat landscape, requires security in those environments to be similarly flexible and fluid.

Policies need to scale up in line with the infrastructures they are protecting.

Third, there is a shortage of security expertise, with IT teams already stretched managing the tools and processes in place across the hybrid environment.   

Cloud security solutions also generate huge volumes of security events, making it difficult for personnel to prioritize and remediate risks.

Challenge #2: User experience 

Different applications have different SLAs and user expectations – think of the difference between a training sandbox and a real-time online retail application.  

User experience is typically predicated on two things: application performance and service availability.  When these are compromised, user dissatisfaction can quickly translate into loss of business. 
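The availability half of that equation compounds across every component in the delivery chain: if any link is down, the user sees a failure. A minimal sketch of the arithmetic (the component figures are hypothetical illustrations, not from the article):

```python
# End-to-end availability of serially dependent components is the
# product of their individual availabilities, so it is always lower
# than the weakest single link.

def composite_availability(*availabilities):
    """Availability of a chain of components that must all be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Hypothetical chain: cloud infrastructure, network path, load balancer, app
end_to_end = composite_availability(0.9995, 0.9990, 0.9999, 0.9980)
print(f"{end_to_end:.4%}")  # lower than any individual component
```

This is why "four nines" from a cloud provider does not translate into four nines for the user: the application, its network path and its delivery elements each take their own bite out of the total.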

Yet the complexity of multiple design choices in the public cloud, from hardware architectures to instance types optimised for different applications, makes guaranteeing a consistent user experience that much more complicated.  

Factors such as the underlying cloud infrastructure hosting the application, the network connectivity between user and application, the performance of application delivery elements (for example session load balancers), and the actual design and architecture of the application, can all impact the user experience. 

Challenge #3: Cost and efficiency 

Cloud providers offer a variety of options for building cost-effective, scalable and highly available applications.

From utility-based models with on-demand charges to reserved price options and spot instances or price bidding, there is flexibility for an enterprise to choose the model that suits their needs.  The challenge is to identify which is best. 
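One hedged way to frame that choice is to compute the break-even utilisation at which an upfront reserved commitment overtakes pure on-demand pricing. All rates below are made-up illustrations, not real provider quotes:

```python
# Break-even analysis between on-demand and reserved pricing models
# (hypothetical figures for illustration only).

def break_even_hours(on_demand_rate, reserved_upfront, reserved_hourly):
    """Hours of use over the reservation term at which the reserved
    option becomes cheaper than paying the on-demand rate."""
    saving_per_hour = on_demand_rate - reserved_hourly
    if saving_per_hour <= 0:
        return float("inf")  # the reservation never pays off
    return reserved_upfront / saving_per_hour

# e.g. $0.10/h on demand vs $300 upfront plus $0.04/h reserved:
# below ~5,000 hours of use the on-demand model wins, above it the
# reservation does.
hours = break_even_hours(0.10, 300.0, 0.04)
```

Spot or bidding-based capacity changes the calculation again, trading a lower rate for the risk of interruption, which is exactly why the "which model is best" question needs measured workload data rather than guesswork.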

Cost optimisation is a case of weighing price and performance, according to the precise needs of the organization in question. Settings and architecture designs must be optimised to deliver required application auto-scaling, and support demand peaks and troughs as they occur.  

Design choices relating to securing workloads range from security endpoints running inside each instance, to network security appliances in various locations, to a security control offered by the cloud provider. 

Each of these choices operates at different cost rates, impacts application performance in different ways, and delivers various levels of security effectiveness.

Given this complexity, understanding how to select the solutions that are most efficient is not an easy task, unless organizations can model the applications and threat vectors targeting them. 

Meeting the challenges: How testing can provide value 

To meet these challenges, organizations migrating some or all of their workloads to the cloud must be prepared to embed consistent testing into their processes, in both pre-production and production.

There is a direct relationship between test and risk - by getting testing procedures right from the start, enterprises can dramatically reduce their risk exposure, and ensure they successfully harness the full benefits of the cloud. 

In pre-production, before a cloud migration actually takes place, testing can provide quantifiable insights that empower security architects, network architects, and security teams during vendor selection, performance and cost optimisation, scale-up and availability planning, and training.  

For example, on the vendor selection side, assuming the functional requirements are met, procurement managers need to ascertain which public cloud vendor is cost-efficient in terms of price and performance.  

They need to establish which of the available tools for securing application workloads are efficient, secure and, ultimately, ideal for their specific requirements. 

Moving on to questions of performance and cost optimisation, IT and security managers need to confirm how security policies and architectures can be optimised, and what the best settings are for an auto-scaling policy.

These decisions are based on a range of factors, from memory utilization to new connection rates, and again, consolidating and analyzing those factors can only be done via a rigorous, real-world testing process. 
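The kind of policy being tuned here can be sketched as a simple threshold function over those metrics. The metric names and threshold values below are hypothetical examples, not any provider's API:

```python
# Illustrative threshold-based auto-scaling decision: testing under
# realistic load is how the threshold values would actually be chosen.

def scale_decision(memory_util, new_conn_rate,
                   mem_high=0.80, mem_low=0.30,
                   conn_high=1000, conn_low=100):
    """Return +1 to scale out, -1 to scale in, 0 to hold steady."""
    if memory_util > mem_high or new_conn_rate > conn_high:
        return 1   # either resource under pressure: add capacity
    if memory_util < mem_low and new_conn_rate < conn_low:
        return -1  # both metrics quiet: safe to shed capacity
    return 0       # within the comfortable band: no change
```

Set the thresholds too low and the application scales out constantly, burning money; too high and users feel the pressure before capacity arrives. Load testing is what reveals where the band actually sits for a given workload.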

Then there are questions around how the cloud architecture will perform once deployed.

Where are the bottlenecks in the application architecture as it scales?

How fast will applications self-recover from errors, and how will the user experience be impacted if some application services fail? 

Testing from pre- to post-production

Answering these questions requires an extensive pre-production testing program, with realistic loads and modeling threat vectors, as well as failover scenarios.
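When analysing the results of such load runs, tail latency matters more than averages, since a handful of slow responses is what users actually notice. A minimal nearest-rank percentile helper, with made-up sample data:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample that is greater
    than or equal to pct% of the data."""
    ordered = sorted(samples)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[k]

# Hypothetical latency samples (ms) from a load run: the mean looks
# healthy, but the 95th percentile is dominated by the 250 ms outlier.
latencies_ms = [12, 15, 14, 13, 250, 16, 14, 15, 13, 12]
p95 = percentile(latencies_ms, 95)  # 250
```

A pre-production test plan that tracks percentiles like these across scale-up, failover and attack-traffic scenarios gives a far truer picture of the user experience than averaged dashboards.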

This provides the assurance that the cloud architecture will empower rather than restrict the business.  It also enables security engineers and analysts to better understand what they are working with. 

And testing must not end once a cloud environment has gone live. Production-stage, continuous testing is essential in order to monitor for service degradations, while continuous security validation is essential in order to provide security service assurance. 

In conclusion, as cloud is the new normal, continuous testing of cloud workloads needs to be embraced as the new normal too, at all stages of application deployment and delivery.

Testing is the only means of ensuring that organizations can fully realize the benefits of the cloud, without the risks of security breaches, poor user experience, or unnecessary costs. 

Article by Ardy Sharifnia, general manager, Australia and New Zealand, Ixia.
