
IT scalability on demand: Pay-per-use arrives in the data center

05 Apr 2016

The past few years have seen many dramatic changes in the way we utilise IT. Mobile devices and the cloud have altered both how we interact with enterprise IT systems and how we utilise back-end IT services.

The resulting consumerisation of IT and the explosive growth of pay-as-you-go cloud services have led users to expect and demand the same simplicity and speed when deploying new applications, workloads, and users internally.

Additionally, the emergence of big data analytics, and the data deluge that comes with it, has put further pressure on IT resources.

As a result, businesses and service providers often find themselves in the difficult position of striking a balance between over- and under-provisioning their internal IT resources.

Furthermore, it can be difficult to determine whether the capacity on hand is adequate to meet user service level agreements (SLAs), or, for that matter, to understand how much capacity (server, storage, and networking) is actually available.

What’s been tried

These challenges are often exacerbated by older technology in need of upgrading or migration to new platforms. The recent end of support for Windows Server 2003, for example, raises concerns about the vulnerability of those servers to new threats.

Two traditional approaches have been used to address these issues.

First, some choose to over-provision on-premises infrastructure to handle peaks and growth, which can lead to expensive capital equipment sitting idle most of the time.

Other firms turn to public cloud services such as Microsoft Azure to handle spikes in demand, for dev-test projects, as overflow capacity, or to move workloads to the cloud entirely. But governance, security, or privacy concerns may demand that certain workloads remain behind the firewall and on premises.

Flexible capacity – best of both worlds

To address these challenges, companies are increasingly looking at 'pay-as-you-go' flexible capacity solutions that enable IT to scale quickly to meet growth needs using available buffer capacity, without the usual long procurement process. This approach offers a range of benefits in process, technology, and finance. Most importantly:

Capacity on demand: Companies first undergo a joint assessment of current and anticipated demand with their technology vendor. The combination of servers, storage, networking, and software is then installed based on current demand, plus a buffer to support projected growth or peak demand. If a company uses up its buffer, additional capacity can be added through the change management process to restore spare capacity beyond the initial projections.

Flexible finance: With a flexible capacity model, businesses only pay for what they use, per GB, network port, virtual machine, or server instance actually consumed. Monthly, service-based billing requires no up-front expenditure, reserving cash for other business needs.

No forklift upgrade: If a company already has a substantial IT infrastructure, a good pay-as-you-go model can incorporate existing supported multivendor systems and provide a single pane of glass to manage capacity across all eligible resources, including on-premises and cloud assets.
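To make the metered billing idea above concrete, here is a minimal Python sketch of a pay-per-use monthly bill. The resource names and unit rates are invented for illustration only and do not reflect any actual vendor pricing.

```python
# Illustrative pay-per-use billing: only consumed units are charged,
# so there is no up-front capital expenditure for idle capacity.
# All rates below are hypothetical examples, not real pricing.

RATES = {
    "storage_gb": 0.03,    # per GB consumed per month (illustrative)
    "network_port": 5.00,  # per active network port per month (illustrative)
    "vm_instance": 20.00,  # per running VM per month (illustrative)
}

def monthly_bill(usage: dict) -> float:
    """Sum metered usage multiplied by its unit rate for the month."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)

# Example month: 500 GB stored, 4 ports active, 10 VMs running.
print(monthly_bill({"storage_gb": 500, "network_port": 4, "vm_instance": 10}))
# 500*0.03 + 4*5.00 + 10*20.00 = 235.0
```

A quieter month with fewer VMs would simply produce a smaller bill, which is the contrast with over-provisioned capital equipment that costs the same whether or not it is used.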

For service providers, the flexible approach can match cash outflows to cash receipts, offers the flexibility to support growth or shrinkage as the business climate evolves, and provides enterprise-class support for the IT environment.

Whether your enterprise wants a hybrid IT model, seeks financial predictability for IT expenditures, or is positioning IT as the services broker for the business, a flexible capacity approach can help optimise your company's resources and drive business growth.

By Alan Hyde, Vice President and General Manager, Enterprise Group, Hewlett Packard Enterprise South Pacific
