
Speak like a data center geek: Virtualization

09 Dec 16

Our blog series, “How to Speak Like a Data Center Geek,” works to demystify the complexities of the data center, and this time, we’re taking on virtualization. This is a topic that has managed to remain on the cutting edge for more than five decades, ever since virtualization technology was applied to software starting in the 1960s.

Virtualization is, in some sense, about illusion, though not the kind that involves, um, spitting out live frogs. It can create what the O’Reilly Media book “Virtualization: A Manager’s Guide” calls “the artificial view that many computers are a single computing resource or that a single machine is really many individual computers.”

Or: “It can make a single large storage resource appear to be many smaller ones or make many smaller storage devices appear to be a single device.”

The goals of virtualization include:

  • Higher performance levels
  • Improved scalability and agility
  • Better reliability/availability
  • A unified security and management domain

Whatever the goal, odds are virtualization technology is at work in your data center right now. In this “Data Center Geek” entry, we’ll look at a few different layers of virtualization.

Virtualization

First, we start with a baseline definition. Virtualization is a way to abstract applications and their underlying components from the hardware supporting them, presenting a logical, or virtual, view of those resources. This logical view may be strikingly different from the physical view.

Consider a virtually partitioned hard drive, for example. Physically, it’s plainly just one hard drive. But virtualization allows us to impose a logical division that presents two separate hard drives, each operating independently, so the available storage can be organized and used more efficiently.
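The one-physical-device, many-logical-views idea can be sketched in a few lines of Python. This is a toy model, not a real volume manager; the class and field names here are invented purely for illustration:

```python
class PhysicalDisk:
    """A toy model of a single physical drive with a fixed capacity."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.partitions = []

    def partition(self, *sizes_gb):
        """Divide the one physical disk into independent logical drives."""
        if sum(sizes_gb) > self.capacity_gb:
            raise ValueError("partitions exceed physical capacity")
        self.partitions = [{"size_gb": s, "files": {}} for s in sizes_gb]
        return self.partitions


disk = PhysicalDisk(500)              # one physical 500 GB drive...
c_drive, d_drive = disk.partition(200, 300)  # ...seen as two logical drives

# Each logical drive operates independently of the other.
c_drive["files"]["report.txt"] = "quarterly numbers"
```

The operating system (and the user) deals only with the two logical drives; the single physical device underneath is the part the illusion hides.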

Access virtualization

This layer allows individuals to work from wherever they are, while using whatever networking media and whatever endpoint device is available. Access virtualization technology makes it possible for nearly any type of device to access nearly any type of application without forcing the individual or the application to know too much about the underlying technology.
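The decoupling of endpoint from application can be pictured as a remote-display model: the application runs centrally, and every endpoint, whatever it is, receives only the finished view. A toy sketch (all function names are invented for illustration):

```python
def run_application():
    """The real app executes centrally; it knows nothing about endpoints."""
    return "rendered screen: dashboard"


def access_from(device_type):
    """Any device gets the same view; no app logic runs on the endpoint."""
    return {"device": device_type, "screen": run_application()}


laptop = access_from("laptop")
phone = access_from("phone")
```

Because the application never executes on the endpoint, neither side needs to know much about the other's underlying technology.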

Application virtualization

This technology works above the operating system, making it possible for applications to be encapsulated and allowing them to execute on older or newer operating systems that would normally pose incompatibilities. Some forms of this technology allow applications to be “streamed” down to remote systems, execute there and then be removed. This approach can increase levels of security and prevent data loss.
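The "stream, execute, remove" cycle can be sketched with standard-library Python: the application travels as data, runs in a temporary sandbox on the remote system, and leaves nothing behind. A toy model, not a real application-virtualization product:

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A hypothetical packaged application: the code travels as data.
app_package = textwrap.dedent("""
    print("hello from a streamed app")
""")


def stream_and_run(package: str) -> str:
    """Deliver the app to a host, execute it, then remove every trace."""
    with tempfile.TemporaryDirectory() as workdir:   # isolated sandbox
        app_path = os.path.join(workdir, "app.py")
        with open(app_path, "w") as f:
            f.write(package)                         # "stream" it down
        result = subprocess.run([sys.executable, app_path],
                                capture_output=True, text=True)
        return result.stdout.strip()
    # leaving the with-block deletes workdir: nothing persists on the endpoint


output = stream_and_run(app_package)
```

The automatic cleanup is the point: once the sandbox is gone, no application files or data remain on the remote system, which is where the security and data-loss benefits come from.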

Processing virtualization

This technology is the current media darling. This layer hides the physical hardware configuration from system services, operating systems or applications. One type makes it possible for a single system to appear to be many, so it can support many independent workloads; the other makes it possible for many systems to be viewed as a single computing resource.
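Both directions can be sketched as simple resource arithmetic (a toy model with invented names, not a real hypervisor or cluster manager):

```python
# Direction 1: one physical host carved into several virtual machines.
def carve_host(total_cpus, total_ram_gb, vm_count):
    """Split one machine's resources into equal, independent slices."""
    return [{"cpus": total_cpus // vm_count, "ram_gb": total_ram_gb // vm_count}
            for _ in range(vm_count)]


# Direction 2: several physical machines presented as one aggregate resource.
def pool_hosts(hosts):
    """Present many machines as a single large computing resource."""
    return {"cpus": sum(h["cpus"] for h in hosts),
            "ram_gb": sum(h["ram_gb"] for h in hosts)}


vms = carve_host(total_cpus=16, total_ram_gb=64, vm_count=4)   # 1 -> many
cluster = pool_hosts([{"cpus": 8, "ram_gb": 32}] * 3)          # many -> 1
```

In the first direction each workload sees only its own slice; in the second, a workload sees one big machine and never learns how many physical boxes are underneath.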

Network virtualization

This layer can hide the actual hardware configuration from systems, making it possible for many groups of systems to share a single, high-performance network while thinking each of those groups has a network all to itself. See? Illusion.

Network virtualization can use system memory to provide caching, or system processors to provide compression or eliminate redundant data to enhance network performance.
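The shared-network illusion works roughly the way VLAN tagging does: every frame travels on the same physical wire, but a tag keeps each group's traffic invisible to the others. A toy sketch, not a real switch:

```python
# One shared physical network, modeled as a single list of frames.
shared_wire = []


def send(vlan_id, payload):
    """Every frame goes onto the same wire, carrying its group's tag."""
    shared_wire.append({"vlan": vlan_id, "payload": payload})


def receive(vlan_id):
    """Each group sees only the frames carrying its own tag."""
    return [f["payload"] for f in shared_wire if f["vlan"] == vlan_id]


send(10, "finance traffic")
send(20, "engineering traffic")
send(10, "more finance traffic")

# Group 10 believes it has a private network: group 20's frames never appear.
finance_view = receive(10)
engineering_view = receive(20)
```

Each group's view looks like a dedicated network, even though only one physical network exists.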

Storage virtualization

Like the network virtualization layer, this layer hides where storage systems are and what type of device is actually storing applications and data. It allows many systems to share the same storage devices without knowing that others are also accessing them. This technology also makes it possible to take a snapshot of a live system so that it can be backed up without hindering online or transactional applications.
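The live-snapshot idea can be sketched as a point-in-time copy that later writes leave untouched, which is the intuition behind copy-on-write snapshots. A toy model with invented names, not a real storage product:

```python
class Volume:
    """Toy volume supporting a point-in-time snapshot of a live system."""

    def __init__(self):
        self.blocks = {}           # the live, changing data
        self.snapshot_blocks = None

    def snapshot(self):
        """Freeze the current state; later writes won't disturb it."""
        self.snapshot_blocks = dict(self.blocks)  # copies references only

    def write(self, key, data):
        """The live volume keeps changing while the backup runs."""
        self.blocks[key] = data


vol = Volume()
vol.write("orders.db", "v1")
vol.snapshot()                  # the backup reads from this frozen view
vol.write("orders.db", "v2")    # transactions continue uninterrupted
```

The backup process reads the frozen snapshot while the live volume keeps taking writes, which is why online and transactional applications never have to pause.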

Article by Jim Poole, Equinix blog network
