Our blog series, “How to Speak Like a Data Center Geek,” works to demystify the complexities of the data center, and this time, we’re taking on virtualization. This is a topic that has managed to remain on the cutting edge for more than five decades, ever since virtualization technology was applied to software starting in the 1960s.
Virtualization is, in some sense, about illusion, though not the kind that involves, um, spitting out live frogs. It can create what the O’Reilly Media book “Virtualization: A Manager’s Guide” calls “the artificial view that many computers are a single computing resource or that a single machine is really many individual computers.”
Or: “It can make a single large storage resource appear to be many smaller ones or make many smaller storage devices appear to be a single device.”
The goals of virtualization include:
- Higher performance levels
- Improved scalability and agility
- Better reliability/availability
- A unified security and management domain
Whatever the goal, odds are virtualization technology is at work in your data center right now. In this “Data Center Geek” entry, we’ll look at a few different layers of virtualization.
First, we start with a baseline definition. Virtualization is a way to abstract applications and their underlying components from the hardware supporting them and present a logical or virtual view of these resources. This logical view may be strikingly different from the physical view.
Consider a virtually partitioned hard drive, for example. Physically, it’s plainly just one hard drive. But virtualization lets us draw a logical division across it, creating two separate drives that operate independently of each other.
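To make the logical-versus-physical distinction concrete, here is a toy model in Python. The class and method names are made up for illustration (this is not any real volume-manager API): one physical block store is presented as two independent logical “drives,” each of which can only see its own slice.

```python
# Toy model of virtual partitioning: one physical disk, two logical views.

class PhysicalDisk:
    def __init__(self, num_blocks):
        # One contiguous physical store of fixed-size blocks.
        self.blocks = [b""] * num_blocks

class LogicalDrive:
    """A virtual view over a slice of the physical disk."""
    def __init__(self, disk, start, length):
        self.disk, self.start, self.length = disk, start, length

    def write(self, block_no, data):
        if not 0 <= block_no < self.length:
            raise IndexError("write outside this logical drive")
        self.disk.blocks[self.start + block_no] = data

    def read(self, block_no):
        if not 0 <= block_no < self.length:
            raise IndexError("read outside this logical drive")
        return self.disk.blocks[self.start + block_no]

disk = PhysicalDisk(100)
drive_c = LogicalDrive(disk, 0, 50)   # first half of the physical disk
drive_d = LogicalDrive(disk, 50, 50)  # second half

drive_c.write(0, b"c-data")
drive_d.write(0, b"d-data")
print(drive_c.read(0))  # each drive sees only its own blocks
print(drive_d.read(0))
```

Each logical drive addresses its blocks starting from zero and cannot reach past its own boundary, even though both ultimately live on the same physical device — which is exactly the illusion described above.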
Access virtualization

This layer allows individuals to work from wherever they are, using whatever networking media and whatever endpoint device is available. Access virtualization technology makes it possible for nearly any type of device to access nearly any type of application without forcing the individual or the application to know too much about the underlying technology.
Application virtualization

This technology works above the operating system, making it possible for applications to be encapsulated and allowing them to execute on older or newer operating systems that would normally pose incompatibilities. Some forms of this technology allow applications to be “streamed” down to remote systems, execute there and then be removed. This approach can increase levels of security and prevent data loss.
Processing virtualization

This technology is the current media darling. This layer hides the physical hardware configuration from system services, operating systems or applications. One type makes it possible for one system to appear to be many, so it can support many independent workloads. The other type makes it possible for many systems to be viewed as a single computing resource.
Network virtualization

This layer can hide the actual hardware configuration from systems, making it possible for many groups of systems to share a single, high-performance network while each thinks it has a network all to itself. See? Illusion.
Network virtualization can use system memory to provide caching, or system processors to provide compression or eliminate redundant data to enhance network performance.
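The trade described here — spending processor cycles to save network capacity — can be sketched in a few lines of Python using the standard-library `zlib` module. The payload is invented, repetitive log-style data; real traffic compresses less predictably.

```python
import zlib

# Sketch: trading CPU for bandwidth. Compressing a payload before it
# crosses the (virtualized) network shrinks the bytes on the wire.
payload = b"GET /index.html HTTP/1.1\r\n" * 500

compressed = zlib.compress(payload)
print(len(payload), "bytes raw ->", len(compressed), "bytes on the wire")

# The receiving side spends its own CPU to restore the original stream.
restored = zlib.decompress(compressed)
assert restored == payload
```

Eliminating redundant data (deduplication) works on the same principle: compute on either end of the link so fewer bytes have to cross it.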
Storage virtualization

Like the network virtualization layer, this layer hides where storage systems are and what type of device is actually storing applications and data. It allows many systems to share the same storage devices without knowing that others are also accessing them. This technology also makes it possible to take a snapshot of a live system so that it can be backed up without hindering online or transactional applications.
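A minimal sketch of the snapshot idea, assuming a dict-backed toy “volume” (real storage arrays do this far more efficiently at the block layer, typically with copy-on-write): the snapshot captures a point-in-time image, and writes to the live volume continue without disturbing it.

```python
# Toy point-in-time snapshot of a live key-value "volume".

class Volume:
    def __init__(self):
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

    def snapshot(self):
        # Freeze the current state; later writes to the live volume
        # do not affect this image.
        return dict(self.data)

vol = Volume()
vol.write("orders.db", "v1")
backup = vol.snapshot()       # point-in-time image handed to backup

vol.write("orders.db", "v2")  # live writes continue uninterrupted
print(backup["orders.db"])    # the snapshot still shows the old value
print(vol.data["orders.db"])  # the live volume shows the new one
```

The backup job reads from the frozen image while transactions keep hitting the live volume — no downtime for either side.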
Article by Jim Poole, Equinix blog network