Getting Real With Virtualization

By Andy Lewis

The very essence of virtualization is to fool us. It takes what is real and makes it far more valuable by making its “realness” disappear. That ability to fool us is a key enabler for solutions such as IT as a service (ITaaS) and the cloud: solutions like ITaaS demand ubiquity, which becomes possible when we can create granular processing entities using the virtualization techniques now available. I say “now available” because virtualization was not introduced as a single disruptive technology; rather, it has been realized through a progressive evolution of hardware and software enhancements. So what’s the current situation in the virtualized data center? Are we exploiting virtualization in the ways we should, rather than in the ways we are used to?

Virtualization has continued to mature and fool us over the years. With each layer of abstraction added by hardware and software, and more recently by service providers, we rarely know where our data is stored and processed. Policies such as “virtualize first” were scorned just a few years ago, but have since become the norm as virtual platforms have become dependable enough to meet business needs.

Let’s look at some financial common sense. Virtualization has allowed us to get greater utilization from our assets. It has reduced the need for dedicated IT equipment, such as servers, storage and networks, along with the physical overhead that goes with them. Fewer physical assets to manage and fewer physical cable interconnects mean less data center floor space and a reduced need for power and cooling. Then there are the less tangible benefits, such as greater flexibility in the way we operate the environment and the ability to deliver IT solutions faster.

So, why spend any time questioning the value proposition of virtualization? When you can do more with less, it just makes sense! Let’s take a closer look at how the benefits of virtualization have changed in recent years and see whether new considerations are needed to make sure this technology keeps working for us.

Virtualization Developments and Trends

Mainframe computers have used holistic virtualization techniques for many years. The UNIX and x86 processing environments emerged using component-level virtualization: a virtualized storage subsystem and a separate virtual network, each operating independently with vendor-specific “smarts” and connected with very little virtualization-aware integration. The processor then began running another layer of embedded guest systems that could access these virtualized components through the “hypervisor,” which virtualized them yet again into various formats. We created a logical set of systems based on objects and pointers, but the underlying infrastructure was still too complex. To resolve this, we added appliances, reference architectures and converged infrastructure with a higher-level virtualization engine that brings the components together in a more integrated fashion.

What else has changed? Capacity, thanks to the effect of Moore’s law. As Gordon Moore first observed in 1965, and later refined, the number of components on an integrated circuit doubles roughly every two years. Year after year, we have come to expect that growing compute density across the IT ecosystem. It’s something we have come to rely on, and it has been great for supporting virtualization. We can do “moore” with less!
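
To put that doubling rate in concrete terms, here is a minimal back-of-the-envelope sketch in Python; the relative baseline, the two-year doubling period and the time horizon are illustrative assumptions, not data from this article.

    # Back-of-the-envelope illustration of Moore's-law-style doubling.
    # Assumptions (for illustration only): capacity doubles every 2 years,
    # starting from a relative baseline of 1.0.

    def projected_capacity(baseline: float, years: float,
                           doubling_period: float = 2.0) -> float:
        """Capacity after `years`, doubling every `doubling_period` years."""
        return baseline * 2 ** (years / doubling_period)

    if __name__ == "__main__":
        for years in (2, 4, 6, 8, 10):
            factor = projected_capacity(1.0, years)
            print(f"After {years:>2} years: roughly {factor:.0f}x today's capacity")
        # Over a decade that is about a 32x increase, far faster than most
        # business workloads grow, which is the gap the article turns to next.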

While considering trends, let’s look at this in a slightly different way. From the viewpoint of Moore’s law, we have hit an inflection point: given the exponential growth in processing capability available in current technologies, most businesses have not grown their workloads at anywhere near that rate. This shouldn’t be a problem, since we can simply add more virtual systems and applications to a small physical footprint, increasing the virtual-to-physical ratio and lowering unit cost. But when is enough enough? What should be the maximum number of virtual systems on a physical platform? What is the maximum utilization rate in this shared environment? Forty percent, 60 percent or 80 percent?
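
One rough way to reason about those ceilings is to work out how many average virtual machines fit on a single host before a chosen utilization limit is reached. The sketch below does exactly that; the host specification, per-VM demand and ceilings are hypothetical figures chosen for illustration, not sizing guidance from this article.

    # Rough consolidation-planning sketch: how many VMs fit on one host
    # if we cap average utilization at a chosen ceiling?
    # All figures below are illustrative assumptions, not vendor guidance.

    def max_vms_per_host(host_cpu_ghz: float, host_ram_gb: float,
                         vm_cpu_ghz: float, vm_ram_gb: float,
                         utilization_ceiling: float) -> int:
        """VMs that fit while keeping average CPU and RAM use under the ceiling."""
        cpu_limit = int(host_cpu_ghz * utilization_ceiling // vm_cpu_ghz)
        ram_limit = int(host_ram_gb * utilization_ceiling // vm_ram_gb)
        return min(cpu_limit, ram_limit)  # the scarcer resource decides

    if __name__ == "__main__":
        # Hypothetical host: 2 sockets x 16 cores x 2.5 GHz, 512 GB RAM.
        host_cpu, host_ram = 2 * 16 * 2.5, 512
        # Hypothetical average VM: 2 GHz of CPU demand, 8 GB of RAM.
        vm_cpu, vm_ram = 2.0, 8.0
        for ceiling in (0.4, 0.6, 0.8):
            n = max_vms_per_host(host_cpu, host_ram, vm_cpu, vm_ram, ceiling)
            print(f"At a {ceiling:.0%} ceiling: about {n} VMs per host")

Whichever resource runs out first, CPU or memory, sets the consolidation ratio, which is why the answer to “how many is enough” depends as much on workload shape as on the utilization target itself.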
