Getting Real With Virtualization

Posted 03-15-2013

By Andy Lewis

The very essence of virtualization is to fool us. It takes what is real and makes it infinitely more valuable by making its “realness” disappear. This ability to fool us is a key enabler of solutions such as IT as a service (ITaaS) and the cloud. Solutions like ITaaS demand ubiquity, which becomes possible when we can create the granular processing entities afforded by the virtualization techniques now available. I say “now available” because virtualization was not introduced as a single disruptive technology; rather, it has been realized through a progressive evolution of hardware and software enhancements. So what’s the current situation in the world of the virtualized data center? Are we exploiting virtualization in the ways we should, rather than in the ways we are used to?

Virtualization has continued to mature and fool us over the years. With each layer of abstraction added by hardware and software, and more recently by service providers, we rarely know where our data is stored and processed. Policies such as “virtualize first” were scorned just a few years ago, but have since become the norm as the dependability of virtual platforms has grown to meet business needs.

Let’s look at some financial common sense. Using virtualization has allowed us to obtain greater utilization out of our assets. It has reduced the need for dedicated IT equipment, such as servers, storage and networks, along with their associated physical overhead. Fewer physical assets to manage and fewer physical cable interconnects mean less-expensive data center floor space and a reduced need for power and cooling. Then we have the less tangible benefits, such as greater flexibility in the way we operate the environment and the ability to deliver IT solutions faster.
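
To put a rough figure on “more with less,” here is a minimal back-of-the-envelope sketch in Python. The server counts, consolidation ratio and unit costs are illustrative assumptions, not data from any particular environment:

    # Rough consolidation model. Every figure below is an assumption chosen
    # purely for illustration, not a benchmark from a real data center.
    PHYSICAL_SERVERS_BEFORE = 200          # standalone servers, one workload each (assumed)
    CONSOLIDATION_RATIO = 10               # virtual machines per physical host (assumed)
    COST_PER_SERVER_YEAR = 4000            # hardware amortization and maintenance (assumed)
    POWER_COOLING_PER_SERVER_YEAR = 1200   # facilities cost per physical box (assumed)

    # Ceiling division: how many hosts remain after consolidation.
    hosts_after = -(-PHYSICAL_SERVERS_BEFORE // CONSOLIDATION_RATIO)

    def annual_cost(server_count: int) -> int:
        """Hardware plus power and cooling for a given number of physical boxes."""
        return server_count * (COST_PER_SERVER_YEAR + POWER_COOLING_PER_SERVER_YEAR)

    savings = annual_cost(PHYSICAL_SERVERS_BEFORE) - annual_cost(hosts_after)
    print(f"Physical footprint: {PHYSICAL_SERVERS_BEFORE} -> {hosts_after} hosts")
    print(f"Estimated annual savings: ${savings:,}")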

So, why spend any time questioning the value proposition of virtualization? When you can do more with less, it just makes sense! Let’s take a closer look at how the benefits of virtualization have changed over recent years and see if there are some new considerations to ensure this technology is working for us.

Virtualization Developments and Trends

Mainframe computers have used holistic virtualization techniques for many years. The UNIX and x86 processing environments emerged using component-level virtualization: a virtualized storage subsystem and a separate virtual network, each working independently with vendor-specific “smarts” and connected with very little virtualization-aware integration. The processor then began to run another layer of embedded guest systems that could access these virtualized components through the hypervisor, which virtualized them yet again into various formats. We created a logical set of systems based on objects and pointers, but the underlying infrastructure was still too complex. To resolve this, we added appliances, reference architectures and converged infrastructure with a higher-level virtualization engine that brings the components together in a more integrated fashion.

What else has changed? Capacity, namely the effect of Moore’s law. As Gordon Moore first observed in 1965, the number of components on an integrated circuit doubles at a steady pace, roughly every two years in the law’s commonly cited form. Year after year, we have expected to be able to apply this ever-increasing density to the IT ecosystem. It’s something we have come to rely on, and it has been great for supporting virtualization. We can do “moore” with less!
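
Stated as a formula, that rule of thumb means component density scales by a factor of two for every two years that pass. A tiny sketch:

    # Moore's rule of thumb: density doubles roughly every two years,
    # so after t years the component count scales by 2 ** (t / 2).
    def density_factor(years: float) -> float:
        """Relative component density after the given number of years."""
        return 2 ** (years / 2)

    print(f"After 10 years: roughly {density_factor(10):.0f}x the components per chip")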

While considering trends, let’s look at this in a slightly different way. From the viewpoint of Moore’s law, we have hit an inflection point. That is, given the exponential growth of processing capability available in current technologies, we find that most businesses have not grown anywhere near this rate. This shouldn’t be a problem, as we can simply add more virtual systems and applications to a small physical footprint, increasing the virtual-to-physical ratio and lowering unit cost. But when is enough enough? What should be the maximum number of virtual systems on a physical platform? What is the maximum utilization rate in this shared environment? Forty percent, 60 percent or 80 percent?
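
There is no universal answer, but the question can at least be framed numerically. Here is a minimal sketch in which the host capacity, average per-VM demand and the utilization ceiling are all illustrative assumptions:

    # How many VMs fit on one host before a chosen utilization ceiling is hit?
    HOST_CAPACITY_GHZ = 2.6 * 32   # assumed: 32 cores at 2.6 GHz
    AVG_VM_DEMAND_GHZ = 2.0        # assumed steady-state demand per virtual machine
    UTILIZATION_CEILING = 0.60     # the policy question: 40, 60 or 80 percent?

    usable_capacity = HOST_CAPACITY_GHZ * UTILIZATION_CEILING
    max_vms_per_host = int(usable_capacity // AVG_VM_DEMAND_GHZ)
    print(f"At a {UTILIZATION_CEILING:.0%} ceiling, roughly {max_vms_per_host} VMs per host")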

Getting Real With Virtualization

Remember the compelling financial argument for virtualization? It seems like a rock-solid case. It was great to reduce some of the physical complexity, but when we add layers of abstraction we create additional complexity. This new complexity can be even worse, because sprawl is now controlled by a few keystrokes rather than the heavy lifting required in the physical world. When we look for the root cause of a problem in a heavily virtualized environment, we must traverse the layers and decipher what is real and what is not before determining whether we have a software or hardware problem. Let’s remember that what looks like hardware is probably just a ghost image these days.

So virtualization, which was once a solution that offered simple capacity, flexibility and cost-reduction benefits, now demands more serious consideration of impact and risk across the pool of resources used by the enterprise. The challenge now shifts to policy and environment management to protect the IT business assets. Just as we were skeptical of migrating our business-critical environments to a virtual world a few years ago, we are now skeptical of deploying the management software that controls our IT environment. Just imagine the situation if it breaks. Will you have any insight into your IT ecosystem? Will you have the skills and processes to resume operations? Do these considerations outweigh the benefits of the virtual stack? Or do we need to run the numbers, quantifying our risk and probability of failure, to evaluate these investments and approaches?
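
Running the numbers can be as simple as weighing expected loss against expected benefit. A minimal sketch, with the savings, failure probability and impact figures stated purely as assumptions:

    # Expected-loss view of the virtual stack. All three inputs are placeholders.
    ANNUAL_SAVINGS = 500000        # assumed yearly benefit of the virtual stack
    OUTAGE_PROBABILITY = 0.05      # assumed chance per year of a serious management-layer failure
    OUTAGE_IMPACT = 2000000        # assumed business impact if that failure occurs

    expected_annual_loss = OUTAGE_PROBABILITY * OUTAGE_IMPACT
    net_benefit = ANNUAL_SAVINGS - expected_annual_loss
    print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
    print(f"Net expected benefit: ${net_benefit:,.0f}")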

Ensuring Virtualization’s Success

With the ever-increasing complexity of IT, let’s take a simple, back-to-basics approach by stating some of the principles for success:

·        Virtualization needs economies of scale. Create standards and stick to them. Standardize, starting from the bottom of the stack upward when possible. The power of integration is predictability.

·        Businesses need consistency. Identify your business policies, form agreements and align a set of architecture and patterns that meet the business needs. Most importantly, start quantifying risk in monetary terms, determine your company’s tolerance for risk and socialize it.

·        Know your capacity in terms of units of work that are relevant to your business, but more importantly understand the trends and forecasts of each business unit. This is Economics 101: Align the real-world demands with the virtual supply (see the sketch after this list).

·        You are as strong as your weakest link. Keep identifying it, whether it is physical or virtual, as it shifts.

·        At some point you’ll leave a legacy. Without a doubt, that legacy will include virtualization technologies. Take control of the virtualization strategy, and don’t be the scapegoat for your IT successor to blame for a piecemeal deployment.
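
As promised in the capacity principle above, here is a small sketch of aligning real-world demand with virtual supply. The business units, growth rates and capacity figure are illustrative assumptions only:

    # When does forecast demand overtake current virtual supply?
    current_demand = {"retail": 400, "payments": 250, "analytics": 150}    # units of work (assumed)
    annual_growth = {"retail": 0.10, "payments": 0.25, "analytics": 0.40}  # assumed forecasts
    virtual_supply = 1200                                                  # assumed capacity, same units

    year = 0
    demand = dict(current_demand)
    while sum(demand.values()) <= virtual_supply and year < 10:
        year += 1
        demand = {bu: d * (1 + annual_growth[bu]) for bu, d in demand.items()}

    if sum(demand.values()) > virtual_supply:
        print(f"Aggregate demand overtakes supply in roughly year {year}")
    else:
        print("Forecast demand stays within supply for the next 10 years")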

To the general user of IT services, virtualized solutions offer tremendous self-service, easy-to-use and dynamic capabilities. We know that this is somewhat of a façade, and we work diligently behind the scenes to create that experience with virtualization products. As we increase our dependency on these solutions, let’s keep our awareness of the environment real by remembering our basic principles.

About the Author

Andy Lewis is the CIO at Kovarus, where he is responsible for providing thought leadership on the enablement of IT-as-a-service transformation strategies. Lewis has more than 20 years of experience with companies such as EMC, Visa, Barclays Bank, Lloyds TSB and Galileo International. He has extensive experience with cloud-based computing, technology strategy and process excellence. Lewis has also co-authored books on storage area networks, and served on many CIO and CTO councils.