The TCO value of lesser-known virtualisation applications

Content Copyright © 2007 Bloor. All Rights Reserved.

Until this week I had probably underestimated the value of virtualisation technologies. Having heard some arguments from IBM, I see there is great potential.

I am not sure I would go as far as Frank Reichert, IBM’s North Europe director of server service product line, who told me that he believed virtualisation should be mandatory for all enterprises—but I understand the reasons why he said this. As Reichert put it: “Virtualisation addresses their pain points with a much better TCO (total cost of ownership).”

I will explain some of the less-advertised uses for virtualisation and you can judge whether he is right, but first I had better cover off the simple and more obvious view of what virtualisation can do. The simple view says: if you group a series of disparate servers or storage disks together so that they appear to be one device, you can drive up percentage disk utilisation as well as simplify management. While both of these things are true, this view ignores much of virtualisation's potential to help solve a number of enterprise management problems cost-effectively. So here are some arguably more enlightened views about the benefits of virtualisation.

View 1
For historical reasons, many enterprises have a huge hotch-potch of systems and servers, with applications often spread illogically among them. IT managers know this is inefficient and wasteful, and accept that it is possible to consolidate down to far fewer servers, but they are often too busy fighting fires to spend much time sorting things out. It is an "I wouldn't start from here" scenario. However, to move forward everyone has to start from 'here' and begin by getting to grips with what they have and where.

Of the two main problems to address, one is the number of different servers in use. By using virtualisation creatively, to group several servers so that they look like a single virtual server, the overall infrastructure begins to look a little simpler (as covered above). However, if you group them sensibly with a view towards consolidation, you can start, in effect, to 'model' an already consolidated system. You can repeat this server pooling as often as you like, then make adjustments, say by moving one physical server between virtual server pools and/or switching around a few applications that appear to be in the wrong pool, until you achieve, conceptually, your optimum consolidated system. You may not have actually moved anything, but you have started to 'see the wood for the trees.' Once this 'model' exists, it becomes a much simpler task to map it onto replacement physical servers of appropriate sizes and performance levels as the consolidation is implemented in practice.
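
To make the 'modelling' idea concrete, here is a minimal sketch in Python; the server names, capacities, loads and pool groupings are entirely invented for illustration, and real tooling would of course differ, but it shows how regrouping the same machines on paper lets you compare candidate consolidations before anything is physically moved.

# Hypothetical sketch: 'modelling' consolidation by grouping physical
# servers into virtual pools and comparing utilisation.
# All names and figures are invented for illustration.

from collections import defaultdict

# (server, capacity in arbitrary units, current load, proposed pool)
servers = [
    ("web-01", 100,  20, "front-end"),
    ("web-02", 100,  15, "front-end"),
    ("app-03", 200,  60, "front-end"),   # candidate to move?
    ("db-04",  400, 250, "database"),
    ("db-05",  400,  90, "database"),
]

def pool_utilisation(servers):
    pools = defaultdict(lambda: [0, 0])          # pool -> [load, capacity]
    for name, capacity, load, pool in servers:
        pools[pool][0] += load
        pools[pool][1] += capacity
    return {p: round(100 * l / c, 1) for p, (l, c) in pools.items()}

print(pool_utilisation(servers))
# {'front-end': 23.8, 'database': 42.5} -> both pools are under-used, which
# suggests the eventual consolidated hardware can be much smaller.

# 'Move' app-03 into the database pool and look again: no physical change,
# just a different grouping of the same machines.
servers[2] = ("app-03", 200, 60, "database")
print(pool_utilisation(servers))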

View 2
Another pain point, particularly for cash-strapped enterprises and mid-sized organisations, is the need for 24/7, 'five nines' availability to run business-critical online, real-time applications. It is very expensive to achieve this with a physical, fully mirrored system in which every server needs its own backup and automated failover. But, by using virtualisation to pool a group of servers and perhaps adding just one spare, you can provide automated failover for all of them, and a little more besides. A server malfunction that causes an application to fail can be overcome by the software shutting out the faulty server and switching its tasks to ANY of the other live servers in the pool, not only to the spare. While performance during this automatic remapping may not match hardware mirroring, a slight and temporary degradation is often acceptable, and multiple redundancy is maintained at much lower cost.
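
A rough back-of-the-envelope comparison illustrates the economics; all of the figures below are invented purely for illustration, not drawn from IBM or anyone else.

# Back-of-the-envelope comparison (all figures invented for illustration):
# fully mirroring every server versus pooling servers with one shared spare.

servers_needed = 8          # servers the applications actually require
cost_per_server = 10_000    # hypothetical unit cost

mirrored = 2 * servers_needed * cost_per_server            # one backup per server
pooled_n_plus_1 = (servers_needed + 1) * cost_per_server   # one spare for the pool

print(f"Fully mirrored: {mirrored:,}")         # 160,000
print(f"Pool + 1 spare: {pooled_n_plus_1:,}")  # 90,000
# On failure, the virtualisation layer can remap the failed server's work to
# ANY surviving member of the pool, not only the designated spare, so a degree
# of redundancy survives even after the spare has been consumed, at the price
# of some temporary loss of performance.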

View 3
Pain point three is over-provisioning, because the amount and speed of future expansion can only be guessed at. In fact, this is the other side of the higher-utilisation coin. If you purchase, say, 10 different storage drives for different functions, you must estimate how much expansion each will need over the next 'n' years until replacement. You build in a margin of error, so you (possibly) over-provision every drive, because the alternative of hitting a capacity brick wall early is unthinkable. When you group your devices into a virtualised pool you can push up overall utilisation, even though you still over-provision, because a smaller percentage margin of error can be applied across the pool as a whole. By virtualising a pool of storage, your initial hardware purchase should be smaller, if anything, since you will have freed up spare capacity; subsequent upgrades can then be more measured and incremental, typically adding a single drive or server from time to time.
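
A worked example makes the margin-of-error argument clearer; again, the figures are invented purely for illustration.

# Worked example (figures invented): provisioning ten drives separately with a
# generous per-drive safety margin, versus provisioning them as one pool with a
# smaller overall margin.

drives = 10
expected_growth_per_drive = 400     # GB each drive is expected to need

per_drive_margin = 0.50             # 50% headroom on every drive, "just in case"
pooled_margin = 0.20                # a smaller margin suffices once demand is pooled

separate = drives * expected_growth_per_drive * (1 + per_drive_margin)
pooled   = drives * expected_growth_per_drive * (1 + pooled_margin)

print(separate)  # 6000.0 GB purchased up front
print(pooled)    # 4800.0 GB purchased up front
# The pooled approach buys less capacity on day one because an unexpectedly
# hungry application can draw on headroom anywhere in the pool, and later
# growth can be handled by adding single drives incrementally.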

View 4
SOA and other factors are creating pressure for on-demand software provisioning. We are not there yet; good usage monitoring and billing applications are still lacking, as is sensible application licensing based on usage. But virtualisation can be used in preparation, because it breaks the tight coupling between, for instance, an application's code and the physical location of its storage. IBM's SAN Volume Controller (SVC) is one well-established example of a storage virtualisation appliance that isolates host server applications from the physical storage they are accessing. While on-demand capability is not necessarily a pain point yet, this also emphasises how virtualisation can introduce flexibility into the enterprise infrastructure, and so it could address a pain point that is holding back more creative equipment purchasing decisions in the future.
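
The decoupling can be pictured with a small, generic sketch; this is not SVC's actual interface, just a hypothetical mapping layer that shows why the application need never know where its storage physically lives.

# Generic illustration of the decoupling idea (not IBM SVC itself): the
# application addresses a stable logical volume name, while a virtualisation
# layer maps that name to whatever physical storage currently backs it.

class StorageVirtualiser:
    def __init__(self):
        self._map = {}                       # logical volume -> physical target

    def provision(self, logical, physical):
        self._map[logical] = physical

    def migrate(self, logical, new_physical):
        # Data movement happens behind the scenes; the application keeps using
        # the same logical name and never sees the change of physical location.
        self._map[logical] = new_physical

    def resolve(self, logical):
        return self._map[logical]

virtualiser = StorageVirtualiser()
virtualiser.provision("orders_volume", "array-A/lun-3")
print(virtualiser.resolve("orders_volume"))   # array-A/lun-3

virtualiser.migrate("orders_volume", "array-B/lun-7")
print(virtualiser.resolve("orders_volume"))   # array-B/lun-7, application unchanged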

View 5
Another angle relates to the planning of a new system. It may be costly to test it out with production-scale hardware capacity before going live, and the pain point is often 'time to market' because the new system is wanted yesterday. Using virtualisation, you may at least be able to create temporary storage and server pools, borrowing capacity from other applications, to simulate the true production environment for the new application. Virtualised pools can be repeatedly set up from a terminal, tested and then torn down again, so development costs are better contained.
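
That 'set up, test, tear down' cycle might look something like the following sketch; the pool name, server names and the temporary_pool helper are all hypothetical, standing in for whatever tooling a given vendor actually provides.

# Sketch of the 'set up, test, tear down' cycle for a temporary virtual pool
# borrowed from spare capacity elsewhere. Names and the helper are hypothetical.

from contextlib import contextmanager

@contextmanager
def temporary_pool(name, borrowed_servers):
    print(f"Creating virtual pool '{name}' from {borrowed_servers}")
    try:
        yield name                      # run the new system's tests against the pool
    finally:
        print(f"Tearing down '{name}', returning capacity to its owners")

with temporary_pool("new-billing-test", ["web-02", "db-05"]) as pool:
    print(f"Running acceptance tests against {pool} ...")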

To me the theory is reasonable. There are some words of caution, of course. Enterprises run a mix of manufacturers' hardware and software, some of which may be old and unsupported. Hardware vendors may be great at virtualising their own environments but not so good at bringing other vendors' systems into the mix (where they do so at all). Virtualisation normally makes management simpler, but it does add a hidden layer of complexity; if a physical device malfunctions, the source of the fault may not be as obvious unless the software includes tools that fully compensate.

Still, there is a lot to be said for a bit of lateral thinking about software. If virtualisation can be applied in ways that genuinely help IT managers reduce their space, time and cost pain points—as well as making IT management easier—then it will surely, de facto, find a home in every enterprise, mandated or not.