Virtualisation needs a network performance boost – try IT infrastructure optimisation

Content Copyright © 2009 Bloor. All Rights Reserved.

Server
virtualisation is well-established and is a great way to reduce the number of
physical servers used by an organisation. Storage virtualisation ought to
complement this—and eventually will do—by greatly reducing the number of
storage arrays; used together they will make for a really agile business able
to adapt quickly to changing needs.

Both also centralise management, partly to simplify it and make it easier
for one person to manage many servers or storage arrays; the physical equipment
tends to become more centralised too, partly as an effect of removing ‘silos’
of information held in various remote locations.

What is not always
fully considered is the impact of virtualisation and centralisation on overall
infrastructure performance. The first, and well-understood, reason is that when
information is consolidated onto fewer physical devices, the extra traffic per
device means increased contention between applications for access to their
data; virtualisation techniques can mitigate this by, for instance, wide
striping the data and segmenting particular applications.
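
As a rough illustration of the wide-striping idea, the sketch below (in Python, with an invented pool size and stripe unit; real arrays do this in firmware) spreads a file's blocks round-robin so that no single array absorbs all of one application's I/O:

```python
# Illustrative sketch of wide striping: logical blocks are spread
# round-robin across every array in the pool, so no single array
# becomes a hotspot. Pool size and stripe unit are hypothetical.

NUM_ARRAYS = 8          # assumed pool size
STRIPE_UNIT_KB = 256    # assumed stripe unit

def array_for_block(logical_block: int) -> int:
    """Map a logical block number to the array that holds it."""
    return logical_block % NUM_ARRAYS

# A 1 GB file written as 256 KB stripe units lands evenly everywhere:
blocks = range((1024 * 1024) // STRIPE_UNIT_KB)
load = [0] * NUM_ARRAYS
for b in blocks:
    load[array_for_block(b)] += 1

print(load)  # -> [512, 512, 512, 512, 512, 512, 512, 512]
```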

The second reason,
often forgotten in cost calculations, is that branch offices whose once-local
systems have been centralised still need to access them; this pushes up the
amount of network traffic and slows access performance, and when WANs are
involved it pushes up telecom bills. Then there are things like consolidated
backups sharing a single pipe instead of being spread across sites. So servers
and storage arrays may be drastically reduced in number but, performance
issues aside, the OPEX saving from consolidation is counterbalanced by higher
communications charges.

The problem can be
much worse than this. For instance, service level agreements (SLAs) and
internal access performance criteria may be critically undermined by slow
responses. It is well-documented that mission-critical applications are often
ring-fenced outside of virtualisation because of performance worries. So what’s
to be done?

I am certainly not
advocating abandoning virtualisation, but the most obvious solution does
involve another investment. It is broadly described under the heading of ‘IT
infrastructure optimisation’. This means speeding up any or all of: the
applications, the networks (and especially the WAN) and storage access. It is
not hard to see why this is a growing market niche. There are a few dedicated providers,
notably Riverbed Technology, competing head to head with the likes of Cisco,
Juniper and Blue Coat.

Riverbed is the
market leader with over 6,000 customers. Mark Lewis, Riverbed’s senior director
of marketing and alliances in EMEA, sees networking, application and storage
performance problems all converging. “For networking you need more bandwidth
with latency the secret throughput killer,” he said.
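
Lewis's point about latency is worth a back-of-the-envelope check: a TCP sender can have at most one window of unacknowledged data in flight per round trip, so round-trip time caps throughput no matter how fat the pipe. A minimal sketch, assuming a classic 64 KB window and illustrative RTTs:

```python
# Why latency, not just bandwidth, caps throughput: at most one
# window of unacknowledged data can be in flight per round trip.
# Window size and RTT values below are illustrative assumptions.

WINDOW_BYTES = 64 * 1024  # classic TCP window without scaling

for rtt_ms in (1, 20, 100, 300):  # LAN through to long-haul WAN
    throughput = WINDOW_BYTES * 8 / (rtt_ms / 1000)  # bits per second
    print(f"RTT {rtt_ms:>3} ms -> max {throughput / 1e6:6.2f} Mbit/s")
```

On a 100 ms WAN link that works out at roughly 5 Mbit/s per connection, however much bandwidth is bought, which is exactly the sort of hidden ceiling these optimisation products attack.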

For storage the
problems are data sprawl and islands of storage, backup and replication, with
compliance worries as a side-effect; while some of these may be reduced by
virtualisation, the sheer increase in the amount of stored data applies its
own pressure.

Some of the
applications most affected are those delivered via the web, including e-mail
and FTP, as well as, for instance, Lotus Notes, database access and ERP. When
response times become too slow the applications can become unusable.

However, the
biggest focus for me is centralisation (usually involving virtualisation) with
remote access. For someone like me who follows storage and data protection
closely, single instancing (SI) and data de-duplication (de-dupe) are hot
topics. WAN optimisation involves the same idea, taken further. Lewis told me
SI is applied across all connected sites, down to desktops, with duplicate
data strings removed at very high granularity; along with other compression
techniques this will reduce WAN traffic by 60–99%, he said.
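
To make the mechanism concrete, here is a toy sketch of the single-instancing idea: chunk the data, fingerprint each chunk, and only send chunks the far end has not already seen. Fixed 4 KB chunks and SHA-256 are simplifying assumptions of mine; the products Lewis describes chunk at much finer, variable granularity and add further compression on top:

```python
# Toy sketch of single instancing / de-duplication: split data into
# chunks, fingerprint each chunk, and only transfer chunks the far
# end has not seen before. Fixed 4 KB chunks and SHA-256 are
# simplifying assumptions, not any vendor's actual scheme.

import hashlib

CHUNK = 4096
seen = set()  # fingerprints already held at the remote site

def bytes_to_send(data: bytes) -> int:
    sent = 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest not in seen:   # new chunk: ship it and remember it
            seen.add(digest)
            sent += len(chunk)
        # duplicate chunk: only a tiny fingerprint crosses the WAN
    return sent

original = b"quarterly report " * 100_000  # highly repetitive payload
print(len(original), "->", bytes_to_send(original))  # large reduction
```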

It doesn’t take a
genius to work out how network performance can rocket when such systems are
implemented. These products can also intercept and reduce some of the heavy
protocol overhead of TCP/IP itself.

There is a long
list of potential major and minor infrastructure benefits that organisations
could experience as a knock-on effect of the performance boosts from using
these solutions, such as:

- reduced costs from avoiding new network bandwidth or deferring upgrades;
- more effective and flexible use of IT resources, including improved worker collaboration;
- reduced risk and cost from service outages;
- much faster and lower-cost remote replication, backup and disaster recovery (DR);
- reduced risk from data consolidated into secure sites;
- improved compliance with government and industry regulations;
- and so on.

A couple of years
ago IDC did a survey looking at these types of potential savings when using
Riverbed’s Steelhead appliances and software; it concluded that the payback
period would average about 7.3 months, with net savings thereafter.
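
IDC’s 7.3 months is a survey average, but the payback arithmetic behind such a claim is straightforward; the figures in this sketch are invented purely for illustration, not IDC’s data:

```python
# Payback-period arithmetic behind an ROI claim like IDC's: months
# until cumulative savings cover the up-front spend. All figures
# below are invented for illustration only.

capex = 100_000          # appliance + deployment cost (assumed)
monthly_saving = 13_700  # bandwidth, backup and admin savings (assumed)

payback_months = capex / monthly_saving
print(f"payback in {payback_months:.1f} months")  # ~7.3 months
```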

Obviously, each
organisation’s circumstances will differ, and the competitors in the market
will continue to enhance their portfolios. The bottom line is that, in these
cash-strapped times, if you have any issues with your network (not least as a
result of virtualising servers or storage), these types of solutions could
remove a multitude of problems without breaking the bank.