Applying the utility model to software

Content Copyright © 2006 Bloor. All Rights Reserved.

Kognitio has recently introduced version 6.0 of WX2, the
product formerly known as the WhiteCross virtual data warehouse
appliance. From a technical viewpoint there are two major
differences between this version of the product and the previous
one: first, the company has introduced graceful degradation and,
second, you can now interactively scale the product down. I had
better explain this.

As far as graceful degradation is concerned, you could not
previously run, say, a 5Tb system on 5 blades. Now you can. The
performance will be poor but it is actually possible. In other
words, performance now degrades gradually as you reduce the
hardware below what Kognitio calls a “balanced architecture”,
rather than the system simply collapsing. In terms of scaling an
implementation, previously you could interactively scale up but
there were no facilities for reducing the number of blades in use
without re-installing. Now you can do this interactively as well.
Note that the balanced architecture is a balance between
performance and price: you can exceed the balanced architecture in
terms of performance as well as go beneath it.

So, Kognitio is supporting downsizing. Moreover, it is
explicitly supporting the ability to run on what one might call
sub-standard systems. Why?

The answer is that these technology changes tie in with the
company’s new pricing model, which offers a flexible pay-as-you-go
approach based on the number of terabytes that you are analysing
per hour. This allows your deployments to vary over time. For
example, you might have four 5Tb data marts for a period of time,
then kill these and create a 20Tb mart and then, when that has done
its job, implement twenty 1Tb marts, and so on. A single 20Tb
license will cover all of these implementations.
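To make the arithmetic behind this concrete, here is a minimal sketch of a capacity check for the configurations in the example above. The license size and mart sizes come from the article; the `fits_license` function is purely illustrative and is not Kognitio's actual licensing logic:

```python
# Hypothetical sketch: checking that various data-mart configurations
# fit under a single capacity-based license. The figures are the
# article's example; the check itself is an illustration only.

LICENSE_TB = 20  # a single 20Tb license, as in the example above

def fits_license(marts_tb, license_tb=LICENSE_TB):
    """True if the total capacity of the marts is within the license."""
    return sum(marts_tb) <= license_tb

print(fits_license([5, 5, 5, 5]))   # four 5Tb marts   -> True
print(fits_license([20]))           # one 20Tb mart    -> True
print(fits_license([1] * 20))       # twenty 1Tb marts -> True
```

Each configuration totals exactly 20Tb, which is why one license covers them all.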

Secondly, you might be performing a migration from, say, an
Oracle data warehouse to Kognitio. This might be a process that
takes many months. Normally you would have to purchase a full-sized
Kognitio system up front, say 60 blades, which is expensive. With
the scaled-down option now available from Kognitio you could run
on, say, 10 blades for development and testing and only increase to
60 blades when you want to go to live parallel running.

Another thought is that you might have 20 blades as a standard
requirement but occasionally need to increase this to 40 blades for
a short period. Using a utility computing model you could rent this
extra hardware capacity for the time period needed, and you would
only be charged by Kognitio for the additional terabyte-hours that
you used.
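As a rough illustration of how terabyte-hour billing of this kind might work, consider a month of steady running plus a one-week burst. The rate and the usage figures below are invented for illustration; they are not Kognitio's actual prices:

```python
# Hypothetical sketch of utility-style terabyte-hour billing.
# The rate and usage figures are assumptions for illustration only.

HYPOTHETICAL_RATE = 10.0  # currency units per terabyte-hour (assumed)

def terabyte_hours(capacity_tb, hours):
    """Terabyte-hours consumed by running capacity_tb for hours."""
    return capacity_tb * hours

# A 30-day month of steady running at 20Tb, plus the extra 20Tb
# needed to burst to 40Tb for one week:
steady = terabyte_hours(20, 30 * 24)       # 14,400 Tb-hours
burst_extra = terabyte_hours(20, 7 * 24)   # 3,360 Tb-hours
total = steady + burst_extra               # 17,760 Tb-hours
bill = total * HYPOTHETICAL_RATE

print(total)  # 17760
print(bill)   # 177600.0
```

The point is that the burst only adds its own terabyte-hours to the bill; there is no need to license the peak 40Tb capacity permanently.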

One can also imagine this approach being popular with partners:
you can have a relatively small system in-house for development but
use the utility model to ramp up your system briefly (and
inexpensively) when you need to perform a proof of concept or
demonstration.

This is a very interesting approach to pricing: effectively a
utility-based model for software, or at least partly so. While one
cannot imagine the likes of Netezza or DATAllegro matching this
flexibility (because their appliances are not virtual), this sort
of licensing could be applied in other parts of the industry, and I
dare say that a number of companies will watch Kognitio’s progress
with interest to see how well it works.