Event processing performance

Content Copyright © 2006 Bloor. All Rights Reserved.

The march of the big guns into event processing continues.
Microsoft, SAS and Sybase all have point solutions that address
this space in one way or another, while it may have escaped your
notice that IBM now has two offerings in this area.

The first of these is WebSphere Front Office for Financial
Markets, which is essentially a platform for combining and
filtering data feeds from capital markets. In effect, it might act
as a front-end to an event processing engine rather than offering
such a thing directly. Secondly, and from my point of view more
interestingly, there is now a Data Streaming Extension (DSE) for
the DB2 Data Warehousing Edition. This supports real-time data
feeds from, say, capital markets, directly into a DB2 data
warehouse. However, the purpose of this is to let you play back
and analyse event streams rather than act upon them. That is, IBM
does not yet offer the sort of static queries through data streams
that are typical of the main players in this space. That said, IBM
has a number of ongoing event processing projects that have yet to
come to fruition, one of which will integrate streamed and
persistent queries.

As an aside, very few of the people in IBM's BI group (at least,
those I met at the recent Information on Demand conference) know
about the Front Office product, and my guess is that very few of
the people dealing with Front Office know about DSE. Tut
tut!

DSE is, in fact, based on what used to be the Informix Real-time
Data Loader. But where that ran at something like 30,000
transactions per second, DSE will handle the hundreds of thousands
of transactions per second you need for capital markets, albeit on
an 8-way box, which is significantly larger than you would need
with one of the pure-play event processing products.

Anyway, this leads me on to the main point of this article,
which is that being able to cater for x thousand transactions
per second, while a useful measure of performance, is not the
be-all and end-all of event processing performance. Indeed, it is
only the start.

I do not usually reference other companies’ work, least of all
that of vendors, but Syndera (which, incidentally, is also a
provider within the capital markets sector) has recently published
a white paper on event processing performance that is worth a look.
It can be found
here
and it is not vendor specific, which is why I am prepared
to recommend it.

The main point about event processing performance is that it is
not just a question of the events that enter the environment. In
particular, new logical events are generated within the
engine—for example, if Microsoft shares go up and IBM’s down,
that might generate a paired event—and all of these generated
events also need to be processed, along with those that are
external. In addition, of course, there are a bunch of other things
that you may want to do, such as access persistent data in a data
warehouse or other store, generate alerts and actions and so on and
so forth. What Syndera’s paper does is to highlight these issues
and look into their implications in a way that goes beyond the
simplistic approach of merely saying that you can stream
transactions at such and such a speed.
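To make the point concrete, here is a minimal sketch of how an engine might generate such a derived (paired) event from external ticks. All the names here (the `Tick` class, `PairedMoveDetector`, the tuple it emits) are hypothetical illustrations of the idea, not any vendor's API:

```python
# Hypothetical sketch: a derived "paired" event is emitted when
# MSFT ticks up while IBM ticks down. Each derived event is extra
# work the engine must process on top of the external ticks.

from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    price: float

class PairedMoveDetector:
    def __init__(self):
        self.last = {}       # last seen price per symbol
        self.direction = {}  # +1 for up, -1 for down, per symbol

    def on_tick(self, tick):
        prev = self.last.get(tick.symbol)
        self.last[tick.symbol] = tick.price
        if prev is None or tick.price == prev:
            return None
        self.direction[tick.symbol] = 1 if tick.price > prev else -1
        # Internally generated (logical) event, derived from two
        # external events rather than arriving from outside:
        if self.direction.get("MSFT") == 1 and self.direction.get("IBM") == -1:
            return ("PAIRED_MOVE", self.last["MSFT"], self.last["IBM"])
        return None

detector = PairedMoveDetector()
ticks = [Tick("MSFT", 27.0), Tick("IBM", 92.0),
         Tick("MSFT", 27.5), Tick("IBM", 91.5)]
derived = [e for e in (detector.on_tick(t) for t in ticks) if e]
# Four external events in, one derived event out; a real engine
# must budget throughput for both populations.
```

The point of the sketch is simply that the derived event stream is a second workload: raw events-per-second figures say nothing about how many internal events the rules themselves will generate.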