Untangling events

Content Copyright © 2008 Bloor. All Rights Reserved.

Complex event processing, business event processing, security event management, log management, data retention systems, event-driven architecture, event warehousing: are these and other related topics all subsets of what is essentially a single market, or are they distinct markets? In this article and the series that follows I will attempt to untangle this complex weave.

In essence, any application that deals with events (things that happen or, arguably, things that don’t happen) does some or all of the following:

  1. Monitor real-time events (it doesn’t matter where these come from: an event is an event)
  2. Filter and/or aggregate these events
  3. Correlate events with other events, historic patterns of events or business rules, or some combination of these
  4. Generate alerts or initiate processes as the result of individual events or correlations that are of business interest
  5. Store events, which may be at the individual, filtered or aggregated level, possibly with tamper-proofing for compliance reasons
  6. Report against stored events, whether via forensics, playback, analysis, search, eDiscovery or some other means.
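The six steps above can be sketched as a single pipeline. This is a minimal illustration with invented names (no vendor's actual API), showing where each step sits:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    source: str
    kind: str
    value: float

@dataclass
class EventPipeline:
    """Illustrative event pipeline covering the six steps above."""
    keep: Callable[[Event], bool]     # step 2: filter/aggregate
    rule: Callable[[Event], bool]     # step 3: correlate against a business rule
    alert: Callable[[Event], None]    # step 4: act on events of business interest
    store: list = field(default_factory=list)  # step 5: retain events

    def handle(self, event: Event) -> None:
        """Step 1: monitor an incoming event, wherever it came from."""
        if not self.keep(event):
            return
        if self.rule(event):
            self.alert(event)
        self.store.append(event)

    def report(self, kind: str) -> list:
        """Step 6: report against stored events (here, a trivial query)."""
        return [e for e in self.store if e.kind == kind]
```

Real products differ chiefly in which of these callables do serious work and which are trivial, as the examples below show.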

To take a simple example, one company I know provides temperature sensors for industrial fridges (that is, in restaurants and so forth). If the temperature in a monitored fridge rises above a particular threshold, an alert is generated and sent to whoever is appropriate (the restaurateur, the service company and so forth) so that action can be taken. In addition, the sensors write continuously to a tamper-proof log file, which can be used both to produce temperature graphs and as evidence should the Health and Safety Executive get involved, or to put before a service provider.
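One common way to make such a log tamper-evident is a hash chain: each record embeds the hash of its predecessor, so any retrospective edit breaks every subsequent link. This is a generic technique sketched for illustration, not a claim about how that particular vendor implements it:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record in the chain

def _record_hash(reading: float, prev_hash: str) -> str:
    payload = json.dumps({"reading": reading, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_reading(log: list, reading: float) -> None:
    """Append a temperature reading, chained to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    log.append({"reading": reading, "prev": prev_hash,
                "hash": _record_hash(reading, prev_hash)})

def verify_log(log: list) -> bool:
    """Recompute every hash in order; an edited record invalidates the chain."""
    prev_hash = GENESIS
    for record in log:
        if record["prev"] != prev_hash:
            return False
        if record["hash"] != _record_hash(record["reading"], prev_hash):
            return False
        prev_hash = record["hash"]
    return True
```

Tampering with any stored reading, or reordering records, causes `verify_log` to fail, which is what gives the log evidential weight.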

Now, if we consider the six steps above, this provider does not require step 2 (all temperature readings are recorded), and only minimally uses steps 3 (a single business rule) and 6 (a simple graph). Similarly, log management and data retention (for call detail records and the like) have no requirement for step 2, since compliance mandates that all event detail is stored. On the other hand, algorithmic trading and RFID processing would certainly want to filter out uninteresting or duplicated information. Take probably the simplest example imaginable: an order causes stock to fall below a particular threshold, which triggers a new stock order. Here there is no step 2 because there is only a single event.
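For workloads such as RFID, where step 2 does matter, filtering can be as simple as suppressing repeat reads of the same tag within a short time window. A hypothetical sketch (field names invented for illustration):

```python
class DuplicateFilter:
    """Step-2 filter: drop repeat reads of a tag inside a time window."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.last_kept = {}  # tag id -> timestamp of the last read we kept

    def keep(self, tag_id: str, timestamp: float) -> bool:
        last = self.last_kept.get(tag_id)
        if last is not None and (timestamp - last) < self.window:
            return False  # duplicate read inside the window: discard
        self.last_kept[tag_id] = timestamp
        return True
```

A reader passing the same pallet several times a second would be reduced to one event per window, while distinct tags pass through untouched.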

So, different markets have different foci. Moreover, even within each of these steps there will be differences, particularly with respect to how you correlate events and how you report against them after they have happened. Thus security services want search capability against call detail records, whereas exploring log management data forensically calls for analytics.

Perhaps the single biggest difference is where the emphasis lies: is it on step 4 (doing something now) or on step 6 (analysing the data subsequently)? This might be a way to subset the market but, on closer examination, it turns out that it isn't. Security event management, for example, is very much about the attacks on your firewall (say) that are occurring right now, whereas log management is largely (but not completely) about subsequent forensics (to detect fraud, for example). However, there are products and vendors that do both.

In other words, we have not got very far. Moreover, while scalability (the number of events you can handle) and latency (how quickly you can respond to events) may be key determinants for particular solutions they do not define markets. Therefore we are left with no conclusions (so far) except that all event-based markets exhibit a subset of a common set of characteristics. In future articles I will explore some of the specific sub-markets dealing with events to see if we can learn anything from them.