An Overture to the 2007 CEP Blog Awards

January 9, 2008

Before announcing the winners of the 2007 CEP Blog Awards, I thought it would be helpful to introduce the award categories to our readers.

I have given considerable thought to how to structure The CEP Blog Awards. This was not an easy task, as you might imagine, given the confusion in the event processing marketspace. So here goes.

For the 2007 CEP Blog Awards I have created three event processing categories. Here are the categories and a brief description of each one:

The CEP Blog Award for Rule-Based Event Processing

Preface: I was also inclined to call this category “process-based event processing” or “control-based event processing” and might actually do so in the future. As always, your comments and feedback are important and appreciated.

Rule-based (or process-based) event processing is a major subcategory of event processing. Rule-based approaches to event processing are very useful for stateful event-driven process control, track and trace, dynamic resource management and basic pattern detection (see slide 12 of this presentation). Rule-based approaches are optimal for a wide range of production-related event processing systems.

However, just like any system, there are engineering trade-offs with this approach. Rule-based systems tend not to scale well when the number of rules (facts) is large. Rule-based approaches can also be difficult to manage in a distributed, multi-designer environment. Moreover, rule-based approaches are suboptimal for self-learning and tend not to process uncertainty very well. Nevertheless, rule-based event processing is a very important CEP category.
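The stateful, condition/action style described above is easier to see with a concrete sketch. Here is a minimal, hypothetical rule engine in Python (all class and rule names are my own invention, not any vendor's API): rules are condition/action pairs evaluated against each incoming event, and a working-memory dict supplies the state needed for a track-and-trace style pattern, flagging a shipment scanned twice at the same hub.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str
    data: dict

@dataclass
class RuleEngine:
    # Working memory holds state between events -- the "stateful" part.
    facts: dict = field(default_factory=dict)
    rules: list = field(default_factory=list)

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def process(self, event):
        # Evaluate every rule's condition against the event and working memory;
        # fire the action for each rule whose condition holds.
        for condition, action in self.rules:
            if condition(event, self.facts):
                action(event, self.facts)

engine = RuleEngine()

def seen_before(event, facts):
    # Condition: this shipment was already scanned at this hub.
    return event.kind == "scan" and event.data["hub"] in facts.get(event.data["id"], set())

def flag_duplicate(event, facts):
    # Action: raise an alert for the duplicate scan.
    facts.setdefault("alerts", []).append(event.data["id"])

def record_scan(event, facts):
    # Action: remember which hubs have scanned this shipment.
    facts.setdefault(event.data["id"], set()).add(event.data["hub"])

engine.add_rule(seen_before, flag_duplicate)
engine.add_rule(lambda e, f: e.kind == "scan", record_scan)

engine.process(Event("scan", {"id": "S1", "hub": "BKK"}))
engine.process(Event("scan", {"id": "S1", "hub": "BKK"}))
print(engine.facts["alerts"])  # ['S1']
```

The scaling trade-off mentioned above is visible even here: every event is matched against every rule, which is why production rule engines rely on indexing schemes (such as Rete networks) once the rule count grows.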

The CEP Blog Award for Event Stream Processing

Stream-centric approaches to event processing are also a very important overall category of event processing. Unlike a stateful, process-driven, rule-based approach, event stream processing is optimized for high-performance continuous queries over sliding time windows. High-performance, low-latency event processing is one of the main design goals for many stream processing engines.

Continuous queries over event streams are generally designed to execute over time intervals of milliseconds, seconds, or perhaps a bit longer. Process-driven event processing, on the other hand, can manage processes, resources, states and patterns over long time intervals, for example hours and days, not just milliseconds and seconds.
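A continuous query over a sliding time window can be sketched in a few lines of Python (illustrative only; this is not any engine's actual EPL, and the class name is mine): each arriving event is added to a time-bounded window, expired events slide out, and the aggregate is re-emitted on every arrival.

```python
from collections import deque

class SlidingWindowAverage:
    """Continuous query: average price over the last window_ms milliseconds."""

    def __init__(self, window_ms):
        self.window_ms = window_ms
        self.events = deque()  # (timestamp_ms, price) pairs, oldest first

    def on_event(self, ts, price):
        self.events.append((ts, price))
        # Evict events that have slid out of the window [ts - window_ms, ts].
        while self.events and self.events[0][0] < ts - self.window_ms:
            self.events.popleft()
        # Re-emit the aggregate on every event -- the "continuous" part.
        return sum(p for _, p in self.events) / len(self.events)

q = SlidingWindowAverage(window_ms=1000)
print(q.on_event(0, 10.0))     # 10.0
print(q.on_event(500, 20.0))   # 15.0
print(q.on_event(1500, 30.0))  # 25.0  (the event at t=0 has expired)
```

The contrast with the rule-based category is visible here: the only state retained is the window itself, which is what lets stream engines keep latency low, but also why intervals of hours or days are a poor fit.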

Therefore, event stream processing tends to be optimized for a different set of problems than process-based (which I am calling rule-based this year) event processing. Similar to rule or process-based approaches, most current stream processing engines do not manage or deal with probability, likelihood and uncertainty very well (if at all).

The CEP Blog Award for Advanced Event Processing

For lack of a better term, I call this category advanced event processing. Advanced event processing will more than likely have a rule-based and/or a stream-based event processing component. However, to be categorized as advanced event processing, the software platform must also be able to perform more advanced event processing that deals with probability, fuzzy logic and/or uncertainty. Event processing software in this category should also have the capability to automatically learn, or be trained, similar to artificial neural networks (ANNs).

Some of my good colleagues might prefer to call this category AI-capable event processing (or intelligent event processing), but I prefer to call this award category advanced event processing for the 2007 awards. If you like the term intelligent event processing, let’s talk about this in 2008!

Ideally, advanced event processing software should have plug-in modules that permit the event processing architect, or systems programmer, to select and configure one or more different analytical methods at design-time. The results from one method should be available to other methods; for example, the output of a stream processing module might be the input to a neural network (NN) or Bayesian belief network (BBN) module. In another example pipeline operation, the output of a Bayesian classifier could be the input to a process- or rule-based event processing module within the same run-time environment.
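The pipeline idea above can be sketched with plain function composition in Python (a toy sketch under invented names; the "classifier" here is a hard-coded stand-in, not a real Bayesian model): each module maps an event to an enriched event, and one module's output feeds the next module's input.

```python
def pipeline(*modules):
    """Compose modules so each one's output event is the next one's input.

    A module returns an enriched event, or None to drop the event.
    """
    def run(event):
        for module in modules:
            event = module(event)
            if event is None:
                return None
        return event
    return run

def stream_filter(event):
    # Stream stage: drop low-value events early.
    return event if event["amount"] > 100 else None

def classifier(event):
    # Stand-in for a Bayesian scoring module: attach a score to the event.
    event["fraud_score"] = 0.9 if event["country"] != event["home"] else 0.1
    return event

def rule_stage(event):
    # Rule stage consuming the classifier's output, ECA-style.
    event["action"] = "hold" if event["fraud_score"] > 0.5 else "approve"
    return event

process = pipeline(stream_filter, classifier, rule_stage)
print(process({"amount": 500, "country": "TH", "home": "US"})["action"])  # hold
```

Swapping the toy classifier for a trained model, or reordering the stages at design-time, changes nothing downstream, which is exactly the plug-in property argued for above.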

For all three categories for 2007, there should be a graphical user interface for design-time construction and modeling. There should also be a robust run-time environment and most, if not all, of the other “goodies” that we expect from event processing platforms.

Most importantly, there should be reference customers for the software and the company. The CEP Blog Awards will be only given to companies with a proven and public customer base.

In my next post on this topic, I’ll name the Awardees for 2007. Thank you for standing by. If you have any questions or comments, please contact me directly.

Bare-Bones Requirements for an Event Processing Banking Application

December 8, 2007

I am working on a security-related event processing banking application for one of the main banks in Thailand. Here are the basic “must have, bare minimum” requirements:

  —  The event processing engine must run on Linux.

  —  The engine must be configurable and manageable remotely via a web-based interface. (Edit:  A Windows-based fat-client remote manager could also meet this requirement.)

  —  Must have a Windows-based modelling studio for building event logic / rules.

  —  Modelling studio should generate the running code to upload to the engine.

  —  Processing engine must have a UNIX sockets interface (adapter) out-of-the-box.

  —  Must have a data modelling / transformation / mapping tool, such as XPath, for mapping raw input (in this case a UNIX socket) to event data structure(s).

These are only the bare minimum requirements.
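To make the socket-adapter and mapping requirements concrete, here is a small Python sketch (the record format, field names, and delimiter are all invented for illustration; the bank's actual feed would differ): raw delimited bytes arriving on a UNIX socket are mapped into an event data structure, with a socketpair standing in for the real feed.

```python
import socket

def parse_event(raw: bytes) -> dict:
    """Map a raw pipe-delimited record from the socket to an event structure."""
    ts, account, amount = raw.decode().strip().split("|")
    return {"timestamp": ts, "account": account, "amount": float(amount)}

# A socketpair stands in for the bank's feed on one end and the engine's
# out-of-the-box socket adapter on the other.
feed, adapter = socket.socketpair()
feed.sendall(b"2007-12-08T09:00:00|ACC-42|1500.00\n")
event = parse_event(adapter.recv(1024))
print(event["account"], event["amount"])
feed.close()
adapter.close()
```

In a real deployment the mapping step would be expressed in the engine's modelling tool (e.g. as XPath over an XML payload) rather than hand-written, which is precisely the requirement above.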

Since I am an ex-TIBCO Principal Global Architect, I was hoping to find other CEP software vendors who have this very basic functionality, because I don’t want to appear biased toward TIBCO with the bank.

I tried BEA’s WebLogic Event Server because they also have a presence in Asia, but their event processing platform met only one (the Linux requirement) of the six “bare minimum” requirements above.

If any CEP vendor can meet the bank’s “very basic, bare-bones requirements,” please comment here on the blog or email me directly.

Thank you.

PS: Latency is less critical than the bare-bones requirements above. We can easily route the events to instances of the event processing engine, so the main requirements are based on ease of design (modelling), remote configuration and management, and deployment.


COTS Software Versus (Hard) Coding in EP Applications

November 21, 2007

Opher Etzion has kindly asked me to write a paragraph or two on commercial-off-the-shelf (COTS) software versus (hard) coding software in event processing applications.

My thoughts on this topic are similar to my earlier blog musings, Latency Takes a Back Seat to Accuracy in CEP Applications.

If you buy an EP engine (today) because it permits you to run some quick-and-dirty (rule-based) analytics against a stream of incoming events, and you can do this quickly without considerable software development cost, and the learning and implementation curves for the COTS are relatively low, then this could obviously be a good business decision. Having a software license for an EP engine that permits you to quickly develop and change analytics, variables and parameters on-the-fly is useful.

On the other hand, the current state of many CEP platforms, and their declarative programming and modelling capabilities, is a focus on If-Then-Else, Event-Condition-Action (ECA), rule-based analytics. Sophisticated processing requires more functionality than just ECA rules, because most advanced detection-oriented applications are not just ECA solutions.

For many classes of EP applications today, writing code may still be the best way to achieve the results (accuracy, confidence) you are looking for, because CEP software platforms have not yet evolved into plug-and-play analytical platforms providing a wide range of sophisticated analytics, in combination with quality modelling tools, for all business users and their advanced detection requirements.

For this reason, and others which I don’t have time to write about today, I don’t think we can make blanket statements such as “CEP is about using engines versus writing programs or hard coding procedures.” Event processing, in the context of specific business problems, is the “what,” and using a CEP/EP modelling tool and execution engine is only one of the possible “hows” in an event processing architecture.

As we know, CEP/EP engines, and the marketplace for using them, are still evolving and maturing; hence, there are many CEP/EP applications, today and in the foreseeable future, that require hard coding to meet performance objectives, when performance is measured by actual business-decision results (accuracy).

Furthermore, as many of our friends point out, if you truly want the fastest, lowest-latency solution possible, you need to be as close to the “metal” as possible, so C and C++ will always be faster than Java byte code running in a sandbox written in C or C++.

And, as you (Opher) correctly point out, along with most of the end users we talk to, they do not process millions of events per second on a single platform, so this is a marketing red herring. This brings us back to our discussions on the future of distributed object caching, grid computing and virtualization – so I’ll stop and go out for some fried rice and shrimp, and some cold Singha beer.


Original Survey on Event Processing Languages

November 19, 2007

A few of us have been discussing event processing languages (EPLs) for a number of years, advocating that SQL-like languages are appropriate for certain classes of CEP/EP problems, but not all.

Some readers might recall that I published a draft survey on EPLs to the Yahoo! CEP Interest group titled, (DRAFT) A Survey of Event Processing Languages (EPLs), October 15, 2006 (version 14).

A number of us CEP “grey beards” have consistently advocated that there are EPLs and analytics that are optimal for certain classes of event processing problems (and, in turn, there also are EPLs that are suboptimal for certain classes of event processing problems).

For readers who do not frequent the Yahoo! CEP group, here is a link to a copy of the original survey.


Event Processing Languages and the Nonsense about SQL

November 14, 2007

Louis Lovas, a distinguished Progress Fellow and one of the event processing implementation gurus at Apama, has been doing a fine job discussing and debating why SQL is suboptimal for many classes of event processing problems.

His latest contribution to the community, Taking Aim, does a great job responding to an earlier rebuttal to his post on SQL and its suitability as an EPL for CEP. I agree with Louis and look forward to his views on the usability and scalability of graphical models, tools and visual design studios for designing complex event processing applications.

One of the problems, as I see it, is the misconception that CEP is a technology that will be dominated by database programmers in the future. CEP is not intrinsically a database application architecture. CEP is a solution architecture for solving complex problems in real time, where databases and similar transactional processes are part of the overall solution, not the whole.

There are certainly many classes of CEP solutions where SQL and extended SQL can be effective. On the other hand, there are dramatically more CEP applications where a more expressive, functional and modern language is appropriate. After all, John Bates did not conceive and design a better language for event processing because he had nothing better to do at Cambridge University.

OBTW, TIBCO Software, also known for being very innovative and technologically savvy, did not select SQL as their EPL either.

Most of us would never dream of using SQL for writing all possible permutations of event processing applications, any more than we would use SQL to write an event handler for the wonderful mouse we use for browsing the Internet every day.

Then again, there are always folks that might argue we don’t need calculus because we have math; and we can, in theory, do calculus with only add, subtract, multiply and divide. No wait! We really don’t need multiply and divide, because we can multiply with addition and divide with subtraction, so let’s toss out multiply and divide as well!

Some might even argue that we don’t need C, C++, or Java because we can write it all in assembly language.   (Actually, I have heard this argument more than once in the past – believe-it-or-not!)

Let’s jump to the bottom line. It is simply nonsense, in my opinion, for anyone to argue, no matter how technical, detailed or pedantic, that SQL is the only processing language for all event processing applications.


EPTS Report: Event Processing Reference Architecture Working Group (Slides)

September 23, 2007

The Event Processing Technical Society (EPTS) met in Orlando, Florida, collocated with the Gartner Event Processing Symposium. I reported on the activities of the Event Processing Reference Architecture Working Group (EPRAWG) in this set of slides:

Event Processing Technical Society: Event Processing Reference Architecture Working Group – Roll Call and Open Discussion

The full roll call results of the activities, summarized in the slides above for 2006-2007, may be found in this Google spreadsheet.

Please comment if you have any questions or follow-on comments.


Getting Started in CEP: How to Build an Event Processing Application (Slides)

September 23, 2007

Many of you have asked me for a copy of my slides from the Gartner Event Processing Symposium last week. Here they are:

Getting Started in CEP: How to Build an Event Processing Application

Thank you for all the kind words after the presentation!

I’ll post some of my reflections on the meeting soon.