Update on the LinkedIn CEP Users Group

June 5, 2008

There are now 234 members of the CEP Users Group on LinkedIn.  So, if you have not yet joined the CEP Users Group, please do so by clicking here.


More on CEP: Process, Service or Reference Architecture?

June 2, 2008

In reply to Paul Vincent’s post Is CEP a Service or a Process? I posted Is CEP a Service or a Process? Reloaded.  This post is a follow-up to my dialog with Paul and the CEP community, as a whole.

One of the more remarkable critical comments on the book “The Power of Events” was that the book did not (for the most part) discuss architecture.

As we all know, there are many definitions of “architecture;” however, one definition that is easy to discuss, in this context, is that an IT system’s “architecture” represents the components of an IT system and the relationships between those components.

An architecture can be “technical” or “functional” or “operational” or “data” centric.  For example, an architecture can be based on an orchestration of service-components, like an SOA.  In another example, an architecture can be represented by the semantics of the data.  In yet another example, an architecture can be represented by the functionality of the components.

Because David’s book on CEP did not address architecture, folks have been free to use any “tool” or “technique” they like, and call it “CEP”.   My focus has been on overall CEP functionality and reference architectures that depict this functionality for solving CEP classes of problems.

This was one of the first topics (issues) with CEP we identified a few years ago; and is why we, including me at my good ole’ days at TIBCO until now, created a functional reference architecture for CEP (also in this blog and the TIBCO CEP blog).

In that functional reference architecture, we discussed and illustrated how CEP should operate as a cooperative (distributed) functional reference architecture to solve most “real” CEP classes of problems.

Therefore, generally speaking, CEP should not be considered a “process” or a “service,” per se, because CEP, as a functional reference architecture, depicts the methodologies (functionality) required to solve complex detection-oriented problems.  This abstraction permits CEP to have meaning in a broad context of event processing applications.

Naturally, a functional reference architecture can be viewed as a “service” if all the components in the architecture cooperate to solve a problem and are encapsulated as a service.  In addition, a functional reference architecture can be viewed as a “process” when solving problems in a specific domain.  So, a “process,” in this case, is an instance of the functional reference architecture; and if the instance is packaged as a solution, this solution can be encapsulated as a service.

So, it is misleading, at least in my opinion, to reduce CEP to a “process” or a “service” unless we are discussing a particular solution to a domain problem within a (functional) reference architecture (functional context).

This confusion also manifests itself in the lively debate between Mark Palmer and the blogosphere regarding the maturity of CEP.   Mark and others have created an instance of event processing in capital markets and call it “CEP,” when in fact, what they are doing is COTS algo trading and using one or more functional components of CEP to realize their solution.

This is an important distinction, in my opinion.


On CEP Maturity and the Gartner Hype Cycle

June 1, 2008

In reply to Mark Palmer’s rebuttal, What Does it Mean to be Mature?, the figure below illustrates the popular Gartner Hype Cycle.  You can click on the illustration to get a clearer image.

In the context of the Gartner Hype Cycle, CEP is closer to the “Technology Trigger” phase than to anywhere else in the hype cycle.  CEP has not yet reached the “Peak of Inflated Expectations,” but it is inching closer and closer.

In addition, as a correlating reference point, if you look at a recent Gartner Hype Cycle that covers EDA (Event-Driven Architecture), for example, you will find that EDA is at a similar point, the “Technology Trigger” phase.


On the Maturity of CEP

May 31, 2008

Deciphering the Myths Around Complex Event Processing by Ivy Schmerken stimulated a recent flurry of blog posts about the maturity of CEP, including Mark Palmer’s CEP Myths: Mature or Not? and Opher Etzion’s On Maturity.

I agree with Ivy.  CEP is not yet a mature technology by any stretch of the imagination.  In fact, I agree with all three of Ivy’s main points about CEP.

In 1998, David C. Luckham and Brian Frasca published a paper, Complex Event Processing in Distributed Systems (Postscript Version), on a new technology called complex event processing, or CEP.  In that seminal paper on CEP, the authors said, precisely:

“Complex event processing is a new technology for extracting information from message-based systems.”

Ten years later, there are niche players, mostly self-proclaimed CEP vendors, who do very little in the way of extracting critical, undiscovered information from message-based, or event-based, systems.

A handful of these niche players have informally redefined CEP as “performing low latency calculations across streaming market data.”  The calculations they perform are still relatively straightforward, and they focus on how to promote white-box algo trading with commercial-off-the-shelf (COTS) software.  In this domain, we might be better off not using the term CEP at all, as this appears to be simply a new-fangled type of COTS algo trading engine.

The real domain of CEP, we thought, was in detecting complex events, sometimes referred to as situations, from your digital event-driven infrastructure – the “event soup,” for lack of a better term.  In this domain, CEP, as COTS software, is still relatively immature, and the current self-styled COTS CEP software on the market today is not yet tooled to perform complex situational analysis.

This perspective naturally leads to more energy flowing in and around the blogosphere, as folks “dumb down” and redefine CEP to suit their marketing strategies.  The result is more confusion for customers who want CEP capabilities that have nothing to do with low latency, high throughput algo trading over streaming market data – which maybe we should call “Capital Market Event Stream Processing,” or CESP.  But wait, we don’t really need more acronyms!

Hold on just a minute!  Wasn’t it just a short couple of years ago that folks were arguing that, in capital markets, it was really ESP, not CEP, remember?  Now folks are saying that it is really CEP and that CEP is mature?   

CEP is mature?  CEP is really not ESP?  CEP is really event-driven SOA?  CEP is really real-time BI?  CEP is really low latency, high throughput, white-box COTS algo trading?  CEP is really not a type of BPM?  CEP is not really for detecting complex events?  Complex does not really mean complex?

Come on guys, give us a break! 

(Anyway, no one is going to give us a break….  so stay tuned!)



Is CEP a Service or a Process? Reloaded

May 30, 2008

In Is CEP a Service or a Process? Paul Vincent of TIBCO blogs that any classification of CEP depends on the application, concluding that CEP is both a process and a service. 

Well (sorry Paul!), I disagree.  CEP is neither a process nor a service; CEP is a conceptual architecture for processing complex events.   (I have advocated a CEP functional reference architecture, as most readers know.)

To illustrate this point, let’s take a quick look at another functional reference architecture (or, if you prefer, a conceptual architecture): distributed computing.

Is distributed computing a service or a process?

Of course, it is neither a process nor a service; distributed computing is a generic architectural pattern (or style) for processing distributed data, generally across a network.

The same question can be asked of SOA. 

Is SOA a process or a service?

Again, the answer is almost identical. 

SOA is an architectural style (subclass) of distributed computing.

Now, is CEP a process or a service?

CEP is an architectural style (or pattern) for processing complex events.

CEP is neither a process nor a service. 

On the other hand, there are components of a CEP solution that can be represented as a stand-alone process or a service.   The same can be said of EAI, SOA, and other subclasses of distributed computing architectural styles and patterns.


Open Service Event Management

May 17, 2008

One of the benefits of working in different countries is getting the perspectives of various clients’ event processing problems.    Of interest to event processing professionals: companies are moving away from expensive software solutions and increasingly toward experimenting with economical, open software packages to solve complex problems.

Recently, I was talking with a client about their experience with commercial security event management (SEM) solutions, for example ArcSight.   In his opinion, ArcSight was not an economically viable solution for his company, so he recommended I take a look at Open Service Event Management (OSEM). 
 
OSEM helps organizations collect, filter, and send problem reports for supported systems (ProLiant and Integrity) running compatible agents.   OSEM automatically sends service event notifications when system problems are detected.

I have not had a chance to look under the hood of OSEM to see how it can be used to collect and send events to emerging rule-based event processing engines.    However, this looks like an interesting lab project, and I would like to hear from readers who have experimented with this systems architecture.


Clouding and Confusing the CEP Community

April 20, 2008

Ironically, our favorite software vendors have decided, in a nutshell, to redefine Dr. David Luckham’s definition of “event cloud” to match the lack-of-capabilities in their products.  

This is really funny, if you think about it. 

The definition of “event cloud” was coordinated over a long (over two year) period with the leading vendors in the event processing community and is based on the same concepts in David’s book, The Power of Events. 

But, since the stream-processing oriented vendors do not yet have the analytical capability to discover unknown causal relationships in contextually complex data sets, they have chosen to reduce and redefine the term “event cloud” to match their products’ lack of capability.  Why not simply admit they can only process a subdomain of the CEP space as defined by both Dr. Luckham and the CEP community-at-large?

What’s the big deal?   Stream processing is a perfectly respectable profession!

David, along with the “event processing community,” defined the term “event cloud” as follows:

Event cloud: a partially ordered set of events (poset), either bounded or unbounded, where the partial orderings are imposed by the causal, timing and other relationships between the events.

Notes: Typically an event cloud is created by the events produced by one or more distributed systems. An event cloud may contain many event types, event streams and event channels. The difference between a cloud and a stream is that there is no event relationship that totally orders the events in a cloud. A stream is a cloud, but the converse is not necessarily true.

Note: CEP usually refers to event processing that assumes an event cloud as input, and thereby can make no assumptions about the arrival order of events.
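To make the poset idea concrete, here is a minimal Python sketch (my own illustration, not from the glossary) of the stream-versus-cloud distinction: a stream is totally ordered, while a cloud may contain event pairs with no ordering relationship at all.  The event names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    causes: frozenset = frozenset()  # events known to causally precede this one

def precedes(a, b):
    """True if event a (transitively) causally precedes event b."""
    return a in b.causes or any(precedes(a, c) for c in b.causes)

def totally_ordered(events):
    """A stream: every pair of distinct events is comparable."""
    return all(precedes(a, b) or precedes(b, a)
               for i, a in enumerate(events) for b in events[i + 1:])

# A stream: tick1 -> tick2 -> tick3, a total order
e1 = Event("tick1")
e2 = Event("tick2", frozenset({e1}))
e3 = Event("tick3", frozenset({e2}))
assert totally_ordered([e1, e2, e3])

# A cloud: newsA and tradeB share a cause but are mutually unordered,
# so the set is only partially ordered
e4 = Event("newsA", frozenset({e1}))
e5 = Event("tradeB", frozenset({e1}))
assert not totally_ordered([e1, e4, e5])
```

Per the definition above, the first set qualifies as a stream (and hence also a cloud), while the second is a cloud that is not a stream.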

Oddly enough, quite a few event processing vendors seem to have succeeded at confusing their customers, as evident in this post, Abstracting the CEP Engine, where a customer has seemingly been convinced by the (disinformational) marketing pitches  – “there are no clouds of events, only ordered streams.”

I think the problem is that folks are not comfortable with uncertainty and hidden causal relationships, so they give the standard “let’s run a calculation over a stream” example and state “that is all there is…” confusing the customers who know there is more to solving complex event processing problems.

So, let’s make this simple (we hope), referencing the invited keynote at DEBS 2007, Mythbusters: Event Stream Processing Versus Complex Event Processing.

In a nutshell…. (these examples are in the PDF above, BTW)

The set of market data from Citigroup (C) is an example of multiple “event streams.”

The set of all events that influence the NASDAQ is an “event cloud”.

Why?

Because a stream of market data is a linearly ordered set of transactions, related by the timestamp of each transaction and linked (relatively speaking) in context because it is Citigroup market data.    So, event processing software can process a stream of market data, perform a VWAP if it chooses, and estimate a good time to enter and exit the market.  This is “good.”
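For readers unfamiliar with the stream-side calculation mentioned above, here is a minimal sliding-window VWAP (volume-weighted average price) sketch in Python; the tick prices and volumes are invented for the example.

```python
from collections import deque

def vwap_window(trades, window=3):
    """For each arriving (price, volume) tick in a time-ordered stream,
    compute the VWAP over the most recent `window` ticks."""
    recent = deque(maxlen=window)  # old ticks fall off automatically
    out = []
    for price, volume in trades:
        recent.append((price, volume))
        total_volume = sum(v for _, v in recent)
        out.append(sum(p * v for p, v in recent) / total_volume)
    return out

# Hypothetical Citigroup ticks, already totally ordered by time
ticks = [(21.10, 100), (21.15, 200), (21.05, 300), (21.20, 100)]
print(vwap_window(ticks))
```

This is exactly the kind of linear, time-ordered computation that stream-oriented engines do well: each result depends only on a bounded window of a totally ordered input.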

However, the same software, at this point in time, cannot process the many market data feeds in NASDAQ and provide a reasonable estimate of why the market moved in a certain direction based on a statistical analysis of a large set of event data where the cause-and-effect features (in this case, relationships) are difficult to extract.  (BTW, this is generally called “feature extraction” in the scientific community.)

Why?

Because the current state-of-the-art of stream-processing oriented event processing software does not perform the backwards chaining required to infer causality from large sets of data where causality is unknown, undiscovered, and uncertain.

Forward chaining, continuous queries, and time series analytics across sliding time windows of streaming data can only cover a subset of the overall CEP domain as defined by Dr. Luckham et al.
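To show what that subset looks like in practice, here is a minimal continuous-query sketch in Python (my own illustration, with a made-up feed): as each event arrives, a rule fires over a trailing time window.  Facts flow forward; nothing is inferred about unseen causes.

```python
from collections import deque

def sliding_count(events, window_secs, predicate):
    """Continuous query over a stream: for each arriving (timestamp, payload)
    event, count how many events in the trailing time window satisfy
    `predicate`.  This is forward chaining only: arriving facts trigger the
    rule; no backwards chaining over hidden causal relationships occurs."""
    window = deque()
    counts = []
    for ts, payload in events:
        window.append((ts, payload))
        # Evict events that have aged out of the sliding time window
        while window and window[0][0] <= ts - window_secs:
            window.popleft()
        counts.append(sum(1 for _, p in window if predicate(p)))
    return counts

# Hypothetical feed of (timestamp_secs, symbol) ticks
feed = [(0, "C"), (1, "C"), (2, "MSFT"), (5, "C"), (6, "C")]
print(sliding_count(feed, window_secs=3, predicate=lambda s: s == "C"))
```

Everything here assumes a totally ordered input with known timestamps, which is precisely why this style of processing addresses streams, and only a subset of the broader event-cloud problem.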

It is really that simple.   Why cloud and confuse the community?

We like forward chaining using continuous queries and time series analysis across sliding time windows of streaming data. 

  • There is nothing dishonorable about forward chaining using continuous queries and time series analysis across sliding time windows of streaming data.   
  • There is nothing wrong with forward chaining using continuous queries and time series analysis across sliding time windows of streaming data. 
  • There is nothing embarrassing about forward chaining using continuous queries and time series analysis across sliding time windows of streaming data. 

Forward chaining using continuous queries and time series analysis across sliding time windows of streaming data is a subset of the CEP space, just like the definition above, repeated below:

The difference between a cloud and a stream is that there is no event relationship that totally orders the events in a cloud. A stream is a cloud, but the converse is not necessarily true.

It is really simple.   Why cloud a concept so simple and so accurate?