Bankers Voice Scepticism Over New Event Processing Technologies

November 28, 2007

This week I completed a presentation on complex event processing at Wealth Management Asia 2007 where I had a chance to field some tough questions from risk management experts working for some of the top banks in the region.

In particular, one of the meeting attendees voiced strong scepticism over emerging event processing technologies.  The basis for his scepticism was, in his words, that the other “65 systems” the bank had deployed to detect fraud and money laundering (AML) simply did not work.  He cited Mantas, in particular, as one of the expensive systems that did not meet the bank’s requirements.

My reply was that one of the advantages of emerging event processing platforms is the “white box” ability to add new rules, or other analytics, “on the fly” without the need to go back to the vendor for another expensive upgrade. 

Our friend the banker also mentioned the huge problem of “garbage-in, garbage-out” where the data for real-time analytics is not “clean enough” to provide confidence in the processing results. 

I replied that this is always the problem with stand-alone, detection-oriented systems that do not integrate with each other – his “65 systems” problem.  Event processing solutions must be based on standards-based distributed communications, for example a high-speed messaging backbone or a distributed object caching architecture, so enterprises can correlate the output of different detection platforms to increase confidence.  Increasing confidence, in this case, means lowering false alarms while, at the same time, increasing detection sensitivity.
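To make the correlation idea concrete, here is a minimal sketch in Python, assuming two hypothetical alert sources (an AML monitor and a fraud-rules engine) publishing onto a common backbone; the source names, account IDs, window size and two-source threshold are all invented for the illustration and do not refer to any particular product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical illustration: correlate alerts from independent detection
# systems on a shared key (here, an account ID) within a time window.
# An alert is escalated only when multiple systems agree, which lowers
# false alarms while each individual detector stays sensitive.

CORRELATION_WINDOW = timedelta(minutes=10)
MIN_CORROBORATING_SOURCES = 2

alerts_by_account = defaultdict(list)  # account_id -> [(source, timestamp)]

def on_alert(source, account_id, timestamp):
    """Called for every alert published on the messaging backbone."""
    window_start = timestamp - CORRELATION_WINDOW
    # Keep only alerts that are still inside the correlation window.
    recent = [(s, t) for s, t in alerts_by_account[account_id] if t >= window_start]
    recent.append((source, timestamp))
    alerts_by_account[account_id] = recent

    distinct_sources = {s for s, _ in recent}
    if len(distinct_sources) >= MIN_CORROBORATING_SOURCES:
        escalate(account_id, distinct_sources)

def escalate(account_id, sources):
    print(f"High-confidence alert on account {account_id}: corroborated by {sorted(sources)}")

# Example feed: two different systems flag the same account within minutes.
on_alert("aml-monitor", "ACCT-1001", datetime(2007, 11, 28, 9, 0))
on_alert("fraud-rules", "ACCT-1001", datetime(2007, 11, 28, 9, 4))
```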

As I have learned over a long 20-year career in IT consulting, the enemy of the right approach to solving a critical IT problem is the trail of previous failed solutions.  In this case, a long history of expensive systems that do not work as promised is creating scepticism over the benefits of CEP.


Customers Voice Concerns Over Rule-Based Systems in APAC

November 25, 2007

We just completed the 7th Cyber Defense Initiative Conference 2007 in Bangkok.  There were more than 700 attendees in the main conference hall and nearly 300 people in the exhibition areas, bringing the total number of attendees to approximately 1,000, according to the conference organizers.

I had many opportunities to discuss event processing and security with a number of Thai business executives, most of whom have technology-related doctorate degrees from the US and Europe, including directors from the National Science and Technology Development Agency (NSTDA), the National Electronics and Computer Technology Center (NECTEC) and some of the largest telecommunications and financial services providers in Thailand.

Most of the business leaders expressed an interest in Bayesian and neural networks for event processing applications related to security and cyber defense initiatives.  These business and technology leaders also mentioned that rule-based event processing systems are not feasible for most enterprise classes of cyber defense applications.  This seems to confirm what I have been posting on this blog, and what hands-on customers and scientists keep telling me – rule-based systems are useful for some cyber security and cyber defense applications, but most advanced detection techniques require a self-learning, statistical approach.
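As a toy illustration of the statistical idea (and only that – the likelihood numbers below are invented, and a real system would learn them from labeled historical data), here is a minimal naive-Bayes style scorer that combines several weak indicators into a single probability rather than firing on one hard-coded rule.

```python
import math

# Illustrative only: combine weak indicators into P(malicious | evidence)
# instead of triggering on a single hard-coded rule.

# P(indicator | malicious) and P(indicator | benign) for a few event features.
LIKELIHOODS = {
    "foreign_ip":      (0.70, 0.20),
    "odd_hours":       (0.60, 0.30),
    "new_destination": (0.50, 0.25),
}
PRIOR_MALICIOUS = 0.01  # assumed base rate of malicious events

def malicious_probability(indicators):
    """Naive-Bayes combination of observed indicators."""
    log_m = math.log(PRIOR_MALICIOUS)
    log_b = math.log(1.0 - PRIOR_MALICIOUS)
    for name, present in indicators.items():
        p_m, p_b = LIKELIHOODS[name]
        log_m += math.log(p_m if present else 1.0 - p_m)
        log_b += math.log(p_b if present else 1.0 - p_b)
    return 1.0 / (1.0 + math.exp(log_b - log_m))

print(malicious_probability({"foreign_ip": True, "odd_hours": True, "new_destination": False}))
```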

I also heard from experts that you can find just about every cyberfraud imaginable in the Asia Pacific region, where criminals are aggressively seeking to exploit and profit from any vulnerability they can find.   A CTO of a major telecommunications provider mentioned very positive results with an implementation of neural networks in cyber defense applications.    

This week I am speaking at another conference.  I’ll try to report a bit more on the 7th Cyber Defense Initiative Conference 2007 in Bangkok when things slow down a bit! 

In closing today, Dr. Prinya Hom-Anek, CISSP, CISA, CISM, SANS GIAC, GCFW, CompTIA Security+, founder and acting president of TISA, and his ACIS Professional Center team did a fantastic job organizing and hosting the conference.  It was one of the best conferences I have attended, to date, in 2007.


Timer and Time-Based Events

November 25, 2007

Opher asks in his blog post, “Is there a non-event event?”, about absence as a pattern.

All “non-event” examples that the event processing community has offered, to date, have been based on the premise that an event is created when something does not happen based on time, a schedule or a timer – and many seem to want to refer to this as a “non-event.”

Opher is now calling this situation an “absent event.”

In my opinion, the correct term should be a “time-based event” because the event is triggered by a timer or similar time-based object; the term “non-event” should be dropped from the event processing glossary.
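A minimal sketch of what I mean, using a plain timer: if the expected event does not arrive before a deadline, the timer itself creates and publishes the event. The class and event names below are hypothetical, for illustration only.

```python
import threading

# If the expected event does not arrive before the deadline, a timer fires
# and publishes an event of its own. Nothing here is a "non-event" -- the
# trigger is the timer, a time-based object.

class ExpectedEvent:
    def __init__(self, name, timeout_seconds, on_missing):
        self.name = name
        self.on_missing = on_missing
        self.timer = threading.Timer(timeout_seconds, self._expired)
        self.timer.start()

    def arrived(self):
        """Call when the expected event shows up; cancels the timer."""
        self.timer.cancel()

    def _expired(self):
        # The timer creates the "time-based event."
        self.on_missing(self.name)

def handle_missing(name):
    print(f"time-based event: '{name}' did not arrive before the deadline")

watch = ExpectedEvent("end-of-day settlement file", timeout_seconds=5.0, on_missing=handle_missing)
# watch.arrived()  # uncomment to simulate the expected event arriving in time
```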


CEP and SOA: An Event-Driven Architecture for Operational Risk Management

November 25, 2007

Here are the details for my December 12, 2007 presentation at The Asia Business Forum: Information Security Risk Assessment & Management, Royal Orchid Sheraton, Bangkok, Thailand:

11:15 – 12:15

CEP and SOA: An Event-Driven Architecture for Operational Risk Management

—  Review the business and market drivers for event-driven Operational Risk Management (ORM)
—  Learn about Complex Event Processing (CEP)
—  Dive into the details of the CEP reference architecture
—  Understand how to integrate SOA, CEP and BPM for event-driven ORM


COTS Software Versus (Hard) Coding in EP Applications

November 21, 2007

Opher Etzion has kindly asked me to write a paragraph or two on commercial off-the-shelf (COTS) software versus (hard) coding software in event processing applications.

My thoughts on this topic are similar to my earlier blog musings, Latency Takes a Back Seat to Accuracy in CEP Applications.

If you buy an EP engine (today) because it permits you to run some quick-and-dirty (rule-based) analytics against a stream of incoming events, without spending considerable software development costs, and the learning and implementation curves for the COTS are relatively low, this could obviously be a good business decision.  Having a software license for an EP engine that permits you to quickly develop and change analytics, variables and parameters on the fly is useful.

On the other hand, the current state of many CEP platforms, and their declarative programming and modelling capabilities, is that they focus on If-Then-Else, Event-Condition-Action (ECA), rule-based analytics.  Sophisticated processing requires more functionality than just ECA rules, because most advanced detection-oriented applications are not just ECA solutions.
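For readers unfamiliar with the style, here is a toy ECA rule over a stream of trade events, written as plain Python rather than in any vendor’s rule language; the field names and the notional threshold are invented for the example.

```python
from dataclasses import dataclass

# A toy Event-Condition-Action (ECA) rule: on each trade event, test a
# condition and, if it holds, execute an action.

@dataclass
class Trade:
    account: str
    symbol: str
    notional: float

def large_trade_condition(trade):          # Condition
    return trade.notional > 1_000_000

def flag_for_review(trade):                # Action
    print(f"review: {trade.account} traded {trade.notional:,.0f} in {trade.symbol}")

def on_trade(trade):                       # Event
    if large_trade_condition(trade):
        flag_for_review(trade)

for t in [Trade("ACCT-7", "XYZ", 250_000), Trade("ACCT-9", "ABC", 2_500_000)]:
    on_trade(t)
```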

For many classes of EP applications today, writing code may still be the best way to achieve the results (accuracy, confidence) you are looking for, because CEP software platforms have not yet evolved into plug-and-play analytical platforms that provide a wide range of sophisticated analytics in combination with quality modelling tools for all business users and their advanced detection requirements.

For this reason, and others I don’t have time to write about today, I don’t think we can make blanket statements such as “CEP is about using engines versus writing programs or hard coding procedures.”  Event processing, in the context of specific business problems, is the “what,” and using a CEP/EP modelling tool and execution engine is only one of the possible “hows” in an event processing architecture.

As we know, CEP/EP engines, and the marketplace for using them, are still evolving and maturing; hence, there are many CEP/EP applications, today, and in the foreseeable future, that require hard coding to meet performance objectives, when performance is measured by actual business-decision results (accuracy). 

Furthermore, as many of our friends point out, if you truly want the fastest, lowest latency possible, you need to be as close to the “metal” as possible, so C and C++ will always be faster than Java byte code running in a sandbox that is itself written in C or C++.

And, as you (Opher) correctly point out, and as most of the end users we talk to confirm, they do not process megaevents per second on a single platform, so this is a marketing red herring.  This brings us back to our discussions on the future of distributed object caching, grid computing and virtualization – so I’ll stop here and go out for some fried rice and shrimp, and some cold Singha beer.


The Asia Business Forum: Information Security Risk Assessment & Management

November 21, 2007

The Asia Business Forum is hosting a conference on Information Security Risk Assessment & Management, December 12-13, 2007 at the Royal Orchid Sheraton in Bangkok.  The conference organizers have kindly invited me to participate as a guest speaker.  I plan to discuss CEP in the context of operational risk management and will post the title and abstract of my talk as soon as the details are complete.  I will also be presenting CEP-related topics this week at the 7th Cyber Defense Initiative Conference 2007, and the following week at Wealth Management Asia.


Latency Takes a Back Seat to Accuracy in CEP Applications

November 21, 2007

Opher asks, “The only motivation to use EP COTS is to cope with high performance requirements” – true or false?

The answer: True and False.

If high performance is discussed in the context of event processing speed and latency, then it is Absolutely False that speed and latency are the most important performance criteria for event processing applications. 

Detection accuracy (the performance of the detection algorithms in detecting derived events or situations) is the most important criterion, hands down.

Emerging CEP/EP applications are centered around the concept of detecting (and acting upon) opportunities and threats in real time.  The most important performance criterion is the confidence in the detection of the derived event, or situation, depending on your EP vocabulary.

For example, one of the most promising areas for CEP/EP applications is fraud detection.  There is a fundamental tradeoff in most, if not all, detection-oriented systems – the tradeoff between false positives and false negatives.  The same is true for cybertrading and other detection-oriented applications.
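Here is a small, made-up illustration of that tradeoff: the same scored alerts evaluated at two thresholds. The scores and labels are invented; the point is only that a higher threshold cuts false alarms but misses more genuine fraud, while a lower one catches more fraud at the cost of more false alarms.

```python
# Toy data: (detection score, actually fraudulent?) -- invented for the example.
alerts = [
    (0.95, True), (0.80, True), (0.60, False), (0.55, True),
    (0.40, False), (0.30, False), (0.20, True), (0.10, False),
]

def evaluate(threshold):
    """Count true positives, false positives and missed frauds at a threshold."""
    tp = sum(1 for s, fraud in alerts if s >= threshold and fraud)
    fp = sum(1 for s, fraud in alerts if s >= threshold and not fraud)
    fn = sum(1 for s, fraud in alerts if s < threshold and fraud)
    return tp, fp, fn

for threshold in (0.5, 0.15):
    tp, fp, fn = evaluate(threshold)
    print(f"threshold {threshold}: caught {tp}, false alarms {fp}, missed {fn}")
```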

If you miss an opportunity or threat, it does not matter how fast you missed it, or how low the processing latency was – you simply missed it!  In theory, you could process events at just below the speed of light – so what?!  Making mistakes faster than others is not considered a superior skill that leads to a higher-paying job!  (Well, we have all known quite a few who made a lot of mistakes but were buddy-buddy with the boss, but that is another story for another day!)

Likewise, if you detect a false opportunity or threat, it does not matter if you detected it in nanoseconds, or if the latency was just below the speed of light.  Detecting false positives does not demonstrate superior performance.

Most, but not all, of the current CEP/EP vendors offer relatively simple rule-based detection approaches, and many have marketed “low latency” as their core capability.  The fact of the matter, well expressed by Kevin Pleiter and highlighted in Complex Event Processing – Believe the Hype? earlier this week, is that performance is critical – if the definition of performance is “accurate” and “actionable” detection.  Latency takes a back seat to accuracy, as it should.

Kevin echoed what I have been saying for a number of years in the CEP community: detection accuracy that leads to high-confidence, actionable business decisions is the most important performance criterion for CEP applications.

So, if we define performance in the context of event processing accuracy and confidence in decision making, then the answer is that it is Absolutely True that performance is one of the most important criteria for event processing applications.

Latency discussions are a distraction – a red herring intended to divert attention from the real problem, or matter, at hand.