Some Comments on the EPTS Member Agreement

April 6, 2008

The Event Processing Technical Society (EPTS) has been meeting informally for around three years.  Now the EPTS Steering Committee has issued a Call for EPTS Founding Members, along with a related Member Agreement for the EPTS.

Here are my initial comments:

First, the Member Agreement calls for the EPTS Steering Committee (EPTSTC), basically the same committee that has been in place for the past three years, to continue its work for two more years before a general election.  This means the current committee will have been in place for more than five years before a general election is held.

My comment is that the folks currently on the EPTSTC are good people; there is no doubt about that.  However, I believe the event processing community would be better served if a general election were held within three months of formalizing the EPTS membership.

There are many reasons for doing this, and I don’t think it is necessary to describe all the benefits.  The benefits far outweigh any downside, so I would urge the EPTS Steering Committee to revise the Member Agreement immediately (especially because the agreement was sent out before soliciting general comments from the EPTS community at large).

Second, the Member Agreement specifies that every two years only half of the Steering Committee will stand for election.  I disagree with this approach and think the entire Steering Committee should stand for re-election every two years.

The rationale for this is that the EPTS Steering Committee is not a governing body like the US Congress, where staggered terms guard against swings in public sentiment that could affect national security.  It is much better to have the entire Steering Committee up for re-election every two years.

I have quite a few other concerns with the EPTS Member Agreement.  Basically, the agreement needs to be rewritten with an eye toward a more flexible, open and inclusive process that puts the future of the EPTS squarely into the hands of the event processing community, not a small group of well-intentioned folks who represent only a small part of the overall event processing community and worldview.

In closing, the EPTS Membership Agreement should be rewritten, and the draft should go out for open comments before a final version is sent (rather than after the fact, as was done this time).  The entire process should also be transparent, in my opinion.


Please Welcome Dr. Rainer von Ammon to The CEP Blog

February 12, 2008

Today is an especially joyful occasion on The CEP Blog.    I am pleased to announce that one of the world’s top experts on CEP, Dr. Rainer von Ammon, has joined the blog.

Dr. Rainer von Ammon is managing director of the Centrum für Informations-Technology Transfer (CITT) in Regensburg. Until October 2005 he was Professor for Software Engineering, specializing in E-Business infrastructures and distributed systems, at the University of Applied Sciences Upper Austria. Rainer still teaches there and at the University of Applied Sciences of Regensburg. From 1998 to 2002, he worked as Principal Consultant and Manager for R+D Cooperations at BEA Systems (Central and Eastern Europe). Prior to this, he was Professor for Software Engineering in Dresden, focusing on the development of applications with event-driven, object-oriented user interfaces and component-based application development. Before that, Rainer was manager of the Basic Systems group at Mummert + Partner Unternehmensberatung, Hamburg. After finishing his studies in Information Sciences at the University of Regensburg, he was project leader for Computer Based Office Systems (COBIS) from 1978 to 1983 and afterward founded a start-up company with some of his colleagues.

Some of you may recall my recent musings, A Bitter Pill To Swallow: First Generation CEP Software Needs To Evolve.   When you read Rainer’s excellent reply, you will quickly see why we are very pleased to have his thought leadership here at The CEP Blog.  Dr. von Ammon and his team are leading experts in CEP and related business integration domains.  Not only does he provide thought leadership, his team  researches, develops, implements and tests CEP solutions.   

In another example of his thought leadership, some of you might recall this post, Brandl and Guschakowski Deliver Excellent CEP/BAM Report, in which Hans-Martin Brandl and David Guschakowski of the University of Applied Sciences Regensburg, Faculty of Information Technology/Mathematics, advised by Dr. von Ammon, completed an excellent CEP thesis, Complex Event Processing in the context of Business Activity Monitoring.

Please join me in extending a warm welcome for Dr. Rainer von Ammon to The CEP Blog.


CEP Center of Excellence for Cybersecurity at Software Park Thailand

December 16, 2007

In July 2007, at InformationSecurityAsia2007,  I unveiled an idea to create a cybersecurity CEP Center of Excellence (COE) in Thailand.  Under the collaborative guidance of Dr. Rom Hiranpruk, Deputy Director, Technology Management Center, National Science and Technology Development Agency (NSTDA), Dr. Prinya Hom-anek, President and Founder, ACIS Professional Center, and Dr. Komain Pipulyarojana, Chief National Security Section, National Electronics and Computer Technology Center (NECTEC), this idea continues to move forward.

Today, in a meeting with Mrs. Suwipa Wanasathop, Director, Software Park Thailand, and her executive team, we reached a tentative agreement to host the CEP COE at Software Park.   

The mission of Software Park Thailand is to be the region’s premier agency supporting entrepreneurs to help create a strong world-class software industry that will enhance the strength and competitiveness of the Thai economy.

Since 2001, Thailand’s software industry has experienced approximately 20% year-over-year (YOY) growth.  Presently, Software Park Thailand supports a business-technology ecosystem with over 300 active participants employing over 40,000 qualified software engineers across a wide range of technology domains.

I am very pleased that Software Park Thailand is excited about the potential benefits of CEP in the area of cybersecurity and detection-oriented approaches to cyberdefense. The COE will be working with best-of-breed CEP vendors to build, test and refine rule-based system (RBS), neural network (NN), and Bayesian network (BN) approaches (as well as other detection methods) for cybersecurity.

I will be announcing more details in the future, so stay tuned.  Please feel free to contact me if you have any questions.


Bankers Voice Scepticism Over New Event Processing Technologies

November 28, 2007

This week I completed a presentation on complex event processing at Wealth Management Asia 2007 where I had a chance to field some tough questions from risk management experts working for some of the top banks in the region.

In particular, one of the meeting attendees voiced strong scepticism over emerging event processing technologies.   The basis for his scepticism was, in his words, that the other “65 systems” the bank had deployed for fraud detection and anti-money laundering (AML) simply did not work.  In particular, he referenced Mantas as one of the expensive systems that did not meet the bank’s requirements.

My reply was that one of the advantages of emerging event processing platforms is the “white box” ability to add new rules, or other analytics, “on the fly” without the need to go back to the vendor for another expensive upgrade. 

Our friend the banker also mentioned the huge problem of “garbage-in, garbage-out” where the data for real-time analytics is not “clean enough” to provide confidence in the processing results. 

I replied that this is always the problem with stand-alone detection-oriented systems that do not integrate with each other, for example his “65 systems problem.”    Event processing solutions must be based on standards-based distributed communications, for example a high speed messaging backbone or distributed object caching architecture, so enterprises may correlate the output of different detection platforms to increase confidence.   Increasing confidence, in this case, means lowering false alarms while, at the same time, increasing detection sensitivity.
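To make the correlation point concrete, here is a minimal sketch of how combining the outputs of several detection platforms can raise confidence. The detector names and scores are illustrative only, and the math assumes the detectors err independently, which real deployments would need to validate:

```python
# Minimal sketch: correlating the outputs of several independent fraud
# detectors to raise overall confidence. Detector names and scores are
# illustrative, not drawn from any real system.

def combined_confidence(scores):
    """Combine per-detector probabilities that an event is fraudulent.

    Assuming detector errors are independent, the combined probability
    is 1 minus the product of each detector's 'miss' probability.
    """
    miss = 1.0
    for s in scores:
        miss *= (1.0 - s)
    return 1.0 - miss

# Three detectors, each only 60% confident on its own...
alerts = {"rules_engine": 0.6, "neural_net": 0.6, "bayes_net": 0.6}
conf = combined_confidence(alerts.values())
# ...together yield a much higher correlated confidence.
```

In practice, the thresholds on the combined score are what trade false-alarm rate against detection sensitivity: a higher correlated confidence lets an enterprise raise the alerting threshold without missing true positives.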

As I have learned over a 20-year career in IT consulting, the enemy of the right approach to solving a critical IT problem is the trail of previous failed solutions.   In this case, a long history of expensive systems that did not work as promised is creating scepticism over the benefits of CEP.


Original Survey on Event Processing Languages

November 19, 2007

A few of us have been discussing event processing languages (EPLs) for a number of years, advocating that SQL-like languages are appropriate for certain classes of CEP/EP problems, but not all.

Some readers might recall that I published a draft survey on EPLs to the Yahoo! CEP Interest group titled, (DRAFT) A Survey of Event Processing Languages (EPLs), October 15, 2006 (version 14).

A number of us CEP “grey beards” have consistently advocated that there are EPLs and analytics that are optimal for certain classes of event processing problems (and, in turn, there also are EPLs that are suboptimal for certain classes of event processing problems).

For readers who do not frequent the Yahoo! CEP group, here is a link to a copy of the original survey.


Clustered Databases Versus Virtualization for CEP Applications

November 16, 2007

In my earlier post, A Model For Distributed Event Processing, I promised to address grid computing, distributed object caching and virtualization, and how these technologies relate to complex event processing.   Some of my readers might forget my earlier roots in networking if I continue to talk about higher level abstractions!  So, in this follow-up post I will discuss virtualization relative to database clustering.

In typical clustered database environments there are quite a few major performance constraints.  These constraints limit our capability to architect and design solutions for distributed, complex, cooperative event processing problems and scenarios.  Socket-based interprocess communications (IPC) within database clusters creates a performance bottleneck constrained by low bandwidth, high latency, and processing overhead.

In addition, the communications performance between the application layer and the database layer can be limited by both TCP and operating-system overhead.  To make matters worse, hardware input-output constraints limit the scalability of connecting database servers to disk storage.   These are standard distributed computing constraints.

The physical architecture needed to address scalability in emerging distributed CEP solutions requires a low-latency network communications infrastructure (sometimes called a fabric).  This simply means that event processing agents (EPAs) benefit from virtualization technologies such as Remote Direct Memory Access (RDMA).  CEP agents (often called CEP engines) should have the capability to write data directly into the memory spaces of a CEP agent fabric (sometimes called an event processing network, or EPN).   This is the concept of shared memory as an IPC in UNIX-based systems, applied to distributed computing, so all “old hat” UNIX systems engineers will easily grok these concepts.
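The shared-memory analogy can be sketched in a few lines. This is a single-host toy, not RDMA: two “agents” exchange an event record through a named shared-memory segment instead of a socket. The segment name and the record format are invented for illustration:

```python
# Toy sketch of shared memory as an IPC, the UNIX concept the RDMA
# analogy rests on: one process writes an event record directly into
# a memory segment that another process can read, with no socket hop
# or serialization in between. Names and record format are illustrative.
from multiprocessing import shared_memory

# "Agent A" creates a named segment and writes an event record into it.
segment = shared_memory.SharedMemory(create=True, size=64, name="epf_demo")
record = b"event:login-failure;host:10.0.0.7"
segment.buf[:len(record)] = record

# "Agent B" attaches to the same segment by name and reads the record
# directly out of memory.
reader = shared_memory.SharedMemory(name="epf_demo")
received = bytes(reader.buf[:len(record)])

# Clean up: detach both handles, then free the segment.
reader.close()
segment.close()
segment.unlink()
```

RDMA extends this idea across machines: the network adapter writes into a remote agent's memory directly, which is what removes the TCP and operating-system overhead discussed above.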

RDMA virtualization helps improve performance by bypassing operating-system and TCP overhead resulting in significantly higher bandwidth and lower latency in the EPF (Event Processing Fabric – I just minted a new three letter acronym, sorry!).  This, in turn, improves the communication speed between event processing agents in an event processing network (EPN), or EPF (depending on your taste in acronyms).

Scheduling tasks such as distributed semaphore checking and lock management can also operate more efficiently and with higher performance.    Distributed table scans, decision-tree searches, rule-engine evaluations, and Bayesian and neural analytics can all be performed in parallel, dramatically improving both the performance and scalability of distributed event processing applications.

In addition, by adopting transparent protocols with existing socket APIs, the CEP architect can bypass both operating-system and TCP protocol overhead.   In other words, communications infrastructures for CEP that optimize networking, interprocess communications, and storage give architects the underlying tools to build better solutions to computationally complex problems.

Many of the communications constraints of earlier distributed architectures for solving complex problems, such as blackboard architectures, can be mitigated with advances in virtualization.  So, in a nutshell, virtualization technologies are, in my opinion, one of the most important underlying capabilities required for distributed, high-performance CEP applications.

The article, Virtualization hot; ITIL, IPv6 not,  appears to indicate that some of the top IT managers at Interop New York might agree with me.  

Unfortunately for a few software vendors, virtualization threatens to dilute their market share for ESB and message bus sales.  (OBTW, SOA is DOA.)   “Old hat” UNIX system programmers will recall how the UNIX IPC called “message queues” lost favor to sockets, pipes and shared memory.   A similar trend is happening in the virtualization world, with RDMA as a distributed shared-memory technology versus message-based communications technologies.  I will opine more on this topic later.


Analytical Patterns for Complex Event Processing

October 31, 2007

Back in March of 2006 during my enjoyable times at TIBCO Software, I presented a keynote at the first event processing symposium, Processing Patterns for Predictive Business.   In that presentation, I introduced a functional event processing reference architecture and highlighted the importance of mapping the business requirements for event processing to appropriate processing analytics and patterns.  The figure below is a screenshot of slide 26 of that presentation:

Slide 26

The idea behind the illustration above was that it is essential for organizations to look at their business problems and determine the best processing pattern, or processing analytics, in the context of the problem they are trying to solve.   I also graphically illustrated a few examples of event processing analytics relevant to CEP, including:

  • Rule-Based Inference;
  • Bayesian Belief Networks (Bayes Nets);
  • The Dempster-Shafer Method;
  • Adaptive Neural Networks;
  • Cluster Analysis; and
  • State-Vector Estimation.
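To make one of the analytics above concrete, here is a minimal sketch of a Bayesian update applied to an event stream: the belief that some hypothesis (say, an intrusion) is true is revised as each suspicious event arrives. The prior and likelihood numbers are made up purely for illustration:

```python
# Minimal sketch of Bayesian belief revision, one of the analytics
# listed above: update P(hypothesis) as each new event is observed.
# The prior and likelihoods below are illustrative numbers only.

def bayes_update(prior, p_event_given_h, p_event_given_not_h):
    """Posterior P(H | event) via Bayes' rule."""
    numerator = p_event_given_h * prior
    evidence = numerator + p_event_given_not_h * (1.0 - prior)
    return numerator / evidence

belief = 0.01                      # prior: the hypothesis is rare
for _ in range(3):                 # three suspicious events in a row
    belief = bayes_update(belief,
                          p_event_given_h=0.7,      # likely if H is true
                          p_event_given_not_h=0.1)  # unlikely otherwise
# belief rises sharply as evidence accumulates
```

The same update loop generalizes to Bayesian belief networks, where many such conditional dependencies are chained together; the point of the slide was that the business problem should dictate which of these analytics is applied.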

The key takeaway for that part of my presentation was that many analytics for CEP already exist in the art & science of mature multi-sensor data fusion, and these analytics can be mapped to recurring business patterns in event processing. I illustrated this point in slide 28 with the figure below (for illustrative purposes only):

Slide 28

In future posts on this topic I will elaborate by discussing analytics at each level of the functional CEP reference architecture, highlighting where different analytical methods and patterns can be efficiently applied to solve real-world event processing business problems.