Posted by: Tim Bass
One of the biggest problems prior to the commercialization of event processing engines was that each event processing application was yet another hand-built, custom implementation.
For example, event processing history in the military goes back many years before the (commercial) history that Mark and John kindly offer. There are a number of books on the topic, particularly in the sensor fusion area, where processing distributed sensor data is the same as processing events; the events just happen to be sensor data. These implementations were (and still are) expensive, custom-built event processing applications.
Event processing was also a big part of the heritage of large-scale modelling and simulation, and much of the genesis of David Luckham’s DARPA-funded Rapide and CEP work follows that school of thought. Similarly, prior to the commercialization of CEP and ESP engines, these applications were (and still are) very expensive custom implementations. In many cases, today’s CEP engines are not yet advanced enough to be “white boxes” for extremely challenging command-and-control (C2) event processing applications.
One reason is that all of these “extremely complex” event processing applications face a very large challenge: for example, task scheduling between distributed, heterogeneous event sources. Most commercial CEP and ESP engines are still relatively immature in their ability to manage the complexity of distributed scheduling tasks.
In addition, when event processing requires specialization, cooperation, and distributed object caching and distribution between distributed CEP engines (event processing agents), another layer of scheduling complexity arises.
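To make the scheduling challenge concrete, here is a minimal sketch, in Python, of a single event processing agent that merges events arriving out of order from heterogeneous sources and dispatches them in timestamp order. All names here (`EventProcessingAgent`, the "radar"/"sonar" sources) are illustrative assumptions, not any vendor's API or the author's design; real distributed agents must also handle clock skew, late arrivals, and coordination across machines, which this toy deliberately omits.

```python
import heapq
from dataclasses import dataclass, field
from typing import Any, Callable

# Illustrative sketch only: one agent, one naive scheduling policy
# (global timestamp order), no distribution or fault tolerance.

@dataclass(order=True)
class Event:
    timestamp: float
    source: str = field(compare=False)
    payload: Any = field(compare=False)

class EventProcessingAgent:
    def __init__(self) -> None:
        self._queue: list = []      # min-heap ordered by timestamp
        self._handlers: dict = {}   # source name -> handler callable

    def register(self, source: str, handler: Callable) -> None:
        # Each heterogeneous source gets its own handler.
        self._handlers[source] = handler

    def submit(self, event: Event) -> None:
        # Events may arrive out of order from distributed sources.
        heapq.heappush(self._queue, event)

    def run(self) -> list:
        # Drain the queue, dispatching events in timestamp order.
        processed = []
        while self._queue:
            ev = heapq.heappop(self._queue)
            self._handlers[ev.source](ev)
            processed.append(f"{ev.timestamp}:{ev.source}")
        return processed

agent = EventProcessingAgent()
log = []
agent.register("radar", lambda e: log.append(("radar", e.payload)))
agent.register("sonar", lambda e: log.append(("sonar", e.payload)))
agent.submit(Event(2.0, "radar", "track-update"))
agent.submit(Event(1.0, "sonar", "contact"))
order = agent.run()
```

Even in this single-process toy, the scheduling policy (strict timestamp order) is a design choice; across multiple cooperating agents, agreeing on such a policy is where much of the complexity described above arises.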
So, while we would all agree that we have come a long way in the commercialization of CEP and ESP engines, we would also agree that we still have a long way to go.
Copyright © 2007 by Tim Bass, All Rights Reserved.