This week I gave a presentation on complex event processing at Wealth Management Asia 2007, where I had the chance to field some tough questions from risk management experts working for some of the top banks in the region.
In particular, one of the attendees voiced strong scepticism about emerging event processing technologies. The basis for his scepticism was, in his words, that the other “65 systems” the bank had deployed to detect fraud and money laundering (AML) simply did not work. He cited Mantas as one of the expensive systems that did not meet the bank’s requirements.
My reply was that one of the advantages of emerging event processing platforms is the “white box” ability to add new rules, or other analytics, “on the fly” without the need to go back to the vendor for another expensive upgrade.
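To make “on the fly” concrete, here is a minimal sketch using the classic API of the open-source Esper CEP engine. The WireTransfer event type, its fields, and the $10,000-per-minute threshold are illustrative assumptions for the example, not a recommendation for production AML rules. The point is that the rule is plain text handed to a running engine, so an analyst can deploy or retire detection logic without waiting for a vendor release cycle.

```java
import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;
import com.espertech.esper.client.EPStatement;
import com.espertech.esper.client.EventBean;
import com.espertech.esper.client.UpdateListener;

public class OnTheFlyRule {

    // Illustrative event class; the field names are assumptions.
    public static class WireTransfer {
        private final String account;
        private final double amount;
        public WireTransfer(String account, double amount) {
            this.account = account;
            this.amount = amount;
        }
        public String getAccount() { return account; }
        public double getAmount() { return amount; }
    }

    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.addEventType("WireTransfer", WireTransfer.class);
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);

        // Deploy a new detection rule at runtime -- no vendor upgrade required.
        // Flags any account moving more than $10,000 within a 60-second window.
        EPStatement rule = engine.getEPAdministrator().createEPL(
            "select account, sum(amount) as total " +
            "from WireTransfer.win:time(60 sec) " +
            "group by account having sum(amount) > 10000");

        rule.addListener(new UpdateListener() {
            public void update(EventBean[] newEvents, EventBean[] oldEvents) {
                if (newEvents == null) return;
                for (EventBean e : newEvents) {
                    System.out.println("ALERT: account " + e.get("account")
                            + " moved " + e.get("total") + " in the last minute");
                }
            }
        });

        // Feed events into the engine; the rule fires as the window fills.
        engine.getEPRuntime().sendEvent(new WireTransfer("ACCT-1", 6000));
        engine.getEPRuntime().sendEvent(new WireTransfer("ACCT-1", 7000));
    }
}
```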
Our friend the banker also raised the familiar problem of “garbage-in, garbage-out”: the data feeding real-time analytics is often not “clean enough” to inspire confidence in the processing results.
I replied that this is precisely the problem with stand-alone, detection-oriented systems that do not integrate with one another, his “65 systems” problem in a nutshell. Event processing solutions must be built on standards-based distributed communications, for example a high-speed messaging backbone or a distributed object caching architecture, so enterprises can correlate the output of different detection platforms to increase confidence. Increasing confidence, in this case, means lowering the false-alarm rate while, at the same time, increasing detection sensitivity.
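As a rough illustration of that correlation idea, the sketch below assumes two hypothetical upstream detectors (“aml-screening” and “fraud-scoring”) publish alerts onto a shared bus; an account is escalated only when independent systems agree within a ten-minute window. The detector names, the window length, and the two-source policy are all assumptions for the example, not a product design.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AlertCorrelator {

    record Alert(String source, String account, Instant at) {}

    private final Duration window = Duration.ofMinutes(10);
    private final Map<String, List<Alert>> recentByAccount = new HashMap<>();

    // Called for every alert arriving on the shared messaging backbone.
    public void onAlert(Alert alert) {
        List<Alert> recent =
                recentByAccount.computeIfAbsent(alert.account(), k -> new ArrayList<>());
        recent.removeIf(a -> a.at().isBefore(alert.at().minus(window)));
        recent.add(alert);

        // Count distinct detection systems flagging this account in the window.
        long sources = recent.stream().map(Alert::source).distinct().count();
        if (sources >= 2) {
            System.out.println("HIGH-CONFIDENCE ALERT: " + alert.account()
                    + " flagged by " + sources + " independent systems");
        }
    }

    public static void main(String[] args) {
        AlertCorrelator correlator = new AlertCorrelator();
        Instant now = Instant.now();
        correlator.onAlert(new Alert("aml-screening", "ACCT-1", now));
        correlator.onAlert(new Alert("fraud-scoring", "ACCT-1", now.plusSeconds(120))); // escalates
    }
}
```

The arithmetic behind the confidence gain is straightforward: if two roughly independent detectors each false-alarm on, say, 5% of accounts, requiring agreement drives the combined false-alarm rate toward 0.25%, while single-source alerts can still be routed to a lower-priority queue so sensitivity is not sacrificed.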
As I have learned over a 20-year career in IT consulting, the enemy of the right approach to solving a critical IT problem is the trail of failed solutions that preceded it. In this case, a long history of expensive systems that did not work as promised is creating scepticism about the benefits of CEP.