Last week, I spent some time talking to PULSE, a Discover Financial Services company, about why it stepped away from a suite of vendors — including IBM, EMC, Cisco and Oracle — to implement a massive HP NonStop blade system using ReD’s real-time transaction monitoring technology to create the first real-time debit card fraud alert system. (The related announcement is here.)
It was an amazing conversation and a couple of things jumped out at me.
I’ve been following online fraud for some time and am well aware that debit cards are where we are most exposed. Generally, the banks have taken the position that it is cheaper to absorb the losses than to implement technology to prevent them, even though, should a breach result in your identity being stolen, it can take up to nine months and $250K (an estimate from a decade ago, likely higher now) to get things mostly back to where they were. You’ll likely never fully repair your credit score.
This is particularly true of debit cards, which aren’t afforded the same protections as credit cards (which is why I very rarely use mine).
As it turns out, the other problem, in addition to cost, is that banks won’t tolerate false positives (blocks that stop users’ legitimate transactions), because they drive up support costs and anger customers en masse. So the system PULSE had to create not only had to stop most fraud, it had to do so very cheaply and very accurately. It also turned out that it had to work in real time to maximize the impact, and this last requirement was particularly difficult.
The first big moment was when PULSE talked about its first attempt to implement this system, which turned into an administrative nightmare. It wasn’t that the EMC, IBM and Cisco hardware wasn’t competitive; it was that constantly working with technologies that weren’t designed to work together created an overhead of finger-pointing problems and required a level of staffing PULSE couldn’t sustain. In addition, this multi-vendor mess wasn’t particularly agile and didn’t allow PULSE to plan for the growth it was anticipating with its market-leading solution.
Reliability wasn’t particularly good either, at an estimated 97 percent, and PULSE knew it needed to get to 99.999 percent to be viable. Its customers, who range from the largest to the smallest debit card providers, aren’t very understanding of downtime.
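To put those two numbers in perspective, a quick back-of-the-envelope calculation (a sketch; only the two percentages come from PULSE) shows how wide the gap really is:

```python
def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of downtime per year implied by a given availability percentage."""
    minutes_per_year = 365.25 * 24 * 60  # ~525,960 minutes in a year
    return (1 - availability_pct / 100) * minutes_per_year

# 97 percent availability permits roughly 11 days of downtime a year;
# five nines permits only about five minutes.
print(round(downtime_minutes_per_year(97)))      # ~15,779 minutes (~11 days)
print(round(downtime_minutes_per_year(99.999)))  # ~5 minutes
```

In other words, the move from 97 percent to five nines isn’t an incremental tweak; it cuts allowable downtime by a factor of about 3,000.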
It found that by going with a single vendor, it could dramatically reduce support costs and dramatically increase reliability, performance and manageability, all critical to the success of this offering. While it also had IBM bid its System z, the company wasn’t left feeling that IBM could execute on what it needed, even though it is a big System z user in other parts of its business. Oracle wasn’t asked to bid for two reasons: the first is that Oracle’s Exadata solution wasn’t competitive, and the second is that Oracle penalized PULSE for anticipating growth and for building in redundancy.
What also came out of the early trial, and of PULSE purging Oracle from this solution, was that Oracle’s pricing penalized it for redundancy and growth planning. Apparently, if you plan for aggressive growth (doubling your capacity over a short time) and build for it, Oracle charges you for that capacity before you use it, and if you want redundancy, Oracle charges for that too. In effect, since PULSE planned to double its size and needed full mirroring, the charges from Oracle were 4x what it actually needed to operate. HP’s pricing was more pay-as-you-grow, and HP doesn’t charge for redundancy, so PULSE massively reduced its software spending, which helped fund this effort.
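The 4x figure falls out of simple multiplication. Here is a minimal sketch of the licensing math as described (the normalized factors are illustrative of the scenario in the text, not Oracle’s actual price list):

```python
# Capacity-based licensing charges for everything provisioned, not just what
# is used today. The two doubling factors compound.
needed_today = 1.0   # capacity PULSE actually needs to operate (normalized)
growth_factor = 2    # built out to double capacity for anticipated growth
mirror_factor = 2    # full mirroring doubles everything again

licensed_capacity = needed_today * growth_factor * mirror_factor
print(licensed_capacity)  # 4.0 -- four times what is needed to operate today
```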
I’ve increasingly heard this as a major Oracle complaint: customers paying up to 4x for the product just because they are good planners. In effect, Oracle is gouging them for better planning, and that rarely seems fair to any IT executive.
The solution is still in beta, handling only about 10 percent of the transaction load it will carry in production, but it is on schedule to be fully deployed almost a year to the day from when the project began. Given the massive scale of this system and the global nature of the resulting customer base, it is amazing that PULSE has been able to keep this close to a very aggressive deployment schedule.
The way it did this was through aggressive pre-planning and an equally aggressive use of modeling and heavy benchmarking, so it could measure every change and enhancement and, long before actually putting hardware in place, emulate it to understand what was needed. As a result, there have been very few changes during the deployment, and it is likely breaking records, not only for the speed with which accurate alerts are generated, but for the quality of those alerts and debit card blocking actions.
In the end, if your identity isn’t stolen, it may well be thanks to PULSE, ReD and HP. This deployment showcased three best practices: massive simplification, massive planning and benchmarking, and a clear, measurable set of goals for the result.