Ann All spoke with Nigel Pendse, a business intelligence analyst and author of an annual BI Survey, which is billed as the largest independent survey of BI and performance-management users. Pendse just released the eighth version of the report.
All: Query response seems to be quite important to users. Companies that consider query speed in product evaluations achieved the highest rates of product success. And slow query response was the biggest user complaint. Yet I believe only 18 percent of respondents say they include it as part of their evaluation criteria. Is it something companies need to think about earlier and/or more often in the process?
Pendse: Overwhelmingly what comes out is the thing that companies consider most often when evaluating products is features. And yet lack of features isn't really a problem.
"The whole idea here is go for quick, simple wins and quick payback. If you're lucky, it'll last for a long time. If you're not lucky, at least you've got your payback."
All: So is this a case of the vendors pushing features? Or is it just the way organizations are used to evaluating products?
Pendse: First of all, I think there is a lack of understanding of how important performance is. When companies evaluate a product, what they really look at is, does it do the job. But in a way, that's a silly thing to do, though it may seem odd to say so. If a product has been around two or three years and had a couple of releases and already has a few hundred customers, there's a pretty good chance it does the job. It's like buying a car and saying, "I think it's really important that the wheels are round. So I am going to spend most of my time making sure those wheels are really round." I agree it's a problem if they aren't round. But guess what? The manufacturers already knew that. You tend to measure the things that are easy to measure rather than the things that really matter.
Companies that are going to buy a new product send around a document asking people to tell them the things they think are really important. IT gets the ball rolling, and they probably start with some questions related to some previous product that may not even be a BI product. Users may not know much about this area but feel obliged to put something on the list just to make it look like they were awake. And maybe some user has used some product like this in the past and picks some feature they thought was really cool, which may or may not be relevant for this application.
The thing that's really hard to measure is performance, because what happens is someone comes along with a laptop and it's really slow because they are running the application server and the database server and the operating system and some virtual machine. But you assume it'd never be this slow in the real world, and you just sort of ignore it. Then you get to where you deploy it and you finally start getting your first correct answers, and at this point everyone is cheering that they got the numbers and they look right. Nobody cares much about performance at that stage. Then you get lots of users using it, and it slows down, and somebody says, "I can go to Google, which costs me nothing and has an unbelievably large amount of data, and it always gives me answers in less than a second."
The IT guy doesn't think it matters, because hardware gets faster all the time and you can solve problems with a bigger box. But that doesn't work either, because the way machines get more powerful now is to apply more cores to the processes. But with BI tools, any one query is handled by a single core, so having more cores doesn't make it quicker for each user. Quoting all sorts of mysterious hardware numbers doesn't make the product any quicker.
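Pendse's point about cores can be sketched in a few lines of Python. This is a toy illustration, not any particular BI engine: `run_query` stands in for a CPU-bound query that runs on one core, and a worker pool spreads many queries across cores without making any single query faster.

```python
import multiprocessing as mp


def run_query(n: int) -> int:
    """Toy stand-in for a CPU-bound BI query: sums a range.

    Like the queries Pendse describes, each call occupies a
    single core for its entire duration.
    """
    return sum(range(n))


def serve(queries, workers: int):
    """Spread many queries across a pool of worker processes.

    Adding workers (cores) raises throughput when many users
    query at once, but one user's slow query still runs on
    exactly one core, so it is not sped up by a bigger box.
    """
    with mp.Pool(workers) as pool:
        return pool.map(run_query, queries)


if __name__ == "__main__":
    # Eight concurrent queries: total throughput scales with workers...
    results = serve([10_000] * 8, workers=4)
    # ...but any individual query takes the same time on 1 core or 16.
    print(results[0])
```

The same reasoning applies to real engines whose per-query execution is single-threaded: scaling hardware helps concurrency, not individual response time.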