    Four Reasons Why Companies Fail in Web App Performance Testing

    In my post last week about Kevin Surace, the entertainingly outspoken CEO of San Jose-based Web app performance testing company Appvance, I noted that I would follow up with Surace’s list of the top four reasons why companies fail in app performance testing. If reading that made your eyes glaze over, refocus. It’s worth it.

    Reason No. 1, Surace said, is what he calls a “back-loaded” testing schedule:

    Let’s say the launch date of the app, site, or whatever, is July 1. They get backed up because the coding takes longer, and this and that take longer, and the CEO says they have to go with the launch date. They get to June 30—they’ve done a little bit of functional testing, they think it sort of functions, and it’s time to launch. They didn’t leave enough time to even think about performance and scalability and transaction times and all of that. So they launch, and they fail. What you need to do is build in three or four weeks of scalability performance testing at the end, always. And by the way, if you’re really good, you’re doing it throughout your agile development cycle, all the way through. So every time you check in code, you go in and look at transaction times back through the servers. Did it get better, did it get worse, did you cause a problem with the database, are you caching correctly? But people back-end these things, and say, “We’ll leave three days and run a quick performance test and a quick load test at the end.” That’s not what it’s about. It’s about putting 1,000 users on there, seeing what happens to your transaction times, and seeing how the code can be addressed so the transaction times meet users’ expectations. This is all about making the code better. It’s not about, let’s see when the servers crash.
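
    To make that concrete, here is a minimal sketch of the kind of per-check-in transaction-time check Surace is describing, in plain Python. The URL, user count, and two-second threshold are hypothetical placeholders rather than anything Appvance prescribes, and a dedicated load-testing tool would generate far more traffic than one script can.

        # A quick, CI-friendly transaction-time check: put N concurrent virtual
        # users on one endpoint and report percentiles. TARGET_URL, USER_COUNT,
        # and the 2-second threshold are hypothetical placeholders.
        import statistics
        import time
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        TARGET_URL = "https://staging.example.com/checkout"  # hypothetical endpoint
        USER_COUNT = 100          # concurrent virtual users (scale up across machines)
        REQUESTS_PER_USER = 5

        def one_user(_):
            """Simulate one user and return the time of each transaction."""
            times = []
            for _ in range(REQUESTS_PER_USER):
                start = time.perf_counter()
                urlopen(TARGET_URL, timeout=30).read()
                times.append(time.perf_counter() - start)
            return times

        if __name__ == "__main__":
            with ThreadPoolExecutor(max_workers=USER_COUNT) as pool:
                samples = [t for user in pool.map(one_user, range(USER_COUNT)) for t in user]
            samples.sort()
            p95 = samples[int(len(samples) * 0.95)]
            print(f"median={statistics.median(samples):.3f}s  p95={p95:.3f}s")
            # Fail the check-in if transaction times regress past what users expect.
            assert p95 < 2.0, "p95 transaction time exceeded 2s"

    Run against a staging environment on every check-in, a check like this answers Surace’s questions directly: did transaction times get better or worse, and did the latest change cause a problem further back in the stack?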

    The second reason, Surace explained, is poor testing productivity:

    Let’s say you allow three weeks for performance and load testing, but it takes you three weeks to get a test working. How many tests are you going to be able to run? One in the last hour [before launch]? How much code can you fix? None—you’re out of time. These older tools from 10 or 20 years ago can take days or weeks to get the scripts working, because it’s all manual writing and pausing of these scripts to try to get the thing to work. And often, they’re just doing it at the protocol level, which means you’re skipping maybe 50 percent to 70 percent of the code that’s on the client side—code in the browser or the device, vs. code on the server. Ten years ago, it was all on the server. Today, 60 percent is on the device. So if you’re doing that, you’re not testing everything, and if you’re taking too long to write the test, you’ve used up all your time simply writing the test—you have no time now to test and recode and test and recode. Agile is all about [repeatedly] testing and recoding. You ought to be doing, basically, a daily check of the code, if not hourly. That’s where things should be.
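
    Surace’s protocol-level vs. client-side distinction is easy to see in a quick sketch: timing a raw HTTP fetch measures only the server round trip, while driving a real browser also pays for the code running on the device. The URL below is hypothetical, and the browser-level half assumes the Selenium package and a local Chrome installation.

        # Timing the same page two ways: a raw protocol-level fetch (server code
        # only) vs. a real browser load that also runs the client-side JavaScript.
        # The URL is hypothetical.
        import time
        from urllib.request import urlopen

        from selenium import webdriver

        URL = "https://staging.example.com/"  # hypothetical page under test

        # Protocol level: only the server round trip is measured; code that runs
        # in the browser or on the device never executes.
        start = time.perf_counter()
        urlopen(URL, timeout=30).read()
        print(f"protocol-level fetch: {time.perf_counter() - start:.2f}s")

        # Browser level: get() returns after the page loads, so parsing, script
        # execution, and rendering on the client are part of the measurement.
        driver = webdriver.Chrome()
        start = time.perf_counter()
        driver.get(URL)
        print(f"in-browser page load: {time.perf_counter() - start:.2f}s")
        driver.quit()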

    Reason No. 3 is unrealistic reliance on code:

    What a lot of these guys do is say, “My coders are really good, the code is going to be fine. I’m going to rely on my code—it’ll all be fine.” That’s just dumb. That’s like the CEO of Target saying, “I’m going to rely on my CIO—she’s probably got it all under control, there’s probably no security issue.” I wouldn’t rely on anyone, and I certainly wouldn’t rely on code, because code doesn’t give you scalability—testing gives you scalability. Code doesn’t give you rapid response times—testing gives you rapid response times, because otherwise, you cannot surface the bottlenecks. You don’t know where they are. You really don’t know if your database is configured for this app’s performance. Coders will tell you, “Well, we have this DBA over here, and we told him what we’re doing, and he configured the database.” Really? What’s your caching strategy? “I don’t know, ask the DBA.” How do you know it’s right? “Because he’s our DBA.” How does he know it’s right? “Uh, I don’t know.” That’s the problem, isn’t it? Nobody knows. And by the way, you won’t know until you try it. You have to simulate the usage of whatever it is—hundreds or thousands or millions of users, and you’ll bubble up where the problems are.

    And the fourth reason, Surace said, has to do with limited user pathways:

    If you take too long to do these tests, or you have to write them all by hand, and write all these scripts, you end up saying, “All right, I’ve got one use case, or two; these things take a week to write and debug—I’m just going to do one, because that’s all the time I’ve got.” But in real user scenarios, there could be dozens or hundreds of use cases, and you really want to have all the use cases tested in parallel, because you want to see what that really does to your database. Some need to access the database a lot, some only read from it a lot, some only write to it a lot. Having one use case is not a good example of the real world. But having one use case ends up being the result of using a 20-year-old tool: it takes a week to get something working, so there’s only time for one. What you should have done is gin up 12 in the first hour. That’s what you needed to do, and that’s why you need modern technology.
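
    As an illustration only, here is a rough sketch of that multiple-pathway idea: several hypothetical user scenarios with different read and write patterns, run in a weighted mix against the same made-up staging site so the database sees them all at once. The scenarios, URLs, and traffic mix are assumptions, not anything from Surace or Appvance.

        # A rough sketch of running several user pathways in parallel instead of
        # one scripted use case, so read-heavy and write-heavy traffic hit the
        # database at the same time. Scenarios, URLs, and mix are hypothetical.
        import random
        import time
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import Request, urlopen

        BASE = "https://staging.example.com"  # hypothetical app under test

        def browse_catalog():   # read-heavy pathway
            urlopen(f"{BASE}/products?page={random.randint(1, 50)}", timeout=30).read()

        def view_account():     # lighter, cache-friendly pathway
            urlopen(f"{BASE}/account", timeout=30).read()

        def place_order():      # write-heavy pathway
            req = Request(f"{BASE}/orders", data=b'{"sku": "demo"}', method="POST")
            urlopen(req, timeout=30).read()

        # Weighted mix approximating real traffic rather than a single script.
        SCENARIOS = [browse_catalog] * 6 + [view_account] * 3 + [place_order]

        def one_transaction(_):
            start = time.perf_counter()
            random.choice(SCENARIOS)()
            return time.perf_counter() - start

        if __name__ == "__main__":
            with ThreadPoolExecutor(max_workers=50) as pool:
                times = list(pool.map(one_transaction, range(500)))
            print(f"500 mixed transactions, slowest: {max(times):.2f}s")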
