New Report Shows Testing Services Require New Approach

Loraine Lawson

As I've mentioned before, testing for Web services and service-oriented architecture is different from your typical application testing.


In a recent IT Business Edge interview, Erik Josowitz, vice president of product strategy at a company that makes virtual lab management applications, explained the complexities of testing in an SOA like this:

That complexity derives from many more tiers in the overall application architecture - from things being split over more hardware, being broken up and applications being distributed. That complexity [also] derives from many more people using applications in many more ways. Those users are more geographically distributed and are not as likely to be trusted users in the way applications used to be designed for trusted users. ... Then, if you're trying to synthesize multiple applications or you're trying to provide business-relevant views through SOA interfaces, you're trying to do something that's even more complex in order of magnitude than the issues around one application. From the standpoint of testing, what that means is you have many more moving parts and those parts are being consumed in many more and many different ways.

But really - just how different can it be? The Aberdeen Group recently asked 240 end users about their experience with testing Web services and SOA. The result is this 25-page white paper, "SOA and Web Services Testing: How Different Can It Be?"

The study looked at the testing practices of best-in-class companies versus the industry average and industry laggards. It turns out, best-in-class companies are almost three times as likely as others to have completely redesigned their testing process. To me, that really speaks to the difference in testing practices required for deploying services, rather than whole applications.

Another related finding is that the industry average and industry laggards are much more concerned about the risks of introducing new services. Their concern is warranted, since only 39 percent of the laggards do regression testing, according to Aberdeen.
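At its core, regression testing a service means re-running a recorded contract against each new deployment and flagging anything that drifted. Here's a minimal, hypothetical sketch of that idea; the service client is a stub standing in for a real endpoint, and the field names are invented:

```python
# Hypothetical stub standing in for a real service client; an actual
# regression suite would call the deployed endpoint over HTTP or SOAP.
def get_order_status(order_id):
    """Simulated response from an /orderStatus service operation."""
    return {"orderId": order_id, "status": "SHIPPED", "currency": "USD"}

# Contract captured from the last known-good release of the service.
BASELINE = {"orderId": "A-100", "status": "SHIPPED", "currency": "USD"}

def regression_check(order_id, baseline):
    """Return the fields whose values drifted from the recorded baseline."""
    response = get_order_status(order_id)
    return {field: (expected, response.get(field))
            for field, expected in baseline.items()
            if response.get(field) != expected}

drift = regression_check("A-100", BASELINE)
print("drift:", drift)  # an empty dict means the contract still holds
```

The point of the baseline is that a new service version must not silently change fields that downstream consumers already depend on, which is exactly the risk the laggards are leaving unchecked.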

The Executive Summary suggests companies take the following steps to improve testing:

  • Expand the focus on quality by tracking both business requirements and quality across the entire organization - not just on a service-by-service basis.
  • Apply design-time governance, which basically means using governance tools that promote re-use of services.
  • Improve visibility of deployed services by using monitoring and reporting tools on production systems.
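That last recommendation - visibility into deployed services - can be as simple as probing each production service and surfacing anything unhealthy in one report. A tiny sketch, with a stubbed probe in place of real HTTP health checks and invented service names:

```python
# Stand-in for an HTTP health check against a deployed service;
# the service names and statuses here are invented for illustration.
def probe(service):
    known = {"orders": "up", "inventory": "up", "billing": "down"}
    return known.get(service, "unknown")

def report(services):
    """Summarize service health so problems are visible at a glance."""
    statuses = {s: probe(s) for s in services}
    unhealthy = [s for s, status in statuses.items() if status != "up"]
    return {"statuses": statuses, "unhealthy": unhealthy}

summary = report(["orders", "inventory", "billing"])
print(summary["unhealthy"])
```

Real monitoring tools do far more (SLAs, alerting, trend reporting), but the underlying idea is the same: don't wait for consumers to tell you a service is down.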

The paper was published last month, but Aberdeen unlocked it this week. Generally, they unlock papers for short periods, so if you're interested, you should probably download it today.

There's a lot more to the report, of course, including plenty more on SOA's unique testing requirements. Download it and check it out.
