Failed IT Projects Cost Way Too Much, No Matter How You Crunch the Numbers

Ann All

Just last week, I wrote about the Department of Veterans Affairs' efforts to improve the success of its IT projects using a new system called the Program Management Accountability System (PMAS). After reviewing 280-plus projects, the VA ended a dozen projects that were behind schedule and/or over budget and temporarily halted another 33.

 

I also mentioned a report produced by a UK think tank called The Taxpayers' Alliance listing a number of notable government IT project failures, including the National Programme for IT (NPfIT), which, at a cost of £10.4 billion (approximately US$17 billion), has exceeded original budget estimates by 450 percent. A reader pointed out the alliance's political bent, questioning its impartiality in reporting on the current Labour government's spending. He's certainly got a point, but the problems with the NPfIT have been well documented elsewhere.

 

Both the VA and The Taxpayers' Alliance knocked a "big bang" approach to government IT projects as a common problem. The big-bang approach has also seemingly wrecked a number of state government IT transformation projects I've written about recently, including a 10-year, $2 billion deal between Northrop Grumman and Virginia and sweeping contracts with IBM in Texas and Indiana.

 

I listed several common project-management problems (inadequate governance; changes in project budget, length or scope; poor communication between business stakeholders and IT) and, guess what, the underlying cause of many of them is complexity. So it's not too surprising that Roger Sessions, chief technology officer at ObjectWatch Inc., author of "Simple Architectures for Complex Enterprises" (Microsoft Press, 2008) and a man of many accomplishments judging by his bio, says complexity is the single largest cause of software failures.

 

Though he admits the number isn't exact, Sessions estimates the worldwide annual cost of IT failure at about $6 trillion, or $500 billion per month; for the United States alone, the annual cost is about $1 trillion. IT Business Edge contributor Dennis Byron, for one, thinks Sessions' number is far too high. Byron's figure is $500 billion a year, roughly one-twelfth of Sessions' estimate, but that's still a lot of scratch.


 

As Computerworld reports, Sessions suggests using a software design process called Simple Iterative Partitions, which "partitions business functions into subsystems" in a way that makes the overall system as simple and reliable as possible.

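Sessions' own partitioning math is laid out in his book and white paper; purely to illustrate why carving a system into smaller subsystems can pay off, here's a rough Python sketch. The superlinear complexity function (functions ** 1.5) is my stand-in assumption for the example, not Sessions' actual formula.

```python
# Toy illustration only -- not Sessions' actual SIP math. Assumption: a
# subsystem's complexity grows superlinearly with the number of business
# functions it contains (here, functions ** 1.5 as a stand-in exponent).

def subsystem_complexity(num_functions: int) -> float:
    """Assumed superlinear complexity of a single subsystem."""
    return num_functions ** 1.5

def total_complexity(subsystem_sizes: list[int]) -> float:
    """Total complexity of a system split into subsystems of the given sizes."""
    return sum(subsystem_complexity(n) for n in subsystem_sizes)

# One big-bang system holding all 24 business functions...
print(total_complexity([24]))          # ~117.6
# ...versus the same 24 functions partitioned into four subsystems of six.
print(total_complexity([6, 6, 6, 6]))  # ~58.8
```

Under that assumption, the same 24 business functions carry roughly half the complexity when split into four subsystems; the hard part, which Sessions' method addresses, is choosing the partitions so that closely related functions end up together.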
 

There's plenty of good reading at Sessions' blog, Simple Architecture for Complex Enterprises. I was quite taken with a post from late October in which he says his main problem with the Standish Group's annual report on IT failure is its focus on what percentage of IT projects succeeds, when it would be more useful to look at what percentage of IT budgets is spent on successful projects.

 

He offers the example of an IT department that undertakes six projects: four that cost $50,000 apiece, one that costs $100,000 and one that costs $700,000. The most costly project will also be the most complex, and thus the most likely to fail. If that project and one of the $50,000 projects fail but the other four succeed, Standish would count it as a 67 percent success rate. Writes Sessions:

I look at the percentage of IT budget that was successfully invested. I see $250K of $1M budget invested in successful projects and $750K in failed projects. I report this as a 25 percent success rate, a 75 percent failure rate. ... I argue that, from the organizational perspective, my interpretation is much more reasonable. The CEO wants to know how much money is being spent and what return that money is delivering. The CEO doesn't care how well the IT department does one-off minor projects, which are the projects that dominate the Standish numbers.

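To make the difference concrete, here's a quick Python sketch that scores the example above both ways. The variable names are mine for illustration; the costs and outcomes come straight from Sessions' example.

```python
# Score the six-project example two ways: Standish-style (share of projects
# that succeed) versus Sessions' view (share of IT spend that lands in
# successful projects). Costs and outcomes are taken from the example above.

projects = [
    {"cost": 50_000,  "succeeded": True},
    {"cost": 50_000,  "succeeded": True},
    {"cost": 50_000,  "succeeded": True},
    {"cost": 50_000,  "succeeded": False},
    {"cost": 100_000, "succeeded": True},
    {"cost": 700_000, "succeeded": False},  # the big, complex project
]

project_success_rate = sum(p["succeeded"] for p in projects) / len(projects)

total_budget = sum(p["cost"] for p in projects)
successful_spend = sum(p["cost"] for p in projects if p["succeeded"])
budget_success_rate = successful_spend / total_budget

print(f"Project success rate: {project_success_rate:.0%}")  # 67%
print(f"Budget success rate:  {budget_success_rate:.0%}")   # 25%
```

Same six projects, very different headline number, which is exactly Sessions' point.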
Sessions isn't the first (and won't be the last) to take issue with the Standish methodology. In late 2007, I wrote about research produced by business professors Chris Sauer, Andrew Gemino and Blaize Horner Reich that suggested about a third of IT projects were failures vs. Standish's larger number of failed or "challenged" projects.



Comments
Dec 22, 2009 1:31 AM Roger Sessions says:

Thank you, Ann, for this discussion. I hope that if your readers are interested in this topic, they will look at my 22-page white paper called The IT Complexity Crisis: Danger and Opportunity. It is available free and without registration at http://bit.ly/3O3GMp.

I hope readers don't get overly focused on the exact cost of the IT failure. The much more important message is that what we are doing today is not working. We can either continue doing what we are doing even though it isn't working (the traditional definition of insanity!) or start doing something else.

I believe that most IT failures are caused by complexity. Complexity is a solvable problem. And the payoff for solving it is very large.

Dec 23, 2009 8:46 AM Francis Carden says: in response to Roger Sessions

Small iterative quick wins WORK big time. Now that's proven. You get to see the ROI opportunity in days/weeks and prove it in weeks/months. No brainer.

I grew up in an IT world of early prototyping, which meant show-and-tell between IT and business. Projects that take weeks/months versus years/forever are back, and rightly so.

Francis
