Chaos! (Or Maybe Not)

by Jeannette Cabanis-Brewin

Several months ago, a colleague asked me if I could clarify some of the buzz he'd heard about the Standish Group's CHAOS report statistics on IT project success and failure. This wasn't the first time I'd heard grousing, from both the research community and the IT community, about this set of statistics, long considered to be the gold standard of project management research (at least for IT). But it was the first time that, in looking for discussion about it online, I found people beginning to view the CHAOS Report critically. It turns out that, for many people in the project management discipline, the story told by the Standish numbers does not mirror the reality they perceive in their own workplaces.

As a project management researcher, I find the statistics baffling myself. At PM Solutions, we have seen, over the course of ten years of survey-based and action research, that things are improving on the project front. That is, when companies focus on improving project management, they are rewarded with measurable business results. This has been borne out by every study we have done, from the first (2000) Value of Project Management study to the just-completed State of the PMO 2010. Yet the 2009 Standish numbers showed a marked increase in failures. Could IT projects really be so different from all others?

My prejudice is that they are not, and at least one expert agrees with me. Christopher Sauer of Oxford University did a study some years ago suggesting how IT projects could learn from construction projects, arguing that the two were really not as different as they seemed. The difference, his research argued, was that the organizations that supported construction projects were more project-oriented ... that the way the sponsoring organization viewed IT projects might set them up to fail. Subsequently, Chris published on his research website PM Perspectives news of a study of IT projects that found results very different from Standish's: the study found that two-thirds of IT projects succeeded ... with some out-performing their targets.

A significant difference in this study was the definition of success, which incorporated project benefits and business value to define the "star performers" in the study.

Indeed, the way one defines "challenged," "failed" or "successful" will skew the results in one direction or the other. And, of course, these definitions can change from one company to another ... and certainly between IT and other types of projects.

For me, the crucial point is that, until companies have established good measurement baselines so that they know how projects were performing to begin with, they won't know to what precise extent the application of discipline, methodology, training or structure yields concrete benefits. This knowledge allows companies to make informed decisions about where to invest their energies and money. It also allows them to know a failure when they see one.

Read up on this controversy at the links below:

CHAOS Report: Groupthink?

"Crisis" in IT ... or Not?

Misleading Definitions?

Then let us know: What's your experience? Do one-third of projects truly "fail"?