Matthew Cook -
Software Money Pit Blog
    Strategy & Management
    Trends & Technologies
    Case Studies
About Matt
Buy Matt's Book
Strategy & Management

A Rocky History: Studies of IT Project Failure

December 5, 2014 by Matt Cook

Studies over the past 15 years show enterprise software project failure rates ranging between one-third and two-thirds. Failure is defined in various ways – over budget, taking much longer than planned to implement, causing major business disruptions, or simply abandoned. 

The OASIG Study

OASIG, an organizational management group in the UK, commissioned a study in 1995 that involved in-depth interviews with 45 experts in management and business who had extensive experience with information technology projects as consultants or researchers. The interviews revealed a dismal 20%-30% success rate for IT projects, and the reasons cited for the overall poor track record were:

  • Failing to recognize the human and organizational aspects of IT;
  • Weak project management; and
  • Unclear identification of user requirements. 

The Chaos Report

The Standish Group is a research and advisory firm that in 1995 published The Chaos Report, which found:

  • Only about 15% of IT projects were completed on time and on budget;
  • Thirty-one percent of all projects were canceled before completion;
  • Projects completed by the largest American companies had only about 42% of the originally proposed features and functions.

The firm extrapolated the results to estimate that approximately 80,000 projects were canceled in 1995, representing roughly $81 billion in wasted spending. 

The KPMG Canada Survey

In 1997, accounting firm KPMG Canada studied why IT projects fail. The top reasons cited were:

  • Weak project management, including insufficient attention to risk management;
  • Questionable business case for the project;
  • Inadequate support and buy-in from top management. 

The Conference Board Survey

In 2001 The Conference Board surveyed 117 companies that had started or completed ERP software projects. The results showed that:

  • Forty percent of the initiatives did not produce the expected benefits within a year of completion;
  • On average respondents reported spending 25% more than expected on the implementation and 20% more on annual support costs;
  • Only one-third of the respondents said they were satisfied with their results. 

The Robbins-Gioia Survey

In 2001, management consulting firm Robbins-Gioia queried 232 companies across a range of industries about their IT investments, particularly investments in ERP systems. Of the companies that already had an ERP system in place or were in the process of implementing one:

  • Fifty-one percent said the implementation of the new system was unsuccessful; and
  • Forty-six percent said they believed their organization didn’t know how to use the ERP system to improve business results.

My take on these studies is this: Projects fail for many different reasons, but nearly all of those reasons can be tied back to human factors. The likelihood of success is directly correlated with the decisions you make, the strength of your project team, the way they manage the project, the way you manage the team, and particularly the strength of your project manager (PM).

Strategy & Management

Software Has Always Been Problematic

November 16, 2014 by Matt Cook

Software is like no other product on earth – it is a collection of millions of lines of instructions telling a computer what to do. You can’t “see” software; reading the lines of code would tell you nothing unless you had written the code yourself, and even programmers themselves can easily forget what they did. You must imagine software, and are left to rely on what the software’s creators say about it.

The lesson that good software is indispensable goes back to one of the first-ever software projects: an effort in the early 1950s to link together data from radars along the Eastern seaboard that were monitoring possible air and seaborne threats to the United States. Software, it was discovered, could collect, compare and plot radar data on paper much faster than human beings could.

But from software’s early beginnings as an industry in the 1950s and 1960s, business managers have struggled to understand the systems they buy, and the people and firms that market them.

In his excellent history of the software industry, Martin Campbell-Kelly describes the origins of software programming in the 1950s: “Only weeks after the first prototype laboratory computers sprang to life, it became clear that programs had a life of their own – they would take weeks or months to shake down, and they would forever need improvement and modification in response to the demands of a changing environment.” Some things haven’t changed.

Mr. Campbell-Kelly also describes what must be one of the earliest maddening software experiences. General Electric had purchased a Univac computer in 1954, but it took “nearly 2 years to get a set of basic accounting applications working satisfactorily, one entire programming group having been fired in the process. As a result, the whole area of automated business computing, and Univac especially, had become very questionable in the eyes of many businessmen.”

We are 60 years into the commercial software industry, and while applications have become much more powerful, they are still prone to failure. Large projects still fail at a high rate, and the sums spent are much greater than 60 years ago.

So the challenge remains: not to build more powerful or smarter applications, but to build and integrate software into an enterprise in a predictable, reliable and cost-sensible way.

© 2017 Copyright Matthew David Cook // All rights reserved