Matthew Cook -
Software Money Pit Blog
    Strategy & Management
    Trends & Technologies
    Case Studies
About Matt
Buy Matt's Book
Strategy & Management

Case Study: Nike’s Adventure with Supply Chain Planning Software

July 17, 2015 by Matt Cook No Comments

A Nike Factory store in Atlantic City, NJ.  Photo by Shabai Liu, CC license.

Background

In February 2001 Nike, Inc. announced that it would miss sales and profit targets for the quarter due to problems with supply chain software it had begun to implement the previous year. The company said that it had experienced unforeseen complications with the demand and supply planning software that would result in $100 million in lost sales.

Nike was trying to put in a system that would cut its response time to changing sales demand. These types of systems rely on algorithms and models that use historical sales data combined with human input to generate a sales forecast, which is then converted to a manufacturing plan and orders for raw materials from suppliers. It’s not easy to set up and successfully run these applications to produce optimal results. The process demands a lot of trial and error, testing, and running in parallel with the old system to shake out bugs.
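To make that flow concrete, here is a toy sketch with my own invented numbers and a deliberately simple model (real demand-planning suites use far richer algorithms plus human overrides) of the path from sales history to a production requirement:

```python
# Toy sketch of a demand-planning flow: a simple exponential-smoothing
# forecast over historical sales, then netting against on-hand inventory
# to arrive at a production requirement.

def smooth_forecast(history, alpha=0.3):
    """One-step-ahead exponential smoothing over historical sales."""
    level = history[0]
    for actual in history[1:]:
        level = alpha * actual + (1 - alpha) * level
    return level

def production_plan(forecast, on_hand, safety_stock):
    """Net requirement: forecasted demand plus safety stock, minus stock on hand."""
    return max(0, round(forecast + safety_stock - on_hand))

history = [120, 135, 128, 150, 160]   # monthly unit sales (invented)
fc = smooth_forecast(history)
print(production_plan(fc, on_hand=40, safety_stock=25))
```

Real systems layer dozens of further calculations (seasonality, promotions, new-item logic) on top of this skeleton, and that is exactly where the configuration risk lives.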

As reported by CNET News’ Melanie Austria Farmer and Erich Leuning, SAP spokesman Bill Wohl, reflecting on Nike’s dilemma, said at the time, “What we know about a software implementation project is that it’s just not about turning on the software. These projects often involve really wrenching changes in a company’s business process…It involves changes in the way employees work, and anytime you make changes in the way employees are used to working, it can get difficult.”

Nike is in the apparel business, where styles come and go, and where advertising and promotional programs can spike demand, requiring the supply chain to react just in time, delivering to the market just the right amount of each style. An oversupply of shoes or other apparel will lead to discounting and reduced profits, and an undersupply will lead to lost sales. Nike ran into both of these scenarios, and its profit dropped while sales declined, resulting in the $100 million unfavorable financial impact to the company.

Inside the logic of the software Nike chose, parameters and settings must be optimally set for the most efficient quantities to be produced and distributed to the market. It’s very easy to get it wrong, and companies launching this type of application usually run a pilot for several months before they are satisfied with the recommended production and distribution plans generated by the software.

Much has been written about Nike’s experience, and much of it is valuable for any enterprise thinking about a similar project. Keep in mind, though, that this was a public spat, and both the software firm and Nike told their own version of the story for the public record. That means we don’t have all the facts. Nonetheless, I think there are valuable lessons in the Nike story, and at the risk of not getting all the facts right, I present my conclusions more to help you learn and succeed than to cast blame on any of the Nike project participants.

Key Points

Here is what I think were the main issues in the Nike project:

Complexity of the application without commensurate resources applied to making it work. Christopher Koch, writing in CIO Magazine at the time, said “If there was a strategic failure in Nike’s supply chain project, it was that Nike had bought in to software designed to crystal ball demand. Throwing a bunch of historical sales numbers into a program and waiting for a magic number to emerge from the algorithm — the basic concept behind demand-planning software — doesn’t work well anywhere, and in this case didn’t even support Nike’s business model. Nike depends upon tightly controlling the athletic footwear supply chain and getting retailers to commit to orders far in advance. There’s not much room for a crystal ball in that scenario.”

I don’t fully agree with this assessment; I think demand forecasting systems are critical to modern businesses, and if configured and used correctly, bring many benefits. Other reports said Nike didn’t use the software firm’s methodology, and if true, this would greatly contribute to its troubles. I have implemented these systems and they require precise attention to dozens of settings and flags, pristinely accurate data, and the flawless sequential overnight execution of sometimes 30 or more heuristic calculations in order to produce a demand forecast and a recommended production and raw material supply plan.
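To illustrate why one bad setting or dirty record poisons the whole run, here is a hypothetical sketch (the step names and the simulated failure are invented) of a strictly sequential nightly planning pipeline, where each step consumes the previous step's output:

```python
# Hypothetical nightly planning pipeline: steps run strictly in sequence,
# and a failure in any step aborts everything downstream of it.

def run_pipeline(steps, data):
    completed = []
    for name, step in steps:
        try:
            data = step(data)          # each step feeds the next
        except Exception as exc:
            return completed, f"aborted at {name}: {exc}"
        completed.append(name)
    return completed, "ok"

steps = [
    ("load_sales_history", lambda d: d),
    ("cleanse_data", lambda d: d),
    ("generate_forecast", lambda d: 1 / 0),   # simulate one bad setting
    ("net_requirements", lambda d: d),
    ("build_supply_plan", lambda d: d),
]
done, status = run_pipeline(steps, data={})
print(done, status)   # only the first two steps survive
```

With 30 or more such steps chained overnight, there is no partial credit: everything after the first failure is stale or missing by morning.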

It’s also critical with these types of applications to have the right subject matter experts and the best system users in your company on the team dedicated to making the system work the right way for your business. This is where, if published reports are true, I believe Nike may have failed. It is possible Nike simply needed more in-house, user-driven expertise, and more time to master the intricacies of the demand planning application.

In 2003 I ran an ERP project that included an overhaul of supply chain systems. The suite included demand and supply planning solution software, which we would use to forecast demand, generate a production and raw materials supply plan, and determine the plan for supplying product from plants to distribution centers. Unfortunately the best system users declined to be part of the team due to heavy travel requirements, and we had multiple problems getting the parameters right. The supply chain suffered after launch as incorrect production and distribution plans disrupted the business for several months.

Combining a maintenance-heavy, complex application with an organization unwilling or unable to meet the challenge is one way to find the money pit.

A ‘big bang’ approach to the launch without sufficient testing. Despite prevailing wisdom and suggestions by veterans that Nike phase in the new application, Nike chose to implement it all at once. This immediately put at risk a large portion of the Nike business. A phased approach would have limited the potential damage if things went wrong.

A case study of the project published by Pearson Education discusses this point: “Jennifer Tejada, i2’s vice president of marketing, said her company always urges its customers to deploy the system in stages, but Nike went live to thousands of suppliers and distributors simultaneously.”

The study also quotes Lee Geishecker, an analyst at Gartner, Inc., who said “Nike went live a little more than a year after launching the project, yet this large a project customarily takes two years, and the system is often deployed in stages.”

Brent Thrill, an analyst at Credit Suisse First Boston, sent a note to his clients saying that, given the complexities, he would not have been surprised if Nike had tested the system for three years while keeping the older system running. According to Larry Lapide, a research analyst at AMR and a supply chain expert, “whenever you put software in, you don’t go big-bang and you don’t go into production right away. Usually you get these bugs worked out . . . before it goes live across the whole business.”

I can understand that Nike would want to convert a large portion of its business and supplier base at the same time. It reduces the length of the implementation and therefore the cost of maintaining consultants and support staff, and it eliminates the need for temporary interfaces to existing systems.

But a smarter move might have been to launch and stabilize the demand planning portion of the software first. It’s easy for me to second-guess, but Nike could have taken the forecast generated by the new system and entered it manually into its existing, or ‘legacy,’ systems. After all, if the forecast is wrong, then everything downstream – the production, raw material, and distribution plans – is also wrong. I did this on two projects, and it significantly reduced risk. On both projects we launched the demand planning (DP) application and ran it in parallel with our legacy system until we were satisfied with the results; then we disengaged the legacy DP application and began manually keying the new system’s DP forecast into our legacy production, raw material, and distribution planning software.
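The parallel-run test I am describing can be sketched as a simple acceptance gate. The error measure (MAPE) and all the numbers here are my own illustrative choices, not anything from the Nike project:

```python
# Sketch of a parallel-run cutover test: run both systems, compare their
# forecast error against actuals, and only cut over when the new system
# is at least as accurate as the legacy one over the whole trial period.

def mape(forecasts, actuals):
    """Mean absolute percentage error (actuals must be nonzero)."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

def ready_to_cut_over(new_fc, legacy_fc, actuals):
    return mape(new_fc, actuals) <= mape(legacy_fc, actuals)

actuals   = [100, 110, 95, 120]
legacy_fc = [90, 100, 100, 100]
new_fc    = [98, 108, 97, 115]
print(ready_to_cut_over(new_fc, legacy_fc, actuals))  # True in this example
```

The point of the gate is organizational as much as mathematical: no one has to argue about gut feel when the cutover criterion is agreed in advance.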

Strategy & Management

Case Study: The 2010 Census

May 6, 2015 by Matt Cook No Comments

Image: U.S. Counties, population change 2000-2010, by Joe Wolf, CC license

 

Failed IT projects are not unusual in the government sector. Big IT projects are hard enough without the added complexity, delays, politics, and bureaucracy of government entities. But leaving those dysfunctions aside, there is much to learn from these failures. The 2010 Census is one such event, because the factors that led to failure are some of the same ones that kill private sector projects as well.

Background

The 2010 census was to be the most automated census ever. Half a million census takers with handheld devices would capture, in the “field,” household data that had not been sent in via census forms. Harris Corp. was given a $600 million contract to supply the handheld devices and manage the data.

But according to published news accounts, the Census Bureau requested, after the project began, more than 400 changes to the requirements that had originally been submitted to Harris. In trying to accommodate these requests, Harris naturally encountered more expenses to redesign or re-program the handheld units or to redesign the data management system that would collect and organize the accumulated data.

The handheld units themselves were difficult for some of the temporary workers who tested them to operate, and they couldn’t reliably transmit large amounts of data. A help desk for field workers using the devices was included in the original contract at a cost of $36 million; that figure was later revised to $217 million.

In the spring of 2008, the Census Bureau was faced with a decision whether to continue with the automation plan, because the handheld units had not yet been completely tested and needed further development, in part because of the additional post-contract requirements. The Bureau needed enough time to hire and train about 600,000 temporary workers if the original Field Data Collection Automation (FDCA) plan had to be revised or scrapped.

In the end, the 2010 Census may not have been the most automated census ever, but it was the most expensive. The contract with Harris was revised to $1.3 billion, and other expenses were incurred for equipment and other areas that were not anticipated and therefore not estimated. Not all of the overruns were systems-related.

Key Points

Constantly changing requirements increased delays and costs. As we know from understanding the nature of software, a system is unable to simply change its code and accommodate additional requirements on the fly. Why no one put a stop to the additional requirements heaped on to the project is a mystery, but it’s pretty much standard procedure to freeze the requirements at some point in the project. It’s like asking a homebuilder to add another bathroom on the second floor when the home is halfway to completion. It can be done, maybe, but will make the house cost more and take longer to complete. In extreme cases – like the new custom-built Medicaid claims processing system for the State of North Carolina – the project may never end.

Undue confidence in the user’s ability to learn how to operate the handheld devices led to surprise additional costs. The project didn’t plan on people having so much difficulty with the handheld data collectors. But people’s innate abilities, especially in the area of new technology, vary greatly. Nearly every project I’ve been involved in experienced difficulty because of a certain percentage of users not being able to catch on to the new system. This means more mistakes are made with the new system, more support is needed, and in some cases people who were competent at their jobs with the old system simply cannot perform at a satisfactory level with the new one.

In the end, the project was a money pit. The Census Bureau had to revert to pencil and paper when the handheld devices couldn’t be used – which it said would add $3 billion to the cost of the census. If $3 billion is what the Bureau would have saved with automation, then it was probably worth investing the originally estimated $600 million, and even the revised $1.3 billion. Instead, the government paid the full $1.3 billion and still had to use pencil and paper. Net result: a waste of money.

Just freezing the requirements at some point in the project could have completely changed the outcome. Intentions were apparently good – saving labor cost through automation – and I expect presentations were made to various levels of management to gain approval. Yet a well-intentioned project developed by smart people became a vast hole sucking time and money into the abyss.

Strategy & Management

Case Study: Denver Airport Baggage System

April 14, 2015 by Matt Cook No Comments

Photo: Denver Airport pano, by Doug Waldron, CC license

“Denver Airport” has become synonymous with epic technology failure for those who remember the colossal breakdown of that airport’s ambitious new automated baggage handling system in 1995. Just two numbers convey the magnitude of the failure: 16 months (the delay in opening Denver’s new airport) and $560 million (the extra cost of building it), both a direct result of the baggage system debacle.

Denver Airport has huge lessons for us that can be applied to any software endeavor.

In 1993 a brilliant plan was hatched to fully automate baggage handling at Denver’s new state of the art airport. The key to the automated system was software that would control the movement of thousands of carts moving along miles of track throughout the airport’s three main terminals. The software would direct the movement of the carts to collect or deposit bags at precisely the right time. It was to be a highly orchestrated activity that depended on software that would continuously process complex algorithms. After more than a decade of trying to make the system work, the airport went back to the traditional manual bar code tug and trolley system.

Key Points

Hubris can lead to very bad decisions. The airport’s new system was to be state of the art, the most automated passenger baggage system in the world. Through 26 miles of underground track, bags would move from plane to carousel or gate to plane without human handling. Tours were given to the public to show how advanced the new system would be.

BAE Systems, the company hired to design and build the baggage system, had built several traditional systems at airports throughout the world, but none were as advanced as the Denver project’s design. A sign of early trouble came when the airport bid the design out for construction, and several companies either declined to make a bid or responded that construction of the complicated system within the airport’s stated timeline was impossible. The Denver team was proud of its futuristic design and even these clear signals of danger ahead did not dissuade them from their plan.

Complexity greatly increases risk of failure. All along the miles of track of the new system, thousands of small carts would deposit or pick up baggage at precise points in the network, ostensibly at just the right time. This required complex algorithms that had to account for travel distance, expected flight arrivals and departures, sorting rules and routings, and canceled flights. Scanning devices positioned at just the right locations would read bar code labels applied to bags and route them to the appropriate conveyor. This type of technology, while standard today, still isn’t perfect; you can imagine its relative immaturity in the 1990s.
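As a hypothetical fragment (the flight table, gates, and fallback rule are all invented), the core routing decision looks simple in isolation; the catch is that it had to be made correctly thousands of times per hour, and an unreadable tag leaves no valid destination:

```python
# Toy routing decision for a scanned bag: look up its flight, then pick a
# conveyor by departure gate. A failed barcode scan (None) has no valid
# route, which is why scan accuracy was so critical to the Denver design.

FLIGHTS = {"UA101": {"gate": "B12"}, "DL220": {"gate": "C3"}}
GATE_TO_CONVEYOR = {"B": "conveyor-B", "C": "conveyor-C"}

def route_bag(scanned_flight):
    if scanned_flight is None:                  # unreadable tag
        return "manual-sort"                    # fall back to humans
    flight = FLIGHTS.get(scanned_flight)
    if flight is None:                          # unknown flight code
        return "manual-sort"
    return GATE_TO_CONVEYOR[flight["gate"][0]]

print(route_bag("UA101"))   # conveyor-B
print(route_bag(None))      # manual-sort
```

Note that even this toy needs a manual fallback path; the real system had to layer timing, cart availability, and schedule changes on top of it.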

A ‘Big Bang’ launch of a new system adds to likelihood of failure. Denver’s airport authority had a great opportunity to start small and prove out its advanced system design. The team could have sliced the project into digestible parts in two ways: An end-to-end prototype system on a small portion of the airport’s baggage traffic, or a fully functional piece of the envisioned architecture, such as the bar code label and scanning functionality. Small pieces are easier to focus resources on. Best of all, had the Denver team phased in the system gradually and still faced failures, it wouldn’t have shut down the entire airport. Being able to continue business as normal if the new system fails is an essential but sometimes forgotten aspect of all software projects. Building in pieces or parts allows new learning to be incorporated into the design. Instead, the team gambled on launching the whole system at once.

No backup plan = nasty outcome. Once the Denver team realized that it might take a while to make the new system work, they rushed to put in place a more traditional trolley system using baggage handlers. But this alone took many months and an extra $70 million before it was completed. In the end, the original advanced-design system was only ever used for outbound baggage at one terminal. Large parts of the airport’s new system simply never worked. But a failure or cost overrun on the original ambitious project is one thing. Because there was no backup, the project was squarely in the critical path of the new airport’s opening. If you’re going to fall out of the boat, don’t drag everyone with you.

OK, you say, your organization isn’t launching anything nearly as ambitious as the Denver Airport baggage system. And of course you would never make the dumb mistakes they made, right? Yes, you would, because every human being makes mistakes; the Denver team just put a lot more at stake. It’s not so much that their software design failed, it’s that they placed a huge bet on their system working – the opening of a new $3 billion airport.

Strategy & Management

Why Wouldn’t I Want SaaS?

March 17, 2015 by Matt Cook No Comments

Software-as-a-Service is challenging the paradigm that software is a thing you buy, take back to your office and install. Looking back some day, we might shake our heads and wonder why any enterprise ever thought it had to purchase and physically install a copy of millions of lines of code that ran on a computer within its premises, just to transact day to day business.

On-premise going the way of cassette tapes…?

The market is receptive to more and more SaaS solutions, and software firms are positioning themselves to offer those products. Most of the big, traditionally on-premise software providers now offer at least some of their applications in SaaS form. For you, this will mean more choices.

But remember this: SaaS applications do not guarantee perfect performance and 100% uptime. They are still computer programs running on a server somewhere, and if those programs are buggy, unstable, corrupted, or lack proper expert support you will land in the money pit just as sure as if you bought that same buggy and unstable application and installed it in your own data center.

Are there any good reasons why you would want a traditional on-premise application?

Yes: security is one reason, in certain circumstances. No matter how demonstrably secure a third party may seem, you simply may not want to entrust your data security to a third party, period.

Your customers, particularly, may want the relative or perceived assurance of your own firewall surrounding your applications and their data. Their business relationship is with you, not the company hosting your applications.

You may already have economies of scale suited to on-premise hosting – plenty of server capacity, a built-in support staff, and developers on your team who are capable of building out the application the way you want it.

If you are positioning the application to be used by several divisions within your company, you may also want central on-premise hosting. You may want to tightly control modifications to the “core” system and also manage access permission levels among users, as well as the total number of users. These actions can significantly reduce per-user costs.

With SaaS applications, you still need to do the same due diligence you would do with a traditional on-premise application. The fit analysis, testing, and project management are largely the same, as are the precautions needed to avoid the money pit. You can still spend a fortune modifying a SaaS application, as well as integrating it with your other systems and pulling data out of it for analysis purposes.

Strategy & Management

Ten Ways to Drive Project Success

March 8, 2015 by Matt Cook No Comments

Photo: astronaut Eugene Cernan salutes American flag on the surface of the moon, December 1972, Goddard Space Flight Center, CC license.

Our team felt pretty good one day in early March 2012, when we went live on an SAP system (manufacturing, inventory, order processing, MRP, demand planning, purchasing, shipping and receiving, and finance) with no business disruption, in 90 business days, and 30% under budget.

We arrived at this happy conclusion because of (good) decisions made early in the project — decisions about strategy, scope, preparation, technology selected, and the makeup of the team.

How do you drive success — especially if you’re not savvy to enterprise software in the first place?

  1. Can you, in a three-minute speech to your CEO, sell the project: why you need $2 million and 15 people for nine months, and how your company strategy and your customers demand this kind of investment? If not, start over.
  2. Only pursue projects with an unambiguous and demonstrably positive ROI based on realistic assumptions. Leave the great ideas, dreams and wish lists behind.
  3. Leave no room for surprise costs. You’ll find them in these categories: additional user, site, or department licenses, support fees, custom programming, server, network, PC and operating system upgrades, future version upgrades, project consultants, and backup staff and travel for the team.
  4. Use only your best people for the team. Nothing is more powerful than an experienced, focused, and motivated team with a mandate from top management and a simple, clear objective. If you can’t afford to free up your best people, you can’t afford the project.
  5. Decide what is included in the project and what is not, clearly and as simply as possible. Ruthlessly prevent any changes to the scope.
  6. Choose software that is proven in other enterprises, preferably companies like yours. It should be familiar to consultants or programmers you may need, now and in the future.
  7. Pick technology your company’s users can easily master. You can do lots of damage with people who don’t understand the new systems they have to use.
  8. Insist on a full under-the-hood evaluation of the fit of the system with the business. Have the software vendor prove that it meets your requirements by piloting your business scenarios in the application.
  9. Protect as much of the business as possible from the inevitable problems of a new system. Break up the “go-live” into manageable parts and have a manual backup plan to keep the business running.
  10. Remember this quote from software legend Frederick P. Brooks: “How does a software project come in six months late? One day at a time.”
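On point 2, even a back-of-the-envelope calculation beats a hand-waved ROI. A minimal sketch, with invented figures:

```python
# Minimal ROI sanity check (all figures invented): the annualized benefit
# must clear the total cost by a real margin, not merely break even.

def simple_roi(total_cost, annual_benefit, years=3):
    """Net benefit over the horizon, as a fraction of total cost."""
    return (annual_benefit * years - total_cost) / total_cost

def payback_years(total_cost, annual_benefit):
    return total_cost / annual_benefit

cost, benefit = 2_000_000, 900_000
print(round(simple_roi(cost, benefit), 2))     # 0.35 over three years
print(round(payback_years(cost, benefit), 2))  # about 2.22 years
```

If the project only pencils out under heroic benefit assumptions, that is your answer before a dollar is spent.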
Strategy & Management

Ten Ways to Land in the Money Pit

February 28, 2015 by Matt Cook No Comments

Image by Nick Ayres, CC license

Big software project failures can frequently be tied back to decisions made at the very beginning of the project.

As the government struggles to launch healthcare.gov after the expenditure of several hundred million dollars, it’s logical to question the original decision to build from scratch software programs that may in part have already existed.

In 2001, Nike made the decision to big-bang its launch of new supply chain planning software with thousands of suppliers and distributors all at once, resulting in a massive disruption to its business and $100 million in lost sales.

An overly complex design and an urge to impress the public were two big reasons – made very early in the project — for the colossal breakdown of Denver Airport’s $560 million baggage handling system.

In each of these cases, the main reasons for failure could have been nipped in the bud at the very beginning.

How to avoid failure from dumb decisions at the start? To illustrate, I’ll describe what not to do. I’ve seen every one of these decisions made at some point in some project I’ve worked on over the years.

  1. Launch the project for the wrong reasons. Believe that new software will fix a broken or inefficient process.
  2. Fool yourself about the ROI. Show the CFO numbers you know are a fantasy, just to get approval for your new system.
  3. Make the scope bigger than necessary. Launch a big ERP project when all you really need is a sales planning and trade promotion system, or an updated financials and payroll application.
  4. Develop the software yourself, even though there are reasonably good packaged applications available on the market.
  5. Assign the project to people who are already swamped and who live in different time zones. Run most of the project via conference calls to save money on travel.
  6. Put your weakest people on the project team. Select a project manager who is leading a project for the first time, or who logically should run it based on the organization chart. Don’t free up the critical subject matter experts that will be needed; they’ll find the time somehow. Just empower the team and everything will run fine.
  7. Ensure the application is fully modified to match every nuance of your business so people are comfortable with it. Don’t worry if the solution becomes too complex – it’s software, so everything can be programmed.
  8. Don’t place a fixed deadline or budget on project completion; plans need to be flexible to account for unforeseen difficulties, or opportunities to expand the solution to satisfy more of your company’s needs.
  9. Ensure everything is launched all at the same time. Why drag out the project in phases?
  10. Don’t try to manage the project with a structured methodology. Doing so will unduly restrict the free flow of ideas and creativity that are essential. Most project methodologies are a waste of time and only serve to increase the workload on the project team.

Strategy & Management

The Shiny Object and the Psychology of Large Enterprise Software Projects

December 15, 2014 by Matt Cook No Comments

You’ve seen it before – how projects gain institutional momentum even though the hard reasons for it aren’t clear.

An expensive multi-year project is hatched from the need for some level of management somewhere, to “move the enterprise forward” by technology-enabling new and improved processes.

Ideas – the seeds of projects – have a way of gaining momentum without the right level of rigorous questioning and unbiased analysis. Notice how a project, slim as the specifics might be, gets an impressive name, such as Phoenix, Gemini, or Athena, and then almost automatically acquires credence as a legitimate, important initiative that must be done. Once the project gains notoriety in this way, there is a bias to produce something from it, and often that something ends up being the purchase of a new system or systems.

Years ago I worked for a company that announced a project to “transform” the company chiefly by “integrating” different departments; we’ll call this project Excalibur. A team was formed, made up of the best and brightest, from a cross-section of functional departments. A lot of full-day meetings were held.

I wasn’t involved in Excalibur, yet, so I didn’t know what the meetings were about. But I do know that the first thing — no, make that the only thing — the Excalibur team set out to do was to evaluate packaged software solutions…for what, it wasn’t immediately clear. But it occurred to me that the team was completely bypassing important steps, like stating a problem and business goals, identifying broken processes or dysfunctional parts of the organization, and defining a desired future state.

Epilogue: the project took nearly five years to complete, and resulted in replacement of four legacy applications with a new, interconnected ERP system. Two years after the completion of Excalibur, a new project was launched to replace everything with another ERP system.  I don’t know how much was spent on Excalibur, but we had a lot of consultants around for four or five years, and they weren’t free.

Yes, there must be a peculiar psychology to the decisions leading to projects that cost a lot of money but don’t produce results. Maybe it’s just a reflection of human nature: wanting to be part of something big, visible, and important. That’s risky, though, given how low studies to date have put the odds of outright success for large IT projects – especially if you plan on telling the truth when evaluating success at the end.

Recognizing that your enterprise might be in this situation – drinking the Kool-Aid, as it were – is a good first step toward avoiding the ensuing money pit.

Strategy & Management

Nip the Losing Projects in the Bud

December 9, 2014 by Matt Cook No Comments

How do software projects get started anyway? Someone went to a conference. A department manager wants to “streamline workflow.” A sales VP says his team is wasting time with old and slow systems. A “transformational” project is launched. A new plant or warehouse is being planned. Customers start asking for things your company cannot do with existing systems.

In each case a person or group of people claims that, with a new system, all kinds of benefits are possible. But anyone can create a list of benefits; with some creativity you can build an attractive ROI for anything.

How can you tell your project might have to be stopped before it fails?

  • Having spent weeks evaluating software demos from different providers, your VP of Sales and her team are convinced the company should purchase and implement vendor B’s solution.
  • The technology demos well and looks cool, but the ROI is unclear and seems contrived.
  • No one has documented a thorough summary of how a new system will be used to run your business.
  • Even though the vendor has quoted implementation, license, and maintenance costs, other costs like integration with existing systems are unknown.
  • No one in your enterprise is familiar with the vendor’s products or reputation.
  • It’s not clear what kind of internal team you will need, and how you will free up those people for the project.
  • There are many parts of the solution that will have to be custom-developed (invented).

There is always a possibility out there, imagined by someone or everyone, that a particular technology investment could produce new processes, eliminate waste, inspire minds, and please customers. But a sober, clear-eyed assessment is needed before grand project aspirations can begin.

Strategy & Management

A Rocky History: Studies of IT Project Failure

December 5, 2014 by Matt Cook No Comments

Studies over the past 15 years show enterprise software project failure rates ranging between one-third and two-thirds. Failure is defined in various ways – over budget, taking much longer than planned to implement, causing major business disruptions, or simply abandoned. 

The OASIG Study

OASIG, an organizational management group in the UK, commissioned a study in 1995 that involved interviews with 45 experts in management and business who had extensive experience with information technology projects as consultants or researchers. The in-depth interviews revealed a dismal 20%-30% success rate for IT projects; the reasons cited for the overall poor track record were:

  • Failing to recognize the human and organizational aspects of IT;
  • Weak project management; and
  • Unclear identification of user requirements. 

The Chaos Report

The Standish Group is a research and advisory firm that in 1995 published The Chaos Report, which found that:

  • Only about 15% of IT projects were completed on time and on budget;
  • Thirty-one percent of all projects were canceled before completion;
  • Projects completed by the largest American companies had only about 42% of the originally proposed features and functions.

The firm extrapolated the results to estimate that in 1995, eighty thousand projects were canceled, representing approximately $81 billion in wasted spending.

The KPMG Canada Survey

In 1997, accounting firm KPMG studied why IT projects fail. The top reasons were:

  • Weak project management, including insufficient attention to risk management;
  • Questionable business case for the project;
  • Inadequate support and buy-in from top management. 

The Conference Board Survey

In 2001 The Conference Board surveyed 117 companies that had started or completed ERP software projects. The results showed that:

  • Forty percent of the initiatives did not produce the expected benefits within a year of completion;
  • On average, respondents reported spending 25% more than expected on the implementation and 20% more on annual support costs;
  • Only one-third of the respondents said they were satisfied with their results. 

The Robbins-Gioia Survey

In 2001, management consulting firm Robbins-Gioia queried 232 companies across a range of industries about their IT investments, particularly investments in ERP systems. Of the companies that already had an ERP system in place or were in the process of implementing one:

  • Fifty-one percent said the implementation of the new system was unsuccessful; and
  • Forty-six percent said they believed their organization didn’t know how to use the ERP system to improve business results.

My take on these studies is this: projects fail for many different reasons, but nearly all of those reasons can be traced back to human factors. The likelihood of success is directly correlated with the decisions you make, the strength of your project team, the way they manage the project, the way you manage the team, and particularly the strength of your project manager (PM).

Strategy & Management

Software Has Always Been Problematic

November 16, 2014 by Matt Cook

Software is like no other product on earth: it is a collection of millions of lines of instructions telling a computer what to do. You can’t “see” software; reading the lines of code would tell you nothing unless you had written the code yourself, and even programmers can easily forget what they did. You must imagine software, and you are left to rely on what the software’s creators say about it.

The recognition that good software is indispensable goes back to one of the first-ever software projects: an effort in the early 1950s to link together data from radars along the Eastern seaboard that were monitoring possible air and seaborne threats to the United States. Software, it was discovered, could collect, compare, and plot radar data on paper much faster than human beings could.

But from software’s early beginnings as an industry in the 1950s and 1960s, business managers have struggled to understand the systems they buy, and the people and firms that market them.

In his excellent history of the software industry, Martin Campbell-Kelly describes the origins of software programming in the 1950s: “Only weeks after the first prototype laboratory computers sprang to life, it became clear that programs had a life of their own – they would take weeks or months to shake down, and they would forever need improvement and modification in response to the demands of a changing environment.” Some things haven’t changed.

Mr. Campbell-Kelly also describes what must be one of the earliest maddening software experiences. General Electric had purchased a Univac computer in 1954, but it took “nearly 2 years to get a set of basic accounting applications working satisfactorily, one entire programming group having been fired in the process. As a result, the whole area of automated business computing, and Univac especially, had become very questionable in the eyes of many businessmen.”

We are 60 years into the commercial software industry, and while applications have become much more powerful, they are still prone to failure. Large projects still fail at a high rate, and the sums spent are much greater than 60 years ago.

So the challenge remains: not to build more powerful or smarter applications, but to build and integrate software into an enterprise in a predictable, reliable, and cost-sensible way.



© 2017 Copyright Matthew David Cook // All rights reserved