Case Study: Complacency Leads to Disaster


Late summer in western Ohio is beautiful: brilliant deep orange sunsets over a flat, shimmering horizon of farmhouses and silos and rich corn fields.

But in 2007, away from the sanguine setting, inside a large, dark refrigerated warehouse, an exhausted project team was struggling to convert the operation to new inventory management software. An operational disaster was unfolding. Manufacturing lines in the plant, normally producing and shipping 50 truckloads a day, had come to a halt, blocked by inventory waiting to be shipped that choked every available space in the warehouse.

The situation became heart-stopping as inventory piled up and eventually shut down the production lines: there was simply no place to put product. Outbound deliveries to distribution centers also slowed, with trucks arriving one to two days late. Eventually customers felt the disruption as product failed to reach the distribution centers in time to fill waiting orders.

Displaying my keen sense for career-enhancing moves, I got into a heated argument with the VP of Operations. We disagreed as to the true cause of the disruption. The stress on our teams was enormous. Although production lines were able to start back up after a day, we still had an inventory-choked warehouse in which movement with a forklift was difficult, and every task proceeded at about half the normal speed. Day after day we struggled with how to get enough breathing room to clear away the overstock in the building.

A general sense of profanity-laced panic was setting in. How had it come to this?

It took a solid month to dig out. But not before making customers angry, incurring hundreds of thousands of dollars in obsolete inventory losses, and completely burning out the 30 or so people who worked day and night on the problems. I guess compared to other, more epic disasters, I should have been elated.

No one thought that this launch would be any different from the previous two we had done in two of the company's other plants that year. Those implementations had gone well, with no disruption to the business and general stability after about two weeks.

In IT project terms, having to shut down a large plant because of troubles cutting over to a new system is considered one big fat failure. Senior management was livid.

Key Points

The team was complacent about the software. We had "gone live" with the new software in two other plants that year. What could be different about this plant that would create problems? The Ohio plant, although bigger than the other two, made the same product and shipped it in the same way; the software was configured identically; and we had thoroughly tested the application as if production and shipping were coming out of the Ohio plant.

But the Ohio plant was different in at least one important respect: its production rate in pallets per hour was much higher, which meant forklift operators picked up whatever pallet they could reach at the wrapping station in order to keep up. They did not necessarily pick up and scan pallets in a first-wrapped, first-picked order, and that went against the logic of the software (this was news to us).

For some reason, in the other two plants, the forklift operators would always pick up and scan the pallet in the queue that had first come off the wrapping station. So we hadn’t encountered any problems.

We discovered that the system enforced a kind of first-in, first-out (FIFO) rule, which we never expected to apply to this particular pallet movement. Surprise! The operator could still move an out-of-sequence pallet but had to access a different transaction on a different screen of the truck-mounted radio-frequency (RF) scanning unit. This slowed down the entire flow of product.
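The mismatch between the software's FIFO expectation and the operators' pick-anything behavior can be sketched in a few lines. This is a hypothetical model for illustration only; the class, method names, and the transaction counts are assumptions, not the actual inventory system's logic.

```python
from collections import deque

class WrapStationQueue:
    """Illustrative model: pallets leave the wrapper in order, and the
    system expects them to be picked up and scanned in that same order."""

    def __init__(self):
        self.queue = deque()

    def wrap(self, pallet_id):
        self.queue.append(pallet_id)

    def scan_pickup(self, pallet_id):
        """Return the number of screen transactions the operator needs."""
        if self.queue and self.queue[0] == pallet_id:
            self.queue.popleft()
            return 1   # normal case: the expected pallet, one quick scan
        # Out-of-order pickup: the operator must switch to a different
        # transaction on a different RF screen -- extra steps, lost time.
        self.queue.remove(pallet_id)
        return 3       # assumed number of extra transactions, for illustration

q = WrapStationQueue()
for p in ["P1", "P2", "P3"]:
    q.wrap(p)

print(q.scan_pickup("P1"))  # in order: 1 transaction
print(q.scan_pickup("P3"))  # out of order: 3 transactions
```

In the other two plants every pickup hit the fast path; in Ohio, the higher production rate pushed operators onto the slow path constantly.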

So we had let our complacency about the system override any thought that the Ohio plant might have operating characteristics the software would react to differently. We were caught by surprise by an issue that significantly and negatively affected our launch.

We completely underestimated the impact of the additional steps in the new system. Coupled with operators fumbling with handheld scanners and RF terminals, our throughput capability fell drastically. In reality, the organization was unprepared for the new system. As a result, everything slowed down, and soon the normal flow path through the warehouse was clogged with inventory.

What did we learn, and what would we do differently? "Throughput" is now one of my favorite words. If you cannot demonstrate that throughput will equal or exceed current levels after the new system goes live, you have a major risk that must be addressed. We also learned that training classes alone are insufficient to prepare a workforce for a new system. Instead, a change management process is required: examine each change the new system introduces, determine how it affects a person's daily work, and decide what the team will do to ensure those changes arrive without disrupting the business.
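The throughput lesson amounts to a simple go/no-go gate before cutover. The sketch below is illustrative only; the function name, the safety margin, and the numbers are assumptions, not a methodology from this project.

```python
def throughput_ready(baseline_per_hour, measured_per_hour, margin=1.0):
    """Go/no-go gate: the new system must demonstrate at least the
    current throughput (optionally with a safety margin) before cutover."""
    return measured_per_hour >= baseline_per_hour * margin

# Example: a plant that ships 50 truckloads per 24 hours today.
baseline = 50 / 24  # about 2.1 truckloads per hour

print(throughput_ready(baseline, measured_per_hour=1.6))  # False: major risk
print(throughput_ready(baseline, measured_per_hour=2.2))  # True: go
```

The hard part, of course, is producing the `measured_per_hour` figure, which requires timed dry runs under realistic conditions rather than classroom estimates.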

The team was complacent about the readiness of the organization. The Ohio plant was large, yet the staffing in the warehouse was rather thin. We had held numerous training sessions over several weeks, and our point person for making sure the workforce was ready assured us that the training had been sufficient. But the truth was something different.

The Ohio plant produced 24 hours a day, seven days a week, turning out a pallet every 45 seconds. Before it could be shipped, each pallet required three to four bar code scans, each representing a transaction in which a user keyed data into the system. Prior to the new system, no bar code scans were required, and no data entry was needed until the truck was ready to leave. That allowed a relatively small group of operators to ship 50 truckloads in a 24-hour period, because they could freely move pallets around without scanning and recording each movement.
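A back-of-the-envelope calculation shows how much labor those scans quietly added. The production rate and scans-per-pallet figures come from the numbers above; the seconds-per-scan figure is an assumption for illustration only.

```python
# Scanning load implied by the plant's numbers.
SECONDS_PER_DAY = 24 * 60 * 60
PALLET_INTERVAL_S = 45     # one pallet every 45 seconds (from the text)
SCANS_PER_PALLET = 3.5     # "three to four bar code scans" (from the text)
SECONDS_PER_SCAN = 20      # assumed: scan plus keying on a small keypad

pallets_per_day = SECONDS_PER_DAY // PALLET_INTERVAL_S
scans_per_day = pallets_per_day * SCANS_PER_PALLET
scan_hours_per_day = scans_per_day * SECONDS_PER_SCAN / 3600

print(pallets_per_day)                  # 1920 pallets per day
print(round(scan_hours_per_day, 1))    # 37.3 operator-hours of scanning per day
```

Under these assumptions, the new system injected dozens of operator-hours of brand-new work into every 24-hour period, in a warehouse that was already thinly staffed.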

The team didn’t realize how the extra steps from scanning and data entry would slow down work. The warehouse team had never used bar code scanners before. The steps were simple, but the process involved pressing buttons on a very small keyboard, scanning a bar code label, and then pressing other buttons. The lighting in the warehouse wasn’t great, and wearing gloves while operating the scanners felt clumsy. All of this combined to slow down the whole operation.

In retrospect, we should have sent a few warehouse operators to one of the two other plants we had already successfully converted to the new system. The operators would have had a chance to work in a live environment, actually performing the work instead of just learning about it in a classroom. These operators could have also served as trainers in the Ohio plant.