Strategy & Management

Scope Can Determine Success or Failure

May 24, 2016 by Matt Cook

Image: Island Peak, Nepal, by McKay Savage, CC license

“Scope,” or “footprint” in software terms, refers to the number of business processes that an application will “cover,” or enable. The scope of an accounting system is usually general ledger, accounts payable, accounts receivable, fixed assets, and the P&L and balance sheet.

The scope has to fit the application, and vice versa. It also has to be feasible for the project team, and it has to deliver the benefits expected to pay back the investment in the new system.

Too big a scope can overwhelm the team and the application you select, and it will cost more. Too small a scope might not be worth the time and expense, and may not yield the financial benefits expected. A creeping scope starts out small and feasible; then, as the project progresses, scope is added in the form of requests for features and functions not originally planned.

Money pits are usually found at the end of projects with too big a scope or a creeping scope.

How do you find the right scope?

Determine which areas of the business would benefit the most from a new or better application. Can you define the specific problems that are leading your enterprise to consider new software? Where are those problems located, in what functional areas, and related to which current (legacy) systems? Is the problem that a) a particular application is too limiting; b) a group of applications are islands, and integrating them would yield benefits; c) none of your applications are integrated; or d) something else?

Consider a range of scope options to find the optimal one. In some cases, expanding the scope of a new application beyond “problem areas” can be the optimal choice. The process is iterative, and you should consider several alternatives. For example, implementing a new accounting system may satisfy most of a company’s needs and produce a good ROI on its own. But expanding the application footprint to, say, payroll and purchasing, may result in an even better return because it simplifies integration costs, eliminates more manual work, and may strategically be a better decision.

Set up a framework to evaluate each scope alternative. The framework can be as simple as an Excel comparison: evaluate each scope option according to factors such as cost, complexity, length of time to implement, risk to the business, ROI, required internal resources, and strategic value. Then you have a logical basis for your decision.
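
As an illustration, here is a minimal sketch of that kind of weighted comparison; the factors mirror the list above, but every weight and rating is a placeholder assumption that your own team would replace.

```python
# Minimal sketch of a weighted scoring framework for scope alternatives.
# Factors, weights, and 1-5 ratings are illustrative assumptions only;
# substitute the criteria and ratings your own team agrees on.

WEIGHTS = {
    "cost": 0.20,
    "complexity": 0.15,
    "time_to_implement": 0.15,
    "business_risk": 0.15,
    "roi": 0.20,
    "internal_resources": 0.05,
    "strategic_value": 0.10,
}  # weights sum to 1.0

scope_options = {
    "Accounting only": {
        "cost": 4, "complexity": 4, "time_to_implement": 4,
        "business_risk": 4, "roi": 3, "internal_resources": 4,
        "strategic_value": 2,
    },
    "Accounting + payroll + purchasing": {
        "cost": 3, "complexity": 3, "time_to_implement": 3,
        "business_risk": 3, "roi": 4, "internal_resources": 3,
        "strategic_value": 4,
    },
}

def weighted_score(ratings: dict) -> float:
    """Combine the 1-5 factor ratings into a single weighted score."""
    return sum(WEIGHTS[factor] * rating for factor, rating in ratings.items())

for name, ratings in scope_options.items():
    print(f"{name}: {weighted_score(ratings):.2f}")
```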

The scope of an ERP project does not have to be huge. You can be selective in what processes to migrate to an ERP system, and you don’t have to convert everything at once – both of these steps will reduce the overall risk of the project. For example, you can implement demand planning systems first to shake out the bugs in what is traditionally a complex and parameter-sensitive application. The core financial systems of an ERP can also be phased in first before everything else.

Strategy & Management

The One Minute Technology Manager – Test the Assumptions

March 30, 2016 by Matt Cook

Image by Nicolas Will, CC license

Behind every idea is a set of assumptions.  These assumptions can be exposed simply by asking “why?”

When it comes to good technology management, it’s your job to test these assumptions, to kill the losing propositions or to make them more viable as sound investments.  Some of these assumptions will be wrong, and a lot of them need to be right for a project to succeed.

Many people don’t realize the number of assumptions they make when a technology project is launched. Among them:

  • what they saw in the demo or pilot will work in the real world;
  • the software will meet all the business requirements that were specified before the project started;
  • the team won’t have to make any customizations other than what was already identified;
  • users will quickly learn and accept the new system;
  • the project will be completed on the promised date.

In my book I described a hypothetical conversation between a manager and a CIO/CEO.  The manager was explaining that “the new system will give us real-time visibility of our vendor inventories and plant inventories, and instead of waiting for reports we’ll see our inventory positions and planned production and receipts real-time.”

Taken at face value, this statement implies acceptance of the following assumptions:

  1. The way we think of “real-time visibility” of inventories, production and receipts is the same as what the system can provide.
  2. The view of said data will be in a useful format and will provide all the data we need to make better/faster decisions.
  3. These better/faster decisions will enable us to let our customers order within a shorter lead-time window and will reduce our on-hand inventories.
  4. The savings from lower inventories and the additional sales from our late-order customers will more than pay for the cost of this new system.
  5. A change in business process (i.e., how we manage inventories and production) would not produce these same benefits.
  6. Out of all of the possible system solutions this one is the best choice from an IT strategy, cost and ongoing support standpoint.

A pause for a minute to question these six assumptions may well be the most valuable minute ever spent on the proposed project.  All kinds of havoc and wasted money can be avoided just by testing these assumptions.

And as you can see, you don’t have to be an IT expert to manage technology successfully.  You just have to use common sense by testing the logic that if we do X, then we will receive Y benefits.  If you are going to invest in technology, you may as well do it the right way.

Strategy & Management

A Software Vendor Checklist

March 10, 2016 by Matt Cook

Please choose the door through which your next software vendor will take you.  Image: Doors of Dublin, by Tim Sackton, edited to fit 569 X 368px, CC license.

Selecting a software vendor is difficult at best in the 21st century; here are some must-have criteria, in addition to, but perhaps more important than, cost and time:

  • Does it solve my problem? Does the software company’s system solve your business problem? Does its existing functionality match the business requirements you drafted?
  • Does it pay back? Do the financial benefits from the solution pay back the total cost of implementing it in three years or less? (A minimal payback sketch follows this list.)
  • Do I understand all of the solution’s costs? Have you accounted for the initial license, recurring support fees, custom development for changes you want to make to the software, hardware, upgrades to your network bandwidth or to the operating systems on your current servers and PCs, the next version upgrade, consultants, backup staff for project team members, and travel?
  • Is the solution in line with my strategy? Does the system match your criteria for the types of information solutions you will invest in, now and in the near future?
  • Do I understand all of my alternatives besides this particular vendor? Have you done your homework on the software options available? Have you constructed an evaluation matrix and compared all the alternatives to one another?
  • Does my team have the time and skills to implement this solution? Can you secure near-full-time people to manage this project? Is the system easy to learn? Is it intuitive? Has your team evaluated it, and are they comfortable they can master it?
  • Do my users have the aptitude to learn it and become proficient? Can you envision your end users quickly learning all aspects of the software? Are there enough users who could become proficient enough to serve as key users and help other users with training and troubleshooting?
  • Does my team fully understand how this solution will integrate with the company’s other systems? Has the vendor demonstrated to your satisfaction how easily the system will integrate with your other systems? Are other enterprises already running the software alongside systems like yours? Try to get at least a conference call with those references to gauge the level of integration complexity.
  • How risky is this particular software alternative compared to others? Can the software be phased in without interrupting the business? If the solution fails or the team encounters startup problems, how easy will it be to keep mission-critical activities running?
  • Vendor reputation. How many enterprises are using the vendor’s software, and for how long? Get references and check them.
  • Can I find programming help in the open market? If you need customizations, can you readily find people to do the work? Or are you locked in to using the vendor for all your changes?
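
To make the payback and cost questions concrete, here is a minimal back-of-the-envelope sketch; every dollar figure is a hypothetical placeholder, not a benchmark, and the cost buckets simply mirror the checklist above.

```python
# Back-of-the-envelope payback test for a vendor proposal.
# All figures are hypothetical placeholders; substitute your own estimates.

one_time_costs = {
    "initial_license": 250_000,
    "custom_development": 120_000,
    "hardware_and_network": 60_000,
    "consultants": 180_000,
    "backup_staff_and_travel": 50_000,
}

annual_costs = {
    "support_fees": 45_000,
    "upgrades": 20_000,
}

annual_benefit = 350_000  # estimated yearly savings plus new margin

def payback_year(horizon: int = 10) -> int | None:
    """Return the first year in which cumulative benefit covers cumulative cost."""
    cumulative_cost = sum(one_time_costs.values())
    cumulative_benefit = 0
    for year in range(1, horizon + 1):
        cumulative_cost += sum(annual_costs.values())
        cumulative_benefit += annual_benefit
        if cumulative_benefit >= cumulative_cost:
            return year
    return None  # does not pay back within the horizon

year = payback_year()
print("Pays back in year:", year)
print("Passes the 3-year test:", year is not None and year <= 3)
```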

All of this, of course, comes after you have issued RFPs to the most appropriate vendors and reviewed their detailed responses.  You can build a grid or a table, with vendors/solutions across the top and your most important criteria down the left-hand side, and weight the relative importance of each. The result is an overall score that points you to the solution that best fits your needs.

Strategy & Management

Why Do Software Projects Cost So Much?

October 15, 2015 by Matt Cook

The short answer to why corporate software costs so much is that implementing it takes so long, even if everything goes perfectly, which happens about as often as Halley’s Comet passing through our skies. It’s expensive for one reason: specialized, and therefore costly, skills. It takes those skills to:

  • write the software in the first place;
  • modify it to your precise business needs; and
  • install and test it and fix problems before you can use it.

The three biggest cost buckets of a software investment are implementation, software modifications, and the cost of delays or disruption to the business.

What is “implementation”? It is the process of making your business function using the new software, or “integrating” the software into your business, however you choose to look at it. Companies have different philosophies about this; some insist the software must be modified to accommodate the way the business functions; others believe in keeping the software as “vanilla” as possible by changing processes to fit the way the software was designed to work.

There is probably a happy medium.  The more you modify a program, the more trouble you can expect, but spending some modest percentage of the project cost on modifications is not unusual.

“Implementation” is also the process of matching each step in your business process to corresponding steps in the software. A business “process” is usually something like “ship a customer order” or “receive a shipment from a supplier.”

There might be 100 or so distinct business processes in a company, each with five to eight steps or transactions involved, so a software implementation could involve matching all of those 500 to 800 steps or transactions to the new software, and that takes time, knowledge of your business, and knowledge of the new software.
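
To make that matching exercise concrete, here is a small sketch of a process-to-transaction map with gap counting; every process step and transaction code is invented for illustration.

```python
# Sketch of the fit/gap mapping described above: each step of a business
# process is matched to a transaction in the new software, or flagged as a
# gap. All process steps and transaction codes here are hypothetical.

process_steps = {
    "ship a customer order": [
        "pick inventory", "pack shipment", "print bill of lading",
        "post goods issue", "invoice customer",
    ],
    "receive a shipment from a supplier": [
        "check goods against PO", "post goods receipt",
        "put away stock", "record supplier invoice",
    ],
}

# Step -> transaction in the new system; None marks a gap that needs a
# workaround, a modification, or a process change.
step_to_transaction = {
    "pick inventory": "WM-PICK",
    "pack shipment": "WM-PACK",
    "print bill of lading": None,  # gap: no standard document in the package
    "post goods issue": "IM-GI",
    "invoice customer": "SD-INV",
    "check goods against PO": "PO-CHECK",
    "post goods receipt": "IM-GR",
    "put away stock": "WM-PUTAWAY",
    "record supplier invoice": "AP-INV",
}

for process, steps in process_steps.items():
    gaps = [s for s in steps if step_to_transaction.get(s) is None]
    covered = len(steps) - len(gaps)
    print(f"{process}: {covered}/{len(steps)} steps covered, gaps: {gaps}")
```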

That’s why implementations are expensive: high cost per hour multiplied by many hours.

But if a perfect project is expensive, imagine how expensive a delayed or failed project can be. According to some studies, failure is the norm, where failure means going over budget, missing implementation dates, or not delivering the functionality expected.

From personal experience, I would add to that list: failure also includes unexpected business disruption, like temporarily shutting down a manufacturing plant or shipping to your customers a day late. So software implementations are perceived to be wildly expensive not only because they genuinely are, even when they go well; they also have a high failure rate, which only adds to the cost.

Strategy & Management

Attention Deficit and the Enterprise Software Project

June 20, 2015 by Matt Cook

Never mind how many people you can get to work on your enterprise software project team. The critical factor today is how much of their focused attention you can get when you need it.

Quoting Marilyn vos Savant, the columnist and Guinness record holder for highest IQ: “Working in an office with an array of electronic devices is like trying to get something done at home with half a dozen small children around. The calls for attention are constant.”

It is not easy to get people to pay attention. Your project is competing for share of mind with texts, email and the internet emanating non-stop from your team’s cell phones.

Thomas Davenport and John Beck, in their book The Attention Economy, state: “The ability to prioritize information, to focus and reflect on it, and to exclude extraneous data will be at least as important as acquiring it.”

In 2005, I was on a team whose mission was to modify parts of SAP to enable the sale and invoicing of what are called “kits.” A kit, in software parlance, is a product whose components are themselves other products. The final product is what is recorded as the sale and what appears on the customer invoice. Sounds simple, right?

We worked in a global company. The team participants were in Ohio, New York, Western Europe, The Philippines, India, and Texas. After 14 months the team had finally gotten to the point at which it could test the solution. There had been more than 25 full-team conference calls, but no face-to-face meetings in which everyone – business people, developers, and technical experts – was present.

Why did it take so long? Attention deficit: for the people in remote locations, this project was one of many, and their interaction with its substance came via impersonal conference calls, during which multitasking naturally diverted their attention.

Do you know anyone who does not multi-task during a conference call? The attention deficit is automatic. People are caught not listening. Participants tend to speak a lot less than if the meeting is held face to face. It’s easy to hide and withdraw on a conference call.

We pride ourselves on our ability to work virtually across continents, hold meetings anytime, anywhere, and send work around the globe as if we were passing it across the table. But I think we’ve diluted our effectiveness and the quality of our output. We split our minds into more and more slices, marveling at our ability to manage so many things at once, but all we are doing is giving cursory attention to a lot of things instead of focused energy on a few.

Strategy & Management

Have a Methodology – But Only One That Makes Sense to You

May 25, 2015 by Matt Cook

Who needs a methodology?  Just do it.  Photo: Spring 2013 Hackathon at Columbia University, where participants were challenged to build — over a weekend —  software programs for New York City startups; by Matylda Czarnecka, hackNY.org, CC license.

You’re about to launch a big ERP project.

You need a structured methodology.  The lazy/easy thing to do is to use the one your software vendor or consulting partner uses.  Don’t automatically accept this.  Understand first how their methodology works (it’s usually designed to maximize billable hours).  Then use common sense.

A methodology is simply a way of doing things. A methodology for doing laundry could be: 1) separate colors from whites; 2) wash whites in hot water/cold rinse with bleach; 3) wash colors in cold water; 4) hang dry delicate fabrics and put everything else in the dryer on medium heat for 30 minutes.

But there can be variations in laundry methodology depending on preferences and beliefs, such as 1) throw colors and whites together in a cold wash and cold rinse so colors don’t run; 2) throw everything in the dryer on delicate cycle; 3) check periodically for dryness; and 4) pull out anything that seems dry.

IT project methodologies are practically an industry – templates, software, books, training programs and consulting engagements.

The “waterfall” methodology, used in Microsoft Project and adhered to for decades by many in the project management field, is arguably flawed from the beginning. It assumes that task completion is a function of the number of man-hours, the total time duration, and dependencies on the completion of one or more preceding tasks. Viewed on a chart, the timeline of the project looks like waterfalls cascading from left to right, top to bottom, as time progresses.
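
To see that assumption in miniature, here is a small sketch, with hypothetical tasks and durations, that computes finish dates purely from durations and predecessor dependencies, which is exactly what a waterfall plan does.

```python
# Illustration of the waterfall assumption: each task's finish date is
# computed purely from its duration and its predecessors' finish dates.
# Task names and durations (in weeks) are hypothetical.

tasks = {
    # task: (duration_weeks, [predecessors])
    "requirements": (4, []),
    "design": (6, ["requirements"]),
    "build": (10, ["design"]),
    "test": (5, ["build"]),
    "go-live": (1, ["test"]),
}

def earliest_finish(task: str, memo: dict | None = None) -> int:
    """Earliest finish = duration + latest finish among predecessors."""
    memo = {} if memo is None else memo
    if task not in memo:
        duration, preds = tasks[task]
        start = max((earliest_finish(p, memo) for p in preds), default=0)
        memo[task] = start + duration
    return memo[task]

for task in tasks:
    print(f"{task}: finishes week {earliest_finish(task)}")

# Brooks's point, quoted below: real projects loop back upstream, so these
# neatly cascading dates turn optimistic the moment any stage forces rework.
```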

Frederick Brooks, the computing legend who pioneered IBM’s System/360 mainframe computers launched between 1965 and 1978, says in his seminal book The Mythical Man-Month that “the basic fallacy of the waterfall model is that it assumes a project goes through the process once, that the architecture is excellent and easy to use, the implementation design is sound, and the realization is fixable as testing proceeds (but) experience and ideas from each downstream part of the construction process must leap upstream, sometimes more than one stage, and affect the upstream activity.”

Brooks also says the planning process usually and mistakenly assumes that the right people will be there, available, focused and engaged, when they are needed, and that when they perform their work they will do so in a linear fashion, with no mistakes and no need to backtrack and start over. In the 21st century, this is simply not the case in many organizations.

I don’t prefer one particular methodology – I think the way to work depends on what you’re trying to achieve, by when, with whom. I also think the imposition of a particular methodology – if it doesn’t fit the project or the team – will actually detract from success.

I do like at least one part of the Scrum methodology: the Sprint.  It could be a one-week or two-week Sprint, but the idea is the same: everyone needed for a particular stage or segment of the project is brought together and forced to stay together until their part is finished and tested.  Just get together and get it done.

Have you seen the videos where a house is built in a day?  How did they do that? Lots of preparation, teamwork, intensity, focus, urgency.

The idea of a sprint in an IT project is the same: skilled people intensely focused on a single objective that must be achieved within a fixed period of time.  I like it because sometimes it’s the only way to get the brains you need on one task without the endless distractions of everyday work.  Attention deficit is rampant in organizations today, exactly the opposite of the focus that enterprise software projects need to succeed.

Which methodology is best? For your project, the best methodology is the one that provides structure in a common-sense way with simple, easy-to-use tools that you think your team will, in a practical sense, actually stick to.

Strategy & Management

Case Study: The 2010 Census

May 6, 2015 by Matt Cook

Image: U.S. Counties, population change 2000-2010, by Joe Wolf, CC license


Failed IT projects are not unusual in the government sector. Big IT projects are hard enough without the added complexity, delays, politics, and bureaucracy of government entities. But leaving those dysfunctions aside, there is much to learn from these failures. The 2010 Census is one such event, because the factors that led to failure are some of the same ones that kill private sector projects as well.

Background

The 2010 census was to be the most automated census ever. Half a million census takers with handheld devices would capture, in the “field,” household data that had not been sent in via census forms. Harris Corp. was given a $600 million contract to supply the handheld devices and manage the data.

But according to published news accounts, the Census Bureau requested, after the project began, more than 400 changes to the requirements that had originally been submitted to Harris. In trying to accommodate these requests, Harris naturally incurred more expenses to redesign or re-program the handheld units and to redesign the data management system that would collect and organize the accumulated data.

The handheld units themselves were difficult for some of the temporary workers who tested them to operate, and they couldn’t successfully transmit large amounts of data. A help desk for field workers using the devices was included in the original contract at a cost of $36 million, but was later revised to $217 million.

In the spring of 2008, the Census Bureau faced a decision about whether to continue with the automation plan, because the handheld units had not yet been completely tested and needed further development, in part because of the additional post-contract requirements. The Bureau also needed enough time to hire and train about 600,000 temporary workers if the original Field Data Collection Automation (FDCA) plan had to be revised or scrapped.

In the end, the 2010 Census may not have been the most automated census ever, but it was the most expensive. The contract with Harris was revised to $1.3 billion, and other expenses were incurred for equipment and other areas that were not anticipated and therefore not estimated. Not all of the overruns were systems-related.

Key Points

Constantly changing requirements increased delays and costs. As we know from understanding the nature of software, a system is unable to simply change its code and accommodate additional requirements on the fly. Why no one put a stop to the additional requirements heaped on to the project is a mystery, but it’s pretty much standard procedure to freeze the requirements at some point in the project. It’s like asking a homebuilder to add another bathroom on the second floor when the home is halfway to completion. It can be done, maybe, but will make the house cost more and take longer to complete. In extreme cases – like the new custom-built Medicaid claims processing system for the State of North Carolina – the project may never end.

Undue confidence in users’ ability to learn how to operate the handheld devices led to surprise additional costs. The project didn’t plan on people having so much difficulty with the handheld data collectors. But people’s innate abilities, especially with new technology, vary greatly. Nearly every project I’ve been involved in has experienced difficulty because a certain percentage of users couldn’t catch on to the new system. This means more mistakes are made with the new system, more support is needed, and in some cases people who were competent at their jobs with the old system simply cannot perform at a satisfactory level with the new one.

In the end, the project was a money pit. The Census Bureau had to revert to pencil and paper when the handheld devices couldn’t be used – which it said would add $3 billion to the cost of the census. If $3 billion is what the Bureau would have saved with automation, then it was probably worth it to invest the originally estimated $600 million, and even the revised $1.3 billion. Instead, the government paid the full $1.3 billion and had to use pencil and paper anyway. Net result: a waste of money.

Just freezing the requirements at some point in the project could have completely changed the outcome. Intentions were apparently good, saving labor cost through automation, and I expect there were presentations made to various levels of management to gain approval. Even so, a well-intentioned project developed by smart people became a vast hole sucking time and money into the abyss.

Trends & Technologies

Tell Me Again Why I Should Care About Hyperscale Computing?

May 2, 2015 by Matt Cook

Photo: “Trails in the Sand,” Dubai, by Kamal Kestell, CC license

If “Humanscale” computing is managing bags of sand, “Hyperscale” computing is managing each individual grain of sand in every bag.

“Hyperscale” computing (HC) is the processing of data, messages or transactions on a scale orders of magnitude larger than traditional computing.  HC is becoming a need for many businesses.  Why?

Consider a company that sells bottled water.  Its main business used to be selling truckloads of cases of water to big grocery chains.  It has 25 different products, or Stock Keeping Units (SKUs).  The big grocery chains then distributed the cases to their stores, which numbered 20,000.  The data requirements for the water company’s computers were manageable, even as the company grew rapidly.

Now, the company wants to analyze the performance of its products on store shelves by measuring things like velocity (how fast the product turns), price compared to competing products, and out-of-stocks.  Its customers, the big grocery chains, are offering to supply data from their systems on every scan of every product in every store, because they too want to improve the performance of products on the shelf.

In one month during the summer, about 3.5 billion bottles of water are sold.  A data file from just one big grocery chain runs to 3 million lines.  How and where will you process this data?  Traditional databases will be too slow.  You will need superfast databases that hold data in memory and spread the work across many servers; this is called in-memory, or massively parallel, computing.  This is an example of hyperscale computing.
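
As a toy illustration of the divide-and-conquer idea behind massively parallel processing, here is a sketch that tallies SKU sales from a large scan file in chunks across several worker processes; the file name and its store_id,sku,units layout are assumptions for the example, and a production job would use a framework built for this scale.

```python
# Minimal sketch of "divide and conquer" aggregation for a very large
# point-of-sale extract. The file layout (store_id,sku,units per line) is
# an assumption for illustration; real scan files are more complex.

from collections import Counter
from concurrent.futures import ProcessPoolExecutor
from itertools import islice

def tally_chunk(lines: list[str]) -> Counter:
    """Sum units sold per SKU for one slice of the file."""
    totals: Counter = Counter()
    for line in lines:
        store_id, sku, units = line.rstrip("\n").split(",")
        totals[sku] += int(units)
    return totals

def read_chunks(path: str, chunk_size: int = 100_000):
    """Yield the file in fixed-size chunks of lines."""
    with open(path) as f:
        while chunk := list(islice(f, chunk_size)):
            yield chunk

def tally_file(path: str) -> Counter:
    """Fan chunks out to worker processes, then merge the partial tallies."""
    grand_total: Counter = Counter()
    with ProcessPoolExecutor() as pool:
        for partial in pool.map(tally_chunk, read_chunks(path)):
            grand_total.update(partial)
    return grand_total

if __name__ == "__main__":
    print(tally_file("scan_data.csv").most_common(5))
```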

Other examples where you would need HC: selling direct to consumers through their smartphones, where you might have to process millions of transactions say, during the Christmas holiday season; gathering machine data every second to monitor a machine’s performance (a General Electric turbofan jet engine generates 5,000 data points per second, which amounts to 30 terabytes every 30 minutes); and managing millions of product-attribute combinations.

The computing tools for hyperscale will not be found in your ERP system.  Trying to engineer your existing systems to handle hyperscale data and transactions will be a costly failure.  But there are tools available on the market today, many of them in cloud applications and from application hosting providers.

Cloud application and hosting vendors usually have much larger data processing capabilities, including automatic failover and redundant servers.  You can take advantage of this capacity.  For example, you can obtain from a leading application hosting provider, at a cost less than the monthly rent of a New York City apartment, 30 terabytes of storage and a massively parallel computing environment.

My advice:

  • Identify areas of your business that are significantly under-scaled, or where you have large gaps in business needs compared to processing capability;
  • Pick one and design a pilot project (many vendors are willing to do this with you at very low cost);
  • Measure results and benefits, and if beneficial, expand the solution to other parts of your business.

It’s probably not OK to ignore this trend.  Even if you don’t need HC today, think about the future and where commerce is going.  If you don’t gain the capability for hyperscale computing, one or more of your competitors probably will.

Strategy & Management

The One Minute Technology Manager – Ask Questions

December 18, 2014 by Matt Cook

The welling up of excitement around the idea of acquiring new systems is palpable in companies; the feeling is that action is being taken to move the enterprise forward.  It’s hard to be the skeptic in the room, but you must be, if you are to manage technology successfully.

One of your main roles as an official or unofficial manager of technology is to keep asking questions. This has the effect of exposing and testing the initial rationale for a project. For example, in the scenario below let’s say you are the CIO or CEO.

CIO/CEO: What exactly are the benefits of this investment?

Manager: We’ll be able to cut customer order lead time and reduce our on-hand inventory.

CIO/CEO: Great. How?

Manager: The new system will give us real-time visibility of our vendor inventories and plant inventories, and instead of waiting for reports we’ll see our inventory positions and planned production and receipts real-time.

CIO/CEO: So people will be monitoring inventories, planned and actual production 24×7?

Manager: Well, that’s possible, but might not be necessary…

CIO/CEO: How exactly will the order fulfillment and material buying change after the new system is put in?

Manager: As I said, we’ll be able to see the real-time situation, and be able to make better decisions…

CIO/CEO: Yes, but exactly what will change, in terms of process, compared to today, to give us these benefits?

Manager: We’ll be making smarter decisions because of the real-time information and ….

Does this dialogue ring true in terms of how projects are justified? The problem in this hypothetical conversation is that the manager hasn’t thought beyond the basic headline argument that real-time views will make everything better. This should be a danger signal – people have bought into an imagined benefit without proving out to themselves exactly how this benefit will be achieved.

Some other relevant questions in this scenario would be: “Is lack of real-time visibility the only constraint to lower inventories…how will lower inventories translate into real savings besides freeing up cash…how will you change your decisions about production, and will the company be able to execute these changes in production scheduling…when customers miss the order cutoff, do they order anyway and take delivery later?”

Asking questions, especially specific ones, quickly changes the conversation from one of vague potential to real-world feasibility.  You will be viewed as a wet blanket, a naysayer, a stick-in-the-mud, a cranky, old-fashioned Resister of Inevitable Change.

But in positioning yourself in this way you have taken the first steps to successfully managing technology.  Your questions will initiate pragmatic thinking.  They will cause people to stop and consider the realities of what they are proposing.  Reality is a good thing.  You will set in motion a different but important conversation.  Your questions are in effect defining the requirements that the proposal must meet before it can be considered a worthwhile endeavor.

Strategy & Management

The Shiny Object and the Psychology of Large Enterprise Software Projects

December 15, 2014 by Matt Cook

You’ve seen it before – how projects gain institutional momentum even though the hard reasons for it aren’t clear.

An expensive multi-year project is hatched from the need of some level of management, somewhere, to “move the enterprise forward” by technology-enabling new and improved processes.

Ideas, the seeds of projects, have a way of gaining momentum without the right level of rigorous questioning and unbiased analysis. Notice how a project, slim as the specifics might be, gets an impressive name, such as Phoenix, Gemini or Athena, and then almost automatically has credence as a legitimate, important initiative that must be done. Once the project gains notoriety in this way, there is a bias to produce something from it, and often that bias ends up as the purchase of a new system or systems.

Years ago I worked for a company that announced a project to “transform” the company chiefly by “integrating” different departments; we’ll call this project Excalibur. A team was formed, made up of the best and brightest, from a cross-section of functional departments. A lot of full-day meetings were held.

I wasn’t involved in Excalibur yet, so I didn’t know what the meetings were about. But I do know that the first thing — no, make that the only thing — the Excalibur team set out to do was to evaluate packaged software solutions…for what, it wasn’t immediately clear. It occurred to me that the team was completely bypassing important steps, like stating a problem and business goals, identifying broken processes or dysfunctional parts of the organization, and defining a desired future state.

Epilogue: the project took nearly five years to complete, and resulted in replacement of four legacy applications with a new, interconnected ERP system. Two years after the completion of Excalibur, a new project was launched to replace everything with another ERP system.  I don’t know how much was spent on Excalibur, but we had a lot of consultants around for four or five years, and they weren’t free.

Yes, there must be a peculiar psychology to the decisions leading to projects that cost a lot of money but don’t produce results.  Maybe it’s just a reflection of human nature: wanting to be part of something big, visible, and important.  That’s risky, though, given the low success rates that studies of large IT projects have reported to date, and assuming you plan to tell the truth when evaluating success at the end.

Recognizing that your enterprise might be in this situation – drinking the Kool-Aid, as it were – is a good first step toward avoiding the ensuing money pit.
