Matthew Cook -
Software Money Pit Blog
    Strategy & Management
    Trends & Technologies
    Case Studies
About Matt
Buy Matt's Book
Strategy & Management

What’s Behind the Frosty Relationships Between Business and IT?

November 24, 2016 by Matt Cook No Comments

Image by Andrew Becraft, CC license.

Business people complain their IT team is understaffed, uncooperative, slow, and deploys lousy systems. IT claims business people are disorganized, unwilling to learn, bossy and never available.

Such is the just-under-the-surface functional rivalry that plays out every day in many organizations, a tiff that delays and sometimes stops a company’s technology evolution.

Where does the animosity come from?

Mars and Venus. Business and IT do come from different planets. IT is methodical, structured, detailed, and logical. Business functions are impatient, practical, and focused on quick solutions that work. The language, terminology, and methods of one are incomprehensible to the other. Right from the start, there is an inherent lack of understanding.

Turf and different agendas. Your CIO insists on controlling every aspect of technology in the workplace, but your Manufacturing VP asserts the right to acquire any information system that best meets the need. One side explores solutions and comes up with its favorite; the other side isn’t even consulted. People become locked into a viewpoint, start a campaign for their side, and nothing gets done.

Unrealistic expectations. The business side expects new stuff – innovation – from IT. But only 10-20% of a typical IT budget is slotted for innovation – the rest is consumed by keeping the lights on, a hard budget reality for most companies. The result: an unfair perception in the business that IT never delivers anything new.

Proximity. If a company occupies three floors of a building, where does IT usually sit? In the basement, on the first floor, or in an annex somewhere (and sometimes halfway around the world). This telegraphs that IT is not important, and the physical separation creates gaps in relationships and communication. IT people want to be thought of as part of the team, with a seat at the table.

Misunderstanding of roles. Business and IT teams are often thrown together on a project without clear definition of who is going to do what. The business side expects much more from IT than IT expects to do, and vice versa. IT people also resent having to lead a project while the business side remains disengaged and absent.

Bring back some harmony with these practical moves:

Recruit the crossover people – those who understand and speak the language of both Mars (business) and Venus (IT) – to bridge the two sides. Put them in engagement roles with “the other side.” There aren’t enough of these people today because most companies still cling to ancient uni-functional career paths, but you know who they are, or could be, in your organization.

Establish together how IT solutions will be acquired. Business people are much more agreeable if they know there is at least a process where IT requests can be considered, prioritized, and acted on.  Agree that any exploration of solutions will be done together, that objective, score-based criteria will be used to evaluate system alternatives, and set a joint annual budget for IT investments.

Put people together: Co-locating IT people with their functional business counterparts is a big plus: 1) the business side feels valued because IT is paired with them; 2) IT people feel valued because they are “on the field” with business teams; and 3) communication and understanding have a better chance of emerging.

Negotiate roles upfront like you would any team effort. For a large project, what people resources are needed from business and IT, for how long? Who determines what each team member is expected to deliver? How self-sufficient is the business expected to be, in terms of learning and testing the new application? Write these down, publish them, and get signatures if you have to. Think of it as a team charter.

A final thought: is it time to re-think the traditional functional roles and the career paths that go with them, where IT is IT and sales is sales and supply chain is supply chain?

Share:
Strategy & Management

Scope Can Determine Success or Failure

May 24, 2016 by Matt Cook No Comments

Image: Island Peak, Nepal, by McKay Savage, CC license

“Scope,” or “footprint” in software terms, refers to the number of business processes that an application will “cover,” or enable. The scope of an accounting system is usually: general ledger, accounts payable, accounts receivable, fixed assets, P&L and balance sheet.

The scope has to fit the application, and vice versa, and it has to be feasible for the project team and deliver the benefits expected to pay back the investment in the new system.

Too big a scope can overwhelm the team and the application you select.  It will also cost more.  Too small a scope might not be worth the time and expense, and may not yield the financial benefits expected.  A creeping scope starts out small and feasible, then as the project progresses scope is added in the form of requests for features and functions not originally planned.

Money pits are usually found at the end of projects with too big a scope or a creeping scope.

How do you find the right scope?

Determine which areas of the business would benefit the most from a new or better application. Can you define the specific problems that are leading your enterprise to consider new software? Where are those problems located – in what functional areas and related to which current (legacy) system? Is the problem that a) a particular application is too limiting; b) a group of applications are islands and that integration of them would yield benefits; c) none of your applications are integrated; or d) something else?

Consider a range of scope options to find the optimal one. In some cases, expanding the scope of a new application beyond “problem areas” can be the optimal choice. The process is iterative, and you should consider several alternatives. For example, implementing a new accounting system may satisfy most of a company’s needs and produce a good ROI on its own. But expanding the application footprint to, say, payroll and purchasing, may result in an even better return because it simplifies integration costs, eliminates more manual work, and may strategically be a better decision.

Set up a framework to evaluate each scope alternative. In a framework (Excel comparison) you can evaluate each scope option according to such factors as cost, complexity, length of time to implement, risk to the business, ROI, required internal resources and strategic value. Then you have a logical basis for your decision.
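The Excel-style framework described above can be sketched in a few lines of Python. Everything here is illustrative: the criteria weights and the 1-to-5 scores for each scope option are hypothetical numbers, not recommendations.

```python
# Hypothetical weighted-score comparison of two scope alternatives.
# Weights sum to 1.0; scores run 1 (worst) to 5 (best). All values are made up.

CRITERIA_WEIGHTS = {
    "cost": 0.25,
    "complexity": 0.15,
    "time_to_implement": 0.15,
    "business_risk": 0.15,
    "roi": 0.20,
    "strategic_value": 0.10,
}

scope_options = {
    "accounting_only": {
        "cost": 4, "complexity": 4, "time_to_implement": 4,
        "business_risk": 4, "roi": 3, "strategic_value": 1,
    },
    "accounting_plus_payroll_purchasing": {
        "cost": 3, "complexity": 3, "time_to_implement": 3,
        "business_risk": 3, "roi": 5, "strategic_value": 5,
    },
}

def weighted_score(scores):
    """Combine per-criterion scores (higher = better) into one number."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Rank the alternatives, best first.
for name, scores in sorted(scope_options.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

With these made-up numbers, the larger footprint edges out the accounting-only option, echoing the point above that expanding scope can sometimes be the better return. The value of the exercise is less the final number than the forced, side-by-side discussion of each criterion.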

The scope of an ERP project does not have to be huge. You can be selective in what processes to migrate to an ERP system, and you don’t have to convert everything at once – both of these steps will reduce the overall risk of the project. For example, you can implement demand planning systems first to shake out the bugs in what is traditionally a complex and parameter-sensitive application. The core financial systems of an ERP can also be phased in first before everything else.

Share:
Strategy & Management

The One Minute Technology Manager – Test the Assumptions

March 30, 2016 by Matt Cook No Comments

Image by Nicolas Will, CC license

Behind every idea is a set of assumptions.  These assumptions can be exposed simply by asking “why?”

When it comes to good technology management, it’s your job to test these assumptions, to kill the losing propositions or to make them more viable as sound investments.  Sometimes these assumptions are wrong – and a lot of them need to be right in order for a project to succeed.

Many people don’t realize the number of assumptions they make when a technology project is launched. Among them:

  • what they saw in the demo or pilot will work in the real world;
  • the software will meet all the business requirements that were specified before the project started;
  • the team won’t have to make any customizations other than what was already identified;
  • users will quickly learn and accept the new system;
  • the project will be completed on the promised date.

In my book I described a hypothetical conversation between a manager and a CIO/CEO.  The manager was explaining that “the new system will give us real-time visibility of our vendor inventories and plant inventories, and instead of waiting for reports we’ll see our inventory positions and planned production and receipts real-time.”

Taken at face value, this statement implies acceptance of the following assumptions:

  1. The way we think of “real-time visibility” of inventories, production and receipts is the same as what the system can provide.
  2. The view of said data will be in a useful format and will provide all the data we need to make better/faster decisions.
  3. These better/faster decisions will enable us to let our customers order within a shorter lead-time window and will reduce our on-hand inventories.
  4. The savings from lower inventories and the additional sales from our late-order customers will more than pay for the cost of this new system.
  5. A change in business process (i.e., how we manage inventories and production) would not produce these same benefits.
  6. Out of all of the possible system solutions this one is the best choice from an IT strategy, cost and ongoing support standpoint.

A pause for a minute to question these six assumptions may well be the most valuable minute ever spent on the proposed project.  All kinds of havoc and wasted money can be avoided just by testing these assumptions.

And as you can see, you don’t have to be an IT expert to successfully manage technology.  You just have to use common sense and test the logic that if we do X, then we will receive Y benefits.  If you are going to invest in technology, you may as well do it the right way.

Share:
Strategy & Management

Why Do Software Projects Cost So Much?

October 15, 2015 by Matt Cook No Comments

The short answer to why corporate software costs so much is that implementing it takes so long, even if everything goes perfectly, which happens about as often as Halley’s Comet passing through our skies. It’s expensive for one reason: specialized, and therefore expensive, skills. It takes expensive skills to:

  • write the software in the first place;
  • modify it to your precise business needs; and
  • install and test it and fix problems before you can use it.

The three biggest cost buckets of a software investment are implementation, software modifications, and the cost of delays or disruption to the business.

What is “implementation”? It is the process of making your business function using the new software, or “integrating” the software into your business, however you choose to look at it. Companies have different philosophies about this; some insist the software must be modified to accommodate the way the business functions; others believe in keeping the software as “vanilla” as possible by changing processes to fit the way the software was designed to work.

There is probably a happy medium.  I think the more you modify a program, the more trouble you can expect.  It is not unusual to spend a small percentage of the project cost on modifications.

“Implementation” is also the process of matching each step in your business process to corresponding steps in the software. A business “process” is usually something like “ship a customer order” or “receive a shipment from a supplier.”

There might be 100 or so distinct business processes in a company, each with five to eight steps or transactions involved, so a software implementation could involve matching all of those 500 to 800 steps or transactions to the new software, and that takes time, knowledge of your business, and knowledge of the new software.

That’s why implementations are expensive: high cost per hour multiplied by many hours.
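The arithmetic behind that claim is easy to sketch. Every figure below is an assumption chosen only to illustrate the multiplication; your process count, effort per step, and rates will differ.

```python
# Back-of-envelope implementation cost. All inputs are hypothetical.
processes = 100           # distinct business processes (as estimated above)
steps_per_process = 6.5   # midpoint of the five-to-eight range
hours_per_step = 8        # assumed effort to map, configure, and test one step
blended_rate = 175        # assumed blended consultant/internal $ per hour

total_steps = processes * steps_per_process   # 650 steps to match to the software
total_hours = total_steps * hours_per_step    # 5,200 hours of skilled work
total_cost = total_hours * blended_rate       # $910,000 before anything goes wrong

print(f"{total_steps:.0f} steps, {total_hours:,.0f} hours, ${total_cost:,.0f}")
```

Even with these modest assumptions, mapping work alone runs well into seven figures of effort once project management, training, and infrastructure are layered on, which is the "high cost per hour multiplied by many hours" problem in miniature.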

But if a perfect project is expensive, imagine how expensive a delayed or failed project can be. Failure is the norm, according to some studies, defined as over budget, not meeting implementation dates, or not delivering functionality as expected.

I would add to that list, from personal experience, that failure also includes unexpected business disruption, like temporarily shutting down a manufacturing plant or shipping to your customers a day late. So software implementations are perceived to be wildly expensive not only because they are expensive in their own right; they also have a high failure rate, which only adds to the cost.

Share:
Strategy & Management

Case Study: Nike’s Adventure with Supply Chain Planning Software

July 17, 2015 by Matt Cook No Comments

A Nike Factory store in Atlantic City, NJ.  Photo by Shabai Liu, CC license.

Background

In February 2001 Nike, Inc. announced that it would miss sales and profit targets for the quarter due to problems with supply chain software it had begun to implement the previous year. The company said that it had experienced unforeseen complications with the demand and supply planning software that would result in $100 million in lost sales.

Nike was trying to put in a system that would cut its response time to changing sales demand. These types of systems rely on algorithms and models that use historical sales data combined with human input to generate a sales forecast, which is then converted to a manufacturing plan and orders for raw materials from suppliers. It’s not easy to set up and successfully run these applications to produce optimal results. The process demands a lot of trial and error, testing, and running in parallel with the old system to shake out bugs.

As reported by CNET News’ Melanie Austria Farmer and Erich Leuning, SAP spokesman Bill Wohl, reflecting on Nike’s dilemma, said at the time, “What we know about a software implementation project is that it’s just not about turning on the software. These projects often involve really wrenching changes in a company’s business process…It involves changes in the way employees work, and anytime you make changes in the way employees are used to working, it can get difficult.”

Nike is in the apparel business, where styles come and go, and where advertising and promotional programs can spike demand, requiring the supply chain to react just in time, delivering to the market just the right amount of each style. An oversupply of shoes or other apparel will lead to discounting and reduced profits, and an undersupply will lead to lost sales. Nike ran into both of these scenarios, and its profit dropped while sales declined, resulting in the $100 million unfavorable financial impact to the company.

Inside the logic of the software Nike chose, parameters and settings must be optimally set for the most efficient quantities to be produced and distributed to the market. It’s very easy to get it wrong, and companies launching this type of application usually run a pilot for several months before they are satisfied with the recommended production and distribution plans generated by the software.
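To make the parameter-sensitivity point concrete, here is a deliberately tiny sketch. Real demand-planning engines run dozens of chained heuristics; this one-parameter exponential smoothing model and its sales history are invented purely to show how much a single setting can swing the plan.

```python
# Illustration of parameter sensitivity in demand planning.
# `alpha` is one knob standing in for the dozens of settings mentioned above.

def exp_smooth_forecast(history, alpha):
    """One-step-ahead forecast via simple exponential smoothing."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

weekly_sales = [100, 120, 90, 200, 110, 95]   # made-up data; week 4 was a promo spike

# Two plausible settings, noticeably different production plans:
for alpha in (0.2, 0.8):
    plan = exp_smooth_forecast(weekly_sales, alpha)
    print(f"alpha={alpha}: plan for {plan:.0f} units")
```

A sluggish setting (alpha=0.2) still carries the promotional spike in its plan; a reactive one (alpha=0.8) has largely forgotten it. Multiply that divergence across thousands of styles and sizes and the Nike-scale consequences of mis-set parameters become easy to imagine.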

Much has been written about Nike’s experience, and much of it is valuable for any enterprise thinking about a similar project. Keep in mind, though, that this was a public spat, and both the software firm and Nike told their own version of the story for the public record. That means we don’t have all the facts. Nonetheless, I think there are valuable lessons in the Nike story, and at the risk of not getting all the facts right, I present my conclusions more to help you learn and succeed than to cast blame on any of the Nike project participants.

Key Points

Here is what I think were the main issues in the Nike project:

Complexity of the application without commensurate resources applied to making it work. Christopher Koch, writing in CIO Magazine at the time, said “If there was a strategic failure in Nike’s supply chain project, it was that Nike had bought in to software designed to crystal ball demand. Throwing a bunch of historical sales numbers into a program and waiting for a magic number to emerge from the algorithm — the basic concept behind demand-planning software — doesn’t work well anywhere, and in this case didn’t even support Nike’s business model. Nike depends upon tightly controlling the athletic footwear supply chain and getting retailers to commit to orders far in advance. There’s not much room for a crystal ball in that scenario.”

I don’t fully agree with this assessment; I think demand forecasting systems are critical to modern businesses, and if configured and used correctly, bring many benefits. Other reports said Nike didn’t use the software firm’s methodology, and if true, this would greatly contribute to its troubles. I have implemented these systems and they require precise attention to dozens of settings and flags, pristinely accurate data, and the flawless sequential overnight execution of sometimes 30 or more heuristic calculations in order to produce a demand forecast and a recommended production and raw material supply plan.

It’s also critical with these types of applications to have the right subject matter experts and the best system users in your company on the team dedicated to making the system work the right way for your business. This is where, if published reports are true, I believe Nike may have failed. It is possible Nike simply needed more in-house, user-driven expertise, and more time to master the intricacies of the demand planning application.

In 2003 I ran an ERP project that included an overhaul of supply chain systems. The suite included demand and supply planning solution software, which we would use to forecast demand, generate a production and raw materials supply plan, and determine the plan for supplying product from plants to distribution centers. Unfortunately the best system users declined to be part of the team due to heavy travel requirements, and we had multiple problems getting the parameters right. The supply chain suffered after launch as incorrect production and distribution plans disrupted the business for several months.

Combining a maintenance-heavy, complex application with an organization unwilling or unable to meet the challenge is one way to find the money pit.

A ‘big bang’ approach to the launch without sufficient testing. Despite prevailing wisdom and suggestions by veterans that Nike phase in the new application, Nike chose to implement it all at once. This immediately put at risk a large portion of the Nike business. A phased approach would have limited the potential damage if things went wrong.

A case study of the project published by Pearson Education discusses this point: “Jennifer Tejada, i2’s vice president of marketing, said her company always urges its customers to deploy the system in stages, but Nike went live to thousands of suppliers and distributors simultaneously.”

The study also quotes Lee Geishecker, an analyst at Gartner, Inc., who said “Nike went live a little more than a year after launching the project, yet this large a project customarily takes two years, and the system is often deployed in stages.”

Brent Thrill, an analyst at Credit Suisse First Boston, sent a note to his clients saying that, given the complexities, he would not have been surprised if Nike had tested the system by running it for three years alongside the older system. According to Larry Lapide, a research analyst at AMR and a supply chain expert, “whenever you put software in, you don’t go big-bang and you don’t go into production right away. Usually you get these bugs worked out . . . before it goes live across the whole business.”

I can understand that Nike would want to convert a large portion of its business and supplier base at the same time. It reduces the length of the implementation and therefore the cost of maintaining consultants and support staff, and it eliminates the need for temporary interfaces to existing systems.

But a smart move might have been to launch and stabilize the demand planning portion of the software first. It’s easy for me to second guess, but Nike could have taken the forecast generated by the new system and entered it manually into its existing, or ‘legacy,’ systems. After all, if the forecast is wrong, then everything downstream – the production, raw material, and distribution plans – is also wrong. I did this on two projects, and it significantly reduced risk. On both projects we launched the demand planning (DP) application and ran it in parallel with our legacy system until we were satisfied with the results, then we disengaged the legacy DP application and began manually keying the new system’s DP forecast into our legacy production, raw material, and distribution planning software.

Share:
Strategy & Management

Attention Deficit and the Enterprise Software Project

June 20, 2015 by Matt Cook No Comments

Never mind how many people you can get to work on your enterprise software project team. The critical factor today is how much of their focused attention you can get when you need it.

Quoting Marilyn vos Savant, the columnist and Guinness record holder for highest IQ: “Working in an office with an array of electronic devices is like trying to get something done at home with half a dozen small children around. The calls for attention are constant.”

It is not easy to get people to pay attention. Your project is competing for share of mind with texts, email and the internet emanating non-stop from your team’s cell phones.

Thomas Davenport and John Beck, in their book The Attention Economy, state: “The ability to prioritize information, to focus and reflect on it, and to exclude extraneous data will be at least as important as acquiring it.”

In 2005, I was on a team whose mission it was to modify parts of SAP to enable the sale and invoicing of what are called “kits.” A kit in software parlance is a product whose components include other products. The final product is what is recorded as the sale and what appears on the customer invoice. Sounds simple, right?
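As a sketch of why it only sounds simple: a kit is essentially a small bill-of-materials problem. The structure below is a conceptual illustration with invented product codes; SAP’s actual sales-BOM data model, pricing, and invoicing logic are far richer than this.

```python
# Conceptual sketch of a sales "kit": the kit is what is sold and invoiced,
# while its components are what physically ship. Product codes are made up.

kits = {
    "GIFT-SET-01": {            # sold and invoiced as one line item
        "SHAMPOO-250": 2,       # component quantities per single kit
        "CONDITIONER-250": 1,
        "TRAVEL-BAG": 1,
    },
}

def explode_kit(order_item, qty):
    """Return the component quantities needed to ship `qty` of an item."""
    if order_item not in kits:
        return {order_item: qty}        # a plain product: nothing to explode
    return {comp: per_kit * qty
            for comp, per_kit in kits[order_item].items()}

print(explode_kit("GIFT-SET-01", 10))
# The invoice shows 10 x GIFT-SET-01; the warehouse picks the exploded components.
```

The hard part is not this explosion step but keeping the invoice, inventory, costing, and revenue recognition all consistent with it across modules, which is where the fourteen months went.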

We worked in a global company. The team participants were in Ohio, New York, Western Europe, The Philippines, India, and Texas. After 14 months the team had finally gotten to the point at which it could test the solution. There had been more than 25 full-team conference calls, but no face-to-face meetings in which everyone – business people, developers, and technical experts – was present.

Why did it take so long? Attention deficit: People in remote locations for whom this project was one of many, and whose interaction with the substance of it was via impersonal conference calls during which multi-tasking naturally diverted one’s attention.

Do you know anyone who does not multi-task during a conference call? The attention deficit is automatic. People are caught not listening. Participants tend to speak a lot less than if the meeting is held face to face. It’s easy to hide and withdraw on a conference call.

We pride ourselves on being able to work virtually across continents, hold meetings anytime, anywhere, and send work around the globe as if we were passing it across the table. But I think we’ve diluted our effectiveness and the quality of our output. We split our minds into more and more slices, marveling at our ability to manage so many things at once, but all we are doing is giving cursory attention to a lot of things instead of focused energy on a few.

Share:
Strategy & Management

Have a Methodology – But Only One That Makes Sense to You

May 25, 2015 by Matt Cook No Comments

Who needs a methodology?  Just do it.  Photo: Spring 2013 Hackathon at Columbia University, where participants were challenged to build — over a weekend —  software programs for New York City startups; by Matylda Czarnecka, hackNY.org, CC license.

You’re about to launch a big ERP project.

You need a structured methodology.  The lazy/easy thing to do is to use the one your software vendor or consulting partner uses.  Don’t automatically accept this.  Understand first how their methodology works (it’s usually designed to maximize billable hours).  Then use common sense.

A methodology is simply a way of doing things. A methodology for doing laundry could be: 1) separate colors from whites; 2) wash whites in hot water/cold rinse with bleach; 3) wash colors in cold water; 4) hang dry delicate fabrics and put everything else in the dryer on medium heat for 30 minutes.

But there can be variations in laundry methodology depending on preferences and beliefs, such as 1) throw colors and whites together in a cold wash and cold rinse so colors don’t run; 2) throw everything in the dryer on delicate cycle; 3) check periodically for dryness; and 4) pull out anything that seems dry.

IT project methodologies are practically an industry – templates, software, books, training programs and consulting engagements.

The “waterfall methodology,” used in Microsoft Project and adhered to for decades by many in the project management field, is arguably flawed from the beginning. This methodology assumes that task completion is a function of the number of man-hours, total time duration, and dependency on completion of one or more preceding tasks. Viewed on a chart, the timeline of the project looks like waterfalls cascading from left to right, top to bottom, as time progresses.

Frederick Brooks, the computing legend who pioneered IBM’s System/360 mainframe computers launched between 1965 and 1978, says in his seminal book The Mythical Man-Month that “the basic fallacy of the waterfall model is that it assumes a project goes through the process once, that the architecture is excellent and easy to use, the implementation design is sound, and the realization is fixable as testing proceeds (but) experience and ideas from each downstream part of the construction process must leap upstream, sometimes more than one stage, and affect the upstream activity.”

Brooks also says the planning process usually and mistakenly assumes that the right people will be there, available, focused and engaged, when they are needed, and that when they perform their work they will do so in a linear fashion, with no mistakes and no need to backtrack and start over. In the 21st century, this is simply not the case in many organizations.

I don’t prefer one particular methodology – I think the way to work depends on what you’re trying to achieve, by when, with whom. I also think the imposition of a particular methodology – if it doesn’t fit the project or the team – will actually detract from success.

I do like at least one part of the Scrum methodology: the Sprint.  It could be a one-week or two-week Sprint, but the idea is the same: everyone needed for a particular stage or segment of the project is brought together and forced to stay together until their part is finished and tested.  Just get together and get it done.

Have you seen the videos where a house is built in a day?  Here is just one.  How did they do that? Lots of preparation, teamwork, intensity, focus, urgency.

The idea of a sprint in an IT project is the same: skilled people intensely focused on the same, singular objective that must be achieved within a fixed period of time.  I like it because sometimes it’s the only way to get the brains you need to complete something without the endless distractions of everyday work.  Attention deficit is rampant in organizations today, exactly the opposite of the focus that enterprise software projects need to succeed.

Which methodology is best? For your project, the best methodology is the one that provides structure in a common-sense way with simple, easy-to-use tools that you think your team will, in a practical sense, actually stick to.


Share:
Strategy & Management

If You Can’t Imagine the Future You Want, You’ll Never Get There

May 15, 2015 by Matt Cook No Comments

Image: a toy replica of the DeLorean Time Machine from the 1985 film “Back to the Future,” photo by J.D. Hancock, CC license

Company P is a firm I have worked with that helps companies to achieve ambitious goals. One of the firm’s methods is to engage in a “Merlin” exercise – so named for the legendary magician who claimed to come from the future. In this exercise, groups describe in words or illustrations the future of their company in three, five or more years, pretending that ambitious goals have already been achieved.

How did the company get there? What do processes look like? The more detail, the better. How did we get rid of all that non-value-added work? How did we double business with the same number of people?

This is the approach you need in order to define how your software investment is going to change your business results.

Ignore the systems for now. For now, don’t try to imagine your current system with great new features. That is too limiting. For this part of the project, keep “the system” generic. It can and should appear in your future plans, with specific capabilities, but not as a specific system. When you have selected specific software and become aware of its capabilities and limitations, come back and re-draw the future processes as they will be executed with that particular application.

Define what problems are fixed in the future state. The present must have some characteristics that people want to change, or else there would be no interest in a new system investment. So start with a few problem statements to articulate why you want the future to be different from the present, such as “While visiting customers, the sales team cannot access order history nor place additional sales orders, resulting in lost revenue to the company.”

Imagine. When the problems identified are resolved, what does that look like? What would it look like if, for example, the problem of having to delete and re-enter a customer order just to change it were fixed? “Service reps can retrieve from the system a customer order, go into an edit process, make any change required, including products ordered, quantities, delivery dates, method of payment, and shipping options, then save the order.”

Don’t be constrained by today’s business situation, type of system architecture, or preferences of senior management. Don’t limit your thinking to just small problems. Imagine a bigger picture, years into the future, when the company is transformed from what it is today…what would that look like?

Start with the big picture – “Sales have doubled” – and work backwards to specifics from there. “We started selling through a web portal.” The big picture is the eventual outcome you want, the specifics – a web portal, for example – will be the things around which new business processes will be formed. The new processes are what you want the new technology to enable.

Share:
Strategy & Management

Case Study: The 2010 Census

May 6, 2015 by Matt Cook No Comments

Image: U.S. Counties, population change 2000-2010, by Joe Wolf, CC license


Failed IT projects are not unusual in the government sector. Big IT projects are hard enough without the added complexity, delays, politics, and bureaucracy of government entities. But leaving those dysfunctions aside, there is much to learn from these failures. The 2010 Census is one such event, because the factors that led to failure are some of the same ones that kill private sector projects as well.

Background

The 2010 census was to be the most automated census ever. Half a million census takers with handheld devices would capture, in the “field,” household data that had not been sent in via census forms. Harris Corp. was given a $600 million contract to supply the handheld devices and manage the data.

But according to published news accounts, the Census Bureau requested, after the project began, more than 400 changes to the requirements that had originally been submitted to Harris. In trying to accommodate these requests, Harris naturally incurred more expenses to redesign or re-program the handheld units and to redesign the data management system that would collect and organize the accumulated data.

The handheld units themselves were difficult for some of the temporary workers who tested them to operate, and they couldn’t successfully transmit large amounts of data. A help desk for field workers using the devices was included in the original contract at a cost of $36 million, an estimate later revised to $217 million.

In the spring of 2008, the Census Bureau was faced with a decision whether to continue with the automation plan, because the handheld units had not yet been completely tested and needed further development, in part because of the additional post-contract requirements. The Bureau needed enough time to hire and train about 600,000 temporary workers if the original Field Data Collection Automation (FDCA) plan had to be revised or scrapped.

In the end, the 2010 Census may not have been the most automated census ever, but it was the most expensive. The contract with Harris was revised to $1.3 billion, and other expenses were incurred for equipment and other areas that were not anticipated and therefore not estimated. Not all of the overruns were systems-related.

Key Points

Constantly changing requirements increased delays and costs. As we know from understanding the nature of software, a system cannot simply change its code and accommodate additional requirements on the fly. Why no one put a stop to the additional requirements heaped onto the project is a mystery, because it’s standard procedure to freeze the requirements at some point in the project. It’s like asking a homebuilder to add another bathroom on the second floor when the home is halfway to completion. It can be done, maybe, but it will make the house cost more and take longer to complete. In extreme cases – like the new custom-built Medicaid claims processing system for the State of North Carolina – the project may never end.

Undue confidence in the user’s ability to learn how to operate the handheld devices led to surprise additional costs. The project didn’t plan on people having so much difficulty with the handheld data collectors. But people’s innate abilities, especially in the area of new technology, vary greatly. Nearly every project I’ve been involved in experienced difficulty because of a certain percentage of users not being able to catch on to the new system. This means more mistakes are made with the new system, more support is needed, and in some cases people who were competent at their jobs with the old system simply cannot perform at a satisfactory level with the new one.

In the end, the project was a money pit. The Census Bureau had to revert to pencil and paper when the handheld devices couldn’t be used – which it said would add $3 billion to the cost of the census. If $3 billion is what the Bureau would have saved with automation, then it was probably worth it to invest the originally estimated $600 million, and even the revised $1.3 billion. Instead, the government paid the full $1.3 billion and still had to use pencil and paper. Net result: a waste of money.

Just freezing the requirements at some point in the project could have completely changed the outcome. Intentions were apparently good – saving labor cost through automation – and I expect presentations were made to several levels of management to gain approval. Yet a well-intentioned project developed by smart people became a vast hole sucking time and money into the abyss.

Strategy & Management

Why Wouldn’t I Want SaaS?

March 17, 2015 by Matt Cook No Comments

Software-as-a-Service is challenging the paradigm that software is a thing you buy, take back to your office and install. Looking back some day, we might shake our heads and wonder why any enterprise ever thought it had to purchase and physically install a copy of millions of lines of code that ran on a computer within its premises, just to transact day to day business.

On-premise going the way of cassette tapes…?

The market is receptive to more and more SaaS solutions, and software firms are positioning themselves to offer those products. Most of the big, traditionally on-premise software providers now offer at least some of their applications in SaaS form. For you, this will mean more choices.

But remember this: SaaS applications do not guarantee perfect performance and 100% uptime. They are still computer programs running on a server somewhere, and if those programs are buggy, unstable, corrupted, or lack proper expert support you will land in the money pit just as sure as if you bought that same buggy and unstable application and installed it in your own data center.

Are there any good reasons why you would want a traditional on-premise application?

Yes: security is one reason, in certain circumstances. No matter how demonstrably secure a third party may seem, you simply may not want to entrust your data security to a third party, period.

Your customers, particularly, may want the relative or perceived assurance of your own firewall surrounding your applications and their data. Their business relationship is with you, not the company hosting your applications.

You may already have economies of scale suited to on-premise hosting – plenty of server capacity, a built-in support staff, and developers on your team who are capable of building out the application the way you want it.

If you are positioning the application to be used by several divisions within your company, you may also want central on-premise hosting. You may want to tightly control modifications to the “core” system and also manage access permission levels among users, as well as the total number of users. These actions can significantly reduce per-user costs.

With SaaS applications, you still need to do the same due diligence you would do with a traditional on-premise application. The fit analysis, testing, and project management are largely the same, as are the precautions needed to avoid the money pit. You can still spend a fortune modifying a SaaS application, integrating it with your other systems, and pulling data out of it for analysis.



© 2017 Copyright Matthew David Cook // All rights reserved