Matthew Cook – Software Money Pit Blog
Trends & Technologies

Enterprise Information Management (EIM) 101

August 31, 2016 by Matt Cook

Photo: Ministry of Information, Singapore; William Cho

The term “enterprise information management” seems to capture the whole world, right? But this term applies to the indexing, searching and compilation of information (not necessarily data) from all of the places in your enterprise where documents might reside.

Gartner defines EIM as “an integrative discipline for structuring, describing and governing information assets across organizational and technological boundaries to improve efficiency, promote transparency and enable business insight.”

It’s hard to tell where document/information management leaves off and (data or information) analytics begins. This is part of the mashing up of software functionality that is going on in the market today.

Information is everywhere – in emails, presentations, documents stored on a company’s server, individual user hard drives, servers in the cloud, etc. So traditional search software is somewhat ineffective, because it expects data or documents to be neatly organized inside a box where it can simply sort through data and return matches to your query.

Modern enterprise information management (EIM) software is different because it can search multiple sources that are geographically and systemically separate, according to terms defined by the user, usually through a web browser.
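To make the idea concrete, here is a minimal sketch in Python of what this kind of federated document search does conceptually: crawl several separate storage locations into one index, then search it by user-defined terms. The directory paths and the naive keyword matching are hypothetical stand-ins; a commercial EIM tool does this at enterprise scale with connectors, permissions, and ranking.

```python
import os

# Hypothetical, separately located document sources.
SOURCES = ["/mnt/shared_drive", "/mnt/cloud_sync", "/mnt/legal_archive"]

def build_index(sources):
    """Walk every source and map each word to the files that contain it."""
    index = {}  # word -> set of file paths
    for root_dir in sources:
        for dirpath, _, filenames in os.walk(root_dir):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="ignore") as f:
                        words = set(f.read().lower().split())
                except OSError:
                    continue  # unreadable file; a real tool would log and move on
                for word in words:
                    index.setdefault(word, set()).add(path)
    return index

def search(index, term):
    """Return every indexed document containing the user-defined term."""
    return sorted(index.get(term.lower(), set()))

index = build_index(SOURCES)
print(search(index, "acme"))  # e.g., every document mentioning one customer
```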

The market for these tools arose because companies generated tons of documents without any “filing” standards, other than placing them on a corporate shared drive or on people’s PC hard drives. As a result, it was almost impossible to assemble all documents within a company’s four walls related to a particular customer, vendor, product, project, formula or activity. The ability to perform this type of search is especially important to legal professionals, who must respond to government inquiries or parties involved in litigation. This type of search is referred to as e-discovery.

Just a few years ago I worked on a project like this, except at the time it was called a records retention project, and we installed software from vendor L. The software was essentially a search tool for the company’s numerous internal file directories; it required indexing every file according to pre-established criteria and establishing a document hierarchy with permission levels. It also assumed that all of the company’s 1,200 employees would store all of their documents on the company’s shared drives and no longer use their PC hard drives to store files (this was not realistic).

Today, the company that used vendor L is implementing a different system that is capable of locating files anywhere within the company’s network – shared drives, hard drives, emails. In less than 24 months, software that cost over $1 million to implement was rendered obsolete.

EIM usually includes e-discovery tools, and tools for managing content or knowledge, such as user guides, formulas, troubleshooting guides, business process steps or standard operating procedures, system diagrams, and documents critical to retaining official records.

Clearwell Systems, acquired in 2011 by Symantec, is a leader in the e-discovery field. Symantec also offers other EIM solutions. The Symantec web site says this about the Clearwell e-discovery application: “The Clearwell eDiscovery Platform, nominated as a Leader in Gartner’s 2012 Magic Quadrant for eDiscovery, provides users with one seamless application to automate the legal hold process, collect data in a forensically sound manner, cull down data by up to 90%, and reduce review costs by up to 98% through the use of Transparent Predictive Coding.”

My advice with this type of software:

  1. Like any software, garbage in = garbage out, so make it easy for users to do what they need to do for compliance; otherwise they will invent ways to work around your application, or simply not use it.
  2. Whatever you are archiving is presumably important, so put it in a secure environment with a redundant backup.
  3. As soon as you index, categorize, and digitize your enterprise information, you will think of new ways of using it, so pick a vendor with a wide range of services and solutions.

Trends & Technologies

An Intro to Analytics Vendors

June 20, 2016 by Matt Cook

Image by David Bleasdale, CC license

Analytics is one of the top buzzwords in business software today. Analytics software is often marketed as a tool for business intelligence, data mining or insights. It’s the crystal ball software: tell me things I don’t already know, and show me ah-hahs or other exciting revelations that, if acted on, will increase sales, cut costs or produce some other benefit.

The essential elements for analytics are:

1) A design for your ‘stack,’ which is just a term for the layers: usually a data layer at the bottom, then a translation layer, then some kind of user-interface layer on top. The translation and user-interface layers are usually provided by the analytics vendor; you provide a place for data storage.

2) A way to send the data to your data storage, automatically, which is usually referred to as “ETL” or extract, transform, and load. SnapLogic and Informatica are two vendors who offer these tools.

3) Some way to “harmonize” the data, which means define each data element and how it will be used in analytics. “Sales” will mean such and such, “Gross Margin” will be defined as ……

These three components can be on-premise in your building or in a cloud hosted by a vendor.
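Here is a minimal sketch of steps 2 and 3 together, using Python’s standard library. The file names, source column names, and harmonized definitions are invented for illustration; real ETL tools such as SnapLogic or Informatica would pull from live systems rather than a CSV.

```python
import csv
import sqlite3

# Harmonization: map each source field to the agreed standard definition.
HARMONIZED = {"net_sls": "sales", "gm_pct": "gross_margin"}  # hypothetical names

def etl(csv_path, db_path):
    """Extract rows from a CSV, transform field names, load into SQLite."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS facts (sales REAL, gross_margin REAL)")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Transform: rename source fields and coerce values to numbers.
            clean = {HARMONIZED[k]: float(v) for k, v in row.items() if k in HARMONIZED}
            conn.execute(
                "INSERT INTO facts (sales, gross_margin) VALUES (:sales, :gross_margin)",
                clean,
            )
    conn.commit()
    conn.close()

etl("extract.csv", "analytics.db")  # placeholder file names
```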

SAS, based in North Carolina, has long pioneered this space, and now many business software firms claim to provide “robust analytics.” The problem: what constitutes “analytics”? Canned reports are not analytics. So you’ll need to shop this category knowing that probably the most serious applications will come from firms that are dedicated to analytics.

International Data Corporation (IDC) reports that the business analytics software market is projected to grow at a 9.8% annual rate through 2016. IDC describes the market as dominated by giants Oracle, SAP and IBM, with SAS, Teradata, Informatica and Microstrategy rounding out the top 10 in terms of sales revenue. Although the top 10 account for 70% of the market, IDC reports that “there is a large and competitive market that represents the remaining 30%…hundreds of ISVs (Independent Software Vendors) worldwide operate in the 12 segments of the business analytics market…some provide a single tool or application, others offer software that spans multiple market segments.”

Here are some other interesting analytics or business intelligence (BI) products: Qliktech provides easy-to-develop dashboards with graphical representations as well as tabular and exportable reports. Its Qlikview software is an “in-memory” application, which means that it stores data from multiple sources in RAM, allowing the user to see multiple views of the data, filtered and sorted according to different criteria.
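To illustrate the in-memory concept generically (this is a sketch using the pandas library and invented data, not Qlikview’s actual engine or API): the data is loaded into RAM once, and each “view” is just another filter or sort over the same in-memory table.

```python
import pandas as pd

# Hypothetical sales extract, held entirely in RAM.
df = pd.DataFrame({
    "region":  ["East", "West", "East", "West"],
    "product": ["A", "A", "B", "B"],
    "sales":   [100.0, 80.0, 120.0, 95.0],
})

# Several views of the same in-memory data, filtered and sorted differently.
sales_by_region = df.groupby("region")["sales"].sum().sort_values(ascending=False)
east_by_product = df[df["region"] == "East"].sort_values("sales", ascending=False)

print(sales_by_region)
print(east_by_product)
```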

Information Builders (IB) is a software company classified by advisory firm Gartner as a leader in BI applications. IB’s main application, WebFocus, is a flexible, user-friendly tool that is popular with sales teams because salespeople use it while visiting customers to enhance their selling messages with facts and visual interpretations of data.

WebFocus has a “natural language” search capability, making it useful to monitor and analyze social media.

Birst, named by Gartner as a challenger in the BI space, is a cloud-based (SaaS) application that offers “self-service BI,” deployment to mobile devices, adaptive connectors to many different types of data sources, in-memory analytics, drill-down capabilities, and data visualization. The Birst tool also has a data management layer, allowing users to link data, create relationships and indexes, and load data into a data store. Tableau is another similar vendor.

It’s useful to start small and experiment with analytics. People in your organization with good quantitative skills and imagination can experiment with tools, usually at very low cost. Soon you will see some interesting results and will want to do more, but make sure to put in place some rules about what constitutes sanctioned and official “analytics” in your organization, to prevent the uncontrolled proliferation of un-validated information.

Trends & Technologies

Big Data: Correlations, Not Cause-and-Effect

February 18, 2016 by Matt Cook

Image by Marcos Gasparutti, CC license

In their recently published book, “Big Data: A Revolution That Will Transform How We Live, Work, and Think,” Viktor Mayer-Schonberger and Kenneth Cukier say that big data will provide a lot of information that can be used to establish correlations, not necessarily precise cause and effect.

But that might be good enough to extract the value you need from big data.

Three examples from their book:

  1. Walmart discovered a sales spike in Pop-Tarts if storms were in the forecast. The correlation was also true of flashlights, but selling more flashlights made sense; selling more Pop-Tarts didn’t.
  2. Doctors in Canada now act to prevent severe fevers in premature infants because of a discovered link: a period when the baby’s vital signs are unusually stable tends to be followed, about 24 hours later, by a severe fever.
  3. Credit scores can be used to predict which people need to be reminded to take a prescription medicine.

Why did the people involved in the above examples compare such different sets of data? One possible reason: because they could, relatively quickly and at low cost, thanks to superfast data processing and cheap memory. If you can mash together all kinds of data in large volumes, and do so relatively cheaply, why wouldn’t you keep going until you found some correlations that looked interesting?

You can begin experimenting with Big Data – a process I endorse. You need three basic components (a small illustration follows the list):

  1. A way to get the data, whether out of your transaction systems or from external sources, and into a database.
  2. Superfast data processing (a database with enormous amounts of RAM and massively parallel processing). This can be had on a software-as-service basis from Amazon and other vendors.
  3. Analytics tools that present the data in the visual form you want. Vendors include Oracle, Teradata, Tableau, Information Builders, Qlikview, Hyperion, and many others.
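As a small illustration of the correlation hunt itself (the numbers and column names below are invented, loosely echoing the Walmart example above), mashing two datasets together and scanning for correlations takes only a few lines once the data is in one place:

```python
import pandas as pd

# Hypothetical daily data: weather forecasts joined with store sales.
weather = pd.DataFrame({
    "date": pd.date_range("2016-01-01", periods=6, freq="D"),
    "storm_forecast": [0, 1, 0, 1, 1, 0],  # 1 = storm in the forecast
})
sales = pd.DataFrame({
    "date": pd.date_range("2016-01-01", periods=6, freq="D"),
    "pop_tarts":   [120, 180, 115, 175, 190, 110],
    "flashlights": [40, 90, 35, 95, 100, 30],
})

merged = weather.merge(sales, on="date")
# Correlate every numeric column against the storm flag and eyeball the results.
print(merged.drop(columns="date").corr()["storm_forecast"])
```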

Correlations are usually easier to spot visually. And visualization is where the market seems to be going, at least in terms of hype and vendor offerings. New insights are always welcome, so we shall see what sells and what doesn’t.

The assessment from Gartner seems about right to me at this point in time: that big data is both 1) currently in the phase they call the “trough of disillusionment;” and 2) promising enough that its use in BI will grow sharply.

Strategy & Management

Case Study: Nike’s Adventure with Supply Chain Planning Software

July 17, 2015 by Matt Cook

A Nike Factory store in Atlantic City, NJ.  Photo by Shabai Liu, CC license.

Background

In February 2001 Nike, Inc. announced that it would miss sales and profit targets for the quarter due to problems with supply chain software it had begun to implement the previous year. The company said that it had experienced unforeseen complications with the demand and supply planning software that would result in $100 million in lost sales.

Nike was trying to put in a system that would cut its response time to changing sales demand. These types of systems rely on algorithms and models that use historical sales data combined with human input to generate a sales forecast, which is then converted to a manufacturing plan and orders for raw materials from suppliers. It’s not easy to set up and successfully run these applications to produce optimal results. The process demands a lot of trial and error, testing, and running in parallel with the old system to shake out bugs.
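For a sense of what sits at the core of such a system, here is a drastically simplified sketch (not Nike’s or any vendor’s actual logic; the numbers are invented): generate a forecast from sales history with exponential smoothing, then convert it into a production order.

```python
# Hypothetical monthly unit sales for one shoe style.
history = [1200, 1350, 1280, 1500, 1620, 1580]

def smoothed_forecast(sales, alpha=0.3):
    """One-step-ahead exponential smoothing: weight recent months more heavily."""
    forecast = sales[0]
    for actual in sales[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

demand = smoothed_forecast(history)
on_hand = 400        # current inventory (hypothetical)
safety_stock = 150   # buffer against forecast error (hypothetical)

# Convert the demand forecast into a manufacturing order for suppliers.
production_order = max(0.0, demand + safety_stock - on_hand)
print(f"forecast = {demand:.0f} units, production order = {production_order:.0f} units")
```

A real demand-planning application layers dozens of parameters, human overrides, and item-by-item models on top of this skeleton, which is exactly why the configuration is so easy to get wrong.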

As reported by CNET News’ Melanie Austria Farmer and Erich Leuning, SAP spokesman Bill Wohl, reflecting on Nike’s dilemma, said at the time, “What we know about a software implementation project is that it’s just not about turning on the software. These projects often involve really wrenching changes in a company’s business process…It involves changes in the way employees work, and anytime you make changes in the way employees are used to working, it can get difficult.”

Nike is in the apparel business, where styles come and go, and where advertising and promotional programs can spike demand, requiring the supply chain to react just in time, delivering to the market just the right amount of each style. An oversupply of shoes or other apparel will lead to discounting and reduced profits, and an undersupply will lead to lost sales. Nike ran into both of these scenarios, and its profit dropped while sales declined, resulting in the $100 million unfavorable financial impact to the company.

Inside the logic of the software Nike chose, parameters and settings must be optimally set for the most efficient quantities to be produced and distributed to the market. It’s very easy to get it wrong, and companies launching this type of application usually run a pilot for several months before they are satisfied with the recommended production and distribution plans generated by the software.

Much has been written about Nike’s experience, and much of it is valuable for any enterprise thinking about a similar project. Keep in mind, though, that this was a public spat, and both the software firm and Nike told their own version of the story for the public record. That means we don’t have all the facts. Nonetheless, I think there are valuable lessons in the Nike story, and at the risk of not getting all the facts right, I present my conclusions more to help you learn and succeed than to cast blame on any of the Nike project participants.

Key Points

Here is what I think were the main issues in the Nike project:

Complexity of the application without commensurate resources applied to making it work. Christopher Koch, writing in CIO Magazine at the time, said “If there was a strategic failure in Nike’s supply chain project, it was that Nike had bought in to software designed to crystal ball demand. Throwing a bunch of historical sales numbers into a program and waiting for a magic number to emerge from the algorithm — the basic concept behind demand-planning software — doesn’t work well anywhere, and in this case didn’t even support Nike’s business model. Nike depends upon tightly controlling the athletic footwear supply chain and getting retailers to commit to orders far in advance. There’s not much room for a crystal ball in that scenario.”

I don’t fully agree with this assessment; I think demand forecasting systems are critical to modern businesses, and if configured and used correctly, bring many benefits. Other reports said Nike didn’t use the software firm’s methodology, and if true, this would greatly contribute to its troubles. I have implemented these systems and they require precise attention to dozens of settings and flags, pristinely accurate data, and the flawless sequential overnight execution of sometimes 30 or more heuristic calculations in order to produce a demand forecast and a recommended production and raw material supply plan.

It’s also critical with these types of applications to have the right subject matter experts and the best system users in your company on the team dedicated to making the system work the right way for your business. This is where, if published reports are true, I believe Nike may have failed. It is possible Nike simply needed more in-house, user-driven expertise, and more time to master the intricacies of the demand planning application.

In 2003 I ran an ERP project that included an overhaul of supply chain systems. The suite included demand and supply planning solution software, which we would use to forecast demand, generate a production and raw materials supply plan, and determine the plan for supplying product from plants to distribution centers. Unfortunately the best system users declined to be part of the team due to heavy travel requirements, and we had multiple problems getting the parameters right. The supply chain suffered after launch as incorrect production and distribution plans disrupted the business for several months.

Combining a maintenance-heavy, complex application with an organization unwilling or unable to meet the challenge is one way to find the money pit.

A ‘big bang’ approach to the launch without sufficient testing. Despite prevailing wisdom and suggestions by veterans that Nike phase in the new application, Nike chose to implement it all at once. This immediately put at risk a large portion of the Nike business. A phased approach would have limited the potential damage if things went wrong.

A case study of the project published by Pearson Education discusses this point: “Jennifer Tejada, i2’s vice president of marketing, said her company always urges its customers to deploy the system in stages, but Nike went live to thousands of suppliers and distributors simultaneously.”

The study also quotes Lee Geishecker, an analyst at Gartner, Inc., who said “Nike went live a little more than a year after launching the project, yet this large a project customarily takes two years, and the system is often deployed in stages.”

Brent Thill, an analyst at Credit Suisse First Boston, sent a note to his clients saying that, given the complexities, he would not have been surprised if Nike had tested the system by running it for three years alongside the older system. According to Larry Lapide, a research analyst at AMR and a supply chain expert, “whenever you put software in, you don’t go big-bang and you don’t go into production right away. Usually you get these bugs worked out . . . before it goes live across the whole business.”

I can understand that Nike would want to convert a large portion of its business and supplier base at the same time. It reduces the length of the implementation and therefore the cost of maintaining consultants and support staff, and it eliminates the need for temporary interfaces to existing systems.

But a smart move might have been to launch and stabilize the demand planning portion of the software first. It’s easy for me to second-guess, but Nike could have taken the forecast generated by the new system and entered it manually into its existing, or ‘legacy,’ systems. After all, if the forecast is wrong, then everything downstream – the production, raw material, and distribution plans – is also wrong. I did this on two projects, and it significantly reduced risk. On both projects we launched the demand planning (DP) application and ran it in parallel with our legacy system until we were satisfied with the results; then we disengaged the legacy DP application and began manually keying the new system’s DP forecast into our legacy production, raw material, and distribution planning software.
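A parallel run like that boils down to a measurement exercise. Here is a minimal sketch of one way to decide when the new system’s forecast is trustworthy enough to start keying into the legacy systems; the numbers are invented, and the error metric (MAPE) and cutover threshold are my illustrative choices, not a published methodology.

```python
# Hypothetical results from the parallel run: actual demand versus the
# new demand-planning system's forecast for the same items.
actuals      = [1000, 520, 310, 780]
new_forecast = [960, 540, 300, 800]

def mape(actual, forecast):
    """Mean absolute percentage error between actuals and a forecast."""
    errors = [abs(a - f) / a for a, f in zip(actual, forecast)]
    return 100 * sum(errors) / len(errors)

CUTOVER_THRESHOLD = 5.0  # percent; an illustrative acceptance criterion

score = mape(actuals, new_forecast)
if score <= CUTOVER_THRESHOLD:
    print(f"MAPE {score:.1f}% - acceptable; begin keying forecasts into legacy")
else:
    print(f"MAPE {score:.1f}% - keep running in parallel and retune parameters")
```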

