Trends & Technologies

SaaS vs. Cloud Not Exactly Clear With Some Software Vendors

July 26, 2016 by Matt Cook

Photo by Meredith Cook at Breckenridge, CO on a “blue bird” day, a clear day following a fresh snowfall.  It’s unrelated to SaaS or Cloud (or is it?); just nice to look at.

SaaS and cloud are starting to be used interchangeably (“we’re looking for a cloud solution”) but they really are not the same thing.

Software-as-a-Service (SaaS) is software made available for use based on access to features, time, number of transactions, number of users, or a combination of those variables.  A ‘cloud’ is simply a server – a computer you don’t own or maintain – that sits somewhere other than your building and that you access to run applications or store data.

SaaS describes a type of software, cloud describes a type of platform.

So you can see it’s possible to take applications you own and put them in ‘the cloud,’ and it’s equally possible to use software you don’t own, paid for based on usage, while it sits in your data center with all your other applications.

But there are more important distinctions.

Type of Software-as-a-Service: Is it software that only you access (single tenant), or is it an application that many other people or companies use (multi-tenant)?  Multi-tenant is generally lower cost, but with less specialized functions for your particular enterprise.  Is it truly SaaS, or just a full cost, configured-for-you application hosted by someone else whose costs have been spread out monthly over 5 years to look like a SaaS solution?  True SaaS works like a subscription: sign up, pay by month and use it; when you no longer need it, you cancel.

Type of Cloud: Is it a private or public cloud, or a hybrid?  A private cloud is a single tenant environment (your enterprise) where you control access and have firewalls for security and where you can define the hardware.  A public cloud is where you are paying for a part of an existing cloud server environment; here, you “rent” space and the advantage is flexibility, low cost, and the ability to scale capacity up or down based on your needs.  Hybrid clouds offer both private and public spaces for you to use.

Let’s look at examples; in both cases we will use the scenario of a manufacturing company selling to major retailers in the U.S. and Canada.

True SaaS:  You contract with a company that offers tools for analytics (software) together with point-of-sale (POS) data for your products for your largest retail customers.  You pay by month to access and analyze POS data. The costs vary depending on how many report levels you want to see.  You access the application via internet, anyone in your company can use the service, and you can cancel at any time.

True Cloud: For purposes of experimenting with business intelligence applications, you purchase database space from a vendor, at a cost that varies depending on how much space you use.  You can scale up or down in terms of the storage you need.  You can connect to this space with a variety of tools for transporting  data and you can install and remove applications easily.  For running your business day to day, you can host your most critical applications in this cloud, and have in reserve an identical cloud with servers ready to take over in cases of disaster or over-capacity of your main servers.

Before you accept at face value the terms ‘cloud’ or ‘SaaS,’ make sure you understand what the vendor is telling you. Ask for details and explanations.  What the vendor thinks is cloud or SaaS can certainly be different from what you expect.

 

 

Trends & Technologies

An Intro to Analytics Vendors

June 20, 2016 by Matt Cook

Image by David Bleasdale, CC license

Analytics is one of the top buzzwords in business software today. Analytics software is often marketed as a tool for business intelligence, data mining or insights. It’s the crystal ball software: tell me things I don’t already know, and show me ah-hahs or other exciting revelations that, if acted on, will increase sales, cut costs or produce some other benefit.

The essential elements for analytics are:

1) A design for your ‘stack,’ which is just a term for layers: usually a data layer at the bottom, then a translation layer, then some kind of user interface layer on top. The translation and user interface layers are usually provided by the analytics vendor; you provide a place for data storage.

2) A way to send the data to your data storage, automatically, which is usually referred to as “ETL” or extract, transform, and load. SnapLogic and Informatica are two vendors that offer these tools (a minimal sketch of the idea appears below).

3) Some way to “harmonize” the data, which means define each data element and how it will be used in analytics. “Sales” will mean such and such, “Gross Margin” will be defined as ……

These three components can be on-premise in your building or in a cloud hosted by a vendor.
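To make steps 2 and 3 less abstract, here is a minimal sketch in Python of a tiny extract-transform-load job with a harmonization step. The CSV file name, column names, and the definition of “net sales” are all invented for illustration, and SQLite stands in for whatever data layer you actually use.

```python
import csv
import sqlite3

# Harmonization rules (step 3): map source fields to agreed definitions.
# The definition of "net_sales" here is illustrative; yours may differ.
def harmonize(row):
    return {
        "order_id": row["OrderID"].strip(),
        "sold_on": row["Date"],                                          # keep ISO dates, e.g. 2016-06-20
        "net_sales": float(row["GrossSales"]) - float(row["Discounts"]),
        "region": row["Region"].strip().upper(),                         # one spelling per region
    }

# Extract, transform, load (step 2): read the extract, apply the rules, store the result.
def etl(csv_path, db_path="analytics.db"):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS sales
                   (order_id TEXT, sold_on TEXT, net_sales REAL, region TEXT)""")
    with open(csv_path, newline="") as f:
        rows = [harmonize(r) for r in csv.DictReader(f)]
    con.executemany(
        "INSERT INTO sales VALUES (:order_id, :sold_on, :net_sales, :region)", rows)
    con.commit()
    con.close()
    return len(rows)

if __name__ == "__main__":
    print(etl("daily_sales_extract.csv"), "rows loaded")
```

A real ETL tool adds scheduling, error handling, and pre-built connectors, but the shape of the work is the same: pull the data, translate it to shared definitions, and load it somewhere the analytics layer can reach.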

SAS, based in North Carolina, has long pioneered this space, and now many business software firms claim to provide “robust analytics.” The problem: what constitutes “analytics”? Canned reports are not analytics. So you’ll need to shop this category knowing that probably the most serious applications will come from firms that are dedicated to analytics.

International Data Corporation (IDC) reports that the business analytics software market is projected to grow at a 9.8% annual rate through 2016. IDC describes the market as dominated by giants Oracle, SAP and IBM, with SAS, Teradata, Informatica and Microstrategy rounding out the top 10 in terms of sales revenue. Although the top 10 account for 70% of the market, IDC reports that “there is a large and competitive market that represents the remaining 30%…hundreds of ISVs (Independent Software Vendors) worldwide operate in the 12 segments of the business analytics market…some provide a single tool or application, others offer software that spans multiple market segments.”

Here are some other interesting analytics or business intelligence (BI) products: Qliktech provides easy-to-develop dashboards with graphical representations as well as tabular and exportable reports. Its Qlikview software is an “in-memory” application, which means that it stores data from multiple sources in RAM, allowing the user to see multiple views of the data, filtered and sorted according to different criteria.

Information Builders (IB) is a software company classified by advisory firm Gartner as a leader in BI applications. IB’s main application, WebFocus, is a flexible, user-friendly tool that is popular with sales teams because salespeople use it while visiting customers to enhance their selling messages with facts and visual interpretations of data.

WebFocus has a “natural language” search capability, making it useful to monitor and analyze social media.

Birst, named by Gartner as a challenger in the BI space, is a cloud-based (SaaS) application that offers “self-service BI,” deployment to mobile devices, adaptive connectors to many different types of data sources, in-memory analytics, drill-down capabilities, and data visualization. The Birst tool also has a data management layer, allowing users to link data, create relationships and indexes, and load data into a data store. Tableau is another similar vendor.

It’s useful to start small and experiment with analytics.  People in your organization with good quantitative skills and imagination can experiment with tools, usually at very low cost.  Soon you will see some interesting results and will want to do more…but make sure to put in place some rules about what constitutes sanctioned and official “analytics” in your organization, to prevent uncontrolled proliferation of un-validated information.

Trends & Technologies

Business Software for Finance 101

June 9, 2016 by Matt Cook

Image by reynermedia, CC license

Finance and accounting functions were among the first to be automated through software. The sheer volume of numbers and calculations, reporting requirements, tax filings and payroll mechanics, plus the fact that nearly every business has to engage in these activities, made the area perfect for software.

When just these basic functions are needed, not much distinguishes one finance application from another. They all post transactions to a cost center and sub-ledger account, they all capture sales and costs and calculate required P&L and balance sheet data, and they all provide reports. They might distinguish themselves in terms of ease of use, report writing, bank account integration, cash management, or some other aspect.
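As a rough illustration of that common core, here is a minimal sketch in Python, with invented account numbers and cost centers, of what “posting” amounts to: every transaction carries an account and a cost center, and a P&L view is just a roll-up of those postings.

```python
from collections import defaultdict

# Illustrative postings: account, cost center, and signed amount.
# The account numbers and cost centers are invented for this example.
postings = [
    {"account": "4000-Sales",          "cost_center": "US-Retail", "amount": 125000.0},
    {"account": "5000-COGS",           "cost_center": "US-Retail", "amount": -78000.0},
    {"account": "6100-Freight",        "cost_center": "US-Retail", "amount":  -6500.0},
    {"account": "6200-Admin Salaries", "cost_center": "HQ",        "amount": -22000.0},
]

# Roll up by account for a simple P&L view.
pnl = defaultdict(float)
for p in postings:
    pnl[p["account"]] += p["amount"]

for account, total in sorted(pnl.items()):
    print(f"{account:<20} {total:>12,.2f}")
print(f"{'Net income':<20} {sum(pnl.values()):>12,.2f}")
```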

Many finance applications are simply bookkeeping systems; if you want real analysis you’ll need to extract data to Excel, Business Objects, or another analysis and reporting tool. My own experience with both Oracle and SAP bears this out: even these leading finance packages are mostly concerned with accounting and financial reporting, not management reporting.

Oracle and SAP both have what they call “business intelligence” capabilities, but they are contained in separate modules that must be purchased and integrated with the core software. So companies can easily spend millions implementing SAP or Oracle, and still find themselves extracting data into Excel spreadsheets for basic business analysis.

My experience is that most finance applications lack budgeting and financial modeling capabilities. It is one thing to know that your prior month results were over budget because of rising fuel prices, and quite another to project the future profit impact of different oil price scenarios. At what point would it make sense to switch to alternative fuels, to pass on some of these increased costs, or to buy oil futures as a hedge? A typical finance application won’t help you answer these questions because it mostly records and categorizes costs based on what already happened, not what might happen in the future.

Yes, there are “what if” modeling applications available on the market, but as a stand-alone application they aren’t very useful, since you have to enter all of your data, as if you’re using an Excel spreadsheet. The modeling application needs integration with your ERP to be most effective. Your ERP is the source of all kinds of data needed for financial modeling: production costs, formulas, material costs, transportation costs, revenue by product, as well as cost standards and budget information. This data changes frequently based on business conditions, competition, labor costs, and many other factors.
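As a deliberately simplified illustration of the kind of what-if modeling a bookkeeping-oriented package won’t do for you, here is a sketch in Python. Every number and assumption below (volumes, fuel intensity, the crude-to-pump markup) is invented; in practice those inputs would come from the ERP rather than being typed in.

```python
# A toy profit model: how does operating profit move under different oil prices?
# All figures are invented for illustration.
units_sold        = 1_000_000   # annual unit volume
price_per_unit    = 4.50        # selling price per unit
non_fuel_cost     = 3.10        # per-unit cost excluding fuel
fuel_gal_per_unit = 0.02        # gallons of fuel consumed delivering each unit
gallons_per_bbl   = 42
pump_markup       = 3.0         # assumed ratio of delivered fuel price to crude price
budget_oil_price  = 60.0        # $/barrel assumed in the current budget

def operating_profit(oil_price):
    fuel_cost_per_unit = fuel_gal_per_unit * (oil_price / gallons_per_bbl) * pump_markup
    return units_sold * (price_per_unit - non_fuel_cost - fuel_cost_per_unit)

for oil in (40, 60, 80, 100, 120):
    delta = operating_profit(oil) - operating_profit(budget_oil_price)
    print(f"Oil at ${oil:>3}/bbl: profit {operating_profit(oil):>12,.0f}  (vs. budget {delta:>+10,.0f})")
```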

Microstrategy, Oracle Hyperion and Cognos are leading names in the financial modeling and analytics areas, but other, smaller firms are emerging. Netsuite, the ERP-in-the-cloud vendor, offers an add-on financial modeling application. Netsuite’s web site states that the modeling application features these capabilities:
• Dynamic formulas and assumptions
• “Actuals” data incorporated into new forecasts
• Workflow management
• Planning of full financial statements
• Unlimited versions for “what-if” analysis
• Multi-dimensional models for complex sales and product planning
• Multiple currency budgeting
• Graphic drag-and-drop report builder
• Multi-version variance reporting (vs. budget, vs. plan, vs. forecast)

A3 Solutions is another, smaller firm offering financial modeling applications, either on-premise or as Software-as-a-Service. A3 uses the Excel spreadsheet as the user interface, claiming it is the friendliest environment for creating what-if scenarios, and provides tools to link multiple sources of corporate data and manage modeling versions dynamically and virtually through its Spreadsheet Automation Server. A3 claims McDonalds, Honda, Toyota, T. Rowe Price, and American Airlines as clients. Simplicity, speed of implementation, and low cost are A3’s main selling points.

Once you have the “system of record” stabilized in a strong finance application, as well as good controls over product, customer, and sales data, you can start to think about these higher-level analytical tools. Define a standard model for delivering analytics, put someone in charge of the data, and tightly control the “official” analyses that are produced.

Trends & Technologies

Big Data: Correlations, Not Cause-and-Effect

February 18, 2016 by Matt Cook

Image by Marcos Gasparutti, CC license

In their recently published book, “Big Data: A Revolution That Will Transform How We Live, Work, and Think,” Viktor Mayer-Schonberger and Kenneth Cukier say that big data will provide a lot of information that can be used to establish correlations, not necessarily precise cause and effect.

But that might be good enough to extract the value you need from big data.

Three examples from their book:

  1. Walmart discovered a sales spike in Pop-Tarts if storms were in the forecast. The correlation was also true of flashlights, but selling more flashlights made sense; selling more Pop-Tarts didn’t.
  2. Doctors in Canada can now anticipate and prevent severe fevers in premature infants because of a link between a period when the baby’s vital signs are unusually stable and a severe fever roughly 24 hours later.
  3. Credit scores can be used to predict which people need to be reminded to take a prescription medicine.

Why did the people involved in the above examples compare such different sets of data? One possible reason: because they could, relatively quickly and at low cost, thanks to superfast data processing and cheap memory. If you can mash together all kinds of data in large volumes, and do so relatively cheaply, why wouldn’t you keep going until you found some correlations that looked interesting?

You can begin experimenting with Big Data, a process I endorse. You need three basic components:

  1. A way to get the data, whether out of your transaction systems or from external sources, and into a database.
  2. Superfast data processing (a database with enormous amounts of RAM and massively parallel processing). This can be had on a software-as-a-service basis from Amazon and other vendors.
  3. Analytics tools that present the data in the visual form you want. Vendors include Oracle, Teradata, Tableau, Information Builders, Qlikview, Hyperion, and many others.

Correlations are usually easier to spot visually. And visualization is where the market seems to be going, at least in terms of hype and vendor offerings. New insights are always welcome, so we shall see what sells and what doesn’t.
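If you want a feel for how simple the first pass can be, here is a minimal sketch in Python that checks whether two series move together, in the spirit of the Pop-Tarts example. The weekly numbers are invented; a real analysis would run the same calculation across thousands of item-and-signal pairs and then chart the interesting ones.

```python
from math import sqrt

# Invented weekly data: storm warnings issued and toaster-pastry sales for the same weeks.
storm_warnings = [0, 1, 0, 3, 2, 0, 4, 1, 0, 2]
pastry_sales   = [180, 210, 175, 320, 260, 190, 365, 230, 185, 270]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sdx = sqrt(sum((a - mx) ** 2 for a in x))
    sdy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sdx * sdy)

print(f"correlation: {pearson(storm_warnings, pastry_sales):.2f}")  # near +1: the series rise and fall together
```

A strong coefficient says nothing about why the two series move together, which is exactly the authors’ point.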

The assessment from Gartner seems about right to me at this point in time: that big data is both 1) currently in the phase they call the “trough of disillusionment;” and 2) promising enough that its use in BI will grow sharply.

Trends & Technologies

What Is Data Visualization?

December 20, 2015 by Matt Cook

A data visualization of LinkedIn connections. Image by Luc Legay, CC license

Frank Luntz is a professional pollster who uses the visualization of data to show the sentiments of viewers as they watch political ads. The technique uses a moving second-by-second graph to show when exactly during an ad viewers felt positive or negative toward the content of the ad. Viewers use a handheld device with buttons for positive and negative, and press each one according to their sentiment as they view the ad.

Mr. Luntz could have simply had each viewer fill out a questionnaire about the ad – what did they like and what didn’t they like? You would then see numeric totals and percentages related to each question, but you wouldn’t see exactly when during the ad viewers had positive or negative feelings. The second-by-second gathering of data draws a much clearer picture.

That is what data visualization is about.
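To make that concrete, here is a minimal sketch in Python, assuming matplotlib is installed; the second-by-second dial readings are randomly generated stand-ins for real viewer data, but the resulting trace is the kind of picture the technique produces.

```python
import random
import matplotlib.pyplot as plt

# Invented dial data: average viewer sentiment (-1 negative to +1 positive)
# for each second of a 60-second ad.
random.seed(7)
seconds, sentiment, level = list(range(60)), [], 0.0
for _ in range(60):
    level += random.uniform(-0.15, 0.15)      # viewers drift up or down as the ad plays
    level = max(-1.0, min(1.0, level))
    sentiment.append(level)

plt.plot(seconds, sentiment)
plt.axhline(0, linewidth=0.8)                 # the neutral line
plt.xlabel("Second of ad")
plt.ylabel("Average viewer sentiment")
plt.title("Second-by-second reaction (illustrative data)")
plt.show()
```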

Many software vendors offer products in this category and many of those vendors are start-ups. Some claim the ability to merge all kinds of data – including Twitter feeds — into a coherent picture. This may be the case but my advice is to treat this area as very formative – i.e. not yet mature and therefore somewhat experimental.

I think technology will make it easy to index every single event in your enterprise and to display in real time a visual interpretation of all of those events interacting with one another. Executives and managers will no longer look at static tables of numbers or even graphs or charts; they will be able to “watch” their business in real time and see a future visualized picture of their business, much like a weather forecast is shown in graphical terms.

Some advice, if you want to experiment in this area:

  • Find an area of your business where there is complete mystery, and where a vivid picture holds promise for a breakthrough development;
  • Make sure you have a way of capturing the data;
  • Try and buy: vendors will often conduct a pilot for you at little or no cost.
Trends & Technologies

Five Smart Ways to Use Retail Store Data

October 2, 2015 by Matt Cook

Photo by Krystian Olszanski, CC license

Great!  Your second biggest customer has agreed to work with you to build the business through collaboration and data sharing, and you’ll be getting access to all of their point-of-sale data through a third party company.  Now what?

Fortunately there are some very good POS analytical capabilities out there, but not all are the same. Here are some tips from my own experience:

  1. Use a data provider who also has an analytical/BI environment to work in – preferably an environment with pre-built reports and analyses, and to which you can add other data sources. Otherwise, you’ll need to build and maintain a database and BI “stack” of software applications and every day you’ll be importing huge data files and spending lots of time checking for accuracy. Too much time spent on mechanics.
  2. Get the data aligned and normalized with your company’s customer and product master data, so that product numbers and descriptions and brands and formats are the same whether you’re looking at the shelf data or data in your internal systems.
  3. Go after the biggest returns, which will be in gauging the effectiveness of merchandising and promotion spending.  For manufacturers selling to retailers this type of spending can amount to 30% or more of revenue.  If turns on the shelf are lackluster for a given promotion, you’re wasting money and it’s time to find out why.
  4. Focus on the largest customers with the largest perceived gaps, on the highest-turning items.  Look for shelf voids (out of stocks) that seem to have a pattern, like day of the week, or within a certain cluster or geography of stores (see the sketch after this list).  These might be fixable with an adjustment in your customer’s store replenishment settings.
  5. Concentrate on shelf issues you can actually do something about.  You probably won’t convince your company to discontinue ten slow-moving SKUs because they’re wasting shelf space – those products could be there for other, more defensive reasons.  You might, however, show your customers how adding more shelf space for some of your products will benefit them.
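For tip 4, here is a minimal sketch in Python using a handful of invented shelf records; the point is only that weekday and store patterns fall out of a simple count once the data is in one place.

```python
from collections import defaultdict
from datetime import date

# Illustrative daily shelf records for one item: (store, date, units on hand).
records = [
    ("Store 114", date(2015, 9, 7),  0), ("Store 114", date(2015, 9, 14), 0),
    ("Store 114", date(2015, 9, 21), 0), ("Store 114", date(2015, 9, 9),  6),
    ("Store 217", date(2015, 9, 8),  5), ("Store 217", date(2015, 9, 15), 4),
    ("Store 305", date(2015, 9, 12), 0), ("Store 305", date(2015, 9, 19), 0),
]

# Count out-of-stock days by weekday and by store to surface a pattern.
oos_by_weekday, oos_by_store = defaultdict(int), defaultdict(int)
for store, day, on_hand in records:
    if on_hand == 0:
        oos_by_weekday[day.strftime("%A")] += 1
        oos_by_store[store] += 1

print("Out-of-stocks by weekday:", dict(oos_by_weekday))  # Mondays stand out in this sample
print("Out-of-stocks by store:  ", dict(oos_by_store))
```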

Related: Demand Signal Applications: The Basics

To learn more about data and analytics provider capabilities, check out Orchestro, Relational Solutions, RSI, JDA, and Mindtree.

Trends & Technologies

Retail POS Data Analytics Getting Easier

August 29, 2015 by Matt Cook

Photo: Decisions Decisions; Andrew Stawarz, CC license

Companies making products sold at retail stores long ago realized they had minimal visibility into how their products were displayed on the retail shelf and how well they ‘turned’ and produced revenue for them and the retailer. But today they can have that visibility, and investments in software, services, analytics, or all three are likely to produce good returns, if done correctly.

Where to get solutions? Retailers themselves will sometimes provide the data, but you may not have the infrastructure and tools to make sense of it in its raw form.  Third party data providers such as Nielsen, RSI, Orchestro, and Relational Solutions all provide the data in a much more usable form, accompanied by a whole range of reports and metrics.

The tools (software) and the POS and customer warehouse data are not expensive; you don’t need to build a colossal on-premise database and create a huge infrastructure to maintain it. Plenty of vendors have tools on a try-and-buy basis, and the data can be obtained on an a la carte basis (one retailer only, for example) from several different firms. Experimentation should be encouraged; there is no one right way to do this type of analytics.

The challenge, however, is deciding what to do with the data and the findings, and how to engage your retail partners for the best collaboration. Start with your own products and determine metrics against which you want to measure retail performance, such as in-stock rates, % distribution in the store network, velocity, average days of supply in the warehouse, and any irregular peaks or valleys in inventory. You can also measure promotion and merchandising effectiveness.  These findings are the beginnings of the conversations you start to have with your retail customers.
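For a sense of how simple the first metrics can be, here is a minimal sketch in Python using invented weekly figures for one item at one retailer; once the data is aligned, in-stock rate, velocity, and days of supply are each a line of arithmetic.

```python
# Invented weekly figures for one SKU at one retailer.
stores_carrying_item = 950      # stores authorized to carry the SKU
stores_in_stock      = 880      # stores with the SKU on the shelf this week
units_sold_this_week = 5_600    # POS sell-through
warehouse_inventory  = 12_400   # units in the retailer's distribution centers

in_stock_rate  = stores_in_stock / stores_carrying_item            # shelf availability
velocity       = units_sold_this_week / stores_in_stock            # units per selling store per week
days_of_supply = warehouse_inventory / (units_sold_this_week / 7)  # coverage at the current selling rate

print(f"In-stock rate:  {in_stock_rate:.1%}")
print(f"Velocity:       {velocity:.1f} units/store/week")
print(f"Days of supply: {days_of_supply:.0f} days")
```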

From an IT standpoint, you need three things: 1) if you want the data on-premise, inside your firewall, you’ll need storage and ETL (Extract, Transform, Load) tools to load retailer and other transactional data; 2) a translation and user interface tool to build your analytics (many to choose from – see Tableau, Information Builders, Tibco, Microstrategy); and 3) some way to “govern” the data, which is how each data element will be defined and how it will be used in different analytical formats.

Keep in mind that setting up POS analytics is a means, not an end.  And that’s why experimentation is good — you want to find the best combination of data and analysis that yields the best results for you and your enterprise.  In this case experimenting is not expensive, so do lots of it.  What gets measured usually gets managed, and that is where the true opportunity is.

Trends & Technologies

Data Virtualization vs. Data Visualization

August 26, 2015 by Matt Cook

Image: visualization of data showing places in New York City frequented by tourists (red) and locals (blue); by Eric Fisher; Creative Commons license.

Another emerging segment of the analytics software market is data virtualization (DV), referred to by some as Information-as-a-Service (IaaS), which enables access to multiple data sources, usually in real time, without the time and expense of traditional data warehousing and data extraction methods.

Forrester Research defines DV as solutions that “provide a virtualized data services layer that integrates data from heterogeneous data sources and content in real-time, near-real-time or batch as needed to support a wide range of applications and processes.”  Data Visualization, on the other hand, refers to methods of displaying data in a highly visual way, with the purpose of finding a display mechanism that reveals more insight than traditional reporting methods (see ‘What is Data Visualization’?)

Traditional BI or analytics methods rely on some form of data warehousing, in which pieces of data are extracted, usually from transaction systems, transformed or “normalized” (i.e., “formatted”), and stored in tables according to some type of schema. “Customer Account Number,” for example, may belong in the “Customer” table, and so on. As covered in the book, building a data warehouse and getting it to work right can take years, and require substantial technical skills that even many mid-sized to large companies just don’t have.

Data Virtualization aims to overcome this disadvantage: instead of extracting data from its original sources, you view and manipulate it inside the DV tool or layer to build your analysis.  In simple terms, a DV tool is supposed to let you “see” sources of data in different applications and databases, and to “select” data from those sources for your queries or analysis.
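The idea is easier to see in a toy example. The sketch below is not how a commercial DV product is built; it only shows the core concept of a single query layer that reaches into two live sources on demand, here an in-memory SQLite database standing in for an order system and a small lookup standing in for a CRM, without copying either into a warehouse first.

```python
import sqlite3

# Source 1: an order system, represented by an in-memory SQLite database.
orders_db = sqlite3.connect(":memory:")
orders_db.execute("CREATE TABLE orders (customer_id TEXT, amount REAL)")
orders_db.executemany("INSERT INTO orders VALUES (?, ?)",
                      [("C001", 1200.0), ("C002", 450.0), ("C001", 300.0)])

# Source 2: a CRM system, represented by a lookup another application owns.
crm = {"C001": {"name": "Acme Retail",     "segment": "Grocery"},
       "C002": {"name": "Bluebird Stores", "segment": "Convenience"}}

# The "virtual layer": answer a question by reading both sources when asked,
# rather than extracting them into a shared warehouse beforehand.
def revenue_by_segment():
    totals = {}
    for customer_id, amount in orders_db.execute(
            "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"):
        segment = crm[customer_id]["segment"]
        totals[segment] = totals.get(segment, 0.0) + amount
    return totals

print(revenue_by_segment())   # expected: {'Grocery': 1500.0, 'Convenience': 450.0}
```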

While it’s feasible to connect directly to external applications and other data sources, whoever owns or manages that application or data source may prevent you from doing so: for security reasons, to avoid overloading the database, to avoid corrupting the data, or simply because the data is proprietary and the provider allows access only through an environment external to the data source.  These are some of the barriers I have encountered.

Forrester estimates an $8 billion market for DV software.  Forrester notes that the current market is dominated by big companies such as SAP, Oracle, Informatica, Microsoft and Red Hat, and specialized firms like Composite Software, Denodo Technologies and Radiant Market.

Experimenting on a small scale is a good idea here.  Vendors are willing to show you capabilities and do small pilots to prove the concept you might be considering the software for.

Trends & Technologies

Demand Signal Applications: the Basics

July 12, 2015 by Matt Cook

The Landmark Shopping Mall in Hong Kong.  Photo by See-ming Lee, CC license.

It’s now possible to string together data points collected from the shelf (POS) and upstream from there (your supply chain) to optimize your sourcing, production and distribution decisions.  Application vendors are calling this Demand Signal Management (DSM).  It’s not new; the same concept a few years ago was called shelf-driven demand management.

Is this something you should invest in?

My experience is that these tools used even on a small scale are insightful and provide real returns, but that it’s easy to over-invest and get lost in what you’re trying to do with the mountains of data now available.

The vendor landscape is predictably unclear.  Before you consider vendors, though, it’s helpful to understand the different segments within DSM:

  • Shelf or point-of-sale data is just one component, and in its raw form it’s not very useful until it is formatted, organized, and normalized to your definitions. This is where POS data providers add value.  Many retailers will only supply data for your products, not selected competing products or your product category as a whole, which to me is a big gap in understanding true demand.  From POS data one can calculate product performance indicators such as days of supply, out of stocks, and item velocity.
  • Customer warehouse data is one step upstream and is usually available from most retailers and included in the data package offered by POS vendors.  This data shows the quantities into and out of the warehouse, quantities on order, and current inventory. Again, usually only data for your products.
  • Upstream from the warehouse is your enterprise and any data you might want to incorporate into the analysis: forecast, orders, inventory, past or planned promotions, production plans, supplier orders, marketing events, advertising, etc.
  • Then there’s everything else: environmental data such as market size and segment trend data, geographic/demographic data, social media, the weather, the time of year, and cultural trends.

Putting all these pieces together for a coherent and insightful view of your demand is the promise of DSM.
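As a small illustration of what “putting the pieces together” means mechanically, here is a sketch in Python that lines up three of the layers above, shelf sell-through, customer warehouse withdrawals, and your own shipments, by week for one item. All figures are invented.

```python
# Invented weekly figures for one SKU.
weeks       = ["W01", "W02", "W03", "W04"]
pos_sales   = [5200, 5400, 9800, 5100]   # shelf takeaway (note the W03 promotion spike)
withdrawals = [5300, 5500, 8200, 6400]   # customer warehouse out to stores
shipments   = [6000, 6000, 6000, 6000]   # your shipments into the customer warehouse

warehouse_inventory = 9000               # starting customer warehouse inventory
for wk, pos, wd, sh in zip(weeks, pos_sales, withdrawals, shipments):
    warehouse_inventory += sh - wd
    store_build = wd - pos               # positive: stores are stocking up faster than the shelf is selling
    print(f"{wk}: shelf {pos:>5}  warehouse-out {wd:>5}  shipped-in {sh:>5}  "
          f"warehouse inv {warehouse_inventory:>6}  store build {store_build:>+5}")
```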

Application vendors in this area fall into two main categories: 1) “point” applications that offer one solution for one part of the supply chain, such as POS data vendors; and 2) fuller end-to-end solutions that offer the ability to incorporate many different data points from many parts of the enterprise and to relate them in a logical way to one another.  My advice:

  1. Try a point solution approach on a particularly difficult or troublesome part of your supply chain; it’s not expensive and many application vendors can enable a solution quickly on a cloud platform;
  2. Keep the financial commitment small and the option to exit from the solution easy.  This should not be difficult, as vendors often will agree to month-to-month contracts;
  3. Do steps 1) and 2) before launching any large end-to-end DSM initiative.  An end-to-end project can involve lots of data management (conversion, translation, normalization) and if it’s not done right the result could be a confusing and unreliable addition to your demand planning efforts.

 

Trends & Technologies

Big Data 101

May 10, 2015 by Matt Cook

Image: “Data Center.” by Stan Wlechers, CC license

So what is Big Data, particularly Big Data analytics? Why all the hype?

Big Data is what it implies: tons of data. We’re talking millions or billions of rows here – way too much for standard query tools accessing data on a disk.

What would constitute “tons” of data? Every bottle of “spring,” “purified” or “mineral” water that was scanned at a grocery store checkout during the month of July 2011; the brand, the price, the size, the name and location of the store, and the day of the week it was bought. That’s six pieces of data, multiplied by the estimated 3.3 billion bottles of water sold monthly in the United States – roughly 20 billion data points for one product category in one month.

Big Data analytics is the process of extracting meaning from all that data.

The analysis of big data is made possible by two developments:

1) The continuation of Moore’s law; that is, computer speed and memory have multiplied exponentially. This has enabled the processing of huge amounts of data without retrieving that data from disk storage; and

2) “Distributed” computing structures such as Hadoop have made it possible for the processing of large amounts of data to be done on multiple servers at once.
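The distributed idea is easy to mimic on one machine. The sketch below, using Python’s multiprocessing pool as a stand-in for a cluster, splits a pile of records into chunks, lets each worker total its own chunk, and then combines the partial results, which is the same divide-and-combine pattern Hadoop-style systems apply across many servers.

```python
from multiprocessing import Pool

def partial_total(chunk):
    # Each "node" works only on its own slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    # A million sale amounts standing in for billions of rows.
    sales = [1.29] * 1_000_000

    # Split the data into one chunk per worker.
    n_workers = 4
    size = len(sales) // n_workers
    chunks = [sales[i * size:(i + 1) * size] for i in range(n_workers)]
    chunks[-1].extend(sales[n_workers * size:])      # last chunk takes any remainder

    with Pool(n_workers) as pool:
        partials = pool.map(partial_total, chunks)   # the "map" step, run in parallel

    print(f"Total: {sum(partials):,.2f}")            # the "reduce" step
```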

The hype you read about Big Data may be justified. Big data does have potential and should not be ignored. With the right software, a virtual picture of the data can be painted with more detail than ever before. Think of it as a photograph, illustration or sketch – with every additional line of clarification or sharpening of detail, the picture comes more into focus.

Michael Malone, writing in The Wall Street Journal, says that some really big things might be possible with big data:

“It could mean capturing every step in the path of every shopper in a store over the course of a year, or monitoring every vital sign of a patient every second for the course of his illness….Big data offers measuring precision in science, business, medicine and almost every other sector never before possible.”

But should your enterprise pursue Big Data analytics? It may already have. If your company processes millions of transactions or has millions of customers, you have a lot of data to begin with.

You need three things to enable Big Data analytics:

  1. A way to get the data, whether out of your transaction systems or from external sources, and into a database. Typically this is done with ETL or Extract, Transform, and Load software tools such as Informatica. Jobs are set up and the data is pulled every hour, day, etc., put into a file and either pushed or pulled into a storage environment.
  2. Superfast data processing. Today, an in-memory database (a database with enormous amounts of RAM and massively parallel processing) can be acquired and used on a software-as-service basis from Amazon Web Services at a very reasonable cost.
  3. User interface analytics tools that present the data in the visual form you prefer. Vendors include Oracle, Teradata, Tableau, Information Builders, Qlikview, Hyperion, and many others. The market here is moving toward data visualization via low-cost, software-as-a-service tools that allow you to aggregate disparate sources of data (internal and external systems, social media, and public sources like weather and demographic statistics).
