Andy on Enterprise Software

It’s all in the timing

February 28, 2006

Database guru Colin White raises a point in a BI network article that is often overlooked by commentators. To quote:

“…another difficulty is that customer reference data and relationships vary over time. This issue has important implications for business intelligence applications that may analyze customer data across various time periods, comparing revenue this month to this time last year, for example. If, during the last 12 months, the customer hierarchies have changed or the sales organization has been restructured, then this will affect the validity of the comparison. This means that metadata and metamodel changes may have to be tracked and recorded in MDM applications.”

Spot on, Colin! Except that “may have to be tracked” should read “must be tracked”. Organizations do not make significant changes to their business structures every day, but these changes do happen every few weeks or months: reorganizations occur, marketing reclassifies its product hierarchy or customer segmentations, finance changes allocation rules. Yet most MDM products concentrate on point-in-time synchronisation, e.g. of customer data, and frequently retain no history whatsoever. Hence when you want to make comparisons over time, or go back and reconstruct something in the past to deal with a compliance request, the task is difficult to impossible: the old transactions may be archived, but the master data associated with them is typically not.

A well-designed system to handle master data should be able to reconstruct past hierarchies for comparison. Just as within a data warehouse, where you often want to go back and look at data “as is”, “as was” and even “as it would have been”, the ability to go back in time and understand the changes in master data is very important. However the big vendors don’t understand this issue properly, and most customers have only just started dabbling in MDM so haven’t yet thought about this thorny issue, which will come back to bite them in due course.
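To make the idea concrete, here is a minimal sketch in Python of effective-dated master data, with all names and dates invented for illustration; a real MDM product would need far more, but the principle of never overwriting history, so that any past hierarchy can be reconstructed, is the same:

    from datetime import date

    # Each relationship carries an effective date range instead of being
    # overwritten in place: a change closes the old record and opens a new
    # one. All names and dates below are invented for illustration.
    hierarchy = [
        # (child, parent, valid_from, valid_to)
        ("Coca-Cola", "carbonated drink", date(2004, 1, 1), date(2005, 6, 30)),
        ("Coca-Cola", "soft drink",       date(2005, 7, 1), date(9999, 12, 31)),
        ("carbonated drink", "beverage",  date(2004, 1, 1), date(9999, 12, 31)),
        ("soft drink", "beverage",        date(2005, 7, 1), date(9999, 12, 31)),
    ]

    def as_of(snapshot):
        """Return each item's parent as it stood on a given date."""
        return {child: parent
                for child, parent, start, end in hierarchy
                if start <= snapshot <= end}

    print(as_of(date(2005, 1, 1)))  # "as was": Coca-Cola under carbonated drink
    print(as_of(date(2006, 1, 1)))  # "as is": Coca-Cola under soft drink

With records kept this way, last year’s revenue can be rolled up under last year’s hierarchy, making the year-on-year comparison valid.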

Metadata repositories are not enough for MDM

February 27, 2006

An article in CIO today has one good nugget and then sets out to miss the main point entirely. The author correctly says that “Working with data across an enterprise — especially in an SOA environment — requires understanding its context and semantics, not just its format and field attributes”. Spot on. However he then drifts off to bemoan the general state of the metadata repository market. What is missed is that the solution to master data management can never be a passive repository of the type that organizations put in during the 1990s, e.g. the Reltech and Brownstone products later bought by CA, or even earlier IBM attempts such as the IBM Data Dictionary, which I had fun using in the 1980s. Such initiatives fail for several reasons:

  • The metadata that is captured tends to be technical metadata, e.g. “CUST is VARCHAR(8)”, rather than business metadata: “Coca-Cola is a product within the product class carbonated drink, which is in turn within the product group beverage”. Technical metadata is of limited use.
  • If the metadata is not linked back into real operational systems, it will become as out of date as the repositories of a decade or more ago.
  • The tools tend to use arcane conventions which business users have trouble relating to, so this job is left to IT folks, who aren’t really in the best position to know what the data rules really are.
  • There is no one single definition of almost any master data in a company; instead there are many, and they need to be managed.

It is critical to the success of any master data (or metadata) project that the people who actually understand the master data, such as general ledger structures, customer segmentation and product hierarchies, i.e. the business people who set them up, are put in charge of owning the core definitions, including the process to update this data. With this in mind, a data governance process is required in addition to any software tool. Such software should ideally be able to deal with business modeling in a way that is more intuitive than E/R diagramming, which most business users have trouble with.

Next, there needs to be automated workflow to support this update process, which may be complex, e.g. several draft versions of a new product hierarchy may be needed, with different groups of people who have to review and approve them before final publication. If this just happens by email then errors will occur.
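As a rough illustration of the workflow idea (not any particular product’s model; all names here are invented), the update process amounts to a small state machine over draft versions, with publication gated on every reviewer group signing off:

    # Illustrative only: a draft version of, say, a product hierarchy moves
    # through review, and can be published only once every reviewer group
    # has signed off. All names here are invented for the sketch.
    class DraftVersion:
        def __init__(self, name, reviewer_groups):
            self.name = name
            self.pending = set(reviewer_groups)
            self.state = "draft"

        def approve(self, group):
            self.pending.discard(group)
            if not self.pending:
                self.state = "approved"

        def publish(self):
            if self.state != "approved":
                raise ValueError(f"cannot publish {self.name}: "
                                 f"awaiting {sorted(self.pending)}")
            self.state = "published"

    v2 = DraftVersion("product hierarchy v2", ["marketing", "finance"])
    v2.approve("marketing")
    v2.approve("finance")
    v2.publish()  # only now does the new hierarchy go live

The point of making the states explicit is exactly what email lacks: nothing can be published while an approval is still outstanding.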

The master data repository then needs the capability of “semantic integration”, i.e. the ability to store the multiple versions of the various types of master data that actually exist today, so that it can sit at the hub of any project to improve the quality of this data, which may involve some rationalization.

Having understood what is out there, modeled it, mapped together the different versions and defined the workflow needed to deal with updates, the master data project then needs the ability to hook up to messaging technology to actually drive changes made to the master data repository back out into the other operational systems like ERP and CRM. Without such integration it will only be a partial solution.

Not many vendors today, perhaps none, offer a complete solution to the above set of issues, but some come close, and not one of them is a traditional repository or ETL vendor.

Tackle master data to achieve SOA dream

February 23, 2006

Dana Gardner makes a sensible point regarding SOA, which I would amplify. In order for the promise of SOA to come to fruition, the discussion needs to move beyond the user interface and all those sexy Web 2.0 applications, and towards dealing with the barriers to applying SOA to general business applications. The thorny issues of dealing with inconsistent data models, and of addressing data quality, seem rarely to come up in discussion, yet these are critical to the broader success of the SOA dream. It is tricky enough to get an enterprise application up and running, but if you want different applications from different vendors to interact happily with one another then it is not just a matter of application layer standards; business applications need to be able to share data as well. Unfortunately the reality in large companies is that there are multiple, incompatible versions of key business definitions such as “product”, “customer”, “asset”, “supplier” and even seemingly unambiguous terms like “gross margin” (according to a survey a couple of years ago, the average large company has 11 systems that each think they are the master source of “product”, for example). All the elegant user interface design in the world is of limited help if two applications cannot agree on what to display when they are supposed to be showing a product hierarchy, or where the definitions of cost allocations differ.

This is why projects and tools that can improve the management of key business terms, such as “master” data like “product”, “customer” etc., are a necessary precursor to broader SOA deployment in mainstream business applications, as are improvements in data quality, another area that is far from sexy but in practice is a major issue whenever applications have to bring together data from multiple sources, as any data warehouse practitioner can testify. Dealing with the “semantic integration” of the varying business models that exist out there in multiple ERP, CRM and supply chain systems is a major barrier to broad SOA adoption, yet it is scarcely mentioned in the technology media. When those first SOA prototypes start showing data to business users, and it is realized that the data is “wrong”, this topic will become much higher profile.

Vendor due diligence

February 21, 2006

In a previous blog I gave some general thoughts about vendor evaluation, and expanded on this to give an outline framework for such evaluations. One thing that should be considered in any evaluation is “how stable is the vendor?”, i.e. will they still be around in a few years? This question can be surprisingly hard to answer, and in fact the question itself has a flaw. The question should be: “will this product still be actively developed in a few years?”. The reason for the distinction is that even the largest vendors sometimes discard technologies due to changes in their product roadmap, internal political issues, or because the thing isn’t selling very well.

So, if buying from a very large vendor, it will be easy enough to look at their general health because their finances are public (OK, there are sometimes accounting frauds, but you can’t do much about those). The usual ways of analyzing a company’s health can be used here. I highly recommend the Economist’s “Guide to Analysing Companies”, which is clearly written and gives excellent examples of the indicators of company health, and also of early warning signs. You should not assume that just because a company is publicly listed then it must be OK – ask the customers of Commerce One about that approach, for example.

In such cases you will probably find that the company is fine, so the bulk of the diligence effort should instead be directed to how important this particular product is to the company, in order to assess how likely it is to get ongoing development. Of course the vendor is hardly a reliable source here, but you can seek advice from analysts, and it is also fair to ask the vendor just how many customers of this particular product there are (you should be able to talk to some of them). For example, SAP’s MDM product had seemingly shifted just twenty copies into customer use throughout its 18,000-strong customer base in two years. Given this dismal penetration rate it was perhaps not a shock that they dropped it (their replacement MDME offering is based on an acquisition of PIM vendor A2i). Anything which is not contributing to a vendor’s sales figures on any scale should be considered suspect.

In the case of small vendors you have different problems. You can be pretty sure that the product is important to the vendor, since it is probably the only one that they have. The question is whether the vendor will survive. This is trickier, since the company is probably privately held, and so is not obliged to publish its accounts, at least in the US. In the UK you can get around this by paying a few pounds to Companies House and looking up their accounts.
If you are making a large purchase then it is fair game for you to ask for information on the company financials, and you should get nervous if they refuse. One thing that will achieve little is to ask for a letter from the VCs backing the company. They will inevitably sing its praises; they are hardly going to say “ah, this one’s on a bit of a knife-edge, I’d watch out if I were you”. Indeed I knew of one case where a major deal was in progress at a BI vendor, and through a contact I became aware that the entire future financing of the (cash-strapped) company was dependent on this deal going ahead; in such cases you cannot expect objectivity from investors.

So, what can you do? Well, profits are an opinion but cash is not. Hence, assuming you can see some figures, you can get a sense of how much cash the company has, and attempt to work out the “burn rate”, i.e. how fast they are burning through this cash (most VC-backed start-ups are unprofitable; if they were profitable then they probably wouldn’t need expensive VC money). However this on its own may give false signals. Due to their IRR-driven instincts, VCs don’t dole out more cash than they need to start-ups; they like to always have the option of pulling out if they need to, so it is rare for a start-up to actually have more than about one year’s cash needs in hand. The question is: will they be able to raise more cash if they need it? This is a complex subject, but essentially you should be able to get a sense of this by talking to an analyst familiar with the VC community. For example, companies growing at 50% or more are very likely to be able to raise cash, even if they are very unprofitable. The gross margins in software are commonly 90%, so profits will come eventually if the company can just grow large enough; this is why VCs invest in software companies more than, say, restaurants. So if you cannot find someone knowledgeable to look the figures over for you and make an assessment, then a decent proxy for security is the revenue growth rate. If the company’s growth is stalling (say 10% a year for a small-to-medium software company) then things could be sticky in a future financing round. This is a generalization (and companies with a subscription model, for example, have a much more predictable life than ones selling traditional licenses) but it may be the only real set of figures that you can dig out.
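As a back-of-envelope sketch, with every figure invented for the example, the runway arithmetic itself is trivial; the judgment lies in what you feed into it:

    # Back-of-envelope runway check; every figure here is an invented example.
    cash_in_hand = 6_000_000      # cash on the balance sheet, USD
    monthly_costs = 750_000       # salaries, rent, etc., USD per month
    monthly_revenue = 250_000     # USD per month

    monthly_burn = monthly_costs - monthly_revenue
    runway_months = cash_in_hand / monthly_burn
    print(f"runway: {runway_months:.0f} months")  # 12 months here, so a
    # financing round is likely within the year; check the growth rate too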

Another source of due diligence is other customers, who may well have done exactly the same due diligence exercise as you fairly recently. Of course you have to be careful it was not out of date, and you should check how thorough they really were, but you may be able to save yourself a lot of work. If three Fortune 100 companies recently did detailed due diligence on a vendor and bought its software anyway, this may help you at least feel better.

Remember: the company or product does not have to be around in ten years if your payback case is 13 months. The shorter the payback period for the product, the less you need to agonize over the long-term future of the company, or of the product within the vendor. You did do a proper business case for the purchase, right? Of course you did.

Evaluating Software Vendors – a framework


I have written previously in general about vendor evaluation processes. Over time I will flesh this topic out further, as I feel it is a much-neglected area. As I have said, it is important to define up front your main functional and technical requirements from the software package that you want to buy. It is then important to have a process to take the “long list” of candidates down to a manageable number of two to four vendors to evaluate properly.

So, you have done this, and are down to three vendors. What are the mechanics of the evaluation criteria? The diagram shows a simplified example to illustrate. It is critical that you decide on what is most important to you by selecting weightings for each criterion before you see any products. Ensure that you group the broad criteria into at least two and perhaps three categories: “functional” should list all the things you actually want the product to do, and you may choose to separate “technical”, which may include things like support for your particular company’s recommended platforms, e.g. “must run on DB2”, or whatever. What is sometimes forgotten is the commercial criteria, which are also important. Here you want things like the market share and financial stability of the company, how comprehensive its support is, how good its training is, etc. I would recommend that you exclude price from these criteria. Price can be such a major factor that it can swamp all others, so you may want to consider it separately once you have scored everything else. I would recommend that the “functional” weightings total not less than 50%. It is no good buying something from a stable vendor if the thing doesn’t do what you need it to.

An important thing about using a weighting system like this one is that the weights must add up to 100. The point here is that it forces you to make trade-offs: you can have an extra functional criterion, but you must reduce the existing weights to make sure that they still add to 100. This gives the discipline to stop everything being “essential”. You assign all the weights before the evaluation begins, and you can share them with the vendors if you like. Conveniently, however you assign the weights, the scores will come in out of 1000, so they can be easily expressed as a percentage, e.g. vendor B is a 74% match to the criteria in the example, while vendor C is 67%.
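A worked version of the arithmetic, with weights and scores invented purely for illustration, shows the mechanics: weights sum to exactly 100, each criterion is scored out of 10, so each vendor’s total is out of 1000 and reads directly as a percentage:

    # Invented weights and scores purely to illustrate the mechanics.
    weights = {                          # must sum to exactly 100
        "loads ERP data": 20,
        "time-variant hierarchies": 15,
        "ease of use": 15,
        "runs on DB2": 10,
        "vendor stability": 10,
        "support and training": 10,
        "reporting interfaces": 20,
    }
    assert sum(weights.values()) == 100

    # Each criterion scored out of 10, based on evidence actually seen.
    scores = {
        "vendor B": {"loads ERP data": 8, "time-variant hierarchies": 7,
                     "ease of use": 8, "runs on DB2": 9, "vendor stability": 6,
                     "support and training": 7, "reporting interfaces": 7},
    }

    for vendor, s in scores.items():
        total = sum(weights[c] * s[c] for c in weights)  # out of 1000
        print(f"{vendor}: {total} / 1000 = {total / 10}% match")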

The final stage is that you need to score the various criteria that you have laid out. You want this to be as objective as possible, which is why you do not want too many criteria – and you want to see evidence for the functional ones. Just because the salesman says the product does something is not sufficient to credit a score – you need to see the feature yourself, preferably working against some of your own data rather than a faked-up demo. I recall doing an evaluation of BI tools in 1992 at Shell and having one vendor with quite a new product that, due to a stellar analyst recommendation, made it to the short-list. When the pre-sales guy turned up and was presented with a file of our data for him to run the trial on, he went white; their whole product was virtually hard-coded around their demo dataset, and it quickly became clear that even the slightest deviation from the data they were expecting caused the product to break.

Score each criterion out of 10. Commercial criteria can be scored off-line and in advance; analyst firms can help you with this, as they tend to be up on things like market share (IDC have the most reliable quantified figures, but rough estimates are probably good enough). Financial stability is a subject all in itself, and I will cover this in another blog.

The evaluation then becomes quite mechanical, as you crank out the scores. You see that in this simplified example vendor B has won, though not by a huge margin. If it turns out that vendor B’s price is twice that of the others then you may decide the difference in scores is not big enough to justify the premium (we will return to this shortly). Again, you could weight price as a factor if you prefer.

However, don’t get too hung up on price; as someone who used to do technology procurement, I know it can seem like the be-all and end-all, but it is not. The total cost of a new software package to your company is far greater than the initial license cost. There is maintenance and training over several years, and also the people time and cost in actually implementing the technology, which will usually be several times the cost of the software package. Hence getting a package that is 20% more productive than the next best is worth a lot more than 20% extra on the license price, as the associated people costs will be multiples of the software cost (people costs of five times package software costs in a project are common, and ten times is not unusual). It is sensible to try to consider the likely full lifetime costs of the software in this way (assume, say, five years), since you will then have an idea as to how important the license cost really is. For example, if you are about to do a 30-country roll-out budgeted at USD 50 million, making sure that the product you select is the most productive one is a lot more important than if you are doing a single project for USD 500k. Here a product that is 10% more productive to implement than the next one may save you USD 5 million, so haggling to the death over that last USD 10k of license discount may not be so critical. This will give you a true bottom-line case for the level of spend you can afford to make.
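A crude calculation using the example figures from the paragraph above makes the point starkly:

    # Crude illustration using the example figures from the text above.
    rollout_cost = 50_000_000   # 30-country roll-out budget, USD
    productivity_edge = 0.10    # the chosen product is 10% more productive
    license_discount = 10_000   # that last USD 10k of discount haggled over

    saving = rollout_cost * productivity_edge          # USD 5,000,000
    print(f"productivity saving: USD {saving:,.0f}")
    print(f"vs license discount: {saving / license_discount:,.0f}x larger")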

Taking a structured evaluation approach like this has a number of benefits. Firstly, it reduces the amount of gut feel and “did I like the salesman” mentality that too often creeps in. You’ll probably never see the salesman again unless you want to buy more software, but you will be stuck with the product that you select for years. Secondly, it gives you a documented case for the selection that can, if necessary, be used to back things up internally, e.g. in the case of an audit, or just to give comfort to senior management that a sound process has been used.

Moreover, given that salesmen get paid on how much they sell you, you’d be surprised at the tactics they can adopt; they will try to go over your head if they think they are going to lose, and make all sorts of accusations about how the process wasn’t fair and how you are about to make a terrible mistake, so having a solid, documented case will make it much easier for your manager to do the right thing and tell the salesman to take a running jump. I am amazed at how often this tactic was tried when I was procuring software, but I never once had a decision overturned. If you ever find yourself in this situation, remember that revenge is a dish best served cold. After a particularly acrimonious session with one vendor salesman when I was working at Exxon, I was amused to find the same salesman turning up a few years later when I transferred to Shell. He walked into the room, his face fell when he saw me, and he walked back out again. Good professional sales staff know that the world is a small place and that it does not pay to aggravate the customer, but all too few remember this.

In another blog I will return to the subject of assessing the financial stability of vendors.

Packaged Data Warehouse Market

February 20, 2006

I was impressed with the depth of a recent Bloor report, “Packaged Data Warehouses”, which examines in detail Decision Point, Kalido, IBM DM, SAS, SAP BI, Showcase and Teradata. In these days when some analyst firms’ presentations barely mention vendors at all it is great to see an old-fashioned, detailed set of product evaluations. Also, unlike many, this report was not paid for by a particular vendor, so the companies covered within it have had no opportunity to influence the results.

The report evaluates and scores the products by:

  • stability and risk
  • support
  • performance
  • ease of use
  • fit for purpose i.e. product quality
  • architecture
  • value for money

Having looked carefully through the Kalido evaluation, I found it fair and accurate, and clearly the product of a great deal of work. I commend Philip Howard, the analyst who led this, for producing such a comprehensive piece of analysis in an age of lightweight analyst sound-bites.

The report is not cheap, but it does have genuinely good in-depth analysis, and anyone considering a data warehouse project should buy this before they contemplate building one themselves.

Blue sky thinking?

February 17, 2006

IBM continues to invest in “information management”, creating a major practice in this area. It has been active for some time in its software division in acquiring technologies related to data integration and various aspects of master data management. Now this latest reorganization puts in place the services side to accompany the software products. Clearly information management goes well beyond technology, involving ownership of data (business or IT?), governance issues, project management, change management, perhaps setting up competence centers, etc. Hence there is clearly a large services component. Estimates of the BI/data warehouse market vary, but the total market, including services, is perhaps ten times larger than that for the software technologies alone. This reorganization makes clear that IBM sees this as a growing opportunity.

Someone once described IBM as the “beige” of the computer world, i.e. it goes with anything, meaning that IBM raises fewer hackles in customer organizations than some higher-profile firms, e.g. Accenture is loved by some but loathed by others. Hence in some ways IBM ought to be quite well placed to provide advice on projects that tend by definition to span wide areas of the business and touch many other technologies. At some point systems integrators need to stop sucking on the comforting teat of ERP implementations, ERP re-implementations and ERP consolidation projects (surely at some point customers are going to stop paying for re-implementing these huge projects for the nth time?), so it would seem timely for enterprise software consulting teams to have a major alternative to draw on.

It must be generally good for the industry that IBM is addressing information management in a serious way, as if nothing else it will raise its profile. Delivering better information to decision makers is something that has always seemed to me obviously higher value than just automating operational processes, yet the bulk of the IT spend has always been on such projects. Perhaps this latest IBM move is another straw in the wind indicating that information management is finally moving up the agenda?

Not quite dead yet

February 16, 2006

Infoworld has a piece intriguingly titled “The Death of the Software Salesman”. Those of us who have been on the receiving end of high-pressure negotiating tactics from software companies may simply want to ask “when, please?”. However, rather than being a glorious article about lynch mobs of customers, this article is more mundanely concerned with open source, and how this model may be an alternative to traditional software licensing. It observes that 40% of software company budgets go on sales and marketing, and indeed this estimate is a little on the low side. Enterprise software companies will typically spend 45-55% of their budgets on sales and marketing, with the lion’s share of this going to sales, depending on the stage of the company (obviously this may be lower in very early-stage companies which are still mostly in R&D).

It is all a bit ironic. The incremental cost of printing one more CD with a software product on it is less than a dollar, which is why venture capital firms like software companies. However, to actually convince anyone to shell out (say) half a million dollars on this CD requires a great deal of expensive sales and marketing effort. It is rather naive of the panelists at the Open Source Business Conference to believe that this is going to change any time soon outside of a few niches. Sure, Linux has done very well, but some of this success is because IBM has put a lot of muscle into it (to avoid Microsoft eating further into the server operating system market). However, if you move up a layer or two in the stack, open source is still in the special-interest category. MySQL gradually improves, but it is still a long way off being a heavy-duty DBMS; Oracle, DB2 and Microsoft are far from quaking in their boots. Higher still, there are very early initiatives in business intelligence like Pentaho, and good luck to them, but not even the wildest-eyed open source advocate could accuse them of having made even a dent in the enterprise BI market yet.

Hence, while the idea of running a mostly-R&D company and letting the customers come to you may sound appealing to some engineering types, the sad reality is that this is not going to happen. Customers buy through a series of stages. One model of this is called AIDA: “awareness” -> “interest” -> “desire” -> “action”. Unless people are made aware of your product then they cannot buy it, and so you have to spend money on marketing. Once they have become aware and show interest in the value it offers them, they need to be nudged along, tantalised and have their many objections overcome (“does it really work?”, “how many customers are using it?”, “will it work with my existing technology?” etc). If you make it to this stage then the reality for any enterprise software is that you have what is called the “complex sale”, i.e. there are multiple people who influence the decision, each of whom has to be convinced, or at least persuaded not to object. Miller Heiman and others make a living by selling training in sales methodologies that address this. It is very rare for a six or seven figure software purchase to involve just one person, and that’s where sales comes in. The salesman needs to get the proposition in front of the prospect, find out whether there is a project that fits, identify the key influencers in the account, see whether they really have a budget or just like talking to salesmen, navigate the client organization and unravel its purchasing process, perhaps deliver a trial or proof of concept, and all this before you get anywhere near negotiating a deal.

I just can’t see all this happening without a lot of marketing and sales effort, except in very specific situations or niches that suit open source, and those are too rare at present to put enough money on the table to pay for all those creative software engineers. I fear that, like Mark Twain’s demise, which was mistakenly reported during his lifetime, the death of the software salesman is being much exaggerated.

Customer satisfaction pays

February 15, 2006

The software industry is full of awards for the fastest growing companies, e.g. the “Inc 500” and numerous others, but it is perhaps revealing that the industry is almost silent on a critical measure: customer satisfaction. There seem to be two objective ways of measuring this: renewal rates, i.e. do the customers actually keep paying maintenance, and surveys. The former has the advantage of being wholly objective, i.e. they either pay up or they drop the software, with no wiggle room for spin, though of course it is a somewhat crude measure. A McKinsey report I saw recently reckoned that best practice in the software industry is renewal rates of 85%-95%, and indeed SAS Institute has made considerable (and justified) play over renewal rates of over 90% year after year (I am pleased to say that Kalido’s renewal rate is 97%).

The other measure is surveys, which are richer but of course are always subject to the perversion of framing the question, e.g. “are you happy with our software?” is not the same question as “are you delighted with our software?”. One company that has had a lot of fun at some companies’ expense is Nucleus Research, who sometimes call up the reference customers of software companies (as featured on their web sites) and ask them how happy they are. An amusing one a few years back was a survey they did on Siebel, where more than half of the Siebel reference customers they spoke to reported that their projects had cost more than the benefits. It was a similar story when they surveyed i2 reference customers, and an ERP report they produced showed that no ERP vendor could muster even a 50% score in response to the question “would you recommend this software to others?”, which is pretty shocking given that these are the reference customers of these vendors (highest scoring was PeopleSoft, at a hardly dazzling 47%).

Given that endless studies have shown how much more expensive it is to sell to new customers than to sell more to existing ones (four times more is one figure often quoted), I would have expected that software companies, with their high sales and marketing costs, would pay more attention to customer satisfaction. However, sad to say, my own experience as a customer for many years taught me that most software companies treat their customers with anything from indifference to outright hostility. Perhaps in the giddy days of the late 1990s software companies could get away with this, but these days the power is definitely back with the buyer, and so it is in the industry’s interest to improve customer satisfaction, if only for purely selfish reasons. Yet how many software companies even survey their customers on a regular basis? Talking to sometimes unhappy customers may not always be a comfortable experience for software executives, but it can teach you a great deal, and showing that you care enough to listen is itself a way of raising customer satisfaction.

The price of love

February 14, 2006

As it is Valentine’s Day I will depart from my usual carping about the latest creative marketing of the technology industry and observe that, according to the BBC, the escalating price of love (or at least the price of dating services) has caused on-line dating revenues to actually drop in the US. Fortunately those romantic Europeans (perhaps the Germans or Swiss; tantalizingly, it is not clear) have kept their end up, with Europe’s on-line dating market growing 43% this year. It is a tribute to the power of marketing that we all feel obliged to splash out on cards, presents and overpriced flowers, and squeeze into overcrowded restaurants, all on the same day. However, it would seem that we can blame the Romans, and even their predecessors, rather than a modern greeting card company for this particular annual celebration.

There must surely be a range of opportunities for technology companies here. For a start, on-line restaurant booking services like Toptable in the UK make it a little easier to find that elusive table. Perhaps someone could come up with a roses futures market that allowed one to hedge against the predictable hike in the price of roses on or around the 14th of February? There are already on-line greeting card services, though good luck convincing your loved one that an on-line card is an adequate substitute for a real one.

I can’t find a single software acquisition happening today – our industry just has no romance…

Happy Valentine’s day to you all.