Pure and chased

Purisma has been acquired by Dun & Bradstreet, the business information company that provides, amongst other things, assessments of company credit risk and company statistics. On the face of it this is a somewhat peculiar acquisition, since D&B is not a pure provider of enterprise software solutions in the way that, say, Oracle is. However D&B did have its own data quality offering (clearly data quality is a big issue for an information supplier) and Purisma’s customer hub technology is certainly complementary to it. It seems possible that D&B has bought Purisma primarily for its own internal purposes, and at this point it is unclear whether Purisma will even continue to be sold as a product in its current form. Rather ironically, Purisma had an offering that allowed D&B data to be integrated into its CDI application. I guess that will come in handy now.

Purisma does not publish financial data, so it is tricky to tell how good or bad the USD 48 million price paid for the company was. I believe that Purisma had fewer than 50 employees and I would speculate that its revenues were in the USD 15-20M range. In general it is known that stand-alone CDI and PIM players have been struggling somewhat in the market. This is partly due to a gradual dawning on customers that master data management is a broader topic than just “customer” or “product”, a long-term theme of this blog. When customers ask “ah, but what about other kinds of master data?” (asset, location, employee etc.) then specialist CDI and PIM vendors do not have good answers, however good their offerings in their particular domains are. Even IBM has done an about-turn on this topic recently, laying out a roadmap for a single MDM Server that will eventually bring together its menagerie of acquired technologies into a platform handling multiple master data domains consistently. For this reason I suspect that D&B did not pay over the odds for Purisma.

D&B has had phases in the past of buying software companies and then moving away from this business; those with long memories will recall the 4GL Nomad, which it sold off after some years. The press release tucked away on the Purisma web site today gives nothing away; if press releases played poker, this one would be a tough player. Purisma customers need to seek guidance from D&B about its future intentions, and consider their alternatives.

The dark side of a marketing announcement

Business Objects announced a witches’ brew of a tie-up with Big Blue, bundling DB2 and its data warehouse capabilities together with Business Objects as a business intelligence “offering”. Given that Business Objects already works happily enough with DB2, it is rather unclear whether this is anything more than a ghostly smoke-and-mirrors marketing tie-up, but it certainly makes some sense for both companies. It does, however, hint at Business Objects moving away from pure database platform independence, which takes on a new significance given the takeover (sorry: “merger” – much background cackling) of Business Objects by SAP. Is this really a subtle move to try and irritate Oracle, the other main DBMS vendor, given the highly competitive situation between SAP and Oracle, who are locked in a nightmare struggle for world domination? In that case, is SAP manipulating Business Objects like a possessed puppet, pulling the strings behind the scenes? Or was this just a hangover from the pre-takeover days, the Business Objects marketing machine rolling on like a zombie that stumbles onward, clinging to some deep-held memory of its old life, not realising it no longer has an independent one? SAP has a more tense relationship with IBM itself these days: IBM sells cauldrons of consulting work around SAP implementations, but found a knife in its back when SAP started promoting its NetWeaver middleware in direct competition with IBM’s WebSphere.

Announcements from Business Objects from now on all need to be looked at through the distorting mirror of the relationship with its new parent, as there may be meaning lurking that would not have existed a month ago. Everything needs to be parsed for implications about the titanic Oracle v SAP struggle, as Business Objects should strive as far as possible to appear utterly neutral on applications and databases in order not to spook its customers. Arch-rivals Cognos, MicroStrategy and SAS will take advantage of any hint that the giant behind Business Objects is pulling its strings.

Happy Halloween, everyone!

The murky world of market sizing

Defining a software segment’s market size is a tricky thing, partly because it is all about what you include and what you exclude. Take MDM as an example. A much-quoted IDC figure reckoned the MDM market would be USD 10 billion in 2009, implying a USD 5 billion market size in 2005 given compound growth of 14%. Such figures are regularly bandied about by the computer press, but mean little unless you qualify them by explaining what is included or excluded. For example this figure includes an estimate for the services business associated with MDM. This is itself hard to pin down, but in my experience an MDM project where the software costs X will spend about 3X on services to implement it. Hence that USD 5 billion market size actually only has about USD 1.6 billion of software sales. Then MDM itself is a broad church, including CDI and PIM as well as generalist MDM solutions such as those from Orchestra Networks and Kalido. I was still puzzled as to why even this USD 1.6 billion figure was so large, but by deduction I think that the IDC figure also includes data quality. Fair enough, but it needs to be explicitly stated to make sense of the market, and as we will see it still does not explain the gap.

Let’s come at this another way. A Gartner figure just released reckoned the CDI market was worth USD 310 million in 2006. This appears to be an estimate for software rather than services. Getting a figure for the product information management market is murkier, but I believe it will be broadly at a similar level. The generalist MDM vendors are these days mostly smaller companies (products like Razza having been swallowed and digested by Oracle, and Stratature by Microsoft, for example) and I doubt they would add USD 100 million in software sales to this picture. Hence, adding PIM + CDI + specialist MDM (but excluding data quality) you get a software market of maybe USD 700M (probably a bit less), which is a far cry from the apparent IDC figure of USD 5 billion, or even the likely USD 1.6 billion of software revenues only. I still struggle to bridge the gap here, as the data quality market is not that large. Again you have to be careful about what is in and what is out, but other than the leader, Trillium, data quality vendors are mostly very small (e.g. Exeros, Datanomic) or are now buried within larger companies through acquisition (e.g. Informatica, Business Objects). Though I have seen estimates like USD 500M for the data quality market, again I wonder how much of this is services; personally I am unconvinced that software sales in the data quality market would be much beyond USD 100M or so (companies like FirstLogic were not that large prior to acquisition). So if we take the USD 700M figure and throw in USD 150M for data quality software sales (let’s be generous) we are still a far cry from the USD 1.6 billion estimate we arrived at earlier. Of all the analyst firms I respect the market size figures from IDC best, as they do actually check with the vendors what their revenues really are (they used to do this every year when I was running Kalido), but as you can see their MDM market size figure is still a mystery to me. If someone from IDC is reading this and can shed some light on it, I would be interested to hear from them.
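To make the bottom-up arithmetic explicit, here is a back-of-envelope sketch in Python (purely illustrative; every figure is either a quoted estimate or my own speculation from the discussion above, not hard market data):

```python
# Rough bottom-up estimate of MDM software revenues, in USD millions.
cdi_software   = 310   # Gartner's 2006 CDI software figure
pim_software   = 300   # my assumption: PIM broadly at a similar level to CDI
generalist_mdm = 90    # my speculation: well under USD 100M for specialist MDM vendors
data_quality   = 150   # a deliberately generous figure for data quality software only

mdm_software = cdi_software + pim_software + generalist_mdm
print(f"CDI + PIM + generalist MDM: ~USD {mdm_software}M")                    # ~700M
print(f"Adding data quality:        ~USD {mdm_software + data_quality}M")     # ~850M, versus the USD 5 billion headline
```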

MDM is certainly growing quickly: every analyst firm agrees on this, and it is clear enough from the number of companies entering the market or (more commonly) re-labelling existing products as MDM. However, as the arithmetic above shows, you can take a figure like IDC’s USD 5 billion and also produce a valid estimate of under USD 850 million for seemingly the same market, just based on what you include or exclude. Quite a range. I guess it is hoping too much to expect the IT press to actually mention pesky caveats like what a number includes, since it is more headline-inducing to say “MDM market worth $5 billion”, but if you are actually going to use these figures to help with a decision then you would be well advised to dig deeper, below the headline numbers.

The Price of Failure

I enjoyed an article by Madan Sheina on the failure of BI projects. 87% of BI projects fail to meet expectations, according to a survey by the UK National Computing Centre. I wish I could say this was a surprise, but it is not. Any IT project involves people and processes as well as technology, yet many projects focus almost entirely on the technology: tool choices, database performance etc. In practice the issues which confound a BI project are rarely the technology itself. Projects fail to address actual customer needs, and frequently don’t acknowledge that there are several user constituencies out there with quite different requirements. Frequently a new technology is stuffed down the customer’s throat, and projects often neglect data quality at their peril.

From my experience, here are a few things that cause projects to go wrong.

1. Not addressing the true customer need. How much time does the project team spend with the people who are actually going to use the results of the BI project? Usually there is a subset of users who want flexible analytical tools, and others who just need a basic set of numbers once a month. A failure to realise this can alienate both main classes of user. Taking an iterative approach to project development rather than a waterfall approach is vital for a BI project.

2. Data is only useful if it is trusted, making data quality a key issue. Most data is in a shocking state in large companies, and the problems often come to light only when data is brought together and summarised. The BI project cannot just gloss over this, as the customers will quickly avoid using the shiny new system if they find they cannot trust the data within it. For this reason the project team needs to encourage the setting up of data governance processes to ensure that data quality improves. Such initiatives are often outside the project scope, are unfunded and require business buy-in, which is hard. The business people themselves often regard poor data quality as an IT problem when in fact it is an ownership and business process problem.

3. “Just one more new user interface” is not what the customer wants to hear. “Most are familiar with Excel and are not willing to change their business experience” was one quote from a customer in the article. Spot on! Why should a customer whose main job is, after all, not IT but something in the business, have to learn a different tool just to get access to the data that he or she needs? Some tool vendors have done a good job of integrating with Excel, yet are often in denial about it, since they view their proprietary interface as a key competitive weapon against other vendors. Customers don’t care about this; they just want to get at the data they need to do their job in an easy and timely way. Hence a BI project should, if at all possible, look at ways of letting users get data into their familiar Excel rather than foisting new interfaces on them. A few analyst types will be prepared to learn a new tool, but this is only a small subset of the audience for a BI project, likely 10% or less.

4. Data being out of date, and the underlying warehouse being unable to respond to business change, is a regular problem. Traditional data warehouse designs are poor at dealing with change caused by reorganisations, acquisitions etc., and delays in responding to business change cause user distrust. Unchecked, this causes users to hire a contractor to get some “quick and dirty” answers into a spreadsheet and bypass the new system, so the proliferation of data sources continues. Using a packaged data warehouse that is good at dealing with business change is a good way of minimising this issue, yet even today most data warehouses are hand-built.

5. Training on a new application is frequently neglected in IT projects. Spend the time to sit down with busy users, explain to them how they are to access the data in the new system, and make sure that they fully understand how to use it. It is worth going to some trouble to train users one-to-one if you have to, since it only takes a few grumbling voices to sow the seeds of discontent about a new system. Training the end users is never seen as a priority for a project budget, yet it can make a world of difference to the likelihood of a project succeeding.

6. Running smaller projects sounds crass but can really help. Project management research shows that the size of a project is the single biggest predictor of success: put simply, small projects succeed more often than large ones, and yet you still see USD 100 million “big bang” BI projects. Split the thing into phases, roll out by department and country, do just about anything to bring the project down to a manageable size. If your BI project has 50 people or more on it then you are already in trouble.

7. Developing a proper business case for a project and then going back later to do a post-implementation review happens surprisingly rarely, yet can help shield the project from political ill winds.

You will notice that not one of the above issues involves a choice of technology, technical performance or the mention of the word “appliance”. Yes, it is certainly important to pick the right tool for the job, to choose a sufficiently powerful database and server and to ensure adequate systems performance (which these days appliances can help with in the case of very large data volumes). The problem is that BI projects tend to gloss over the “soft” issues above and concentrate on the “hard” technical issues that we people who work in IT feel comfortable with. Unfortunately there is no point in having a shiny new server and data warehouse if no one is using it.

The evidence mounts

The annual IDC business intelligence report shows a reassuring 11% rise in overall BI tool revenues in 2006. Regular readers of my blog know a couple of my long-term viewpoints: (a) that Microsoft is the vendor with the best long-term position due to its ownership of Excel, which is still the BI front-end that end-users actually want, and (b) that specialist BI tools will never, contrary to many BI vendor projections, have a place on every desktop, for the rather dull reason that most people do not need one.

Is there any evidence for these hypotheses? Well, Microsoft’s BI revenues grew 28%, while the major specialist BI vendors grew by 7% (Business Objects) and 9.8% (Cognos). As IDC analyst Dan Vesset notes, “IDC does not yet see a substantial impact on the market from the strategy and marketing messages of most BI vendors seeking to reach a broader user base”. Nor will it ever, in my opinion.

Also of note is the formidable performance of QlikTech, which grew by a little matter of 97% to USD 43.6 million in revenue. Its mid-market offering, based on in-memory search technology, continues to get considerable customer traction. I have a soft spot for this company, having been asked to look at it as an investment for a venture capital firm when it was still a tiny company; I am very relieved that I recommended they invest – otherwise I imagine they would be hunting me down like a dog right now.

I see a tall dark stranger in your future….

There is an interesting article in CIO Insight by Peter Fader, a professor of marketing at the top-rated Wharton Business School. In it he discusses the limitations of data mining, and it is an article that anyone contemplating investing in this technology should read carefully. I set up a small data mining practice when I was running a consulting division at Shell, and found it a thankless job. Although I had an articulate and smart data mining expert and we invested in what was at the time a high-quality data mining tool, we found time and again that it was very hard to find real-world problems where the benefits of data mining could be shown. Either the data was such a mess that little sense could be made of it, or the insights produced by the data mining technology were, as Homer Simpson might say, of the “well, duh” variety.

Professor Fader argues that in most cases the best you can hope for is to develop simple probabilistic models of aggregate behaviour, and that you simply cannot get down to predicting individual behaviour with the data we typically have, however alluring the sales demonstrations may be. Moreover, such models can mostly be built in Excel and don’t need large investments in sophisticated data mining tools.
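To make this concrete, here is a minimal, purely illustrative sketch of the kind of simple aggregate model he has in mind: fitting a negative binomial (NBD) distribution to a table of purchase frequencies. The data is invented and the model choice is my own example rather than anything from the article; the same fit could just as easily be done in Excel with Solver.

```python
# A minimal, invented example (not the article's method): fit a negative binomial
# (NBD) model of purchase counts to a hypothetical frequency table using maximum
# likelihood. Requires numpy and scipy.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

# Hypothetical data: number of customers who made 0, 1, 2, ... 7 purchases last year.
purchases = np.arange(8)
customers = np.array([5120, 2310, 1180, 560, 270, 130, 60, 25])

def neg_log_likelihood(params):
    r, p = params
    if r <= 0 or not 0 < p < 1:
        return np.inf                    # keep the optimiser within valid parameter bounds
    return -np.sum(customers * nbinom.logpmf(purchases, r, p))

fit = minimize(neg_log_likelihood, x0=[1.0, 0.5], method="Nelder-Mead")
r_hat, p_hat = fit.x
print(f"fitted r = {r_hat:.2f}, p = {p_hat:.2f}")

# Expected customers at each purchase count under the fitted model: a statement
# about aggregate behaviour, not a prediction for any individual customer.
expected = customers.sum() * nbinom.pmf(purchases, r_hat, p_hat)
print(np.round(expected))
```

The output describes the shape of behaviour across the whole customer base; it says nothing reliable about what any individual customer will do next, which is precisely the limitation being discussed.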

While I am sure there are some very real examples where data mining can work well, e.g. understanding why some groups of people are better credit risks than others, the main point he makes is that the vision of one-to-one marketing via a data mining tool is a fantasy, and that the tools have been seriously oversold. Well, that is something that we in the software industry really do understand. We all want technology to provide magical insights into a messy and complex world that is hard to predict. Unfortunately the technology at present is generally about as useful as a crystal ball when it comes to predicting individual behaviour. Yet there is still that urge to go into the tent and peer into the mists of the crystal ball in search of patterns.

Dot Bomb 2.0?

Enterprise software companies currently trade at around three times revenues, with premiums of up to about five times revenues for particularly good firms, and less for firms that are not showing much market progress. There have been several examples of this type of deal recently. Enterprise software CEOs could be forgiven for casting an envious eye at the internet software market. On AIM, the UK market, a company called Blinkx, which offers the ability to search video clips (using technology from Autonomy), recently raised money, with its first day of trading giving a valuation of GBP 180 million. Any guesses as to the revenues or profitability of this company? Revenues of GBP 60 million perhaps, maybe as low as GBP 40 million? Nope. Revenues are expected to be just over GBP 2 million in 2007. Profits? “Profitability is not expected until 2010”.
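For what it is worth, the multiple implied by those numbers is easy to spell out – a trivial sketch, using only the figures quoted above:

```python
# Figures in GBP millions, taken from the text above.
blinkx_valuation = 180   # first-day market value on AIM
blinkx_revenue   = 2     # expected 2007 revenues ("just over GBP 2 million")
print(f"Blinkx price/sales: ~{blinkx_valuation / blinkx_revenue:.0f}x revenues")  # ~90x

# Contrast with the enterprise software norms quoted at the start of this post.
typical, premium = 3, 5
print(f"Enterprise software norm: {typical}x to {premium}x revenues")
```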

How about the teenage scribblers who presumably can explain this kind of valuation? According to an analyst at Dresdner Kleinwort: “It is hard to value because we don’t know what it is going to focus on. There’s no proven management history, and few historical numbers to play with”.

Does this kind of language ring any bells? Does anyone recall a time in the far distant past when companies could not be valued using “old fashioned” methods like price/sales or price/earnings, since the internet was a new business model? Maybe you were wiser than me, but I admit to buying some shares in basket cases like Commerce One at the height of the bubble, believing the previous generation of teenage scribblers that the internet “changed everything” and that old fuddy-duddies who fretted about irrational exuberance just “didn’t get it”.

Those who cannot remember the past are doomed to repeat it. For the latest lesson we don’t have to think back to the South Sea Bubble or the Amsterdam tulip fiasco; we just have to cast our minds back six years or so. I for one will not be investing in Blinkx.

Footnote. After writing this I found a thoughtful blog on the same subject.

The good old days

I attended an interesting talk today by Greg Hackett, who founded the financial benchmarking company Hackett Group before selling it to Answerthink and “going fishing for a few years”. He is now a business school professor, and has been researching company performance and, in particular, company failure. Studying the 1,000 largest US public companies from 1960 to 2004, his research shows that:

– company profitability was 40% lower in 2004 than in 1960, with a fairly steady decline starting in the mid 1960s
– the average net income after tax of a company in 2004 was just 4.3%
– half of companies were unprofitable for at least two out of five years
– 65% of those top 1,000 companies in 1965 have since disappeared, with just half of those being acquired and 15% actually going bankrupt.

He gave four reasons for company failure: missing external changes in the market, inflexibility, short-term management and failing to use systems that would show warning signs of trouble. What I found most surprising was that the correlation between profitability and stock market performance was zero.

The research suggests that the world is becoming a more competitive place, with pricing pressure in particular reducing profitability despite greater efficiency (cost of goods sold is down from 75% of turnover in 1960 to 67%, though SG&A is up from around 13% of turnover to around 18%). All those investments in technology have made companies slightly more efficient, but this has been more than offset by pricing pressure.

I guess this also tells you that buying a single blue-chip stock and hanging onto it is a risky business over a very long time; with 15% of companies folding over that 45-year period, it pays to keep an eye on your portfolio.

A key implication is that companies need to get better at implementing management information systems that can react quickly to change and give them insight into competitive risks, rather than just monitoring current performance. Personally I am unsure that computer systems are ever likely to provide sufficiently smart insight for companies to take consistently better strategic decisions, e.g. divesting from businesses that are at risk; even if they did, would management be smart enough to listen and act on that information? It does imply, though, that systems which are good at handling mergers and acquisitions should have a prosperous future – that, at least, seems to be a growth business.

Master Data comes to London – day 1

This week is the CDI/MDM Institute London conference. It is a useful bellwether of MDM progress in the UK, and based on the attendance today it looks like MDM interest is indeed picking up here. There are just over 300 attendees and 22 vendors exhibiting. Compared to last year’s event there are encouragingly more customer case studies (last year the speakers were mostly vendors), for example from Panasonic, BT, Harrods, Allied Bakeries, M&S and the Co-op.

It is noticeable that the CDI v MDM debate continues to favour the broader view that customer is just one (important) type of master data, with the MDM acronym now being used by most of the vendors. The Panasonic case study was a good example, starting out as a product master data initiative and now spreading to customer, and then on to market information. The speaker was able to share some real business benefits from the initiative (enabling new products to be launched two weeks quicker, as well as data quality savings), measured in millions of pounds. IBM claimed to be integrating its various acquired technologies, which is an improvement on the conversation I had with them a year ago, when it was claimed that there was nothing wrong with having a clutch of separate, incompatible repositories, one for customer, one for product etc. When I asked how many different repositories would be needed to cope with all the different types of master data in an enterprise I received the mystical answer “seven”, at which point I gave up, as the conversation seemed to have moved into the metaphysical realm. We shall see where the integration efforts lead.

Aaron Zornes gave a useful high level split of the MDM market into the groupings of:

– operational e.g. CDI hubs like Siperian, Oracle
– analytical e.g. Hyperion Razza, Data Foundations
– collaborative i.e. workflow (e.g. Kalido does a lot of this)

which seems to me a useful way to carve up the market. Certainly no one vendor does everything, so understanding where the strengths of the vendors lie, even in this simplistic way, at least helps customers narrow down which vendors are most likely to match their particular problem.

IBM, Initiate, SAS and Kalido are the main sponsors of the event, and once again SAP chose not to attend (to be fair, SAP did speak at two of the US MDM conferences). Nimesh Mehta assures me that SAP MDM is making steady market progress, but with no numbers he is willing to share I cannot verify this. However the buzz at the conference suggests that most customers here are using products from specialist vendors. One repeated theme in talking to SAP MDM early adopters is its apparent inability to deal well with customer data, perhaps not surprising given the A2i heritage of the product. No doubt SAP has plenty of resources to throw at this problem, but at present it is not obvious that it is getting many production deployments. Clearly SAP’s dominant market position should get it onto every MDM shortlist, but how many real, broad deployments there are in production is much less clear.

There were a couple of entertaining exhibit conversations. One Dilbert-esque exchange was with a salesperson. I asked: “What does your product do – is it a repository, a data quality tool, or something else?” The salesperson took a sudden physical step back like a scalded cat and said, “Oh, a technical question; I’ll need to find someone else to answer that.” Now maybe I’m old-fashioned, but “what does your product do?” seems to me a question that even a software salesman should be able to hazard a guess at. What kind of questions do you think this salesperson is likely to be able to field? I’m guessing anything beyond “where is the bar?” or “where do I sign the order form?” is going to prove challenging.

I was amused to see that Ab Initio had a stand. Ab Initio is famous in the industry for its secretive nature, e.g. customers have to sign an NDA in order to see a product demo. This is driven by its eccentric founder Sheryl Handler, and makes life hard for its sales and marketing staff. There was indeed no printed brochure or material of any kind, and the (very charming) salesperson I spoke to was unable to confirm very much about the company other than that there seemed pretty certainly to be a UK office. Ab Initio’s technology has the reputation of being the fastest-performing data transformation tool around, and in the UK has most of the really heavy-duty data users (BT, Vodafone, Tesco, Sainsbury’s etc.) as customers. It must certainly be interesting trying to sell the thing, but perhaps the aura of mystery paradoxically helps; after all, this is not a company that anyone could accuse of aggressive marketing.

Another one bites the dust

The consolidation trend in the BI industry continued today with Business Objects’ (ticker symbol BOBJ) announcement of its intention to buy Cartesis, which is essentially a poor man’s Hyperion. One in four Fortune 500 companies use Cartesis for financial consolidation, budgeting and forecasting, and it had USD 125M in revenues, but had reportedly struggled with growth. The purchase price of USD 300M is less than two and a half times revenues, so it is hardly what you would call a premium price (Hyperion went for 3.7 times revenues), though no doubt Apax, Partech and Advent (the VCs involved) will be grateful for an exit. This is not the first time Cartesis has been bought (PwC bought it in 1999), but Business Objects is a more logical owner. Not only is it a software company, but the French history of Cartesis should make it an easy cultural fit for Business Objects. With Hyperion disappearing into the maw of Oracle, there were only so many opportunities left in this space. Business Objects’ superior sales and marketing should be able to make more of Cartesis than has been made so far, and strategically this takes Business Objects up-market relative to its core reporting business, which makes good sense.