There is a good article about business ownership and MDM in DM News, from the rather unlikely pen of the marketing director of Siperian. Although any article written by a vendor should be held at a distance and handled with tongs, this piece actually has a lot of good sense in it. The key thesis is that MDM initiatives will not do well if owned by the CIO office or IT department, since success critically depends upon business engagement in the area of data ownership and governance. Now that MDM is developing momentum, some IT departments are embarking on large, enterprise-wide MDM initiatives. The hidden point of the article is that these projects are frequently being done using software from Oracle or SAP rather than from an independent company like, well, Siperian, say, but you can forgive this subtly disguised message. The point is that without the business standing up and saying “Fred over there owns the notion of customer and will handle disputes over it between departments”, and similarly for other master data, things will end in tears.
I was impressed recently by a client of mine who had already set up a cross-functional business team to do exactly this, and they could list not only the department that had been agreed as the owner of each major master data element (not just customer but asset, product, production facility, person etc.) but actually had real people’s names attached to them, i.e. it was not just wishful thinking. There were still plenty of issues with that project, but at least they had established the groundwork that would give them a chance of success. MDM initiatives that are driven by IT without this level of business involvement to resolve boundary disputes are doomed to failure, whether the technology they use is from a mega vendor or an independent.
The most widely publicised piece that I wrote was “EII Dead on Arrival” back in July 2004. Metamatrix was the company that launched the term on the back of heavy funding from top end VCs, and I wrote previously about what seemed to me to be its almost inevitable struggles. There was some controversy over my article, which differed from the usual breathless press coverage associated with EII at the time (our industry does love a new trend and acronym, whatever the reality may be). I could never see how it could work outside a very limited set of reporting needs. Well, as they say on Red Dwarf: “smug mode”.
Gravity finally caught up with marketing hype this week, as Metamatrix is to be bought by Red Hat and released as open source. It would have been interesting to know what the purchase price was, but Red Hat were keeping quiet about that. It is a fair bet that it was not a large sum of money. Kleiner Perkins won’t be chalking this up as one of their smarter bets.
Business Objects’ quarterly results reveal a continued split between the success of the enterprise performance management business and the stagnation of the core reporting business. License revenue overall was up 9% to USD 137M, but though “information discovery and delivery” (traditional reporting) had a decent quarter, the annual licence revenue for this part of the business is actually in decline. Rather disappointingly, management will no longer publish the split of revenue between the different businesses, presumably to avoid pesky analysts pointing out that their core business is in decline.
On the positive side, the continued diversification away from reporting e.g. with its Cartesis acquisition this week, means that Business Objects has been following a sensible strategy to avoid being caught up too badly by the core reporting malaise.
As with most large software companies, services revenue plays an increasing role, up 29% from a year ago. Business Objects has always done an excellent job of sales and marketing, and this is reflected in the 12 deals in excess of USD 1 million (up from nine in the corresponding quarter a year ago). Financially, cash from operations was a healthy USD 107M, with overall cash at USD 675M.
The consolidation trend in the BI industry continued today with the announcement by Business Objects (ticker symbol BOBJ) of its intention to buy Cartesis, who are essentially a poor man’s Hyperion. One in four Fortune 500 companies use Cartesis for financial consolidation, budgeting and forecasting, and they had USD 125M in revenues, but reportedly had struggled with growth. The purchase price of USD 300M is less than two and a half times revenues, so is hardly what you would call a premium price (Hyperion went for 3.7 times revenues), though no doubt Apax, Partech and Advent (the VCs involved) will be grateful for an exit. This is not the first time Cartesis has been bought (PWC bought Cartesis in 1999) but Business Objects is a more logical owner. Not only is it a software company, but the French history of Cartesis should make it an easy cultural fit for Business Objects. With Hyperion disappearing into the maw of Oracle, there were only so many opportunities out there in this space. Business Objects’ superior sales and marketing should be able to make more of Cartesis than had been done, and strategically this takes Business Objects up-market relative to its core reporting, which makes good sense.
In the first of a series of articles on MDM, Richard Skriletz of RCG starts by trying to define master data. RCG has an excellent reputation as a consulting company, but after saying some sensible things the article seems to me to get tangled up. To me master data is anything that is shared between multiple systems. This is captured well in the Wikipedia definition which Richard references, before he immediately goes on to try and “improve” on this very clear definition. He wants to distinguish between master data and “reference data”, in his case by splitting the world into physical entities like product and abstract entities like organisation. I have written about this before. Not only is there no need for this distinction, it can be misleading, causing one to start treating data in different ways when there is no need to. I’m not sure whether IT people just love to classify things (all that being brought up on “int”, “char”, “varchar” etc.) but this desire to split master data up is just confusing. Data that is shared between systems needs to be managed, and it needs to be managed whether it is physical or abstract. By imposing artificial distinctions between types of master data we introduce complication where none is necessary. William of Ockham had the right idea on this, and he had it in the 14th century.
Cognos’ 4th quarter revenues were USD 284M, up 12% year on year. The 4th quarter (ending February for Cognos) is the strongest one traditionally, but this run rate means that Cognos has the tantalising prospect of hitting USD 1 billion in revenue in the next financial year, clearly a major milestone.
The results were quite strong across the board, with more deals in excess of USD 1 million than the company has ever achieved (25 deals of this size) and revenue growing in each region (9% US, 16% Europe, 18% Asia Pacific) though the currency effects flatter the European figures (6% growth in local currency terms).
Of this revenue, USD 92M was in license revenue (USD 238M for the year in all) which has potential for improvement since migration to Cognos Version 8 is reportedly sluggish; perhaps only 10% of customers have migrated so far.
Overall the figures are solid rather than dazzling, as reflected in the share price performance, but still indicate that the BI industry is in generally healthy shape.
I just read a provocative blog on SOA which raises an interesting point. Articles on SOA tend to focus on the technical issues e.g. performance, standards etc. While I don’t agree with everything in the article, Robin Harris is correct in pointing out that how a new piece of infrastructure is perceived depends in part on the pricing mechanisms that end users see. Different IT departments charge in different ways. Some levy a “tax” on the business lines, perhaps based on a simple charge-back mechanism: “retail is 20% of the business, so they pay 20% of the IT department’s costs”. Others charge out for services in a much more granular way e.g. a charge for each desktop, a charge for each GB of server storage, a charge for DBA services etc. The latter has the big advantage of being related to usage, meaning that heavy users of IT pay more, presumably in some relation to the costs that they cause the IT department to incur. The disadvantage is that the pricing can easily become over-complex, with user departments receiving vast itemised bills each month for storage, telecoms, data, networking, applications support etc. in minute detail. This can cause some users to try and “game” the system by taking advantage of any flaws in the pricing model, which may make logical sense to the individual department but may actually cause the true costs to the enterprise to rise. For example, if the IT department prices storage in bands then a department may resort to all kinds of tricks to avoid moving into the next pricing band, and yet the effort involved in fiddling around may exceed the true cost to the company of just buying some more storage.
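To make the banded-pricing problem concrete, here is a minimal sketch. The bands, rates and figures are entirely invented for illustration; the point is only that a flat per-band charge can diverge wildly from the (roughly linear) true cost of storage:

```python
# Hypothetical illustration of banded storage pricing versus true cost.
# The bands and rates below are invented for the example, not real figures.

def banded_charge(gb: int) -> int:
    """Internal charge-back: a flat fee per pricing band."""
    if gb <= 100:
        return 1_000
    elif gb <= 500:
        return 6_000
    else:
        return 15_000

def true_cost(gb: int) -> int:
    """What the storage actually costs the company: roughly linear per GB."""
    return gb * 10

# A department at 100 GB pays 1,000; one more GB jumps its charge to 6,000,
# even though the true incremental cost to the company is only 10.
print(banded_charge(100), true_cost(100))  # 1000 1000
print(banded_charge(101), true_cost(101))  # 6000 1010
```

Faced with a jump like that at the band boundary, a department will happily spend far more than 10 units of effort deleting and shuffling data to stay under 100 GB, which is exactly the gaming behaviour described above.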
At one time I worked at Esso UK, and a study was done of the pricing mechanism, which was of the complex/sophisticated type. The recommendation, from a bright young manager called Graham Nichols, was simply to scrap the charge-back systems altogether and just levy a simple charge by department. This actually saved three million pounds in costs, which is what it took to administer the charge-back systems. No doubt years later things have changed, but this was an example of how the internal systems added no value at all to the company, and how simplifying them could remove a layer of administration. The drawback to simplified systems like this is that there is no incentive for increased efficiency, since the departments know what they are going to get charged and so perceive no extra cost in heavier usage of the systems. This may eventually cause heavier systems costs which will be charged back eventually to departments; it is a question of balancing the costs of the internal processes versus the potentially higher costs that may occur.
SOA is an example of core infrastructure which pricing mechanisms have always struggled with, i.e. how do you justify an investment in infrastructure which has no benefit at first, but will incrementally benefit future applications? However the investment is justified and charged back, a key point is that it should be justified, like any other. IT departments should view a new piece of infrastructure the way other departments consider capital expenses e.g. a new fleet of trucks or a new factory. What is the payback compared to the investment? What is the IRR, NPV and time to break-even? I have not seen much if anything written about this aspect of SOA, and yet we all need to understand what productivity gains are actually going to occur before we head down this path. There may be significant productivity improvements, or none at all (indeed it could be worse than today), and yet commentators seem to take SOA as a given. If a whole industry moves in a certain direction then eventually this can be hard for end-user companies to avoid e.g. if you decided a decade or two ago that client/server was just an expensive way of distributing data from one safe, secure place (mainframe) to lots of unsafe and insecure places (PCs) then you could have tried to hang on to your mainframe, but eventually fewer and fewer applications would run on your old mainframe, and you would be obliged to switch whether you liked it or not. It is not yet clear that SOA has that kind of momentum. However I am sure that understanding its economic impact would be valuable for all sorts of reasons. I look forward to seeing someone addressing this issue seriously (I do not count breathless marketing material from vendors selling SOA services claiming 10:1 improvements in everything from productivity to quality, without any actual pesky real examples), but I am not holding my breath.
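The kind of appraisal I mean is nothing exotic: the same discounted cash flow sums any capital project gets. A minimal sketch follows, with entirely hypothetical cash flows for an infrastructure investment (a large up-front outlay, then growing savings as later projects reuse the platform):

```python
# Hypothetical appraisal of an infrastructure investment: an up-front outlay
# followed by incremental savings as future projects reuse the platform.
# All cash flows below are invented purely for illustration.

def npv(rate, cashflows):
    """Net present value, where cashflows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_year(cashflows):
    """First year in which cumulative (undiscounted) cash flow turns positive."""
    total = 0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # never breaks even

# Year 0: USD 2M outlay; years 1-5: growing savings from platform reuse.
flows = [-2_000_000, 200_000, 500_000, 800_000, 900_000, 1_000_000]
print(npv(0.10, flows) > 0)   # positive NPV at a 10% discount rate
print(payback_year(flows))    # breaks even in year 4
```

The exercise takes minutes once you have honest estimates of the savings; the hard (and usually skipped) part is producing those estimates rather than taking the vendor's 10:1 claims on trust.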
One thing I have been banging on about for a long time is how the MDM industry needs to move away from its roots in CDI hubs and PIM if it is to address the needs of large enterprises on any scale. It seems obvious to me that if you end up with one specialised hub for each type of master data then you quickly descend into architectural anarchy. You have the various ERP systems, and are then going to add a customer hub, a product hub (different vendor), and then are going to notice that there are other important master data types that need managing, like financial data, assets, people data, brand information etc. Are hubs going to spring up to support each one? That way madness lies.
If we are to sort out master data properly then we need to isolate it in a master data repository, separate from a data warehouse and from ERP, which can manage and maintain all master data for the enterprise. This hub needs to be able to handle all data types and keep track of where versions of master data are, even if it is not the only place the master data physically resides. It at least needs to know where the copies of master data live, or else we are back in master data anarchy. I have been amazed at how few vendors (and customers) seem to have picked up on this obvious point. In its latest press release Siperian demonstrates that it, at least, has figured it out. The release itself contains the amusing claim that Siperian is the only MDM product that can do this: “the only operational hub capable of managing hundreds of different types of master data entities”, which will come as news to, for example, BP Lubes, who have been using Kalido MDM to manage 350 different types of master data for three years. However, despite the over-egging in the marketing, at least Siperian seems to understand the problem, which is more than can be said for a lot of its competitors.
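The core of what such a repository must do can be sketched in a few lines. This is a toy illustration of the idea, not any vendor's implementation, and all the entity types, keys and system names are invented: the repository treats every master data type generically, and records which systems hold copies of each record even when the data physically lives elsewhere:

```python
# Toy sketch of a generic master data repository: any entity type is handled
# the same way, and the repository tracks which systems hold copies of each
# record even when the data physically resides in those other systems.
# All entity types, keys and system names here are invented for illustration.

class MasterDataRegistry:
    def __init__(self):
        self.records = {}    # (entity_type, key) -> golden attributes
        self.locations = {}  # (entity_type, key) -> systems holding a copy

    def register(self, entity_type, key, attributes, systems):
        """Customer, asset, brand... every type goes through the same path."""
        self.records[(entity_type, key)] = dict(attributes)
        self.locations[(entity_type, key)] = set(systems)

    def copies_of(self, entity_type, key):
        """Where this master data lives, so no copy goes unmanaged."""
        return self.locations.get((entity_type, key), set())

registry = MasterDataRegistry()
registry.register("customer", "C-001", {"name": "Acme Ltd"},
                  ["ERP", "CRM", "DataWarehouse"])
registry.register("brand", "B-042", {"name": "SuperLube"}, ["PIM"])
print(sorted(registry.copies_of("customer", "C-001")))  # ['CRM', 'DataWarehouse', 'ERP']
```

Notice that adding a 351st entity type requires no new hub and no new code path; that generality, rather than any one entity-specific feature, is what separates a true master data repository from a collection of specialised hubs.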