EII – dead and now buried

The most widely publicised piece that I wrote was “EII Dead on Arrival” back in July 2004. Metamatrix was the company that launched the term on the back of heavy funding from top-end VCs, and I wrote previously about what seemed to me to be its almost inevitable struggles. There was some controversy over my article, which differed from the usual breathless press coverage associated with EII at the time (our industry does love a new trend and acronym, whatever the reality may be). I could never see how it could work outside a very limited set of reporting needs. Well, as they say on Red Dwarf: “smug mode”.

Gravity finally caught up with marketing hype this week, and Metamatrix will be bought by Red Hat and made into open source. It would have been interesting to know what the purchase price was, but Red Hat were keeping quiet about that. It is a fair bet that it was not a large sum of money. Kleiner Perkins won’t be chalking this up as one of their smarter bets.

Swings and roundabouts

Business Objects’ quarterly results reveal a continued split between the success of the enterprise performance management business and the stagnation of the core reporting business. Licence revenue overall was up 9% to USD 137M, but although “information discovery and delivery” (traditional reporting) had a decent quarter, the annual licence revenue for this part of the business is actually in decline. Rather disappointingly, management will no longer publish the split of revenue between the different businesses, presumably to avoid pesky analysts pointing out that their core business is in decline.
On the positive side, the continued diversification away from reporting, e.g. with its Cartesis acquisition this week, means that Business Objects has been following a sensible strategy to avoid being caught up too badly in the core reporting malaise.

As with most large software companies, services revenue plays an increasing role, up 29% from a year ago. Business Objects has always done an excellent job of sales and marketing, and this is reflected in the 12 deals of USD 1 million or more (up from nine in the corresponding quarter a year ago). Financially, cash from operations was a healthy USD 107M, with overall cash at USD 675M.

Another one bites the dust

The consolidation trend in the BI industry continued today with Business Objects’ (ticker symbol BOBJ) announcement of its intention to buy Cartesis, who are essentially a poor man’s Hyperion. One in four Fortune 500 companies use Cartesis for financial consolidation, budgeting and forecasting, and they had USD 125M in revenues, but reportedly had struggled with growth. The purchase price of USD 300M is less than two and a half times revenues, so is hardly what you would call a premium price (Hyperion went for 3.7 times revenues), though no doubt Apax, Partech and Advent (the VCs involved) will be grateful for an exit. This is not the first time Cartesis has been bought (PWC bought it in 1999), but Business Objects is a more logical owner. Not only is it a software company, but the French history of Cartesis should make it an easy cultural fit for Business Objects. With Hyperion disappearing into the maw of Oracle, there were only so many opportunities left in this space. Business Objects’ superior sales and marketing should be able to make more of Cartesis than has been done so far, and strategically this takes Business Objects up-market relative to its core reporting, which makes good sense.

Master data and razors

In the first of a series of articles on MDM, Richard Skriletz of RCG starts by trying to define master data. RCG has an excellent reputation as a consulting company, but after saying some sensible things the article seems to me to get tangled up. To me master data is anything that is shared between multiple systems. This is captured well in the Wikipedia definition that Richard references, before he immediately goes on to try and “improve” on this very clear definition. He wants to distinguish between master data and “reference data”, in his case by splitting the world into physical entities like product and abstract entities like organisation. I have written about this before. Not only is there no need for this distinction, it can be misleading, and can cause one to start treating data in different ways when there is no need to. I’m not sure whether IT people just love to classify things (all that being brought up on “int”, “char”, “varchar” etc), but this desire to split master data up is just confusing. Data that is shared between systems needs to be managed, and it needs to be managed whether it is physical or abstract. By imposing artificial distinctions between types of master data we introduce complication where none is necessary. William of Ockham had the right idea on this, and he did so in the 14th century.

Cognos nears the magic number

Cognos’ 4th quarter revenues were USD 284M, up 12% year on year. The 4th quarter (ending February for Cognos) is the strongest one traditionally, but this run rate means that Cognos has the tantalising prospect of hitting USD 1 billion in revenue in the next financial year, clearly a major milestone.

The results were quite strong across the board, with more deals in excess of USD 1 million than the company has ever achieved (25 deals of this size) and revenue growing in each region (9% US, 16% Europe, 18% Asia Pacific) though the currency effects flatter the European figures (6% growth in local currency terms).

Of this revenue, USD 92M was in license revenue (USD 238M for the year in all) which has potential for improvement since migration to Cognos Version 8 is reportedly sluggish; perhaps only 10% of customers have migrated so far.

Overall the figures are solid rather than dazzling, as reflected in the share price performance, but they still indicate that the BI industry is in generally healthy shape.

The price of SOA?

I just read a provocative blog on SOA which raises an interesting point. Articles on SOA tend to focus on the technical issues e.g. performance, standards etc. While I don’t agree with everything in the article, Robin Harris is correct in pointing out that how a new piece of infrastructure is perceived depends in part on the pricing mechanisms that end users see. Different IT departments charge in different ways. Some levy a “tax” on the business lines, perhaps based on a simple charge-back mechanism: “retail is 20% of the business, so they pay 20% of the IT department’s costs”. Others charge out for services in a much more granular way e.g. a charge for each desktop, a charge for each GB of server storage, a charge for DBA services etc. The latter has the big advantage of being related to usage, meaning that heavy users of IT pay more, presumably in some relation to the costs that they cause the IT department to incur. The disadvantage is that the pricing can easily become over-complex, with user departments receiving vast itemised bills each month for storage, telecoms, data, networking, applications support etc in minute detail. This can cause some users to try and “game” the system by taking advantage of any flaws in the pricing model, which may make logical sense to the individual department but may actually cause the true costs to the enterprise to rise. For example, if the IT department prices storage in bands then a department may resort to all kinds of tricks to avoid moving into the next pricing band, and yet the effort involved in fiddling around may exceed the true cost to the company of just buying some more storage.
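To make the band-gaming point concrete, here is a minimal sketch of a banded charge-back calculation. It is purely illustrative: the band thresholds and per-GB rates are invented, and real charge-back schemes are of course more elaborate.

```python
# Illustrative only: a hypothetical banded storage charge-back scheme.
# The thresholds and per-GB rates below are invented for this sketch.
BANDS = [
    (500, 2.00),            # up to 500 GB charged at 2.00 per GB per month
    (2000, 3.50),           # 501-2000 GB charged at 3.50 per GB per month
    (float("inf"), 5.00),   # anything above 2000 GB at 5.00 per GB per month
]

def monthly_charge(gb_used: float) -> float:
    """Charge the whole allocation at the rate of the band it falls into."""
    for upper_limit, rate_per_gb in BANDS:
        if gb_used <= upper_limit:
            return gb_used * rate_per_gb
    raise ValueError("unreachable: last band is unbounded")

if __name__ == "__main__":
    # A department just under a band boundary has a strong incentive to "game"
    # the scheme: 500 GB costs 1,000.00 but 501 GB jumps to 1,753.50, even
    # though the extra gigabyte costs the company almost nothing.
    for usage in (499, 500, 501, 2000, 2001):
        print(f"{usage:>5} GB -> {monthly_charge(usage):>9.2f}")
```

The discontinuity at each band boundary is exactly the flaw that tempts departments to fiddle with their usage rather than simply buy the storage they need.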

At one time I worked at Esso UK, and a study was done of the pricing mechanism, which was of the complex/sophisticated type. The recommendation, from a bright young manager called Graham Nichols, was simply to scrap the charge-back systems altogether and just levy a simplistic charge by department. This actually saved three million pounds in costs, which is what it took to administer the charge-back systems. No doubt years later things have changed, but this was an example of how the internal systems added no value at all to the company, and how simplifying them could remove a layer of administration. The drawback to simplified systems like this is that there is no incentive for increased efficiency, since the departments know what they are going to get charged and so perceive no extra cost in heavier usage of the systems. This may eventually cause heavier systems costs, which will eventually be charged back to departments; it is a question of balancing the costs of the internal processes against the potentially higher costs that may occur.

SOA is an example of core infrastructure which pricing mechanisms have always struggled with, i.e. how do you justify an investment in infrastructure which has no benefit at first, but will incrementally benefit future applications? However the investment is justified and charged back, a key point is that it should be justified, like any other. IT departments should view a new piece of infrastructure the way other departments consider capital expenses e.g. a new fleet of trucks or a new factory. What is the payback compared to the investment? What are the IRR, NPV and time to break-even? I have not seen much, if anything, written about this aspect of SOA, and yet we all need to understand what productivity gains are actually going to occur before we head down this path. There may be significant productivity improvements, or none at all (indeed it could be worse than today), and yet commentators seem to take SOA as a given. If a whole industry moves in a certain direction then eventually this can be hard for end-user companies to avoid: if you decided a decade or two ago that client/server was just an expensive way of distributing data from one safe, secure place (the mainframe) to lots of unsafe and insecure places (PCs), then you could have tried to hang on to your mainframe, but eventually fewer and fewer applications would run on it, and you would have been obliged to switch whether you liked it or not. It is not yet clear that SOA has that kind of momentum. However, I am sure that understanding its economic impact would be valuable for all sorts of reasons. I look forward to seeing someone address this issue seriously (I do not count breathless marketing material from vendors selling SOA services, claiming 10:1 improvements in everything from productivity to quality without any pesky real examples), but I am not holding my breath.
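On the question of appraising infrastructure like any other capital expense, the arithmetic itself is not mysterious; what is missing is credible input data. The sketch below applies the standard NPV, IRR and break-even calculations to a hypothetical SOA investment; the cash flows, discount rate and horizon are invented assumptions purely for illustration.

```python
# Illustrative only: appraising a hypothetical SOA infrastructure investment
# the way any other capital expense would be appraised. The cash flows,
# discount rate and horizon are invented assumptions, not real figures.

def npv(rate: float, cash_flows: list) -> float:
    """Net present value, where cash_flows[0] is the year-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def irr(cash_flows: list, low: float = -0.99, high: float = 10.0) -> float:
    """Internal rate of return found by bisection (assumes a single sign change)."""
    for _ in range(200):
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid
        else:
            high = mid
    return (low + high) / 2

def breakeven_year(cash_flows: list):
    """First year in which cumulative (undiscounted) cash flow turns positive."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None

if __name__ == "__main__":
    # Year 0: up-front infrastructure spend; years 1-5: assumed productivity savings.
    flows = [-2_000_000, 200_000, 500_000, 700_000, 800_000, 800_000]
    print(f"NPV at 10%: {npv(0.10, flows):,.0f}")
    print(f"IRR: {irr(flows):.1%}")
    print(f"Break-even year: {breakeven_year(flows)}")
```

The hard part, of course, is not the formulae but putting defensible numbers against the productivity savings in years one to five, which is precisely the analysis I have yet to see anyone do seriously.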

Generic MDM message starts to sink in

One thing I have been banging on about for a long time is how the MDM industry needs to move away from its roots in CDI hubs and PIM if it is to address the needs of large enterprises on any scale. It seems obvious to me that if you end up with one specialised hub for each type of master data then you quickly descend into architectural anarchy. You have the various ERP systems, are then going to add a customer hub and a product hub (from a different vendor), and are then going to notice that there are other important master data types that need managing, like financial data, assets, people data, brand information etc. Are hubs going to spring up to support each one? That way madness lies.

If we are to sort out master data properly then we need to isolate it in a master data repository, separate from the data warehouse and from ERP, which can manage and maintain all master data for the enterprise. This hub needs to be able to handle all data types and keep track of where versions of master data are, even if it is not the only place the master data physically resides. It at least needs to know where the copies of master data live, or else we are back in master data anarchy. I have been amazed at how few vendors (and customers) seem to have picked up on this obvious point. In its latest press release Siperian demonstrates that it, at least, has figured this out. The release itself contains the amusing claim that Siperian is the only MDM product that can do this: “the only operational hub capable of managing hundreds of different types of master data entities”, which will come as news to, for example, BP Lubes, who have been using Kalido MDM to manage 350 different types of master data for three years. However, despite the over-egging in the marketing, at least Siperian seems to understand the problem, which is more than can be said for a lot of its competitors.
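To illustrate what a generic, type-agnostic hub looks like at its simplest, here is a toy sketch. It is emphatically not how Siperian or Kalido are built; it merely shows the principle that one repository can register any number of master data types and keep track of where copies of each record live in other systems.

```python
# Illustrative only: a toy, type-agnostic master data registry. One hub handles
# any master data type (customer, product, asset, brand, ...) and records where
# copies live, even though the data may physically reside in other systems.
from dataclasses import dataclass, field

@dataclass
class MasterRecord:
    entity_type: str                 # e.g. "customer", "product", "brand"
    golden_id: str                   # enterprise-wide identifier
    attributes: dict                 # the agreed "golden" attribute values
    locations: dict = field(default_factory=dict)   # system name -> local key

class MasterDataHub:
    def __init__(self) -> None:
        self._records = {}

    def register(self, record: MasterRecord) -> None:
        self._records[(record.entity_type, record.golden_id)] = record

    def add_location(self, entity_type: str, golden_id: str,
                     system: str, local_key: str) -> None:
        """Note that a copy of this master record lives in another system."""
        self._records[(entity_type, golden_id)].locations[system] = local_key

    def where_is(self, entity_type: str, golden_id: str) -> dict:
        """Answer 'which systems hold a copy of this record, and under what key?'"""
        return dict(self._records[(entity_type, golden_id)].locations)

# Usage: one hub, many entity types -- no separate hub per type of master data.
hub = MasterDataHub()
hub.register(MasterRecord("product", "P-001", {"name": "10W-40 motor oil"}))
hub.register(MasterRecord("brand", "B-042", {"name": "Example brand"}))
hub.add_location("product", "P-001", "ERP-Europe", "MAT-778812")
hub.add_location("product", "P-001", "CRM", "SKU-4431")
print(hub.where_is("product", "P-001"))
```

The point is that nothing in this structure cares whether an entity is physical or abstract, or how many types there are; adding a new master data type is just another entry, not another hub.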

On Toasters and MDM

MDM vendor Purisma just announced something which seems to me a useful idea, and then got carried away in the marketing. MDM is a fairly broad landscape, and certainly trying to fix a company’s MDM problems is a major exercise involving not just clever software but also business processes and data quality. This may all seem too much for some customers, and so a smart move is to try to reduce things to manageable proportions by tackling some more “bite-sized” issues. One good example of this is dealing with Dun & Bradstreet data. Dun & Bradstreet is a company that provides information on credit risk, and as a by-product of this has the most robust set of company data around. Hence if you want to know who owns whom, Dun & Bradstreet has a pretty definitive set of data, updated on a regular basis. Companies wanting to tackle procurement quickly find that managing their supplier data on a consistent basis is a recurring headache, so standardising around Dun & Bradstreet codes for companies is a good way to get a grip on who their suppliers really are. However, keeping things up to date when new D&B data comes out is an issue.
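As a concrete (and deliberately crude) sketch of what “standardising around Dun & Bradstreet codes” involves, consider matching internal supplier records to DUNS numbers and re-running the match when a fresh D&B extract arrives. The record layouts and the naive name matching below are invented for illustration; a real D&B feed and a real MDM matching engine are considerably richer than this.

```python
# Illustrative only: attaching D&B DUNS numbers to internal supplier records and
# refreshing the mapping when a new D&B extract arrives. Field names and the
# crude name matching are invented for this sketch.

def normalise(name: str) -> str:
    """Very crude name normalisation for matching purposes."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def assign_duns(suppliers: list, dnb_extract: list) -> list:
    """Attach a DUNS number (and ultimate parent) to each supplier where a match is found."""
    dnb_by_name = {normalise(row["company_name"]): row for row in dnb_extract}
    for supplier in suppliers:
        match = dnb_by_name.get(normalise(supplier["vendor_name"]))
        supplier["duns"] = match["duns"] if match else None
        supplier["parent_duns"] = match["global_ultimate_duns"] if match else None
    return suppliers

# Re-running the assignment against each fresh extract picks up ownership changes
# (e.g. a supplier acquired by another group) that would otherwise go stale.
suppliers = [{"vendor_name": "Acme Widgets Ltd."}, {"vendor_name": "Unknown Local Co"}]
dnb_extract = [{"company_name": "ACME WIDGETS LTD",
                "duns": "123456789", "global_ultimate_duns": "987654321"}]
print(assign_duns(suppliers, dnb_extract))
```

Even this toy version hints at why a pre-built, regularly refreshed bundle of this logic is attractive to a procurement department.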

Purisma have bundled up their MDM application with pre-built Dun & Bradstreet data capabilities, thus creating an application of MDM that is widely applicable and can act as a foot in the door for their broader MDM capabilities. This is an astute move, and one I am surprised other MDM vendors have been so slow to pick up on. Picking off niche but meaningful business problems like this one is a way of bringing the benefits of MDM software to customers and creating a bridgehead within accounts that can be broadened, without having to sell a gigantic enterprise-wide MDM project. For me it is a pity that they have chosen to hype this by calling the application an “appliance”. I have written previously about the use of this term, which was cleverly introduced by Netezza to describe their data warehouse hardware/software solution. By using a term that one associates with a toaster or a fridge, it conjures up in the mind something that can just be plugged in and immediately works, yet this is hardly the case with a data warehouse, from Netezza or anyone else. However, it is at least correct that it involves a piece of hardware. To label an MDM application a “software appliance” is stretching the term ever thinner in my view. Tech companies seem unable to resist latching on to whatever term seems to be trendy, and this is an opportunistic label. The day that an enterprise can plug in a data-related application as easily as a toaster is the day that an awful lot of consultants will be out of business, and that day will not be soon.

Anyway, this is a distraction from what I otherwise think is a clever move from Purisma, which has emerged under the leadership of Pete Daffern, an impressive character who used to work for Vitria and has done an excellent job of raising Purisma’s profile. Bringing MDM applications down to manageable business problems has to be a good idea, and I would expect others to follow.

The mythical software productivity miracle

We have got used to Moore’s Law, whereby hardware gets faster at a dizzying rate, though there ought to be a caveat pointing out that software gets less and less efficient in tandem. A neat summary of this situation is “Intel giveth, and Microsoft taketh away”. However, when it comes to software development the situation is very different. Sure, things have become more productive for developers over the years. My first job as a systems programmer involved coding IBM job control language (JCL) decks, which entertainingly behaved pretty much as though they were still on punch cards, with all kinds of quirks (like cunningly ignoring you if you continued a line too far to the right, beyond a certain column). I just missed Assembler and started with PL/1, but anyone who coded IBM’s ADF will be familiar enough with Assembler. However, it is not clear how much things have really moved on since then. In the 1980s “4GLs” were all the rage, but apart from not compiling and hence being slower to run, they were scarcely much of an advance on Cobol or PL/1. Then there were “CASE” tools like James Martin’s IEF, which promised to do away with coding altogether. Well, we all know what happened to those. Experienced programmers always knew that the key to productivity was to reuse bits of code that actually worked, long before object orientation came along and made this a little easier. Good programmers always had carefully structured code libraries to call on rather than repeating similar code by editing a copy and making minor changes, so I’m not convinced that productivity raced along that much due to OO either.

This is all anecdotal though – what about hard numbers? Software productivity can be measured in lines of code produced in a given time e.g. a day, though this measure has limitations, e.g. is more code really better (maybe implying less reuse), and anyway how do we compare different programming languages? A more objective attempt was to measure the number of function points per day or month. This had the big advantage of being independent of programming language, and also worked for packages – you could count the number of function points in SAP (if you had the patience). Unfortunately it requires some manual counting, and so has never really caught on widely beyond some diehards who worked in project office roles (like me). Well, we always used to reckon that 15-30 function points per man month was pretty much a good average for commercial programming, and when Shell actually measured such things back in the 1990s this turned out to be pretty true, almost irrespective of whether you were using a 3GL or a 4GL, or even a package. Shell Australia measured their SAP implementations carefully and found that the number of function points per man month delivered was no better (indeed a little worse) than for custom code, which was an unpopular political message at the time but was inconveniently true. Hence, while 3GL productivity definitely was an advance on Assembler, just about every advance since then has had a fairly marginal effect, i.e. programmer teams writing today are only slightly more productive than ones in 1986. By far the most important factor for productivity was the size of the project: big projects went slowly and small projects went quickly, and that was that.
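For anyone who has not lived through a function point counting exercise, the arithmetic behind that 15-30 function points per man month rule of thumb is simple enough to sketch. The project size below is an invented example; the productivity range is the one quoted above.

```python
# Illustrative only: back-of-envelope effort estimation from function points.
# The 15-30 FP per person-month range is the rule of thumb quoted in the text;
# the 2,000 FP system size is an invented example.
LOW_FP_PER_PERSON_MONTH = 15    # pessimistic productivity
HIGH_FP_PER_PERSON_MONTH = 30   # optimistic productivity

def effort_range(function_points: int):
    """Return (best case, worst case) effort in person-months."""
    best = function_points / HIGH_FP_PER_PERSON_MONTH
    worst = function_points / LOW_FP_PER_PERSON_MONTH
    return best, worst

best, worst = effort_range(2_000)
print(f"A 2,000 FP system: roughly {best:.0f} to {worst:.0f} person-months")
# Which end of the range you land at has historically had far more to do with
# project size and team dynamics than with the language or tooling chosen.
```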

A new book, “Dreaming in Code” by Scott Rosenberg, is a timely reminder of why this is. Many of the issues in writing a moderately complex application have little to do with individual programmer productivity and everything to do with human issues like a clear vision, good team communication, teamwork etc. All the faster processors and latest programmer tools in the world can only optimise one part of the software development process. Sadly, the human issues are still there to haunt us, having moved on not one jot. Scott discusses the Chandler open source project and its woes, reminding us that software productivity is only a little bit about technology, and a great deal about human nature.

When I was doing technology planning at Shell I always had a rule: if a problem was constrained by hardware then it would be fixed quicker than you expect, but if the problem was a software issue it would always take longer than you would think. This book tells you why that is not a bad rule.

Pandora’s box and hope for new CFOs

A recent management change at RadioShack, reported in CFO.com, shows just how important it is for CFOs to be able to produce accounts that they can confidently sign off in today’s stricter regulatory environment. An incoming CFO needs to feel absolutely certain that the books are in pristine shape, and may have to produce historical financial information from systems that he or she did not implement and is unfamiliar with. This is particularly true in the case of RadioShack, where the CFO has to deliver reports having been in the job for less than half the fiscal year. Often, a new CFO is faced with a Pandora’s box when they do peek inside the finance systems they have just inherited.

How confident can they be, given the serious consequences if something turns out to be awry? What if there are acquisitions which need to be speedily assimilated into the corporate structure, yet in reality it takes years to convert or replace the acquired company’s incompatible IT systems? When a new CFO opens up the lid on the financial systems on which they rely, what do they uncover? We are all familiar with those scenes in horror films where the victim opens the forbidden door, or the lid of the long-shut chest, and as the audience we think “oh no, don’t open that”. How confident can an incoming CFO be that they are not about to re-enact such a scene?

The unpleasant reality in most companies is that financial data resides in multiple systems, e.g. across a series of subsidiaries, or is in transition in the case of acquired companies. As I have written about elsewhere, getting a single and reliable view of corporate performance information can be a thorny problem. If you have to go back over time, as when changes occur to the structure of a chart of accounts or when major reorganisations happen, it is difficult to compare like with like. Moreover, CFOs need to understand the origin of the data on which they rely, with a full audit trail. This means that finance teams need to be active in defining and checking the business rules and processes from which data is derived in their company, and in how these processes are updated when changes occur. Relying on the ERP system to do all this is insufficient, since many of the business rules reside outside of these systems. This is why modern data warehousing and master data management software can help deliver clearer visibility. Ideally a CFO should be able to gather financial data together and view it from any perspective and at any level of detail – without having to standardise operational systems and business processes. The most intelligent software uses a model of the business, not the data or the supporting IT architecture, as its blueprint. Such a business model-driven approach insulates the company from change, since the reporting formats change immediately in response to changes in the business model. Using such intelligent software means that business change – such as re-organisations, new products, consolidation programs and de-mergers – should no longer be feared.
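To show what “business model-driven” means in the simplest possible terms, here is a toy roll-up where the organisation structure is held as data and the report follows it. This is a sketch of the idea only, not how any particular product is implemented; the units and figures are invented.

```python
# Illustrative only: a toy "business model-driven" roll-up. The reporting
# hierarchy is held as data (the model); the report simply rolls transactions up
# along it, so a reorganisation is a change to the model, not to the report code.
from collections import defaultdict

# The model: each unit points at its parent in the reporting hierarchy.
model = {
    "Retail UK": "Europe", "Retail FR": "Europe",
    "Europe": "Group", "Americas": "Group", "Group": None,
}

transactions = [("Retail UK", 120.0), ("Retail FR", 80.0), ("Americas", 200.0)]

def rollup(model: dict, transactions: list) -> dict:
    totals = defaultdict(float)
    for unit, amount in transactions:
        while unit is not None:          # credit the unit and every ancestor
            totals[unit] += amount
            unit = model[unit]
    return dict(totals)

print(rollup(model, transactions))       # "Group" reflects the whole hierarchy

# After a reorganisation only the model changes; the report logic is untouched.
model["Retail FR"] = "Americas"
print(rollup(model, transactions))
```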

Leading the organisation throughout change provides a real opportunity for the data-driven CFO to make his or her mark. By using the most modern technology available they can do this safely and without becoming a compliance victim. Good luck to all newly appointed CFOs!