Andy on Enterprise Software

The Price of Failure

July 31, 2007

I enjoyed an article by Madan Sheina on the failure of BI projects. 87% of BI projects fail to meet expectations, according to a survey by the UK National Computing Centre. I wish I could say this was a surprise, but it is not. Any IT project involves people and processes as well as technology, yet many projects focus almost entirely on the technology: tool choices, database performance and so on. In practice the issues which confound a BI project are rarely the technology itself. Projects fail to address actual customer needs, and frequently don’t acknowledge that there are several user constituencies out there with quite different requirements. Frequently a new technology is stuffed down the customer’s throat, and projects often neglect data quality at their peril.

From my experience, here are a few things that cause projects to go wrong.

1. Not addressing the true customer need. How much time does the project team spend with the people who are actually going to use the results of the BI project? Usually there is a subset of users who want flexible analytical tools, and others who just need a basic set of numbers once a month. A failure to realise this can alienate both main classes of user. Taking an iterative approach to project development rather than a waterfall approach is vital to a BI project.

2. Data is only useful if it is trusted, making data quality a key issue. Most data is in a shocking state in large companies, and the problems often come to light only when data is brought together and summarised (a simple profiling sketch follows this list). The BI project cannot just gloss over this, as the customers will quickly avoid using the shiny new system if they find they cannot trust the data within it. For this reason the project team needs to encourage the setting up of data governance processes to ensure that data quality improves. Such initiatives are often outside the project scope, are unfunded and require business buy-in, which is hard. The business people themselves often regard poor data quality as an IT problem when in fact it is an ownership and business process problem.

3. “Just one more new user interface” is not what the customer wants to hear. “Most are familiar with Excel and are not willing to change their business experience” was one quote from a customer in the article. Spot on! Why should a customer whose main job is, after all, not IT but something in the business, have to learn a different tool just to get access to data that he or she needs? Some tool vendors have done a good job of integrating with Excel, yet many others are in denial about this since they view their proprietary interface as a key competitive weapon against other vendors. Customers don’t care about this; they just want to get at the data they need to do their job in an easy and timely way. Hence a BI project should, if at all possible, look at ways of letting users get data into their familiar Excel rather than foisting new interfaces on them (see the second sketch after this list). A few analyst types will be prepared to learn a new tool, but this is only a small subset of the audience for a BI project, likely 10% or less.

4. Data being out of date, and the underlying warehouse being unable to respond to business change, is a regular problem. Traditional data warehouse designs are poor at dealing with change caused by reorganisations, acquisitions etc., and delays in responding to business change cause user distrust. Unchecked, this causes users to hire a contractor to get some “quick and dirty” answers into a spreadsheet and bypass the new system, causing the proliferation of data sources to continue. Using a packaged data warehouse that is good at dealing with business change is a good way of minimising this issue, yet even today most data warehouses are hand-built.

5. Training on a new application is frequently neglected in IT projects. Spend the time to sit down with busy users and explain to them how they are to access the data in the new system, and make sure that they fully understand how to use it. It is worth going to some trouble to train users one to one if you have to, since it only takes a few grumbling voices to sow the seeds of discontent about a new system. Training the end users is rarely seen as a priority for a project budget, yet it can make a world of difference to the likelihood of a project succeeding.

6. Running smaller projects sounds crass but can really help. Project management theory shows that project size is the single biggest predictor of success: put simply, small projects succeed far more often than big ones, and yet you still see USD 100 million “big bang” BI projects. Split the thing into phases, roll out by department and country, do just about anything to bring the project down to a manageable size. If your BI project has 50 people or more on it then you are already in trouble.

7. Developing a proper business case for a project and then going back later and doing a post-implementation review happens surprisingly rarely, yet can help shield the project from political ill winds.
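To make point 2 concrete: the kind of basic profiling that surfaces data quality problems early is cheap to do. Here is a minimal sketch in Python using pandas; the file and column names are hypothetical, purely for illustration.

```python
# Minimal data-profiling sketch (hypothetical file and column names).
# Checks the kind of basics that erode user trust when they go unexamined:
# missing values, duplicate keys, and figures that are impossible on business grounds.
import pandas as pd

df = pd.read_csv("customer_revenue.csv")  # hypothetical warehouse extract

# Share of missing values per column, worst first
print(df.isnull().mean().sort_values(ascending=False))

# Duplicate business keys, e.g. the same customer loaded twice
print("duplicate customer_id rows:", df.duplicated(subset=["customer_id"]).sum())

# Values that cannot be right
print("negative revenue rows:", (df["revenue"] < 0).sum())
```

And for point 3, meeting users in Excel can be as simple as pushing the monthly numbers into a workbook for them. Another minimal sketch, again with entirely hypothetical names, assuming the warehouse is reachable through a standard Python database driver:

```python
# Sketch: push a monthly summary straight into an Excel workbook
# rather than asking users to learn a new BI front-end.
# The connection and query are hypothetical stand-ins.
import sqlite3  # stand-in for whatever warehouse driver is actually in use
import pandas as pd

conn = sqlite3.connect("warehouse.db")  # hypothetical warehouse
summary = pd.read_sql_query(
    "SELECT region, SUM(revenue) AS revenue FROM sales GROUP BY region", conn
)
# Requires openpyxl; writes a file users can open in the tool they already know.
summary.to_excel("monthly_summary.xlsx", index=False)
```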

You will notice that not one of the above issues involves a choice of technology, technical performance or the mention of the word “appliance”. Yes, it is certainly important to pick the right tool for the job, to choose a sufficiently powerful database and server and to ensure adequate systems performance (which these days appliances can help with in the case of very large data volumes). The problem is that BI projects tend to gloss over the “soft” issues above and concentrate on the “hard” technical issues that those of us who work in IT feel comfortable with. Unfortunately there is no point in having a shiny new server and data warehouse if no one is using it.

Business Objects has a solid quarter

July 30, 2007

The quarterly results of Business Objects reflect the generally robust health of the business intelligence sector, displayed by several other vendors in recent months. Revenues in Q2 2007 were USD 350 million, and profits were up 26% on last year. License revenue, always a key measure for a software company, was up 19% (ignoring currency variations), which is a strong positive sign. On the downside, the large deals appear to be drying up, with just six deals over USD 1 million, half the number in the same quarter last year. On the other hand there were 154 deals between USD 200k and USD 1 million, which is up 36%. This suggests that customers are phasing things more gradually, and also points to the tendency towards saturation of the BI tools market that I have written of previously. There are likely to be fewer and fewer giant deals to go around, as most companies that want to make an enterprise purchase have already done so.

With USD 1 billion in cash lying around, Business Objects also has plenty of ammunition for further acquisitions should it decide to do so. This has been a smart strategy in my view, with acquisitions such as Cartesis having diversified Business Objects from the pure BI tools market, which seems to me to be one with limited growth potential and vulnerable to price pressure from Microsoft. Going “up market” is the way to go here.

The surprisingly fertile world of database innovation

July 24, 2007

I came across a thought-provoking article, an interview with Michael Stonebraker. As the inventor of Ingres, he is someone who knows a thing or two about databases, and I thought that some interesting points were raised. He essentially argues that advances in hardware have meant that specialist databases can out-perform the traditional ones in a series of particular situations, and that these situations are in themselves substantial markets that start-up database companies could attack. He singles out text, where relational databases have never prospered; fast streaming data feeds of the type seen on Wall Street; data warehouses; and specialist OLTP. With StreamBase he clearly has some first-hand experience of streaming data, and OLTP is what he is working on right now.

I must admit that, with my background in enterprise architecture at Shell, I underestimated how much of a market there has been for specialist databases, assuming that the innate conservatism of corporate buyers would make it very hard for specialist database vendors. Initially I was proved right, with attempts like Red Brick flickering briefly before being subsumed, while object databases were clearly not going to take off. With such false starts it was easy to extrapolate and assume that the relational vendors would simply win out and leave no room for innovation. However, to take the area of data warehousing, this has clearly not been the case. Teradata blazed the trail of a proprietary database superior in data warehouse performance to Oracle et al, and now Netezza and a host of smaller start-ups are themselves snapping at Teradata’s heels. The in-memory crowd are also doing well, with for example Qliktech now the fastest growing BI vendor by a long way, thanks to its in-memory database approach. Certainly Stonebraker is right about text: companies like Fast and their competitors would not dream of using relational databases to build their text search applications, an area where Oracle et al never really got it right at all.

Overall there seems to be a surprising amount of innovation in what at first glance looks like an area which is essentially mature, dominated by three big vendors: Oracle, IBM, Microsoft. Teradata has shown that you can build a billion dollar revenue company in the teeth of such entrenched competition, and the recent developments mentioned above suggest that this area is far from being done and dusted from an innovation viewpoint.

Informatica looks perky

July 23, 2007

Informatica announced an excellent set of quarterly results, demonstrating continuing rude health. Revenue of $94M was a spanking 17% up on the same time last year. License revenue was up 15% at $41M, so the improvement was more than just good services revenue. There were eight deals over $1 million compared to nine last time, but deals over $300k were massively up, with 35 compared to just 9 a year ago. There was also a major OEM deal, with SAP now going to OEM Informatica, a rare exception to its usual “not invented here” attitude. This is a good move for both parties.

The results were broad-based, with Informatica’s international operations doing particularly well. These results are also a sign of continuing good conditions in the broader BI market. When ETL prospers, data warehouses and BI tools are not far behind.

Netezza heads to market

July 20, 2007

The forthcoming Netezza IPO will be closely watched by those interested in the health of the technology space, and the business intelligence market in particular. Netezza has been a great success story in the data warehouse market. Since its founding in 2000 its revenues have risen dramatically: from $13M in fiscal 2004 to around $30M in 2005, $54M in 2006 and $79.6M in the fiscal year ending January 2007 (its fiscal year ends in January). Its revenues in the quarter ending April 2007 were $25M. Hardly any BI vendors can claim this kind of growth rate (other than Qliktech), especially at this scale. Its customer base is nicely spread amongst industries and is not restricted to the obvious retail, telco and retail banking. So, is this the next great software (actually partly hardware in this case) success story?

Before you get too excited, there are some things to ponder. Note that in 2006 Netezza lost $8M despite that steepling revenue rise. In the latest quarter it still lost $1.6M. This is interesting, since conventional wisdom has it that you can only IPO these days with a few quarters of solid profits, yet Netezza has yet to make a dime. Certainly, it would be fair to assume that if it can keep growing at this rate, profit will surely come (at least its losses are shrinking), but the past has shown that profits can be elusive in fast growing software companies. Also, the data warehouse market is certainly healthy, advancing at 9% or so according to IDC projections, but this is well below Netezza’s growth rate. More particularly, Netezza only attacks one slice of the data warehouse market, the high data volume one. If you have a small data warehouse then you don’t need Netezza, so only certain industries will really be happy hunting grounds for appliances like Netezza. This can be seen in the story of Teradata, which is Netezza’s true competitor. Teradata has stalled at around $1 billion or so of revenue, growing just 6% last year (of course most of us wish we had this kind of problem). Certainly Netezza can attack Teradata’s installed base, but enterprise buyers are notoriously conservative, and will have to be dragged kicking and screaming to shift platforms once operational. This suggests to me that there is a ceiling to the appliance market. If true, it means that you cannot simply extrapolate Netezza’s current superb revenue growth. I have not seen this written about elsewhere, so perhaps it is just a figment of my imagination, and Netezza will prove me wrong. However you can look to Teradata to see that even it has entirely failed to enter certain industries, typically business-to-business industries where data is complex rather than high in volume. For example there is scarcely a Teradata installation in the oil industry, which fits this category of complex but mostly low volume data (except for certain upstream data).

So, bearing this in mind, what would be a fair valuation? Well, solid companies like DataMirror are changing hands for 3x revenue or so, though these are companies with merely steady growth rather than the turbo-charged growth demonstrated by Netezza. So suppose we skip the pesky profitability question, accept that this is a premium company and go for five times revenues? This would lead to a valuation of $400M on trailing revenues, maybe $500M on this year’s likely revenues. Yet the offer price of the shares implies a market cap of $621M, virtually eight times trailing revenues, and six times likely forward revenues.
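As a back-of-the-envelope check of the growth rates and multiples above (a sketch only; the forward revenue figure is my rough assumption, implied by the post rather than stated anywhere):

```python
# Netezza back-of-envelope: growth rates and valuation multiples (all in $M).
revenues = {2004: 13.0, 2005: 30.0, 2006: 54.0, 2007: 79.6}  # fiscal years ending January
years = sorted(revenues)
for prev, curr in zip(years, years[1:]):
    growth = revenues[curr] / revenues[prev] - 1
    print(f"FY{curr}: {growth:.0%} growth")   # roughly 131%, 80%, 47%

trailing = revenues[2007]
forward = 100.0     # assumed "this year's likely revenues", since 5x of it is ~$500M
market_cap = 621.0  # implied by the offer price

print(f"5x trailing = ${5 * trailing:.0f}M")                       # ~$400M
print(f"Implied trailing multiple: {market_cap / trailing:.1f}x")  # ~7.8x, 'virtually eight'
print(f"Implied forward multiple:  {market_cap / forward:.1f}x")   # ~6.2x
```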

This is scarcely a bargain then, though it is a multiple that will bring joy to the faces of other BI vendors, assuming that the IPO goes well. Of course such things are generally carefully judged, and no doubt the silver-tongued investment bankers have gauged that they can sell shares at this price. However for me there remains a nagging doubt, based mainly on what I perceive to be an effective cap on the market size that appliances can tackle, and to a lesser extent on the lack of proven ability to generate profits. The markets will decide.

The performance of Netezza shares will be a very interesting indicator of the capital market’s view on BI vendors, and will show whether enterprise technology is coming in from the cold winter that started in 2001. Anyway, many congratulations to Netezza, who have succeeded in carving out a real success story in the furrow that for so long was owned by Teradata.

Postscript. On the first day of trading, no one seems troubled about any long-term concerns.

Gazing Behind the Data Mirror

July 18, 2007

I have been digging a little deeper into the DataMirror purchase by IBM that I wrote about yesterday.

It’s a good deal for IBM, and not only because the price was quite fair. With its Ascential acquisition IBM positioned itself directly against Informatica, yet Ascential’s technology did not have the serious real-time replication that is important for the future of ETL, and this is what DataMirror does have. DataMirror gives IBM a working product with heterogeneous data source support in real time, an important piece of the puzzle in achieving its vision for real-time operational BI and event-awareness.

A bigger question is whether IBM fully understands what it has bought and whether it will properly exploit it. DataMirror’s strengths were modest pricing, low-impact installation, neutrality across the sources it supports, and performance in doing this (via its log-scraping abilities and the speed with which it applies changes). IBM must keep its eye on the development ball to ensure these aspects of the DataMirror technology are continued if it is to really exploit its purchase. On the last point, for example, the partnerships DataMirror has with Teradata, Netezza and Oracle should be continued, despite the obvious temptation to snub rivals Oracle and Teradata.

Any acquisition creates uncertainty amongst staff, and IBM needs to move beyond motherhood reassurances to show staff that it understands the DataMirror technology and business and wants to see it thrive and grow. It needs to explain how the DataMirror technology fits within a broader vision for real-time integration, in combination with traditional batch-oriented ETL, business intelligence and enterprise service bus (not just MQSeries) integration, or else the critical technical and visionary people will dust off their resumes and start looking elsewhere.

I gather that IBM has already announced an internal town hall meeting next week, at which it needs to convince key technical staff that they have a bright future within the IBM family. I also hear that no hiring freeze has been imposed, which implies a decision to grow the business and should reassure people. IBM is an experienced acquirer which will recognise that the true IP of a company is not in libraries of code but in the heads of a limited number of individuals, and no doubt will recognise the need to retain and motivate critical staff. It used to be poor at this (think of the brilliant technology it acquired when it bought Metaphor many years ago, only to bungle the follow-up) but has got smarter in recent years; for example, I hear from DWL people that they have been treated well.

Hopefully IBM’s more recent and happier acquisition experiences will be repeated here.

Mirror, mirror on the wall, who is most blue of them all?

July 17, 2007

On Monday IBM announced it would buy DataMirror, a Canadian software company. DataMirror made its living by selling software that detects changes in data sources and then manages their replication. It differed from other ETL technology in being designed from the ground up to work in real time rather than batch, which made it well suited to some customer situations, and the software was modestly priced. The technology was also used by some customers for backup and business continuity reasons. It had a large customer base (well over 2,000 customers).
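For readers unfamiliar with the approach, the essential difference from batch ETL is that changes are picked off the source’s transaction log as they happen and applied to the target continuously. Here is a toy sketch of that loop, with entirely hypothetical names (DataMirror’s actual implementation is of course far more sophisticated):

```python
# Toy illustration of log-based change data capture (CDC): instead of
# re-extracting whole tables on a batch schedule, tail the source's change
# log and apply each insert/update/delete to the target as it arrives.
# All names here are hypothetical.
import time

def read_change_log(since):
    """Stand-in for scraping the source database's transaction log.
    Returns a list of (sequence_no, operation, row) tuples after 'since'."""
    return []  # a real implementation would read the DBMS log here

def apply_change(target, operation, row):
    """Replay one logged change against the target store."""
    key = row["id"]
    if operation in ("insert", "update"):
        target[key] = row
    elif operation == "delete":
        target.pop(key, None)

def replicate(target):
    last_seq = 0
    while True:
        for seq, op, row in read_change_log(since=last_seq):
            apply_change(target, op, row)
            last_seq = seq  # checkpoint so a restart resumes cleanly
        time.sleep(1)       # near real time, rather than overnight batch windows
```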

For IBM the acquisition adds some solid technology to its data warehouse offering and its “on demand” strategy, in this case replacing PowerPoint promises with something that actually works. DataMirror was publicly traded on the Toronto stock exchange. It did $46.5 million in revenue last year and was hoping for $55 million in fiscal year 2008, so this was a company delivering solid though unspectacular growth, though its share price had doubled in the last twelve months. IBM’s price of $162 million is over three times trailing revenues, a healthy valuation for the company and a small premium to its stock market valuation of last week.

Open source data modelling

July 13, 2007

The open source movement continues to ripple into the business intelligence field. Now you can get hold of a data modelling tool that is open source rather than having to buy Erwin, thanks to a Canadian consulting firm called SQL Power Group. I am not familiar with this company, but for a consulting firm this seems a sensible move, since they would not be maintaining the tool as a proper software product anyway, but adapting it to each client’s needs on site. By making it open source they gain publicity, and may encourage others to improve a tool in which they have skills. So now you can have an open source data modelling tool, a database (MySQL), and assorted ETL and BI tools (Pentaho, Greenplum, Jaspersoft etc.).

The penetration of open source is a matter of some debate. While Aberdeen Group reckons 18% of firms are now using open source BI tools, it is less clear what the level of penetration actually is within companies. It is one thing to have a small departmental pilot running, another to commit wholeheartedly to open source tools on an enterprise-wide scale. Clearly it would be a brave company that went aggressively down this path, as you are essentially taking on a major customisation and support project. Certainly you will save some licence fees, and that is not to be sneezed at, but it is less clear how the customisation costs trade off against those savings. I suspect that in the short term, at least, the main effect will be for enterprises to use these tools as a stick to beat traditional reporting vendors with when it comes to price negotiations. This will certainly have negative consequences for the profitability of the likes of Business Objects and Cognos if the movement really takes hold and becomes a credible threat. At present I have not really seen this happening in my own experience.

I would invite anyone who has direct experience of an open source BI project to comment here on their experiences, good or bad. I could be wrong, but I am not expecting a blizzard of replies, despite the emerging interest. Interest is not the same as deployment.

Inappropriate marketing

July 5, 2007

SAP scored an own goal this week when it admitted to hacking into an Oracle website and stealing code. This is pretty bizarre behaviour in plenty of ways, but I like the way that their PR guys, desperate to find some way of spinning this, described the theft of software as “inappropriate downloads”, making it sound as if this act was in a similar category to a rogue employee downloading a favourite music track to his laptop.

I am always impressed (and sometimes amused) by the software industry’s creative marketing ability, but usually this happens when a software product’s features are exaggerated a tad, and I feel this is taking PR spin to a new level. What next? By the same logic, perhaps defendants in murder trials should get their defence lawyers to rebrand the charges as “inappropriate demise”?

The evidence mounts

July 2, 2007

The annual IDC business intelligence report shows a reassuring 11% rise in overall BI tool revenues in 2006. Regular readers of my blog will know a couple of my long-term viewpoints: (a) that Microsoft is the vendor with the best long-term position, due to its ownership of Excel, which is still the BI front-end that end-users actually want, and (b) that specialist BI tools will never, contrary to many BI vendor projections, have a place on every desktop, for the rather dull reason that most people do not need one.

Is there any evidence for these hypotheses? Well, Microsoft’s BI revenues grew 28%, while the major specialist BI vendors grew by 7% (Business Objects) and 9.8% (Cognos). As IDC analyst Dan Vesset notes, “IDC does not yet see a substantial impact on the market from the strategy and marketing messages of most BI vendors seeking to reach a broader user base”. Nor will it ever, in my opinion.

Also of note is the formidable performance of Qliktech, which grew by a little matter of 97% to USD 43.6 million in revenue. Its mid-market offering based on in-memory search technology continues to get considerable customer traction. I have a soft spot for this company, having been asked to look at it as an investment for a venture capital firm when it was still a tiny company; I am very relieved that I recommended they invest, as otherwise I imagine they would be hunting me down like a dog right now.