Money no object

Business Objects posted a strong quarter, rebounding from a weak Q2. The key metric of software licence revenue came in at USD 132 million, up 10% on the quarter and 9% year over year. There were nine deals over USD 1 million. Operating margin was a healthy 17%, and overall revenue was USD 311 million.

About the only party pooper in the figures emerges if you dig in: the licence revenue growth came mainly from enterprise performance management and enterprise information management, both heavily influenced by acquisitions. The core business intelligence licences (the bulk of the business) actually shrank by 1% year over year.

The Americas was the star region, up 27%, with Europe up 9% (just 3% if you strip out currency effects) and Asia Pacific up 19%. Someone I spoke to at Business Objects in the US this week said that the mood on the ground was very positive, with lots of hiring going on.

Business Objects has long been known for its strong sales and marketing, and this engine seems to be purring along well at present.

Informatica counts the profits

Informatica is about the last pure play ETL/integration player left standing now that Ascential is part of IBM and even little Sunopsis has disappeared into Oracle’s maw.  Hence it is interesting to see their progress as they essentially try to buck the trend that says that data integration technology should best reside in the database.  Informatica has of course moved beyond just ETL into more general integration, and has real time capabilities now that bump it up against EAI vendors like Tibco and WebMethods as well as against other ETL offerings.

This quarter's results were fairly healthy, with profits in particular doing very well at a net margin of 16%. This was based on some cost cutting and good renewal rates, since licence revenues of USD 33.6 million were less than Wall Street expected, though still a healthy 19% increase over last year. Maintenance revenue rose 25%. The generally difficult market for enterprise software is revealed by the fact that just four deals of over USD 1 million took place compared with nine last quarter, and 27 deals were over USD 300k in size (compared with 33 last quarter).

Still, overall Informatica can be pleased with these results, especially now that it seems to have got into the habit of making a profit, something it has historically struggled with (its five year average net margin is a negative 11%).


BI on demand

I wrote recently about the emergence of software as a service as one of the few bright spots in enterprise software at present. With perfect timing, today a vendor came along and announced a software as a service offering in the business intelligence field. Celequest is a start-up and it is certainly early days to see how well this idea takes off, but it is an interesting development. Celequest has the credibility of being run by Diaz Nesamoney, who was a founder of Informatica, and is backed by VCs Bay Partners and Lightspeed Ventures, who both have long track records. The company was set up in 2002, and has some good customers like Citigroup, Cendant and Brocade, though it is not clear from the company's website what scale these deployments are. The application covers dashboards, analytics and data integration technology. As far as I am aware the company uses an in-memory database "appliance", though from what I can gather the volume of data dealt with by this application so far is modest. However this is not the point, and no doubt volumes will increase over time as the concept gains acceptance. Celequest has made an astute partnership with Salesforce.com, with a bridge to AppExchange. There is also a connector to SAP.

Certainly, there are barriers to the widespread acceptance of this approach. Large enterprises will naturally be conservative about the idea of letting their data outside the corporate firewall, particularly when it is key performance data of the type that BI applications use. It is also unclear what sort of scale issues come into play when data is being accessed from beyond the corporate network. However, for many companies, and especially SMEs, such issues will seem less important than the convenience of being able to deploy a business intelligence solution without the usual hassle of complex software installation and an army of systems integrators. No doubt where Celequest has begun to tread others will follow, and this will be a healthy new area of competition in the business intelligence industry.





To be or not to be

I spent the early part of this week at ETRE, an unusual IT conference that has been running since 1990. Organised by ex-journalist Alex Vieux, the conference is for technology CEOs and founders, general partners at VCs and the usual hangers-on, rather than for customers. It moves to a different European city each year, and this year attracted about 500 people. The conference is notable for two consistent things: the very high quality of the people attending (Bill Gates used to be a regular) and the utter inability of the "organisers" to keep to schedule. This year, it has to be said, the overruns were of more manageable proportions than usual, and indeed the opening session only started 14 minutes late. John Thompson of Symantec and Niklas Zennström of Skype were the star names this year. Skype now constitutes 7% of all long distance calls, which does rather make one wonder at what point the phone companies who generously provide the infrastructure will send the boys round to collect some money from eBay.

The "future of enterprise software" session was definitely the odd one out, since the future was clearly all about social networks, at least in the eyes of investors. There is a sort of Dragon's Den session called "meet the money" where early stage companies pitch to a panel of VCs, and this year the company (I couldn't make that up), a sort of MySpace wannabe for mobile phones whose business cards feature a voluptuous fantasy woman, had the VCs lining up to throw money at it. By contrast a shipping ecommerce company, which had been around a few years, had several million in revenues and was profitable, could not catch a cold, never mind funding. Perhaps a social networking site for melancholy enterprise software executives? No takers? Oh well.

What interest there was around enterprise software was confined to "software as a service" companies. RightNow has now reached USD 100 million in revenue, joining a rarefied club, and certainly there seem to be a few other early stage companies branching out into software as a service for things like HR and ERP. Given that as much as 80% of technical problems with software are to do with the client environment (often some odd combination of software versions that the vendor had not, or could not, test), the model certainly makes a lot of sense. The drawback is that the rental model that usually goes with it means relatively slow growth, though the recurring revenue generated certainly means fewer sleepless nights near the end of a quarter for software executives.

One of the few enterprise areas prospering was security, where there was a general consensus that the hackers and spammers are comfortably winning the war. I was impressed with a company called BitDefender, a Romanian security software firm that has grown since its launch in 2001 to USD 60 million in revenues. This has all been done without a dollar of venture capital (there are not too many VC conferences in Bucharest).

The conference's lack of organisational skills remains legendary. They denied all knowledge of my booking until I produced the bank transfer details, though they at least seemed embarrassed when I pointed out that they had done exactly the same thing last year. The conference check-in was a procession of people with lost reservations and people who had booked airport transfers that never arrived. To be fair, a very helpful gentleman called Farley Duvall did a bang-up job of sorting out my misbehaving video presentation, and the Red Herring people always seem to cope with problems with willingness and good humour. Perhaps they just need some German organisers.

With its baffling inability to stick to a schedule, ETRE remains something of an enigma.  Attendees wonder aloud whether it is worth the high cost, and yet each year they come back as there is nowhere else quite like this for networking.  If your company is not there, what does this say about you?  With other conferences very much in decline (even Esther Dyson’s US conference just bit the dust) you certainly have to give a lot of credit to Alex Vieux and his team for managing to attract a healthy turnout of people back every year.  Not many tech conferences can claim a 17 year unbroken heritage.


Oracle buys Sunopsis

It has just been announced that Oracle has bought Sunopsis, one of the few remaining independent ETL vendors. Since Oracle's existing ETL tool (the rather inaccurately named "Data Warehouse Builder") is pretty weak, this makes a lot of sense for Oracle. I suspect that their statement about "integrating" the two tools will involve much use of the delete key for the Warehouse Builder code. Sunopsis is a good product from a French company that had been around for some time but had recently made more visible market progress in the US. No numbers are public, but my information is that Sunopsis revenues were about USD 10M and the purchase price just over USD 50M; a price/sales ratio of over five is quite a healthy price for the company. Sunopsis was 80% owned by its founder, who had spurned venture capital, so this is very good personal news for him too.

Sunopsis made a virtue of using the DBMS functions where possible rather than re-inventing transformation code, so is particularly compatible with Oracle (or other relational databases). This deal should also put paid to the loose marketing relationship Oracle had with Informatica. 
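The "use the DBMS rather than re-invent transformation code" approach can be sketched in miniature: the tool generates set-based SQL so the transformation runs inside the target database, not in a separate engine. This is purely illustrative (table names and figures are invented, and sqlite3 stands in for a real warehouse; it is not Sunopsis code):

```python
import sqlite3

# Toy staging and warehouse tables; all names and values are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (order_id INTEGER, amount_cents INTEGER, country TEXT);
    CREATE TABLE dw_orders (order_id INTEGER, amount REAL, country TEXT);
    INSERT INTO staging_orders VALUES (1, 1250, 'fr'), (2, 990, 'us');
""")

# The whole transform (unit conversion, case normalisation) is pushed down
# as a single set-based SQL statement that the database engine executes.
conn.execute("""
    INSERT INTO dw_orders (order_id, amount, country)
    SELECT order_id, amount_cents / 100.0, UPPER(country)
    FROM staging_orders
""")

print(conn.execute("SELECT * FROM dw_orders ORDER BY order_id").fetchall())
# [(1, 12.5, 'FR'), (2, 9.9, 'US')]
```

The point of the design is that the database already knows how to do joins, aggregates and functions efficiently, so generating SQL avoids shipping all the rows out to a transformation engine and back.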

In my view this is a rare case where the deal is good for both companies.  Oracle finally gets a decent ETL capability and Sunopsis gets Oracle’s massive sales channel. 

Between the sheets

Earlier this year I spoke at a conference in Madrid where Ventana Research unveiled research showing, amongst other things, that companies that rely heavily on spreadsheets take several days longer to close their financial books each month than those that do not. This makes sense. Excel is the analyst tool of choice, but its very ease of use presents issues. It is not easy to document well, and when I was at Shell we had a whole team providing spreadsheet auditing and design services. For some of the very complex financial or other models that end up being developed, it turns out to be very difficult to make sense of a model when its author moves on.

In the IT world we are used to dealing with support issues and, at least in theory, have plenty of experience with documentation standards and debugging tools. As we all know, even in mature systems the documentation can be a nightmare, but imagine how much worse it is when you are asked to take over a thousand-line Excel spreadsheet where all the cell formulae use the default grid references, e.g. "=SUM(C3:C27)". Instead you can use the facilities in Excel to assign meaningful names to cells, so this would become something like "=SUM(expenses)", which is a lot easier to figure out; but how many people do this? Indeed, when audits of spreadsheet models have been carried out, errors were frequently found, which is worrying given the kinds of decisions being taken that rely on these models.

An article this week explains how eXtensible Business Reporting Language (XBRL) offers the prospect of some relief, since it defines tags which are independent of cell location. By separating the definition of the data from its cell-specific location it becomes easier to keep track of things, and easier to combine worksheets from multiple sources. Whilst this is a welcome development, I suspect that there is a long, uphill climb involved, since the problem comes down to people rather than technology.

It took a long hard struggle to get programmers to (sometimes) document things properly, and I cannot see most finance or other end users really caring enough. It is always quicker to use cell references rather than proper names, so it will always be tempting to do so and not worry too much that your pretty model is almost incomprehensible to anyone else.
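The readability gap between grid references and named ranges can be illustrated outside Excel with a toy worksheet model (plain Python with invented values; this shows the idea, not Excel's actual object model):

```python
# Toy model of a worksheet: cell reference -> value (values invented).
cells = {f"C{row}": 100 + row for row in range(3, 28)}   # C3..C27

# Grid-reference style: the reader must already know what C3:C27 holds.
total_by_ref = sum(cells[f"C{r}"] for r in range(3, 28))

# Named-range style: one definition gives the range a meaningful name,
# and any "formula" using it then documents itself.
named_ranges = {"expenses": [f"C{r}" for r in range(3, 28)]}
total_by_name = sum(cells[c] for c in named_ranges["expenses"])

assert total_by_ref == total_by_name
print(total_by_name)   # 2875
```

Both totals are identical; the only difference is that six months later a stranger can tell what "expenses" means, whereas "C3:C27" tells them nothing.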

It will be interesting to see whether regulators take a firmer view of things over time, since in my experience the quality of spreadsheet models is distinctly patchy.  Insisting on proper audits of spreadsheets used for serious purposes (e.g. statutory accounts, investment decisions) would be a start.  I suspect that most companies have little idea of just what a can of worms they are relying on for the numbers on which they make their decisions. 

Opening the pricing box

The open source movement is creeping into BI in various guises, as pointed out by Jacqueline Emigh. However, while Linux is undoubtedly taking a lot of market share from proprietary UNIX variants, progress higher up the stack is less clear. The article mentions a number of organisations that provide some form of open source reporting tools, e.g. Pentaho, Greenplum and Jaspersoft, and indeed there are others besides. However, it is by no means clear what penetration these are really getting. It was noticeable that one of the two customer examples reported merely had a database running on Linux, but had yet to deploy open source reporting tools.

The article unfortunately loses credibility when it cites an example of the savings to be made: "At pricing of $1,000 per user seat, for example, a company with 16,000 employees would need to pay $160,000 for a full-fledged BI deployment, Everett hypothesized." Hmm. It is some time since I did my mathematics degree, but I am thinking that 16,000 * 1,000 = 16,000,000, i.e. 16 million dollars, not $160,000. Even if you are kind and assume that a major discount could be obtained for such a large deployment, an unlikely 90% discount to list would still get you to USD 1.6 million. I doubt that Dan Everett at the entirely respectable research company Ventana would really have made such a dramatic numerical blunder, so perhaps it was a journalistic error. Such carelessness does make one wonder about the accuracy of the rest of the article, which is a pity since it is discussing an interesting trend.
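The arithmetic is easy to check (seat count and list price as quoted in the article; the 90% discount scenario is my hypothetical):

```python
# Figures from the quoted example: 16,000 seats at USD 1,000 list per seat.
seats = 16_000
list_price_usd = 1_000

total = seats * list_price_usd
print(total)               # 16000000 -> USD 16 million, not USD 160,000

# Even a (very unlikely) 90% discount to list leaves a substantial bill.
discounted = int(total * 0.10)
print(discounted)          # 1600000 -> USD 1.6 million
```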

I have still yet to come across significant deployments of open source reporting tools in production applications, but presumably they will catch on to a certain extent, just as MySQL is making steady inroads into the database market. Perhaps the most significant point at this stage is not made by the article, though. The very existence of open source reporting tools puts pricing pressure on the established BI vendors. Procurement people doing large deals with BI vendors will treat the open source BI movement as manna from heaven, since it gives them a stick with which to beat down the price of reporting tools from the major vendors. Anyone about to engage in a major BI deployment or negotiation would be well advised to look carefully at these tools, if only as weapons in the armoury against pushy software salesmen. This is further bad news for the BI vendors, who have enough to worry about with the push of Microsoft into their space and the general saturation of the market. In this case even a handful of customer deployments will suffice to send a shiver down the spine of the major vendors.



In the project jungle, your MDM initiative needs claws

Matthew Beyer makes a start on coming up with an approach to tackling master data initiatives. Some of what he says makes good sense, such as "think strategically but act tactically". However, I'd like to suggest to him a different way to prioritise. The biggest problem with master data is one of scale. Enterprises have a lot of systems and many types of master data, often far more than the "10 systems" used as an illustration in the article. To give a sense of the magnitude of the problem in a large company, just one Shell subsidiary had 175 interfaces left AFTER it had implemented every module of SAP. Hence an approach that says "just map all the master data in the enterprise and catalogue which systems use each type of data" is going to be a severely lengthy process, which will probably get cancelled after a few months when little is to be shown for all the pretty diagrams.

I believe that a master data initiative needs to justify itself, just like any other project that is fighting for the enterprise's scarce resources and capital. Hence a good approach is to start by identifying problems that may be associated with master data, and putting a price tag on them. For example, poor customer data could result in duplicate marketing costs, lower customer satisfaction, or misplaced deliveries. Being unable to get a view of supplier spend across the enterprise (as 68% of respondents admitted in one survey at a 2006 UK procurement conference) will have a cost in terms of not being able to get an optimal deal with suppliers, and in duplicate suppliers. These things have real costs associated with them, and so, if fixed, have real hard dollar benefits. Interviewing executives in marketing, procurement, finance, operations etc will soon start to tease out which operational issues are actually causing the business pain, and which have the greatest potential value if they could be fixed. Business people may not be able to put a precise price tag on each problem, but they should be able to estimate at least a range. If they cannot, then it is probably not that pressing a problem and you can move on to the next one.

At the end of such an interview process you will have a series of business problems with estimates of potential savings, and can map this against the master data associated with these business processes.  Now you have a basis for priority.  If it turns out that there are tens of millions of dollars of savings to be gained from fixing problems with (say) supplier data, then that is a very good place to start your MDM pilot.
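The prioritisation step can be sketched as a toy calculation over the interview output (every problem, domain and figure below is invented purely for illustration):

```python
# Hypothetical interview output: each business problem, the master data
# domain it touches, and a low/high savings estimate in USD millions.
problems = [
    {"problem": "no view of supplier spend", "domain": "supplier",
     "low_usd_m": 10, "high_usd_m": 30},
    {"problem": "misplaced deliveries", "domain": "customer",
     "low_usd_m": 2, "high_usd_m": 5},
    {"problem": "duplicate marketing mailings", "domain": "customer",
     "low_usd_m": 1, "high_usd_m": 3},
]

# Aggregate the midpoint of each estimate per master data domain, then
# rank: the top domain is the natural candidate for the MDM pilot.
savings = {}
for p in problems:
    midpoint = (p["low_usd_m"] + p["high_usd_m"]) / 2
    savings[p["domain"]] = savings.get(p["domain"], 0) + midpoint

ranked = sorted(savings.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)   # [('supplier', 20.0), ('customer', 5.5)]
```

With these invented numbers, supplier data would be the obvious place to start the pilot; the mechanics matter less than the principle that ranges, not false precision, are enough to establish priority.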

Such an approach ensures that you will be able to put a business case together for an MDM initiative, even if it has limited scope at first. Such an initiative has a much better chance of approval and ongoing survival than something that is perceived to be a purist or IT-led data modelling initiative.

Provided that you adopt an architecture that can cope with master data in general, and not just one type specifically (i.e. try to avoid "hubs" that address only one type of master data), you can build on the early success of a pilot project confident that the approach you have taken will be useful across the enterprise. By getting an early quick win in this way you build credibility for follow-on projects, and can start to justify ongoing investment in protecting the integrity of master data in the future, e.g. by setting up a business-led information asset competence centre where ownership of data is clearly defined.

IT projects of any kind that fail to go through a rigorous cost-benefit case risk not being signed off, and then being cancelled part way through.  The race for funds and resources in a large company is a Darwinian one, so equip your MDM project with the ROI teeth and claws it needs to survive and justify itself.  When times turn sour and the CFO draws up a list of projects to “postpone”, a strong business-driven ROI case will go a long way to ensuring your MDM project claws its way to the top of the heap.