Andy on Enterprise Software

Trick or treat

October 31, 2006

I’m not sure who had the idea of holding a data quality conference on Halloween, but it was either a lucky coincidence or a truly inspired piece of scheduling.  DAMA ran today in London, and continues tomorrow.  This also fits with the seasonal festival, which originally was a Celtic festival over two days when the real world and that of ghosts overlapped temporarily.  Later the Christian church tried to claim it as their own by calling November 1st All Hallows Day, with 31st October being All Hallows Eve, which in the fullness of time became Halloween.  I will resist the temptation to point out the deterioration in data quality over time that this name change illustrates.  The conference is held at the modern Victoria Park Plaza hotel, which is that rare thing in London: a venue that seems vaguely aware of technology.  It is rumoured that there is even wireless access here, but perhaps that is just the ghosts whispering.

The usual otherworldly spirits were out on this day: the ghouls of the conference circuit were presenting (such as me), while scary topics like data architecture, metadata repositories and data models had their outings.  The master data management monster seemed to be making a bid to take over the conference, with assorted data quality and other vendors who had never heard the phrase a year ago confidently asserting their MDM credentials.  You’d have to be a zombie to fall for this, surely?  In at least one pitch I heard a truly contorted segue from classic data quality issues into MDM, with a hastily added slide basically saying “and all this name and address matching stuff is really what MDM is about, anyway”.  Come on guys, if you are going to try to pep up your data profiling tool with an MDM spin, at least try and do a little research.  One vendor gave a convincing-looking slide about a new real-time data quality tool which I know for a fact has no live customers, but then such horrors are nothing new in the software industry.

The conference itself was quite well attended, with about 170 proper customers, plus the usual hangers-on.  Several of the speaker sessions across the conference do feature genuine experts in their field, so it seems the conference organisers have managed to minimise the witches’ brew of barely disguised sales pitches by software sales VPs masquerading as independent “experts” that all too often pack conference agendas these days.

Just as it seems that the allure of ghosts is undiminished even in our modern age, so the problems around the age-old issue of data quality seem as spritely (sorry, I couldn’t resist that one) as ever.  New technologies appear, but data quality in large corporations seems to be largely impervious to technical solutions.  It is a curious thing: given that data quality problems are very, very real, why does no one seem able to make any real money in this market?  Trillium is the market leader, and although it is no longer entirely clear what their revenues are, about USD 50M is what I see in my crystal ball.  Other independent data quality vendors now swallowed by larger players had revenues in the sub USD 10M range when they were bought (Dataflux, Similarity Systems, Vality).  First Logic was bigger at around USD 50M, but the company went for a song (the USD 65M price tag gives a revenue multiple no-one will be celebrating).  Perhaps the newer generation of data quality vendors will have more luck.  Certainly the business problem is as monstrous as ever.

I am posting this just on the stroke of midnight.  Happy Halloween!

 

Money no object

October 26, 2006

Business Objects posted a strong quarter, rebounding from a weak Q2.  The key metric, software license revenue, came in at USD 132M, up 10%, and 9% year over year.  There were nine deals over USD 1 million.  Operating margin is a healthy 17%.  Overall revenue was USD 311 million.

About the only party pooper in the figures appears if you dig in and see that the licence revenue growth came mainly from enterprise performance management and enterprise information management, which are heavily influenced by acquisitions.  The core business intelligence licences (the bulk of the business) actually shrank by 1% year over year.

The Americas was the star area, up 27%, with Europe up 9% (just 3% if you strip out currency effects) and Asia Pacific 19%.  Someone I spoke to at Business Objects in the US this week said that the mood on the ground was very positive, with lots of hiring going on.

Business Objects has also long been known for its strong sales and marketing, and this engine seems to be purring along well at present.

Informatica counts the profits

October 20, 2006

Informatica is about the last pure play ETL/integration player left standing now that Ascential is part of IBM and even little Sunopsis has disappeared into Oracle’s maw.  Hence it is interesting to see their progress as they essentially try to buck the trend that says that data integration technology should best reside in the database.  Informatica has of course moved beyond just ETL into more general integration, and has real time capabilities now that bump it up against EAI vendors like Tibco and WebMethods as well as against other ETL offerings.

This quarter’s results were fairly healthy, with profits in particular doing very well at a net margin of 16%.  This was based on some cost cutting and good renewal rates, since license revenue of USD 33.6 million came in below Wall Street expectations, though still a healthy 19% increase over last year.  Maintenance revenue rose 25%.  The generally difficult market for enterprise software is revealed in the fact that just four deals of over USD 1 million took place compared to nine last quarter, and 27 deals were over USD 300k in size (compared to 33 last quarter).

Still, overall Informatica can be pleased with these results, especially now that it seems to have got into the habit of making a profit, something it has historically struggled with (its five-year average net margin is minus 11%).

 

Kalido repositions itself

October 19, 2006

Kalido has now announced revised positioning targeted at selling solutions to business problems (and will soon announce a new major product release). The key elements are as follows. The existing enterprise data warehouse and master data management product offerings remain, but have been packaged with some new elements into solutions which are effectively different pricing/functionality mechanisms on the same core code base.

The main positioning change is the introduction of pre-built business models on top of the core technology to provide “solutions” in the areas of profitability management, specifically “customer profitability” and “product profitability”. This move is, in many ways, long overdue, as Kalido was frequently deployed in such applications but previously made no attempt to provide a pre-configured data model. Given that Kalido is very strong at version management, it is about the one data warehouse technology that can plausibly offer this without falling into the “analytic app” trap whereby a pre-built data model, once tailored, quickly becomes out of synch with new releases (as Informatica can testify after their ignominious withdrawal from this market a few years ago). In Kalido’s case its version management allows for endless tinkering with the data model while still being able to recreate previous model versions.
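
To make the version management point concrete, here is a minimal sketch in Python of the general idea of a version-managed dimension: rows are never updated in place, so any earlier version of the model can be recreated on demand. It is purely illustrative and assumes nothing about how Kalido actually implements this.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DimensionRow:
    key: str
    attributes: dict
    valid_from_version: int
    valid_to_version: Optional[int] = None  # None means the row is still current

class VersionedDimension:
    """Append-only master data dimension: each amendment creates a new model version."""

    def __init__(self) -> None:
        self.rows = []
        self.current_version = 0

    def amend(self, key: str, attributes: dict) -> None:
        """Record a change by closing the old row (if any) and appending a new one."""
        self.current_version += 1
        for row in self.rows:
            if row.key == key and row.valid_to_version is None:
                row.valid_to_version = self.current_version
        self.rows.append(DimensionRow(key, attributes, self.current_version))

    def as_of(self, version: int) -> dict:
        """Recreate the dimension exactly as it stood at an earlier model version."""
        return {
            r.key: r.attributes
            for r in self.rows
            if r.valid_from_version <= version
            and (r.valid_to_version is None or r.valid_to_version > version)
        }

# Reclassify a product, then reproduce the old hierarchy on demand.
dim = VersionedDimension()
dim.amend("P-100", {"name": "Widget", "category": "Hardware"})    # model version 1
dim.amend("P-100", {"name": "Widget", "category": "Components"})  # model version 2
print(dim.as_of(1))  # {'P-100': {'name': 'Widget', 'category': 'Hardware'}}
print(dim.as_of(2))  # {'P-100': {'name': 'Widget', 'category': 'Components'}}
```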

Kalido also announced two new packaging offerings targeted at performance management/business intelligence, one for data mart consolidation and one for a repository for corporate performance management (the latter will be particularly aimed at Cognos customers, with whom Kalido recently announced a partnership). Interestingly, these two offerings are available on a subscription basis as an alternative to traditional licensing. This is a good idea, since the industry in general is moving towards such pricing models, as evidenced by salesforce.com in particular. In these days of carefully scrutinised procurement of large software purchases, having something that customers can try out and rent rather than buy should ease sales cycles.

The recent positioning change doesn’t, however, ignore the IT audience – with solution sets geared toward “Enterprise Data Management” and “Master Data Management.” The enterprise data management category contains solutions that those familiar with Kalido will recognize as typical use cases – departmental solutions, enterprise data warehouse and networked data warehouse. The key product advance here is in scalability. Kalido was always able to handle large volumes of transaction data (one single customer instance had over a billion transactions) but there was an Achilles heel if there was a single very large master data dimension of many millions of records. In B2B situations this doesn’t happen (how many products do you sell, or how many stores do you have – tens or hundreds of thousands only) but in B2C situations, e.g. retail banking and telecoms, it could be a problem, given that you could well have 50 million customers. Kalido was comfortable up to about 10 million master data items or so in a single dimension, but struggled much beyond that, leaving a federated (now “networked”) approach as the only way forward. However, in the new release some major re-engineering under the covers allows very large master data dimensions in the 100 million record range. This effectively removes the only real limitation on Kalido scalability; now you can just throw hardware at very large single instances, while Kalido’s unique ability to support a network of linked data warehouses continues to provide an effective way of deploying global data warehouses.

Technologically, Kalido’s master data management (MDM) product/solution is effectively unaffected by these announcements since it is a different code base, and a major release of this is due in January.

This new positioning targets Kalido more clearly as a business application, rather than a piece of infrastructure. This greater clarity is a result of its new CEO (Bill Hewitt), who has a strong marketing background, and should improve the market understanding of what Kalido is all about. Kalido always had differentiated technology and strong customer references (a 97% customer renewal rate testifies to that) but suffered from market positioning that switched too often and was fuzzy about the customer value proposition. This is an encouraging step in the right direction.

BI on demand

October 18, 2006

I wrote recently about the emergence of software as a service as one of the few bright spots in enterprise software at present.  With perfect timing, today a vendor came along and announced a software as a service offering in the business intelligence field.  Celequest is a start-up and it is early days to see how well this idea takes off, but it is certainly an interesting development.  Celequest has the credibility of being run by Diaz Nesamoney, who was a founder of Informatica, and is backed by VCs Bay Partners and Lightspeed Ventures, who both have long track records.  The company was set up in 2002, and has some good customers like Citigroup, Cendant and Brocade, though it is not clear from the company’s website what scale these deployments are.  The application covers dashboards, analytics and data integration technology.  As far as I am aware the company uses an in-memory database “appliance”, though from what I can gather the volume of data dealt with by this application so far is modest.  However this is not the point, and volumes will no doubt increase over time as the concept gains acceptance.  Celequest has made an astute partnership with salesforce.com, with a bridge to AppExchange.  There is also a connector to SAP.

Certainly, there are barriers to the widespread acceptance of this approach.  Large enterprises will be naturally conservative about the idea of letting their data outside the corporate firewall, particularly when it is key performance data of the type that BI applications use.  It is also unclear what sort of scale issues come into play when data is being accessed from beyond the corporate network.  However for many companies, and especially SMEs, such issues will seem less important than the convenience of being able to deploy a business intelligence solution without the usual hassle of complex software installation and an army of systems integrators.  No doubt where Celequest has begun to tread, others will follow, and this will be a healthy new area of competition in the business intelligence industry.


To be or not to be

October 12, 2006

I spent the early part of this week at ETRE, an unusual IT conference that has been running since 1990.  Organised by ex-journalist Alex Vieux, the conference is for technology CEOs and founders, general partners at VCs and the usual hangers-on, rather than for customers.  It moves to a different European city each year, and this year attracted about 500 people.  The conference is notable for two consistent things: the very high quality of the people attending (Bill Gates used to be a regular) and the utter inability of the “organisers” to keep to schedule.  This year, it has to be said, the overruns were of more manageable proportions than usual, and indeed the opening session only started 14 minutes late.  John Thompson of Symantec and Niklas Zennström of Skype were the star names this year.  Skype now constitutes 7% of all long distance calls, which does rather make one wonder at what point the phone companies who generously provide the infrastructure will send the boys round to collect some money from eBay.

The “future of enterprise software” session was definitely the odd one out, since the future was clearly all about social networks, at least in the eyes of investors.  They have a sort of Dragon’s Den session called “meet the money” where early stage companies pitch to a panel of VCs, and this year the company funkysexycool.com (I couldn’t make that up), a sort of MySpace wannabe for mobile phones whose business cards feature a voluptuous fantasy woman, had the VCs lining up to throw money at it.  By contrast a shipping e-commerce company that had been around a few years, had several million in revenues and was profitable could not have caught a cold, never mind funding.  Perhaps a social networking site for melancholy enterprise software executives?  No takers?  Oh well.

What interest there was around enterprise software was confined to “software as a service” companies.  RightNow has now reached USD 100 million in revenue, joining salesforce.com in that rarefied air, and there seem to be a few other early stage companies branching out into software as a service for things like HR and ERP.  Given that as much as 80% of technical problems with software are to do with the client environment (often some odd combination of software versions that the vendor had not, or could not, test), the model certainly makes a lot of sense.  The drawback is that the rental model that usually goes with this means relatively slow growth, though the recurring revenue generated certainly means fewer sleepless nights near the end of a quarter for software executives.

One of the few enterprise areas prospering was security, where there was a general consensus that the hackers and spammers were comfortably winning the war.  I was impressed with a company called BitDefender, a Romanian security software firm that has grown since its launch in 2001 to USD 60 million in revenues.  This has all been done without a dollar of venture capital (there are not too many VC conferences in Bucharest).

The conference’s lack of organisational skills remains legendary.  They denied all knowledge of my booking until I produced the bank transfer details, though they at least seemed embarrassed when I pointed out that they had done exactly the same thing last year.  The conference check-in was a procession of people with lost reservations and people who had booked airport transfers that never arrived.  To be fair, a very helpful gentleman called Farley Duvall did a bang-up job of sorting out my misbehaving video presentation, and the Red Herring people always seem to cope with problems with willingness and good humour.  Perhaps they just need some German organisers.

With its baffling inability to stick to a schedule, ETRE remains something of an enigma.  Attendees wonder aloud whether it is worth the high cost, and yet each year they come back, as there is nowhere else quite like it for networking.  If your company is not there, what does this say about you?  With other conferences very much in decline (even Esther Dyson’s US conference just bit the dust) you certainly have to give a lot of credit to Alex Vieux and his team for managing to attract a healthy turnout of people back every year.  Not many tech conferences can claim a 17-year unbroken heritage.

 

Oracle buys Sunopsis

October 11, 2006

It has just been announced that Oracle has bought Sunopsis, one of the few remaining independent ETL vendors.  Since Oracle’s existing ETL tool (the rather inaccurately named “Warehouse Builder”) is pretty weak, this makes a lot of sense for Oracle.  I suspect that their statement about “integrating” the two tools will involve much use of the delete key on the Warehouse Builder code.  Sunopsis is a good product from a French company that had been around for some time but had recently made more visible market progress in the US.  No numbers are public, but my information is that Sunopsis revenues were about USD 10M and the purchase price was just over USD 50M, which at a price/sales ratio of over five is quite a healthy price for the company.  Sunopsis was 80% owned by the founder, who had spurned venture capital, so this is very good personal news for him also.

Sunopsis made a virtue of using the DBMS functions where possible rather than re-inventing transformation code, so is particularly compatible with Oracle (or other relational databases). This deal should also put paid to the loose marketing relationship Oracle had with Informatica. 
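
For readers unfamiliar with that style of tool, here is a minimal sketch of the “push the transformation down to the database” idea, using SQLite and invented table names purely for illustration; it shows the general approach of generating SQL for the DBMS to execute, rather than moving rows through a separate transformation engine, and is not intended to represent Sunopsis itself.

```python
import sqlite3

# Set up a throwaway in-memory database with a staging table and a target table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (order_id INTEGER, customer TEXT, amount_pence INTEGER);
    CREATE TABLE fact_orders    (order_id INTEGER, customer TEXT, amount_gbp REAL);
    INSERT INTO staging_orders VALUES (1, ' acme ltd ', 12500), (2, 'Globex', 99);
""")

# The "transformation" is simply SQL executed by the database engine itself,
# which already knows how to trim, case-fold and convert units in bulk.
conn.execute("""
    INSERT INTO fact_orders (order_id, customer, amount_gbp)
    SELECT order_id, UPPER(TRIM(customer)), amount_pence / 100.0
    FROM staging_orders
""")

for row in conn.execute("SELECT * FROM fact_orders ORDER BY order_id"):
    print(row)  # (1, 'ACME LTD', 125.0) then (2, 'GLOBEX', 0.99)
```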

In my view this is a rare case where the deal is good for both companies.  Oracle finally gets a decent ETL capability and Sunopsis gets Oracle’s massive sales channel. 

Between the sheets

October 6, 2006

Earlier this year I spoke at a conference in Madrid where Ventana Research unveiled findings showing, amongst other things, that companies that rely heavily on spreadsheets take several days longer to close their financial books each month than those that do not.  This makes sense.  Excel is the analyst tool of choice, but its very ease of use presents issues.  It is not easy to document well, and when I was at Shell we had a whole team providing spreadsheet auditing and design services.  For some of the very complex financial or other models that end up being developed, it turns out to be very difficult to make sense of a model when someone moves on.  In the IT world we are used to dealing with support issues and at least in theory have plenty of experience with documentation standards and debugging tools.  As we all know, even in mature systems the documentation can be a nightmare, but imagine how much worse it is in Excel when you are asked to take over a thousand-line spreadsheet where all the cell formulae use the default grid references, e.g. “=Sum(c3:c27)”.  Instead you can use the facilities in Excel to assign meaningful names to cells, so this would become something like “=Sum(expenses)”, which is a lot easier to figure out, but how many people do this?  Indeed, when audits of spreadsheet models were carried out, errors were frequently found, which is worrying given the kinds of decisions being taken that rely on these models.  An article this week explains how eXtensible Business Reporting Language (XBRL) offers the prospect of some relief, since it defines tags which are independent of cell location.  By separating the definition of the data from its cell-specific information it becomes easier to keep track of things, and easier to combine worksheets from multiple sources.  Whilst this is a welcome development, I suspect that there is a long, uphill climb involved, since the problem comes down to people rather than technology.  It took a long hard struggle to get programmers to (sometimes) document things properly, and I cannot see most finance or other end-users really caring enough.  It is always quicker to use things like cell references rather than proper names, and so it will always be tempting to do so and not worry too much that your pretty model is almost incomprehensible to anyone else.
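
As a toy illustration of the named range point (a simple Python sketch, nothing to do with Excel internals or XBRL, and the range names are invented), compare the same formula expressed with raw grid references and with defined names:

```python
# Hypothetical defined names mapping to the cell ranges they cover.
named_ranges = {"expenses": "C3:C27", "headcount": "F2:F13"}

def with_names(formula: str) -> str:
    """Rewrite a formula to use defined names instead of raw cell ranges."""
    for name, ref in named_ranges.items():
        formula = formula.replace(ref, name)
    return formula

print(with_names("=SUM(C3:C27)/SUM(F2:F13)"))  # prints =SUM(expenses)/SUM(headcount)
```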

It will be interesting to see whether regulators take a firmer view of things over time, since in my experience the quality of spreadsheet models is distinctly patchy.  Insisting on proper audits of spreadsheets used for serious purposes (e.g. statutory accounts, investment decisions) would be a start.  I suspect that most companies have little idea of just what a can of worms they are relying on for the numbers on which they make their decisions. 

Opening the pricing box

October 5, 2006

The open source movement is creeping into BI in various guises, as pointed out by Jacqueline Emigh.  However, while Linux is undoubtedly taking a lot of market share from proprietary UNIX variants, progress higher up the stack is less clear.  The article mentions a number of organisations that provide some form of open source reporting tools, e.g. Pentaho, Greenplum and Jaspersoft, and indeed there are others besides.  However it is by no means clear what penetration these are really getting.  It was noticeable that one of the two customer examples reported merely had a database running on Linux, but had yet to deploy open source reporting tools.

The article unfortunately loses credibility when it cites an example of the savings to be made: “At pricing of $1,000 per user seat, for example, a company with 16,000 employees would need to pay $160,000 for a full-fledged BI deployment, Everett hypothesized.”  Hmm.  It is some time since I did my mathematics degree, but I am thinking that 16,000 * 1,000 = 16,000,000, i.e. 16 million dollars, not $160,000.  Even if you are kind and assume that a major discount could be obtained for such a large deployment, even an unlikely 90% discount to list would still get you to USD 1.6 million.  I doubt that Dan Everett at the entirely respectable research company Ventana would really have made such a dramatic numerical blunder, so perhaps it was a journalistic error.  Such carelessness does make one wonder about the accuracy of the rest of the article, which is a pity since it is discussing an interesting trend.

I have still yet to come across significant deployments of open source reporting tools in production applications, but presumably they will catch on to a certain extent, just as MySQL is making steady inroads into the database market.  Perhaps the most significant point at this stage is not made by the article, though.  The very existence of open source reporting tools puts pricing pressure on the established BI vendors.  Procurement people doing large deals with BI vendors will treat the open source BI movement as manna from heaven, since it gives them a stick with which to beat down the price of reporting tools from the major vendors.  Anyone about to engage in a major BI deployment or negotiation would be well advised to look carefully at these tools, if only as weapons in the armoury against pushy software salesmen.  This is further bad news for the BI vendors, who have enough to worry about with the push of Microsoft into their space and the general saturation of the market.  In this case even a handful of customer deployments will suffice to send a shiver down the spine of the major vendors.


In the project jungle, your MDM initiative needs claws

October 2, 2006

Matthew Beyer makes a start on trying to come up with an approach to tackling master data initiatives.  Some of what he says makes good sense, as in “think strategically but act tactically”.  However I’d like to suggest to him a different approach to prioritisation.  The biggest problem with master data is one of scale.  Enterprises have a lot of systems and many types of master data, often far beyond the “10 systems” used as an illustration in the article.  To give a sense of the magnitude of the problem in a large company, just one Shell subsidiary had 175 interfaces left AFTER it had implemented every module of SAP.  Hence an approach that says “just map all the master data in the enterprise and catalog which systems use each type of data” is going to be a very lengthy process, which will probably get cancelled after a few months when there is little to show for all the pretty diagrams.

I believe that a master data initiative needs to justify itself, just like any other project that is fighting for the enterprise’s scarce resources and capital.  Hence a good approach is to start by identifying the problems associated with master data and putting a price tag on them.  For example, poor customer data could result in duplicate marketing costs, lower customer satisfaction, or misplaced deliveries.  Being unable to get a view of supplier spend across the enterprise (a problem reported by 68% of respondents in one survey at a 2006 UK procurement conference) has a cost in terms of not being able to get an optimal deal with suppliers, and results in duplicate suppliers.  These things have real costs associated with them, and so, if fixed, have real hard dollar benefits.  Interviewing executives in marketing, procurement, finance, operations etc. will soon start to tease out which operational issues are actually causing the business pain, and which have the greatest potential value if they could be fixed.  Business people may not be able to put a precise price tag on each problem, but they should at least be able to estimate a range.  If they cannot, then it is probably not that pressing a problem and you can move on to the next one.

At the end of such an interview process you will have a series of business problems with estimates of potential savings, and can map these against the master data associated with the business processes involved.  Now you have a basis for prioritisation.  If it turns out that there are tens of millions of dollars of savings to be gained from fixing problems with (say) supplier data, then that is a very good place to start your MDM pilot.
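
As a hypothetical worked example of that prioritisation step (every figure and problem below is invented for illustration), the costed problems can simply be rolled up by master data domain and ranked:

```python
# Each business problem carries an estimated saving range (USD millions) and the
# master data domain it depends on; these numbers are purely illustrative.
problems = [
    {"problem": "No view of supplier spend across the enterprise", "domain": "supplier", "saving_musd": (8, 15)},
    {"problem": "Duplicate marketing mailings",                    "domain": "customer", "saving_musd": (1, 3)},
    {"problem": "Misplaced deliveries",                            "domain": "customer", "saving_musd": (2, 4)},
    {"problem": "Inconsistent product hierarchies",                "domain": "product",  "saving_musd": (1, 2)},
]

# Roll up the midpoint of each estimate by domain, then rank the domains.
savings_by_domain = {}
for p in problems:
    low, high = p["saving_musd"]
    savings_by_domain[p["domain"]] = savings_by_domain.get(p["domain"], 0) + (low + high) / 2

for domain, saving in sorted(savings_by_domain.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{domain:10s} ~USD {saving:.1f}M potential")
# supplier comes out top, so that is where the MDM pilot would start
```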

Such an approach assures you that you will be able to put a business case together for an MDM initiative, even if it has limited scope at first.  Such an initiative has a lot more chance of approval and ongoing survival than something that is perceived to be a purist or IT-led data modelling exercise.

Provided that you adopt an architecture that can cope with master data in general and not just this one type specifically (i.e. try to avoid “hubs” that only address one type of master data), then you can build on the early success of a pilot project confident that the approach you have taken will be useful across the enterprise.  By getting an early quick win in this way you build the credibility for follow-on projects and can start to justify ongoing investment in protecting the integrity of master data in the future, e.g. by setting up a business-led information asset competence centre where ownership of data is clearly defined.

IT projects of any kind that fail to go through a rigorous cost-benefit case risk not being signed off, or being cancelled part way through.  The race for funds and resources in a large company is a Darwinian one, so equip your MDM project with the ROI teeth and claws it needs to survive and justify itself.  When times turn sour and the CFO draws up a list of projects to “postpone”, a strong business-driven ROI case will go a long way to ensuring your MDM project claws its way to the top of the heap.