Andy on Enterprise Software

The Long and Winding Road of MDM Product Integration

October 14, 2011

IBM has just announced version 10 of its MDM offering, now called “InfoSphere Master Data Management”. IBM has been on a long-term path to merging the MDM technologies it acquired in the product domain (from Trigo) and the customer domain (DWL), a path further complicated by its more recent purchase of Initiate. This announcement brings these product lines together, at least under a common marketing banner and price structure. The idea is that the new product is available in four “editions”. The “Collaborative” edition is essentially the old MDM Server for PIM (ex-Trigo). The “Standard” edition is essentially the old Initiate product. The “Advanced” edition bundles these two technologies together. The “Enterprise” edition adds in the old MDM Server for Customer (ex-DWL) product.

There is a unified pricing model behind these editions, though this apparent step forward is rather handicapped by the pricing model being distinctly opaque. It is based on no fewer than four parameters: edition, industry, data domains being managed and how many records are being mastered. When something becomes this complex it gives the sales force considerable flexibility (presumably the intention) but is potentially confusing for customers, and possibly for IBM’s own staff.
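
To see why four interacting parameters make a quoted price hard to interpret, here is a purely hypothetical sketch of such a model; the multipliers, tiers and base fee below are invented for illustration and bear no relation to IBM’s actual price book.

```python
# Purely hypothetical illustration of a four-parameter pricing model;
# all multipliers, tiers and the base fee are invented, not IBM's figures.

EDITION_FACTOR = {"collaborative": 1.0, "standard": 1.2, "advanced": 2.0, "enterprise": 2.8}
INDUSTRY_FACTOR = {"banking": 1.3, "telco": 1.2, "retail": 1.0}
DOMAIN_FACTOR = {"customer": 1.0, "product": 1.0, "supplier": 0.8}

def record_tier_factor(millions_of_records: float) -> float:
    """Per-record pricing usually falls in tiers as volumes rise."""
    if millions_of_records <= 10:
        return 1.0
    if millions_of_records <= 50:
        return 0.7
    return 0.5

def list_price(edition: str, industry: str, domains: list[str],
               millions_of_records: float, base_fee: float = 250_000) -> float:
    """Combine the four parameters into a single list price figure."""
    return (base_fee
            * EDITION_FACTOR[edition]
            * INDUSTRY_FACTOR[industry]
            * sum(DOMAIN_FACTOR[d] for d in domains)
            * record_tier_factor(millions_of_records)
            * millions_of_records / 10)

print(list_price("enterprise", "banking", ["customer", "product"], 40))
```

Even in this toy version, small changes to any one input move the answer by a large amount, which is exactly what gives a sales force room to manoeuvre and leaves a buyer unsure what the “real” price is.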

Fortunately, as well as this partial step forward on the marketing side, there is some actual code in the release. The Initiate matching engine, which was well regarded, is now used across the product line for probabilistic matching (the old QualityStage approach is still available for deterministic matching). The workflow engine BPM Express is bundled in with the Enterprise edition, meaning that very complex sets of workflow and permissions can now be handled, if need be in a real-time manner. There is a much-needed overhaul of the old PIM user interface in the new Collaborative edition. Other enhancements are present, such as integration with the Guardium Data Activity Monitor.
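
For readers unfamiliar with the distinction: deterministic matching demands exact (or rule-normalised) equality on key fields, while probabilistic matching scores the similarity of several fields and links records above a threshold. The sketch below is a deliberately simple illustration of the idea using an edit-distance ratio, not the Initiate or QualityStage algorithms themselves.

```python
from difflib import SequenceMatcher

def deterministic_match(a: dict, b: dict) -> bool:
    """Exact match on normalised key fields: all-or-nothing."""
    return (a["name"].lower() == b["name"].lower()
            and a["postcode"].replace(" ", "") == b["postcode"].replace(" ", ""))

def probabilistic_match(a: dict, b: dict, threshold: float = 0.85) -> bool:
    """Weighted similarity score across fields; a match if the score clears a threshold."""
    def sim(x: str, y: str) -> float:
        return SequenceMatcher(None, x.lower(), y.lower()).ratio()
    score = 0.6 * sim(a["name"], b["name"]) + 0.4 * sim(a["postcode"], b["postcode"])
    return score >= threshold

r1 = {"name": "Jon Smith", "postcode": "SW1A 1AA"}
r2 = {"name": "John Smith", "postcode": "SW1A1AA"}
print(deterministic_match(r1, r2))   # False - the spellings differ
print(probabilistic_match(r1, r2))   # True - similar enough to link
```

Real engines weight many more attributes, add phonetic and frequency-based comparisons, and tune thresholds per source, which is precisely why a well-regarded engine such as Initiate’s is worth reusing across the product line.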

All this amounts to a significant release that at least starts IBM on a path to unifying its MDM technologies. This will be a long path, as there are still three different underlying server technologies here. However, customers now at least have a sense of the MDM direction in which IBM is heading, even if they need to realise that it will be a long and winding road before they get there.

Price is in the eye of the beholder

May 25, 2010

I enjoyed an article today by Boris Evelson of Forrester, discussing whether a 20% discount for BI software was “good” or “bad”. As Boris so correctly points out, vendors go to great lengths to keep the list price of their software secret, as they prefer to assess each pricing negotiation separately. Indeed most vendor contracts have a non-disclosure clause that prevents companies from sharing any pricing information.

Given this, what sense does a 20% (or 50%) discount make? None at all. If the vendor was offering an enterprise deal at USD 2 million initially, and you haggle and get them down to USD 1 million, that may make you feel good, but what if they had been prepared to settle for half a million all along? Vendors are aware that many corporate procurement people have targets to achieve a certain percentage discount off list price, and in most industries that is clear enough. After all, if you want to buy a new Mercedes you can just look up the list price; if you then get 10% or 20% off that from the dealer it is clear what you have saved.

But if the list price is shrouded in mystery, discounts are meaningless. You need to achieve the lowest price that you can, and the only way for that to happen is to be in a competitive situation. As soon as a vendor knows that they are the sure-fire winner or (even better) the only bidder, they can basically make up any number they want and stick to it. I have been on both sides of multi-million dollar software negotiations, and I can assure you that companies frequently pay well over the odds, sometimes millions more than they have to, simply through foolish procurement practices.

You need a well-structured evaluation, with several competing vendors bidding against each other on price as the evaluation proceeds. In this case you will see who really wants to offer you a good deal. I saw one bid drop from USD 12 million to under half that in a single day once the vendor was convinced that they were genuinely in competition.

I have put quite a few procurement tips together in one module of this course.

On Frogs and Software Pricing

October 6, 2008

I am curious as to the level of take-up of the software as a service (SAAS) model, at least with respect to data management. Of course salesforce.com was the pioneer here, prompting a flood of interest in this approach. Many vendors offer their software in this way as an alternative to the usual “perpetual license” model, yet in many cases it seems to have had limited take-up. The latest vendor to offer their software in this way is Kalido, who are doing so via systems integrator BI partners. There is a lot of sense in SAAS from an end user perspective. A host (if you will excuse the pun) of problems with enterprise software are caused by inconsistencies between the recommended operating environment for a piece of software and what is actually lurking out there in the end user environment. Problems can be caused by esoteric combinations of DBMS, app server, operating system and who knows what, which are very difficult for vendors to replicate, no matter how much trouble they go to in creating test environments. Hosted solutions largely avoid such issues. Moreover, companies can try out software for a limited price per month rather than having to commit up front to a full license, which means that they can pay as they go and pay only for what they use.

For vendors the issue is double-edged. By making it easy to try their software they may win customers that would otherwise not have chosen them, being unwilling to commit to an up-front license cost. However, pitching the price is not easy. If your software used to sell at USD 300k plus 20% annual maintenance, then pricing it at USD 5k per month means you are effectively collecting the maintenance (USD 60k a year) without the software license fee. Yet if you pitch the monthly fee too high you will scare customers off and be back into a lengthy sales cycle. Ideally there is some way of pricing that draws customers in further as they use the software more, e.g. as they add more users or load more data, gradually increasing the monthly fee. This was actually one of the clever things in the salesforce.com model: it seems really cheap at the beginning, but as you add more and more users you end up with a pretty hefty monthly bill, and can end up wondering how that would have compared to a traditional licence model. But by then you are already committed.
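
To make that arithmetic concrete, here is a quick sketch using the illustrative numbers above (USD 300k licence, 20% maintenance, USD 5k per month); the figures are examples, not any particular vendor’s real prices.

```python
def perpetual_cumulative(years: int, licence: float = 300_000,
                         maintenance_rate: float = 0.20) -> float:
    """Cumulative vendor revenue: up-front licence plus annual maintenance."""
    return licence + licence * maintenance_rate * years

def subscription_cumulative(years: int, monthly_fee: float = 5_000) -> float:
    """Cumulative vendor revenue from a flat monthly subscription."""
    return monthly_fee * 12 * years

for years in (1, 3, 5, 10):
    print(years, perpetual_cumulative(years), subscription_cumulative(years))
# At USD 5k/month the subscription never catches up: it tracks the USD 60k/year
# maintenance stream exactly, with the USD 300k licence fee never recovered.
```

Only a fee that grows with users or data volumes closes that gap, which is why graduated pricing matters so much to the vendor.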

This is ideal from the vendor viewpoint. It is what I will term the “frog in the saucepan pricing model”. The legend goes (and I don’t fancy verifying its veracity) that if you toss a frog into a pan of boiling water it will jump out, but if you put it into a pan of cold water and slowly raise the temperature it does not notice and ends up being cooked. A pricing model that lures the end users in and gradually creeps up without anyone getting upset is certainly what a vendor should aim for. Not all software may be amenable to such gradated pricing, but it seems to me that this is the key if vendors are to avoid SAAS being the “maintenance but no license” model.

MDM In Savannah – Day 2

March 5, 2008

The conference continued today with a string of customer case studies, plus some panel discussions and a couple of vendor presentations that just about managed to avoid being too blatant in their product plugs. I enjoyed a case study from a transport company called Pitt Ohio Express, who had implemented a customer-oriented MDM hub for the practical reason that they need to know where their trucks have to turn up to deliver things. This seems a more pressing reason to sort out customer name and address than a bit of duplicated direct mail. They had also actually measured things properly before and after their project, and had achieved a 2% improvement in overall company operating margin due to the initiative. As an example of a real benefit seen, a proper view of customer spend has enabled targeted customer pricing rather than blanket price lists.

I also enjoyed a lively presentation by Brian Rensing, a data architect at Procter and Gamble. There must be marketing in the blood there, as he was an entertaining speaker, and how many data architects can you say that of? He explained how they had managed to get buy-in to their MDM initiative, working one business unit at a time and relying heavily on iterative prototyping to ensure that business people could see short-term benefits, rather than laying out a grandiose multi-year initiative. Their project initially covers both customer and product, both at the corporate level and (gradually) country level, using KALIDO MDM. They see this MDM initiative as enabling better data warehousing and analytics in the future, since there will be a sounder data foundation on which to work.

In general I am surprised at the number of companies contemplating (and actually doing) MDM projects using entirely in-house technology. One company even devised its own matching algorithms. Surely this is the kind of thing that off-the-shelf data quality products can do much better? I suppose MDM is still in relative infancy in terms of market size (Rob Karel of Forrester reckoned USD 1 billion in 2006, of which only a third was software, a very different number from IDC’s estimates, but with over 50% compound growth expected over the coming years). The big systems integrators seem yet to have really caught on to this fast growth, with Baseline Consulting almost the only SI represented at this conference (and they are a specialist boutique). It will be interesting to see at what point PwC, Accenture and BearingPoint start turning up to such conferences.

I should relate a conversation with one vendor at the exhibit last night. “So, what kind of revenues do you guys do?” “We don’t disclose that.” Fair enough, some companies are shy. “How many customers do you have?” “We don’t disclose the number of customers we have.” “Er, OK, do you have any customers?” “Oh yes.” Uh huh. “Who are your investors?” “That is private.” “How many employees do you have?” “We can’t share that information.” So we have here a vendor, at a trade show, unwilling to talk about how big it is, who has invested in it, or how many customers or even employees it has. Short of putting a puzzle on its web site in order to find the contact address, it is hard to imagine how they could do more to make a prospect nervous. Surreal. I guess they are going for the “dark and mysterious” marketing approach pioneered by Ab Initio.

Although many case studies were about customer data, over half the respondents in a recent TDWI survey said that their MDM initiative had enterprise-wide scope, and there were certainly examples here of case studies around product information, as well as financial information. I still had the sense that a lot of companies were treading gingerly into the MDM world, but there were enough case studies of completed projects to suggest that the market growth which Forrester (and others) predict is plausible, based on the level of interest shown here.

Perhaps the most entertaining moment of the conference was watching Todd Goldman, VP of marketing for Exeros, perform a (quite impressive) conjuring trick at the beginning of his presentation. It turns out that he is an amateur magician, a skill that must come in very useful in a career in software marketing. This was not the first time I have seen clever illusions in software marketing, nor will it be the last.

Appliances on demand

November 27, 2007

It is interesting to see Kognitio launching a data warehouse on demand service. Traditionally data warehouses are built in-house, partly because they are mostly “built” rather than bought even today, and partly because of their data-intensive nature, by definition involving links to multiple in-house systems. However there is no real reason why the service cannot be provided remotely. In my days at Shell my team used to provide a similar internal service to small business units who did not want to build up in-house capability. We implemented a warehouse, built the interfaces and then managed the operational service. Kognitio is well placed to provide such a service because they have good data migration experience, and they conveniently have a powerful warehouse appliance, one much more mature than many others, even if until recently it has not been very successfully marketed. Hence this seems an astute move to me.

I would not expect this to be the last such offering. Given some clear advantages that software as a service brings to customers (a smaller installed software footprint, typically a smoother pricing model), it will be interesting to see whether these advantages outweigh the fear in customers’ minds about allowing their key data outside the firewall.

Babies and bathwater

October 15, 2007

I read a rather odd article in Enterprise Systems proclaiming that “the data warehouse might be dead”. The thrust of the article was the old idea of piecing together reports for a particular user or department by accessing source systems directly rather than bothering with a pesky data warehouse, in this case advocated by a senior business user from Bank of America. I understand the frustration that business people have with most corporate data warehouses today. They are typically inflexible, and so unable to keep up with the pace of business change. Indeed this thought was echoed recently by Gartner analyst Don Feinberg, who said that data warehouses more than five years old should be rewritten. To a person with an urgent information need, going to an IT department and being told that he cannot have the information he needs for weeks or months is understandably irritating.

Yet the apparent solution of accessing source systems directly is flawed, and the problems this causes are, after all, why people invented data warehouses in the first place. Yes, you can patch together data from source systems, and yes, that one report you want may appear less complicated (and quicker) to get than going through a change request in corporate IT. Overall, though, the economics of point-to-point spaghetti versus a central warehouse are easy to see. The organisation as a whole will spend much more money in this manner than by having a central warehouse where the data can be relied upon; the more complex the organisation, the larger this gap will be.
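
A back-of-the-envelope way to see those economics (my own illustration, not figures from the article): with S source systems and C consuming departments or report sets, point-to-point integration tends towards S x C feeds to build and maintain, while routing everything through a warehouse needs roughly S + C.

```python
def point_to_point_feeds(sources: int, consumers: int) -> int:
    # Worst case: every consumer builds its own extract from every source it needs.
    return sources * consumers

def warehouse_feeds(sources: int, consumers: int) -> int:
    # One load per source into the warehouse, one delivery feed per consumer.
    return sources + consumers

for s, c in [(5, 5), (20, 30), (50, 100)]:
    print(s, c, point_to_point_feeds(s, c), warehouse_feeds(s, c))
# 5x5: 25 vs 10; 20x30: 600 vs 50; 50x100: 5000 vs 150.
# The gap widens rapidly as the organisation gets more complex.
```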

Probably worse, the business user gets some numbers out, but are they the right numbers? Anyone who has worked on data warehouse projects will be familiar with the frequently dismal quality of data even in supposedly trusted corporate source systems such as ERP. It is often only by looking at sources together that problems with the data show up. Often errors in, say, regional systems can cancel out or be obscured, and the true picture only emerges when data is added up at a collective level.

In these days of increasing anxiety about banking scandals and greater regulation, companies can ill afford to subject themselves to unnecessary risk by making decisions based on data of questionable quality. The issue is that most data warehouses today are based on inflexible approaches and designs, causing lengthy and costly delays in updating the warehouse when the business, inevitably, changes. It does not have to be this way. You can construct data warehouses in a more flexible manner, and in a way in which business users are engaged with the process. By running parallel master data management initiatives and setting up an MDM repository, the data warehouse can be relieved of at least some of the burden of sorting out corporate reference data, giving it more of a chance.

It is incumbent on IT departments to embrace modern data warehouse technologies and complementary MDM best practice in order to avoid driving their customers to “skunk works” desperation to answer their needs. IT organisations that fail to do this not only risk being marginalised, but also indirectly drive up costs and risks for their companies.

Creating a burning data quality platform

October 1, 2007

I read a blog post by Forrester today that rang true. The point being made is that data quality is a hard sell unless some crisis happens. This is evidently true, since the data quality market is small and yet the problems are large. I have encountered several shocking data quality failures in my time that were costing millions of dollars. In one case an undetected pricing error in a regional SAP system meant that a well known brand was being sold at zero profit margin. In another case, a data quality error in a co-ordinate system caused an exploration bore to be drilled into an existing oil well, which luckily was not in production at the time so “only” cost a few million dollars. In more general terms, every dollar spent on data quality should save you four. Yet the examples I mentioned (and there are plenty more) actually showed up not in data quality projects but in data warehouse or master data projects, which in principle were supposed to be taking “pure” data from the master transaction systems. This does not inspire confidence in the state of data in the corporate systems that make no claim to be “clean”.

I am not sure why this sorry state of affairs exists, other than to note that in most companies data quality is regarded as an IT problem, when in actual fact the IT folk are the last people to be in a position to judge data quality. Responsibility lies firmly in the business camp. Moreover, as I have mentioned, justifying a data quality project should not be hard: it has real dollar savings, quite apart from other benefits, e.g. a reduction in reputational risk. I suspect that some of the problem is that it is embarrassing (“no problems with our company data, no sirree”) and, let’s face it, pretty dull. Would you rather work on some new product launch or be buried away reviewing endless reports, checking whether data is what it should be?

For people toiling away in corporate IT, the right way to get attention might be to use a modern data quality tool, find a sympathetic business analyst and poke around some corporate systems. These days the tools do a lot of self-discovery, so finding anomalies is not as manually intensive as it used to be. If you turn over a few stones in corporate systems you will be surprised at what turns up. Chances are that at least one of the issues you encounter will turn out to be expensive, and this may raise the profile of the work, earning you sponsorship to dig around in other areas.
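
To give a flavour of what that poking around looks like (a minimal hand-rolled sketch, not the output of any commercial profiling tool; the file and column names are hypothetical), even a few lines of code will surface null rates, duplicate keys and suspicious values such as zero prices:

```python
import csv
from collections import Counter

def profile(path: str, key_column: str, price_column: str) -> None:
    """Very basic data profiling: null counts, duplicate keys, zero/negative prices."""
    nulls = Counter()
    keys = Counter()
    bad_prices = 0
    rows = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rows += 1
            for col, value in row.items():
                if value is None or value.strip() == "":
                    nulls[col] += 1
            keys[row[key_column]] += 1
            try:
                if float(row[price_column]) <= 0:
                    bad_prices += 1   # e.g. a product effectively sold at zero margin
            except (TypeError, ValueError):
                bad_prices += 1       # non-numeric price is also an anomaly
    duplicates = {k: n for k, n in keys.items() if n > 1}
    print(f"{rows} rows, nulls by column: {dict(nulls)}")
    print(f"duplicate keys: {duplicates}")
    print(f"zero/negative/invalid prices: {bad_prices}")

# profile("products.csv", key_column="product_id", price_column="unit_price")  # hypothetical file
```

A report like this, shown to a sympathetic business analyst, is usually enough to start the conversation about which of the anomalies are costing real money.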

Demanding BI

August 16, 2007

Ken Rudin makes a compelling case for on-demand BI in DM Review. If anything, he over-eggs things by saying that users will get more engaged with on-demand versions of software, and that you don’t really need IT deployment skills, which seems less than obvious to me. However the thrust of the argument is still valid. Having been on both the end-user side of the fence (at Shell) and the vendor side (Kalido), I know that a scary proportion of the problems users encounter are “environmental”, i.e. some combination of software installed on the PC causes an application to break, and the software support desk has great difficulty in replicating the problem because it is next to impossible to ensure that your PC has precisely the same patch level of operating system/database/middleware as the customer who has the problem. Indeed this is why large companies go to great lengths to try to standardise the desktop configuration across the enterprise, doing upgrades as rarely as possible. This is no mean project if you have tens of thousands of desktops in lots of countries, and it tends to result in customer frustration as users find that the latest feature their children are playing with at home is not available on the locked-down, fairly stable but quite out-of-date software they have at work.

The lack of environmental intrusion seems to me the key advantage of the on-demand model to the customer. As a side benefit, though not really a function of the model, most vendors have changed their pricing models for on-demand so that customers pay on a leasing basis rather than a big up-front license charge. This has potential benefits for both customers and vendors: customers pay only for usage, which seems to them a fairer way of paying than having to pay up-front for something they are not sure about, while vendors get a steadier revenue stream, as well as a possibility of shortening the sales cycle. USD 100 per month per user sounds a lot less than USD 100k plus 20% maintenance, though of course it may not actually be if usage rates become high enough. For example, Actuate cite this as a reason why they are bullish on their on-demand offering, as they can employ relatively cheap “inside reps” (essentially up-market telesales people) rather than costly enterprise software salesmen, who may be able to land that big deal but so often do not in practice, yet still expect a hefty salary whether they sell anything or not.
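
To put rough numbers on that comparison (my own illustrative arithmetic using the figures above, not Actuate’s pricing): USD 100 per user per month is USD 1,200 per user per year, so the crossover against a USD 100k licence plus 20% maintenance arrives at surprisingly modest user counts once you look a few years out.

```python
def on_demand_cost(users: int, years: int, per_user_per_month: float = 100.0) -> float:
    """Cumulative cost of a per-user, per-month subscription."""
    return users * per_user_per_month * 12 * years

def perpetual_cost(years: int, licence: float = 100_000,
                   maintenance_rate: float = 0.20) -> float:
    """Cumulative cost of an up-front licence plus annual maintenance."""
    return licence + licence * maintenance_rate * years

for users in (10, 50, 100):
    for years in (1, 3, 5):
        print(users, years, on_demand_cost(users, years), perpetual_cost(years))
# With 100 users, on-demand costs USD 120k in year one (the same as the licence
# plus a year's maintenance) and USD 360k over three years versus USD 160k perpetual.
```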

It is rather curious, then, that on-demand has not really taken off more in the BI sector. There are some toe-in-the-water efforts from some vendors, but I suspect that most are nervous that such sales would cannibalise their conventional channels. To me there are opportunities here for start-up companies who don’t have entrenched ways of doing business to take advantage of this situation, though since the VC community is out frantically hunting for the next MySpace or Facebook it is scarcely funding anything as untrendy as enterprise software, so there is actually little activity here either. At some point the pendulum will surely swing back, and at that point companies who have embraced on-demand delivery seem to me to be well placed to take advantage of a rare but genuine shift in delivery model.

I’d be interested in any BI vendors or end-users with experience of on-demand BI who’d like to share their experiences, good or bad.

Gazing Behind the Data Mirror

July 18, 2007

I have been digging a little deeper into the DataMirror purchase by IBM that I wrote about yesterday.

It’s a good deal for IBM, and not only because the price was quite fair. With its Ascential acquisition IBM positioned itself directly against Informatica, yet Ascential’s technology did not have the serious real-time replication that is important for the future of ETL, and this is what DataMirror does have. DataMirror gives IBM a working product with heterogeneous data source support in real time, an important piece of the puzzle in achieving its vision for real-time operational BI and event awareness.

A bigger question is whether IBM fully understands what it has bought and whether it will properly exploit it. DataMirror’s strengths were modest pricing, low-impact installation, neutrality across the sources it supports and performance in doing this (via its log-scraping abilities and the speed with which it applies changes). IBM must keep its eye on the development ball to ensure these aspects of the DataMirror technology are continued if it is to really exploit its purchase. For example, on the last point, the partnerships DataMirror has with Teradata, Netezza and Oracle should be continued, despite the obvious temptation to snub rivals Oracle and Teradata.
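
For readers unfamiliar with the technique, log-based replication works roughly as in the sketch below, which is a generic and much-simplified illustration rather than DataMirror’s actual implementation: changes are scraped from the source database’s transaction log rather than by querying tables, then replayed in order against the target.

```python
from typing import Any

# Generic illustration of log-based change data capture: change events scraped
# from a transaction log are replayed against a target copy, keyed by primary key.
change_log = [
    {"lsn": 101, "op": "INSERT", "key": 1, "row": {"id": 1, "name": "Acme"}},
    {"lsn": 102, "op": "UPDATE", "key": 1, "row": {"id": 1, "name": "Acme Corp"}},
    {"lsn": 103, "op": "DELETE", "key": 2, "row": None},
]

def apply_changes(target: dict[int, Any], log: list[dict], from_lsn: int) -> int:
    """Apply log entries after from_lsn in order; return the last LSN applied."""
    last = from_lsn
    for entry in sorted(log, key=lambda e: e["lsn"]):
        if entry["lsn"] <= from_lsn:
            continue  # already replicated in an earlier pass
        if entry["op"] in ("INSERT", "UPDATE"):
            target[entry["key"]] = entry["row"]
        elif entry["op"] == "DELETE":
            target.pop(entry["key"], None)
        last = entry["lsn"]
    return last

replica: dict[int, Any] = {2: {"id": 2, "name": "Oldco"}}
checkpoint = apply_changes(replica, change_log, from_lsn=100)
print(replica, checkpoint)   # {1: {'id': 1, 'name': 'Acme Corp'}} 103
```

The value in a product like DataMirror’s lies in doing this against many heterogeneous sources, with low impact on the source systems and high apply speed, which is exactly what IBM needs to preserve.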

Any acquisition creates uncertainty amongst staff, and IBM needs to move beyond motherhood reassurance to show staff that it understands the DataMirror technology and business and wants to see it thrive and grow. It needs to explain how the DataMirror technology fits within a broader vision for real-time integration in combination with traditional batch oriented ETL, business intelligence and enterprise service bus (not just MQSeries) integration or else the critical technical and visionary people will dust off their resumes and start looking elsewhere.

I gather that IBM has already announced an internal town hall meeting next week, at which it needs to convince key technical staff that they have a bright future within the IBM family. I also hear that no hiring freeze has been imposed, which implies a decision to grow the business and should reassure people. IBM is an experienced company which will recognise that the true IP of a company is not in libraries of code but in the heads of a limited number of individuals, and no doubt will recognise the need to retain and motivate critical staff. It used to be poor at this (think of the brilliant technology it acquired when it bought Metaphor many years ago, but then bungled the follow-up) but has got smarter in recent years; e.g. I hear from DWL people that they have been treated well.

Hopefully IBM’s more recent and happier acquisition experiences will be repeated here.

The good old days

June 6, 2007

I attended an interesting talk today by Greg Hackett, who founded financial benchmarking company Hackett Group before selling it to Answerthink and “going fishing for a few years”. He is now a business school professor, and has been researching company performance and, in particular, company failure. Studying the 1,000 largest US public companies from 1960 to 2004, his research shows:

- company profitability was 40% lower in 2004 than in 1960, with a fairly steady decline starting in the mid-1960s
- the average net income after tax of a company in 2004 was just 4.3%
- half of companies were unprofitable for at least two out of five years
- 65% of those top 1,000 companies in 1965 have disappeared since, with just half being acquired but 15% actually going bankrupt.

He gave four reasons for company failure: missing external changes in the market, inflexibility, short term management and failing to use systems that would show warning signs of trouble. What I found most surprising was that the correlation between profitability and stock market performance was zero.

The research suggests that the world is becoming a more competitive place, with pricing pressure in particular reducing profitability despite greater efficiency (cost of goods sold is 67% of turnover, down from 75% in 1960, though SG&A is up from around 13% of turnover to around 18%). All those investments in technology have made companies slightly more efficient, but this has been more than offset by pricing pressure.

I guess this also tells you that holding a single blue chip stock and hanging onto it is a risky business over a very long time; with 15% of companies folding over that 45-year period, it pays to keep an eye on your portfolio.

A key implication is that companies need to get better at implementing management information systems that can react quickly to change and help give them insight into competitive risks, rather than just monitoring current performance. Personally I am unsure that computer systems are ever likely to provide sufficiently smart insight for companies to take consistently better strategic decisions, e.g. divesting from businesses that are at risk; even if they did, would management be smart enough to listen and act on this information? It does imply that systems which are good at handling mergers and acquisitions should have a prosperous future; that, at least, is one area that seems set to grow.