Andy on Enterprise Software

SOA and how to run a conference

October 15, 2008

I am currently at the SAP TechEd conference in Berlin. I will write in a separate publication about the forthcoming version 7.1 of SAP MDM, but have a couple of quite separate observations to mention here. The first is a confirmation of what I have long believed: that moving towards an SOA world is going to be very hard work. One customer here, Volkswagen Financial Services, described an ambitious project in which they took a part of their business, which deals with fleet car hire, and moved it wholesale to an SOA-based infrastructure. This project has been live a few months and is already showing some genuine benefits compared to the rather manually intensive system they had before, in terms of faster processing times for certain common business processes (which used to involve agents dealing with multiple applications) and in terms of improved data quality. However it is interesting that no formal cost/benefit analysis appears to have been done. Moreover this project, which involved 100 IT staff and 50 business people, took over five years to complete. I do not think this has much to do with the technology, but rather the sheer complexity of taking a cross-functional view: getting different business lines to agree on common terminology and data definitions, and on the way in which the many new web services should behave. There has also been a lot of change management needed to get the front-line business users to accept the new system, which automates many tasks that they used to have direct control over.

I suspect that few companies have been quite so aggressive in their move to SOA as VW. A more typical conversation was with a gentleman at a German utility and resources company that has been looking actively into SOA since 2006. They are only just putting a toe in the water now, with a very limited project involving just a handful of web services, across a single process, in one small subsidiary of the organisation. Even this limited pilot has not been entirely without its issues. One problem which has reared its head is how much more difficult debugging becomes in a web services application that touches a whole series of different applications in its wake. If something goes wrong, they have found it is a lot more fiddly to trace exactly where the fault lies, given the cross-application nature of the project. Again, this is a project driven by the IT department as an exercise in proving the technology, rather than one with a quantified business case. I do not pretend that a few random conversations at a conference are a remotely scientific sample, but it seems clear that SOA is far from mainstream in many companies thus far, and that there are new issues to address compared to traditional applications. Not least of these is the need to sort out common master data definitions across the multiple applications affected.

On a separate note, those who read my blog regularly will know that a bugbear of mine is conferences that do not run on time or are disorganised – yes ETRE, that means you. By contrast, this conference is a testament to stereotypical Teutonic efficiency. Sessions start on time to the minute, and finish on time, to the minute. There are plenty of staff on hand to guide people around the large congress centre, and the pre-conference administration was exemplary. When I arrived I was handed not just a conference schedule, but a suggested set of lectures and meetings likely to be of particular interest to me based on my MDM interests. If only all conferences could be run by Germans.

On Frogs and Software Pricing

October 6, 2008

I am curious as to the level of take-up of the software as a service (SAAS) model, at least with respect to data management. Of course salesforce.com was the pioneer here, prompting a flood of interest in this approach. Many vendors offer their software in this way as an alternative to the usual “perpetual license” model, yet in many cases it seems to have had limited take-up. The latest vendor to offer its software in this way is Kalido, who are doing so via systems integrator BI partners. There is a lot of sense in SAAS from an end user perspective. A host (if you will excuse the pun) of problems with enterprise software are caused by inconsistencies between the recommended operating environment for a piece of software and what is actually lurking out there in the end user environment. Problems can be caused by esoteric combinations of DBMS, app server, operating system and who knows what, which are very difficult for vendors to replicate, no matter how much trouble they go to in creating test environments. Hosted solutions largely avoid such issues. Moreover companies can try out software for a limited price per month rather than having to commit up front to a full license, which means that they can pay as they go and pay only for what they use.

For vendors the issue is double-edged. By making it easy to try their software they may win customers that would otherwise not have chosen them, being unwilling to commit to an up-front license cost. However pitching the price is not easy. If your software used to sell at USD 300k plus 20% annual maintenance, then pricing it at USD 5k per month means you are effectively collecting the maintenance (USD 60k a year) without the software license fee. Yet if you pitch the monthly fee too high you will scare customers off and be back into a lengthy sales cycle. Ideally there is some way of pricing that draws customers in further as they use the software more, e.g. as they add more users or load more data, gradually increasing the monthly fee. This was actually one of the clever things in the salesforce.com model – it seems really cheap at the beginning, but as you add more and more users you end up with a pretty hefty monthly bill, and can end up wondering how that would have compared to a traditional licence model. But by then you are already committed.
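To make the arithmetic concrete, here is a minimal sketch (in Python, purely for illustration) comparing cumulative vendor revenue under the two models using the figures above; the five-year horizon, and the assumption that nothing else about the deal changes, are mine.

# Illustrative comparison of cumulative vendor revenue under the two models
# described above. The USD 300k licence, 20% maintenance and USD 5k/month
# figures come from the text; the five-year horizon is an assumption.

def perpetual_total(years, licence=300_000, maintenance_rate=0.20):
    """Up-front licence plus annual maintenance."""
    return licence + licence * maintenance_rate * years

def saas_total(years, monthly_fee=5_000):
    """Flat monthly subscription, no up-front licence."""
    return monthly_fee * 12 * years

for years in range(1, 6):
    print(f"Year {years}: perpetual = {perpetual_total(years):>9,.0f}  "
          f"SAAS = {saas_total(years):>9,.0f}")

On those numbers the flat monthly fee never claws back the licence revenue: after five years it has merely replicated the maintenance stream (USD 300k against USD 600k), which is exactly the vendor's dilemma.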

This is ideal from the vendor viewpoint. It is what I will term the “frog in the saucepan” pricing model. The legend goes (and I don’t fancy verifying its veracity) that if you toss a frog into a pan of boiling water it will jump out, but if you put it into a pan of cold water and slowly raise the temperature it does not notice and ends up being cooked. A pricing model that lures the end users in and then gradually creeps up without anyone getting upset is certainly what a vendor should aim for. Not all software may be amenable to such graduated pricing, but it seems to me that this is the key if vendors are to avoid SAAS being the “maintenance but no license” model.
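What such graduated pricing might look like is sketched below; the platform charge and per-user fee are entirely invented, but the shape is the point: cheap for a pilot, substantial once the deployment has grown and switching away has become painful.

# Hypothetical "frog in the saucepan" pricing: a low entry point, with the
# monthly bill rising as users are added. All fee levels here are invented
# purely for illustration.

def monthly_fee(users, base=500, per_user=75):
    """A small platform fee plus a per-user charge."""
    return base + per_user * users

# A deployment that starts small and grows looks cheap at first...
for users in (5, 25, 100, 500):
    print(f"{users:>4} users -> USD {monthly_fee(users):>7,} per month "
          f"(USD {monthly_fee(users) * 12:>9,} per year)")

Five users cost well under USD 1k a month; five hundred cost nearly USD 40k a month, comfortably past what a traditional licence might have been, but by then the water is already warm.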

The price of SOA?

April 10, 2007

I just read a provocative blog on SOA which raises an interesting point. Articles on SOA tend to focus on the technical issues, e.g. performance, standards etc. While I don’t agree with everything in the article, Robin Harris is correct in pointing out that how a new piece of infrastructure is perceived depends in part on the pricing mechanisms that end users see. Different IT departments charge in different ways. Some levy a “tax” on the business lines, perhaps based on a simple charge-back mechanism: “retail is 20% of the business, so they pay 20% of the IT department’s costs”. Others charge out for services in a much more granular way, e.g. a charge for each desktop, a charge for each GB of server storage, a charge for DBA services etc. The latter has the big advantage of being related to usage, meaning that heavy users of IT pay more, presumably in some relation to the costs that they cause the IT department to incur. The disadvantage is that the pricing can easily become over-complex, with user departments receiving vast itemised bills each month for storage, telecoms, data, networking, applications support etc in minute detail. This can cause some users to try and “game” the system by taking advantage of any flaws in the pricing model, which may make logical sense to the individual department but may actually cause the true costs to the enterprise to rise. For example, if the IT department prices storage in bands then a department may resort to all kinds of tricks to avoid moving into the next pricing band, and yet the effort involved in fiddling around may exceed the true cost to the company of just buying some more storage.
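To make the storage-band example concrete, here is a minimal sketch with invented bands and prices; the cliff between bands is what gives a department an incentive to fiddle with its usage rather than simply buy more storage.

# Hypothetical banded storage charge-back (all bands and prices invented).
# The jump between bands is what tempts departments to "game" their usage.

BANDS = [          # (upper limit in GB, monthly charge in USD)
    (100,    1_000),
    (500,    4_000),
    (2_000, 12_000),
]

def monthly_storage_charge(gb_used):
    for limit, charge in BANDS:
        if gb_used <= limit:
            return charge
    return 30_000  # top band for anything larger

# A department at 498 GB pays 4,000 a month; nudging over to 510 GB costs
# 12,000, even though the extra 12 GB costs the company almost nothing.
print(monthly_storage_charge(498), monthly_storage_charge(510))

Staying just under the band is perfectly rational for the department, but the hours spent archiving, deleting and shuffling files to stay under 500 GB may well cost the enterprise more than the storage itself.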

At one time I worked at Esso UK, and a study was done of the pricing mechanism, which was of the complex/sophisticated type. The recommendation, from a bright young manager called Graham Nichols, was simply to scrap the charge-back systems altogether and just levy a simplistic charge by department. This actually saved three million pounds in costs, which was what it took to administer the charge-back systems. No doubt years later things have changed, but this was an example of how the internal systems added no value at all to the company, and how simplifying them could remove a layer of administration. The drawback to simplified systems like this is that there is no incentive for increased efficiency, since departments know what they are going to be charged and so perceive no extra cost in heavier usage of the systems. This may eventually cause heavier systems costs, which will in turn be charged back to departments; it is a question of balancing the cost of the internal processes v the potentially higher costs that may occur.

SOA is an example of core infrastructure that pricing mechanisms have always struggled with, i.e. how do you justify an investment in infrastructure which has no benefit at first, but will incrementally benefit future applications? However the investment is charged back, the key point is that it should be justified, like any other. IT departments should view a new piece of infrastructure in the way other departments view capital expenses, e.g. a new fleet of trucks or a new factory. What is the payback compared to the investment? What is the IRR, NPV and time to break-even? I have not seen much, if anything, written about this aspect of SOA, and yet we all need to understand what productivity gains are actually going to occur before we head down this path. There may be significant productivity improvements, or none at all (indeed it could be worse than today), and yet commentators seem to take SOA as a given. If a whole industry moves in a certain direction then eventually this can be hard for end-user companies to avoid: if you decided a decade or two ago that client/server was just an expensive way of distributing data from one safe, secure place (the mainframe) to lots of unsafe and insecure places (PCs), then you could have tried to hang on to your mainframe, but eventually fewer and fewer applications would run on it, and you would have been obliged to switch whether you liked it or not. It is not yet clear that SOA has that kind of momentum. However I am sure that understanding its economic impact would be valuable for all sorts of reasons. I look forward to seeing someone address this issue seriously (I do not count breathless marketing material from vendors selling SOA services, claiming 10:1 improvements in everything from productivity to quality without any actual pesky real examples), but I am not holding my breath.

An unlikely source of BI ideas

February 23, 2007

I fully agree with an article by Steve Miller:

http://www.dmreview.com/article_sub.cfm?articleId=1076651

about how the Harvard Business Review is a surprisingly useful resource for people working in business intelligence. One of the recurring themes I have noticed over the years with projects going wrong is that the root cause of problems is more often people and communication than technology. Of course as technologists we are inevitably drawn to the technical issues around the latest technology – performance, how buggy the software is etc – but few pieces of commercial software are so poor that they will cause a project to fail directly because of the software (I exempt Commerce One from this generalisation; it was that bad). The useful thing about the Harvard Business Review is that it gives some insight into the kind of issues that are confronting senior management, or at least the kind of issues they are reading about.

However the HBR is rather hard work. There are rarely articles about technology directly (an exception was the November 2006 “Mastering the Three Worlds of Information Technology”) but technology often crops up within other articles, as Steve Miller points out. What I would add is that HBR can be a rather ponderous read. Their articles tend to be long and in-depth rather than bright and breezy, and there is a politically correct element about HR issues which can seem quite sanctimonious. But for every painfully worded article about the joys of diversity training there are several useful ones about current management trends and hot topics.

Speaking the same language as senior management is a stepping stone on the road to better understanding and communication, and that in turn will help improve the prospects of success for a BI project.

Impartial Advice?

January 17, 2007

HP continues its plans for the business intelligence space with an announcement of in-house data warehouse technology:

http://www.computerworld.com:80/action/article.do?command=viewArticleBasic&articleId=9008218&intsrc=news_ts_head

with a new business unit. The offering will be based around HP’s attempt at a “data warehouse appliance”, called Neoview. This is a competitor to Teradata and Netezza, but at this stage it is hard to tell how functional it is, since it is unclear whether there are any deployed customers other than HP itself.

The timing of this announcement is curious given HP’s acquisition of data warehouse consultancy Knightsbridge. Certainly data warehousing is a big market and Teradata is a tempting target – after all, most of the really big data warehouse deployments in retail, telco and retail banking use Teradata. There are lots and lots of juicy services to be provided in implementing an “appliance”, which in fact is no such thing. An appliance implies something that you just plug in, whereas data warehouse appliances are just a fast piece of hardware and a proprietary database, still requiring all the usual integration efforts, but with the added twist of non-standard database technology. Certainly plenty of business for consultants there.

However HP’s home-grown offering will not sit well with its newly acquired Knightsbridge consulting services, which made their reputation through a fiercely vendor-independent culture that always prided itself on choosing the best solution for the customer. People trust independent consultants to give them objective advice, since they are not (or at least they hope they are not) tied to particular vendor offerings. Presumably HP’s consultants will be pushing HP’s data warehouse solution in preference to alternatives, and so can hardly be trusted as impartial observers of the market. An analogy would be with IBM consultants, who while they may work with non-IBM software are clearly going to push IBM’s offerings given half a chance.

If you were a truly independent consultant, how would you react to a brand new data warehouse appliance with a track record of only one deployment, and that within the vendor itself? Would you immediately be pushing it as your preferred solution, or would you be counseling caution, urging customers to wait and see how the new tool settles down in the market and how early customers get on with it? If you are a Knightsbridge consultant now working for HP, what would your advice be? Would it be any different to the advice you’d have offered in December 2006, before you became part of HP?

This kind of conflict of interest is what makes things difficult for customers when choosing consultants. It is hard to find ones who are truly independent. Of course consultants always have their own agenda, but usually this is about maximising billable hours. If they are tied to a particular solution then that is fine if you are already committed to that solution, but you will need to look elsewhere for objective advice about it.

Data quality savings gone missing

December 12, 2006

One thing that continues to surprise me is how little developed the business case for data quality and master data management is. When I listen to data quality vendors speaking at conferences I can sit through whole sessions which do not mention the amount of actual dollars their clients saved by using their technology. In the case of MDM there is some excuse for this, since MDM as a term only recently became mainstream, and so few vendors have real projects that are in production with clients. Indeed just 4% of companies have completed an MDM project, according to a recent survey by Ventana (though 37% claim to have initiated a project). However in the highly related field of data quality there are no such excuses: tools have been around for years, and yet trying to find examples of well justified projects with a hard dollar payback is like pulling teeth.

While data quality has remained something of a backwater (the largest data quality vendor does around USD 50M in revenue) it is surely one of the areas where it should be relatively easy to produce a cost benefit case. After all, the tools will enable you to detect the proportion of bad data in a given application or enterprise, and it should not be beyond the wit of man to assign a cost to poor data quality. Even ignoring tricky things like customer satisfaction, poor data causes very real problems: deliveries going to the wrong places, misplaced inventory, incorrect payments, problems in manufacturing. In certain industries it can be worse: drilling an oil well in the wrong place is an expensive affair, for example. A 2003 AT Kearney study showed that USD 4 was saved for every dollar spent on data cleansing activity.

By going back and looking at completed projects and carrying out cost/benefit analysis, the data quality (and MDM) vendors will be doing themselves a favour: by quantifying the savings these projects bring they can not only make it easier to justify new projects, but also begin to justify the price of their products; indeed they may be able to command improved pricing if they can demonstrate that their products bring sufficient value to customers. It is a mystery to me why vendors have made such a poor show of doing so.

 

Opening the pricing box

October 5, 2006

The open source movement is creeping into BI in various guises, as pointed out by Jacqueline Ernigh. However, while Linux is undoubtedly taking a lot of market share from proprietary UNIX variants, progress higher up the stack is less clear. The article mentions a number of organisations that provide some form of open source reporting tools, e.g. Pentaho, Greenplum and Jaspersoft, and indeed there are others still. However it is by no means clear what penetration these are really getting. It was noticeable that one of the two customer examples reported merely had a database running on Linux, but had yet to deploy open source reporting tools.

The article unfortunately loses credibility when it cites an example of the savings to be made: “At pricing of $1,000 per user seat, for example, a company with 16,000 employees would need to pay $160,000 for a full-fledged BI deployment, Everett hypothesized.” Hmm. It is some time since I did my mathematics degree, but I am thinking that 16,000 * 1,000 = 16,000,000, i.e. 16 million dollars, not $160,000. Even if you are kind and assume that a major discount could be obtained for such a large deployment, even an unlikely 90% discount to list would still get you USD 1.6 million. I doubt that Dan Everett at the entirely respectable research company Ventana would really have made such a dramatic numerical blunder, so perhaps it was a journalistic error. Such carelessness does make one wonder about the accuracy of the rest of the article, which is a pity since it is discussing an interesting trend.

I have yet to come across significant deployments of open source reporting tools in production applications, but presumably they will catch on to a certain extent, just as MySQL is making steady inroads into the database market. Perhaps the most significant point at this stage is not one made by the article, though. The very existence of open source reporting tools puts pricing pressure on the established BI vendors. Procurement people doing large deals with BI vendors will treat the open source BI movement as manna from heaven, since it gives them a stick with which to beat down the price of reporting tools from the major vendors. Anyone about to engage in a major BI deployment or negotiation would be well advised to look carefully at these tools, if only as weapons in the armoury against pushy software salesmen. This is further bad news for the BI vendors, who have enough to worry about with the push of Microsoft into their space and the general saturation of the market. In this case even a handful of customer deployments will suffice to send a shiver down the spine of the major vendors.

 

 

Keep it cranky

September 7, 2006

I came across a really inspired blog the other day which I would highly recommend that you read. The Cranky Product Manager is written by an anonymous American product manager at a software company. There are several fine aspects to the blog, not least of which is that it is well written (let’s face it, too many blogs out there look like they were written by a dyslexic 12-year-old). However the best aspect is that her anonymity enables her to be delightfully rude about many aspects of the merry-go-round that is the software industry. Her post “Streetwalkers in Disguise” is a delightful example of this.

Those who have worked for some time in the software industry will have many a wry smile at the trials and tribulations of the Cranky PM, whose writing clearly reflects the very realities of product management, rather than some ultra-spun anodyne story that is so often fed to eager journalists as an “insider” story but is just clever PR.  

As someone who wades through more blogs than I care to admit to, I wish more blogs were like this one.

 

 

Darwin and data warehouse projects

September 4, 2006

Sreedhar Srikant writes about the importance of the logical data model in a data warehouse project in DM Review. This well-written article describes the process of building a model, highlights five pitfalls and suggests some ways of avoiding them. It is in this last area that I feel the article could be enhanced. In my experience there are two serious dangers facing a data warehouse project that go beyond project-level problems like difficulty in agreeing on a model. These are:

(a)   The project gets insufficient business buy-in through lack of a well-articulated and robust business case

(b)   The project takes too long to deliver, making it vulnerable to budget cutbacks since it has not shown tangible benefit early enough.

I am constantly surprised at how often IT projects in major corporations seem to get off the ground without a strong business case. IT projects compete for capital in a company with many other project proposals, and so it can be a Darwinian process when times get tough: projects with the strongest business case and sponsorship will survive. As a minimum, the project needs to set out the expected returns that it will make, set against the project costs, over a three-year (sometimes five-year) period. A simple example is shown below:

Costs:  $3M one-off, $2.16M annual

Benefits: $5M from year 2 onwards, with $2M only in year 1.

In this instance the project costs $3M to deliver and just over $2M to support each year (a Data Warehouse Institute survey showed that the average data warehouse costs 72% of its build costs to support every year). Against this are the benefits shown. On these figures the project is tolerably attractive, since it has a positive net present value (USD 536k using a typical 18% discount rate) and a decent 27% IRR, though its payback period is a little slow. While not stellar, it is a respectable case, and it is at least written in the language of business.
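As a rough sanity check, here is a minimal sketch of the arithmetic behind a case like this. The phasing of the cash flows across years is my own assumption, and the exact NPV moves around depending on how you phase them, but the shape of the case stays the same.

# NPV/IRR sketch for the example business case above. The phasing is an
# assumption: the $3M build cost falls in year 0, while the $2.16M support
# cost and the benefits ($2M in year 1, $5M thereafter) fall in years 1-3.

def npv(rate, cash_flows):
    """Net present value of cash flows indexed by year (year 0 undiscounted)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return, found by simple bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-3.0, 2.0 - 2.16, 5.0 - 2.16, 5.0 - 2.16]   # USD millions

print(f"NPV at 18%: {npv(0.18, flows):.2f}M")   # roughly USD 0.6M with this phasing
print(f"IRR: {irr(flows):.0%}")                 # roughly 27%

Shift the support cost into year 0, or spread the build cost differently, and the NPV moves by a couple of hundred thousand dollars; worth remembering before a review committee starts quibbling over the third significant figure.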

What might the project benefits be? These will vary from project to project and from industry to industry, but examples might include either profit-enhancing benefits, such as reduced customer churn or improved pricing ability, or cost reductions such as fewer misplaced deliveries due to improved data quality, or better procurement margins due to improved understanding of supplier spend. In order to articulate these you need to find a business sponsor, preferably one who has a problem related to poor information. Trust me; you should not have to look too far in a big company for one of these.

Having a business case that is properly set out will act as a safety net when project reviews happen, and reduce the chances of a project being cancelled when the knives come out. 

The second thing that can help your project is to deliver something tangible early. Traditional waterfall methodologies, often used by large systems integrators, are not always well suited to data warehouse projects, where requirements are often rather loose. The average data warehouse project takes 16 months to deliver according to TDWI, and that is a long time in this turbulent world when management has its budgets adjusted and people are looking for projects to cut. If your project can deliver something meaningful quickly, i.e. solve a piece of the overall problem, then your project sponsor has a much better chance of defending the project. If all the review committee can see is costs then things will be harder. Many real projects I have been involved with have been killed in this way.

One way to improve your odds of delivering something quickly is to use a data warehouse package, where at least some of the functionality is already pre-built for you.  Packages may or may not be cheaper than custom-build, but they should be quicker.  If you can pick off a chunk of the project and deliver reports back to the sponsor that add value early on then the project is much more likely to survive than one that is still delivering a grand enterprise logical data model. These days there are several packaged or semi-packaged alternatives to custom build.  A good overview of the packaged data warehouse market was done this year by Bloor and can be downloaded for free here.  

By developing a robust business case and by delivering benefits iteratively your project greatly improves its chances of survival.  When the budget sharks come circling, it is nice to have a life raft.

 

 

 

Culturing BI projects

August 30, 2006

In a beye network article Gabriel Fuchs raises the issue of culture when it comes to business intelligence projects. I think the issue is valid, but it goes far beyond the rather crude generalisations about nationalities that he makes in the article. In my experience corporate culture is every bit as important as the general culture of the country where you are doing the implementation. Take two contrasting examples from the oil industry (I choose these because I have worked for both companies): Exxon has a highly centralised culture driven from central office in the US. Shell, by contrast, is decentralised in nature and is much more consensus oriented. This is highly relevant when implementing a BI project, because in a company with a decentralised culture you need to take into account the needs of the local subsidiaries far more than in a centralised one. In Shell, if something was decided centrally then this was a bit like traffic signals in Manila: a starting point for negotiation. Someone in central office could define some new reporting need or technical standard, but the subsidiaries had a high degree of leeway to ignore or subvert the recommendation (Shell is less decentralised now than it was, but it is still highly decentralised compared to Exxon). In such situations it was important to get buy-in from all the potential influencers on the project; for example, it was wise to produce reports or BI capabilities that the subsidiaries found useful, rather than just the notional project sponsors in central office. By contrast in Exxon, while it is sensible practice to get buy-in, if you were in a hurry or things were intractable then central office could ram decisions down the throats of the subsidiaries without much resistance. Incidentally, both cultures have advantages and disadvantages, and both companies are highly successful, so it would be a mistake to think that one culture is inherently better than the other.

In BI projects such issues come up a lot when discussing things like data models, and when agreeing on international coding structures v regional or local ones. Sometimes the projects that did not go well were ones where these inherent cultures were ignored. For example Shell spent a lot of time and money trying to ram together five European SAP implementations, and ultimately failed to do so, ending up with five slightly different implementations. There was no technical reason why this could not be done, or in truth any real business reason, but it went against the culture of the company, so it encountered resistance at every level.

In my view such company cultural issues are very important to consider when carrying out enterprise BI projects, and are often ignored by external consultants or systems integrators who blindly follow what is in the project methodology handbook.