Stylish MDM

We have recently completed a major survey into the deployment styles used in MDM implementations. My colleague Dave Waddington has posted a summary of the results here. As can be seen, MDM projects are turning out to be quite meaty in size, but encouragingly the success rates were higher than I was expecting.

Several quite interesting results came out of the survey, and we will be doing further research into this area. The full report can be purchased from our website.

In the project jungle, your MDM initiative needs claws

Matthew Beyer makes a start on an approach to tackling master data initiatives.  Some of what he says makes good sense, such as “think strategically but act tactically”.  However, I’d like to suggest a different way to prioritise.  The biggest problem with master data is one of scale.  Enterprises have many systems and many types of master data, often far beyond the “10 systems” used as an illustration in the article.  To give a sense of the magnitude of the problem in a large company, just one Shell subsidiary had 175 interfaces left AFTER it had implemented every module of SAP.  Hence an approach that says “just map all the master data in the enterprise and catalog which systems use each type of data” is going to be a severely lengthy process, and will probably get cancelled after a few months when there is little to show for all the pretty diagrams.

I believe that a master data initiative needs to justify itself, just like any other project fighting for the enterprise’s scarce resources and capital.  Hence a good approach is to start by identifying and costing the problems that may be associated with master data, and putting a price tag on them.  For example, poor customer data could result in duplicate marketing costs, lower customer satisfaction, or misplaced deliveries.  Being unable to get a view of supplier spend across the enterprise (a position 68% of customers in one survey admitted to at a 2006 UK procurement conference) has a cost, both in failing to get an optimal deal with suppliers and in duplicate suppliers.  These things have real costs associated with them, and so, if fixed, have real hard-dollar benefits.  Interviewing executives in marketing, procurement, finance, operations etc. will soon start to tease out which operational issues are actually causing the business pain, and which have the greatest potential value if they could be fixed.  Business people may not be able to put a precise price tag on each problem, but they should be able to estimate at least a range.  If they cannot, then it is probably not that pressing a problem and you can move on to the next one. 

At the end of such an interview process you will have a series of business problems with estimates of potential savings, and can map this against the master data associated with these business processes.  Now you have a basis for priority.  If it turns out that there are tens of millions of dollars of savings to be gained from fixing problems with (say) supplier data, then that is a very good place to start your MDM pilot.

Such an approach assures you that you will be able to put a business case together for an MDM initiative, even if it has limited scope at first.  Such an initiative has a much better chance of approval and ongoing survival than something that is perceived to be a purist or IT-led data modelling initiative. 

Provided that you adopt an architecture that can cope with master data in general, and not just one type specifically (i.e. try to avoid “hubs” that only address one type of master data), you can build on the early success of a pilot project confident that the approach you have taken will be useful across the enterprise.  By getting an early quick win in this way you build the credibility for follow-on projects, and can start to justify ongoing investment in protecting the integrity of master data in the future, e.g. by setting up a business-led information asset competence centre where ownership of data is clearly defined. 

IT projects of any kind that fail to go through a rigorous cost-benefit case risk not being signed off, and then being cancelled part way through.  The race for funds and resources in a large company is a Darwinian one, so equip your MDM project with the ROI teeth and claws it needs to survive and justify itself.  When times turn sour and the CFO draws up a list of projects to “postpone”, a strong business-driven ROI case will go a long way to ensuring your MDM project claws its way to the top of the heap. 

Darwin and data warehouse projects

Sreedhar Srikant writes about the importance of the logical data model in a data warehouse project in DM Review. This well-written article describes the process of building a model, highlights five pitfalls and suggests some ways of avoiding them.  It is in this last area that I feel the article could be enhanced.  In my experience there are two serious dangers facing a data warehouse project that go beyond project-level problems such as difficulty agreeing on a model.  These are:

(a)   The project gets insufficient business buy-in through lack of a well-articulated and robust business case.

(b)   The project takes too long to deliver, making it vulnerable to budget cutbacks since it has not shown tangible benefit early enough.

I am constantly surprised how often IT projects in major corporations get off the ground without a strong business case.  IT projects compete for capital with many other project proposals, and so it can be a Darwinian process when times get tough: projects with the strongest business case and sponsorship will survive.  As a minimum, the project needs to set out its expected returns against the project costs over a three-year (sometimes five-year) period.  A simple example is shown below:

Costs:  $3M one-off, $2.16M annual

Benefits: $5M from year 2 onwards, with $2M only in year 1.

In this instance the project costs $3M to deliver and just over $2M to support each year (a Data Warehouse Institute survey showed that the average data warehouse costs 72% of its build cost to support every year).  Against this are the benefits shown.  The project is tolerably attractive, since it has a positive net present value ($536k using a typical 18% discount rate) and a decent 27% IRR, though its payback period is a little slow.  While not stellar, it is respectable as cases go, and it is at least written in the language of business. 
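For readers who want to reproduce the arithmetic, here is a minimal Python sketch.  The exact cash-flow timing is my assumption (the article only gives the headline figures): the $3M build cost in the first period, a net -$0.16M in the second ($2M benefit less $2.16M support), then +$2.84M a year thereafter, all discounted Excel-NPV-style, i.e. with every flow pushed one period into the future.

```python
def npv(rate, cashflows):
    """Excel-style NPV: the first listed cash flow is discounted by one full period."""
    return sum(cf / (1 + rate) ** (i + 1) for i, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.9, hi=10.0, tol=1e-9):
    """Find the rate where NPV crosses zero by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Cash flows in $M: period 1 = -3.0 build cost;
# period 2 = 2.0 benefit - 2.16 support; periods 3-4 = 5.0 benefit - 2.16 support.
flows = [-3.0, -0.16, 2.84, 2.84]

print(f"NPV at 18%: ${npv(0.18, flows) * 1000:,.0f}k")  # roughly $536k
print(f"IRR: {irr(flows):.0%}")                         # roughly 27%
```

Under these assumptions the output matches the $536k NPV and 27% IRR quoted above; shift the build cost to period zero and the NPV changes, a reminder that such cases are sensitive to the timing convention chosen.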

What might the project benefits be?   These will vary from project to project and from industry to industry, but examples might include profit-enhancing benefits, such as reduced customer churn or improved pricing ability, or cost reductions, such as fewer misplaced deliveries due to improved data quality, or better procurement margins due to improved understanding of supplier spend.  In order to articulate these you need to find a business sponsor, preferably one who has a problem related to poor information.  Trust me: you should not have to look too far in a big company for one of these. 

Having a business case that is properly set out will act as a safety net when project reviews happen, and reduce the chances of a project being cancelled when the knives come out. 

The second thing that can help your project is to deliver something tangible early.  Traditional waterfall methodologies, often used by large systems integrators, are not always well suited to data warehouse projects, where requirements are often rather loose.  The average data warehouse project takes 16 months to deliver according to TDWI, and that is a long time in this turbulent world, when management has its budgets adjusted and people are looking for projects to cut.  If your project can deliver something meaningful quickly, i.e. a piece of the overall problem, then your project sponsor has a much better chance of defending it.  If all the review committee can see is costs, then things will be harder.  Many real projects I have been involved with have been killed in this way.

One way to improve your odds of delivering something quickly is to use a data warehouse package, where at least some of the functionality is already pre-built for you.  Packages may or may not be cheaper than a custom build, but they should be quicker.  If you can pick off a chunk of the project and deliver reports back to the sponsor that add value early on, then the project is much more likely to survive than one that is still delivering a grand enterprise logical data model. These days there are several packaged or semi-packaged alternatives to a custom build.  A good overview of the packaged data warehouse market was done this year by Bloor and can be downloaded for free here.  

By developing a robust business case and by delivering benefits iteratively your project greatly improves its chances of survival.  When the budget sharks come circling, it is nice to have a life raft.




Elusive Return on Investment

An article in CIO magazine revealed a fascinating paradox. In a Cutter survey, 63% of IT executives claimed that they are required to present a cost justification for IT projects, yet according to Aberdeen Group, just 5% of companies actually collect ROI data after the event to see whether the benefits actually appeared. I have to say that my own experience in large companies bears out this “ROI gap”. Most post-implementation reviews occur when a project has gone hideously wrong and a scapegoat is required. There are exceptions – I have been impressed at the way that BP rigorously evaluates its projects, and Shell Australia used to have a world-class project office in which IT productivity was rigorously measured for several years (a sad footnote is that this group was eventually shut down to reduce costs). However, overall I think these apparently contradictory survey findings are right: a lot of IT projects have to produce a cost/benefit case, but hardly ever are these benefits tested.

It is not clear that the failure to do so is an IT problem; rather, it is a failure of business process. Surely the corporate finance department should be worried about this lack of accountability – it is hardly IT’s problem if the business doesn’t bother to check whether projects deliver value. It really should not be that hard. Project costs (hardware, software, consultants, personnel costs) are usually fairly apparent or can be estimated (unless, it would seem, you work in government), while benefits are more slippery. This is mainly because they vary by project and so don’t fit into a neat template. However, they will usually fall into the broad categories of improved productivity (e.g. staff savings), improved profitability (e.g. reduced inventory), or (indirect and woollier) improved customer value (e.g. the falling price of PCs over the years). It should be possible to nail down estimates of these by talking to the business people who will ultimately own the project. Once these benefits have been estimated, it is a simple matter to churn out an IRR and NPV calculation – these are taught in every basic finance class, and Excel conveniently provides formulae to make them easy. Of course there are some IT projects that don’t require a cost-benefit case, regulatory ones being one example (“do this or go to jail”), but the vast majority should be possible to justify.
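The after-the-event check that the Aberdeen figure says almost nobody does need not be elaborate. A hypothetical sketch (the benefit categories and all the numbers here are invented for illustration): record the annual benefits projected in the business case, then after go-live compare them with what the post-implementation review actually found.

```python
# Hypothetical figures, $M per year: projected benefits from the business case
# versus what the post-implementation review actually measured.
projected = {"staff_savings": 1.2, "reduced_inventory": 2.5, "churn_reduction": 1.3}
actual = {"staff_savings": 0.9, "reduced_inventory": 2.7, "churn_reduction": 0.4}

for category, planned in projected.items():
    realised = actual[category]
    print(f"{category}: projected {planned:.1f}, actual {realised:.1f}, "
          f"variance {realised - planned:+.1f}")

realised_pct = 100 * sum(actual.values()) / sum(projected.values())
print(f"Overall: {realised_pct:.0f}% of projected benefits realised")
```

Even a one-page review of this kind closes the loop: it tells finance whether the business case was honest, and tells the project team which categories of benefit it habitually over- or under-estimates.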

By going through a rigorous analysis of this type, and then checking afterwards to see what really happened, IT departments will build credibility with the business, something that most CIOs could do with more of.