Margaret Harvey points out in a recent article that the effort of integrating the IT systems of two merged companies can be a major constraint on the success of the merger. Certainly this is an area that is often neglected in the heat of the deal. But once the investment bankers have collected their fees and the acquisition or merger is done, what is the best approach to integrating the IT systems? What is often missed is that, beyond the obvious differences in systems (one company might use SAP for ERP, the other Oracle), the immediate problem is that the two companies will have completely different coding systems and terminology for everything: from the chart of accounts through to product and asset hierarchies, customer segmentation, procurement supplier structures and even HR classifications. Even if many of the systems come from the same vendor, this will not help much, since all the business rules and definitions will still differ between the two companies.
To begin with, the priority should be to understand business performance across the combined new entity, and this does not necessarily involve ripping out half the operational systems. When Halifax and Bank of Scotland merged to form HBOS, both banks had the same procurement system, but it was soon discovered that this helped little in taking a single view of suppliers across the new group, given the different classification of suppliers in each system. Converting all the data from one system into the other was estimated to take well over a year; instead they put in a data warehouse which mapped the two supplier hierarchies together, enabling a single view to be taken even though the two underlying systems remained in place. This system was deployed in just three months, giving an immediate view of combined procurement and enabling large savings to be made rapidly. A similar approach was taken when Shell bought Pennzoil, and when Intelsat bought Loral.
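The warehouse-mapping idea can be sketched as follows. This is a minimal illustration, not the HBOS design: the supplier names, codes and spend figures are entirely invented, and each mapping table stands in for the conformed dimension a real warehouse would maintain.

```python
# Each source system codes the same suppliers differently.
system_a_spend = {"SUP-001": 120_000, "SUP-002": 45_000}   # company A's codes
system_b_spend = {"V9001": 80_000, "V9002": 45_000}        # company B's codes

# Mapping tables held in the warehouse layer: source code -> canonical supplier.
system_a_map = {"SUP-001": "Acme Stationery", "SUP-002": "Global IT Services"}
system_b_map = {"V9001": "Acme Stationery", "V9002": "Facilities Co"}

def combined_spend(spends_and_maps):
    """Aggregate spend per canonical supplier across all source systems,
    leaving both underlying systems untouched."""
    total = {}
    for spend, mapping in spends_and_maps:
        for code, amount in spend.items():
            supplier = mapping[code]
            total[supplier] = total.get(supplier, 0) + amount
    return total

view = combined_spend([(system_a_spend, system_a_map),
                       (system_b_spend, system_b_map)])
# "Acme Stationery" now appears once, with spend combined from both systems.
```

The point of the design is that the mapping tables are the only new artefact: neither operational system needs to be converted or retired to get the single group-wide view.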
It makes sense initially to follow this approach so that a picture of operating performance can quickly be built up, but at some point you will want to rationalize the operational systems of the two companies, in order to reduce support costs and eliminate duplicated skill sets. It would be helpful to draw up an asset register of the IT systems of the two companies, but just listing the names and broad functional areas of the systems is of only limited use. You also need to know the depth of coverage of each system, and the likely cost of replacement. Clearly, each company may have some systems in much better shape than others, so unless it is a case of a whale swallowing a minnow, it is likely that some selection of systems from both sides will be in order. To have a stab at estimating replacement costs, you can use a fairly old but useful technique for estimating application size: function points.
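One way to make the asset register concrete is as a small structured inventory. The field names and entries below are purely illustrative, a sketch of the information the text argues each entry should carry (functional area, depth of coverage via estimated size, and cost):

```python
from dataclasses import dataclass

@dataclass
class SystemEntry:
    name: str
    company: str            # which merger party owns it
    functional_area: str    # e.g. "procurement", "finance ERP"
    function_points: int    # estimated size, a proxy for depth of coverage
    annual_support_cost: float

register = [
    SystemEntry("ProcureOne", "Company A", "procurement", 1800, 250_000),
    SystemEntry("BuySmart",   "Company B", "procurement", 2400, 400_000),
    SystemEntry("FinCore",    "Company A", "finance ERP", 5200, 900_000),
]

# Where both companies cover the same functional area, compare candidates
# on cost and size rather than simply keeping the acquirer's system.
overlap = [s for s in register if s.functional_area == "procurement"]
keeper = min(overlap, key=lambda s: s.annual_support_cost)
```

A real selection would of course weigh functionality and system health as well as running cost; the sketch only shows how a register gives the decision a structured basis.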
Function points are a measure of system “size” that does not depend on knowing the underlying technology used to build the system, so the measure applies equally to packages and custom-built systems. Once you know that a system is, say, 2,000 function points in size, there are well-established metrics on how much effort it takes to replace such a system: for transaction systems, a ballpark figure of 25-30 function points per man-month can be delivered, which does not seem to change much whether it is a package or built in-house. Hence a 2,000 function point transaction system will take about 80 man-months to build or implement, as a first-pass estimate. MIS systems are less demanding technically than transaction systems (as they are generally read-only) and better productivity figures can be achieved there. These industry averages turned out to be about right when I was involved in a metrics program at Shell in the mid 1990s. At that time a number of Shell companies counted function points and discovered productivity of around 15-30 function points per man-month delivered for medium-sized transaction systems, irrespective of whether these were in-house systems or packages. Larger projects had lower productivity and smaller projects higher productivity, so delivering a 20,000 function point system will be a lot worse than a 2,000 function point system in productivity terms, i.e. fewer function points per man-month will be delivered on the larger system. Counting function points in full is tedious, and that is the single factor that has relegated the technique to something of a geek niche, yet there are short-cut estimating techniques that are fairly accurate and vastly quicker than counting in full. By using these short-cut techniques a broadly accurate picture of an application inventory can be pulled together quite quickly, and this should be good enough for a first-pass estimate.
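The arithmetic above can be captured in a couple of lines. The 25-30 function points per man-month figure is the ballpark quoted in the text for transaction systems; the function name and defaults below are my own illustration, not a standard:

```python
def replacement_effort(function_points, fp_per_man_month=25):
    """First-pass replacement effort in man-months from a function point count.

    For transaction systems a delivery rate of roughly 25-30 function points
    per man-month is a reasonable ballpark; MIS (read-only) systems tend to
    do better, and very large projects do worse.
    """
    return function_points / fp_per_man_month

# The worked example from the text: a 2,000 FP transaction system
# at 25 FP per man-month comes out at 80 man-months.
effort = replacement_effort(2000)  # 80.0
```

Remember that productivity falls with project size, so applying a flat rate to a 20,000 function point system would understate the effort; a serious estimate would scale the rate down for large systems.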
There are a host of good books that discuss project metrics and productivity factors, which you can read for more detailed guidance; two are listed below. The point here is that by constructing an inventory of the IT applications of both companies involved in a merger you can get a better feel for the likely cost of replacing those systems, and hence make a business case for doing so. In this way you can take a structured approach to deciding which systems to retire, and avoid the two parties on either side of the merger simply defending their own systems without regard to functionality or cost of replacement. Knowing the true costs of systems integration should be part of the merger due diligence.
Further reading:
Software Engineering Economics, Barry Boehm
Controlling Software Projects, Tom DeMarco