Bloor analyst Harriet Fryman's article "Mastering master data" raises some excellent points. She correctly points out that master data, i.e. things other than business transactions (such as "price", "customer", "brand", "product", etc.), is hopelessly fragmented in every organization of any size. Research from Tower Group indicates that a large company has 11 separate systems that think they own the master definition of "product", and this always seemed to me an optimistic number. In the two global corporations I have worked in, the number would be in the hundreds.
This is partly because even if you have a single ERP vendor, every separate implementation of that ERP system has a slightly different definition, and ERP systems are only some of the many systems that need master data. Since the major application vendors concentrate on turf-grabbing from each other (as we see this week with Oracle's takeover of the ailing Siebel), it is not in their interests to make it easy for other systems to interact with theirs. Their answer is "buy everything from us," a wholly impractical one since no single vendor (not even SAP) covers more than 30-70% of a company's business needs (and that is according to SAP's ex-CEO). Hence the big application vendors are ill-suited to pick up the crown of the master data kingdom.
Instead, what is needed are applications that are designed from the outset on the assumption that there are many, related versions of the truth, and that software has to be able to deal with this. This was the assumption on which KALIDO was developed at Shell, and any other vendor hoping to gain significant market share in this area needs to be able to deal with this reality also. Paradoxically, because the apps vendors are locked like dinosaurs in their "footprint" wars, I believe it will be the small furry mammal equivalents in software who will be able to produce working solutions here, since they do not have a massive legacy of application code to defend. Roll on the evolution.