Andy on Enterprise Software

What lurks within

March 11, 2009

I have recently been spending some time looking at the data quality market, and a few things seem to pop up time and again. The first thing is, in talking with customers, just how awful the quality of data really is within corporate systems. One major UK bank found 8,000 customers whose age was over 150 according to their systems. All seemingly academic (if you are taking money out of your account, who cares what your age is?) until some bright spark in marketing decided that selling life insurance to these customers would be a fine idea.
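For what it's worth, a check that catches this sort of thing is trivial to write. Here is a minimal sketch in Python (pandas), using a hypothetical file name and column name, of the kind of sanity rule that would have flagged those 150-year-old customers:

```python
# Minimal sketch of a data quality sanity check.
# "customers.csv" and "date_of_birth" are hypothetical; adjust to your schema.
import pandas as pd

customers = pd.read_csv("customers.csv", parse_dates=["date_of_birth"])

# Derive an approximate age from date of birth.
today = pd.Timestamp.today()
age = (today - customers["date_of_birth"]).dt.days // 365

# Flag anything outside a plausible human age range.
suspect = customers[(age < 0) | (age > 120)]
print(f"{len(suspect)} customers with implausible ages")
```

Rules this simple cost almost nothing to run, which makes it all the more striking how rarely they seem to be applied.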

Story after story confirms some really shocking data errors lurking beneath most operational systems. These are the same operational systems used to generate the data for the year-end accounts that senior executives happily sign off, on pain of jail time these days. I hope no one shows these same execs the data inside some of these systems, or they might start to get very nervous indeed.

Yet in a survey we ran last year, only about a third of companies had invested in data quality tools at all! Does anyone else find this in any way scary? Do you have any entertaining data quality stories to share?

4 comments so far

At Omikron we did a data quality project with Vattenfall – a large European power and heat supplier – concerning product master data. While we were verifying parts that seemed similar, the suppliers had to admit that they had supplied identical parts under different names and at different prices to various Vattenfall locations.

More on the story at: http://e-pages.dk/omikron/14/

Data quality is a huge problem in every organization, be it duplication of data (which is a severe problem), standardizing/cleansing the data, or defining the best-of-breed records.

My question would be: how are companies trying to solve the problem? Is it worth investing in a DQ vendor, or buying an MDM tool which would have matching capabilities?

I would really be interested in hearing comments about the trend in organizations.

This is not a new problem. Back in the 1970s I was working for what was then ICL, the UK’s largest computer manufacturer. In the lead-up to putting a new range of machines into production we were checking that, as far as possible, none of the components were single-sourced. During the exercise we found some unfamiliar-looking part numbers on the system, in records loaded from legacy systems, and tried to track down their provenance. One particular group of parts dated back to the days of punched-card tabulators and Hollerith – the company which later became IBM, with a British subsidiary that was one of a group of companies that eventually gave rise to ICL. Our research indicated that we were planning to incorporate parts probably dating back to the end of the 19th century, and for which any drawings that still existed would have been owned by IBM, our biggest competitor. We also found parts marked ‘not to be used after 1961’ with references to drawings held in a long-demolished plant.

That only one-third of firms have invested in DQ software should not be scary. Most data quality issues can be resolved without DQ tools, the noted “age” issue being a perfect example. When the rules become highly complex – customer matching and address correction, for example – a DQ tool is a must. Implying that acquisition of a DQ tool is a prerequisite to solving our ubiquitous data quality issues simply extends the problem.
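To illustrate the distinction the commenter is drawing: a range check is one line, but even naive duplicate matching needs a similarity measure and a tuned threshold, which is where tools start to earn their keep. A rough standard-library sketch in Python, over hypothetical customer records:

```python
# Rough sketch of why customer matching is harder than a range check.
# Uses standard-library difflib; the records and 0.8 threshold are illustrative.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalised string similarity between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

records = [
    "ACME Ltd, 12 High Street, London",
    "Acme Limited, 12 High St., London",
    "Apex Holdings, 3 Mill Lane, Leeds",
]

# Compare every pair and flag likely duplicates above the threshold.
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        score = similarity(records[i], records[j])
        if score > 0.8:
            print(f"Possible duplicate ({score:.2f}): {records[i]!r} / {records[j]!r}")
```

Real matching engines go well beyond this (phonetic keys, address parsing, weighted fields), which is exactly the complexity that justifies a dedicated tool.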


