We have now completed our survey of data quality. Drawing on 193 responses from IT and business staff around the world, it produced some very interesting findings. Among these was that 81% of respondents felt that data quality is about much more than just customer name and address, which is the focus of most vendors in the market. Moreover, customer name and address data ranked only third in the list of data domains that survey respondents found most important. Both product and financial data were felt to be more important, yet product data is the focus of barely a handful of vendors (Silver Creek, Inquera, Datactics), while of all the dozens of data quality vendors out there, few indeed focus on financial data. Name and address is of course a common issue, and conveniently it is well structured, with plenty of well-established algorithms available to attack it. Yet surely the vendor community is missing something when customers rate other data types higher in importance?
Another recurring theme is the lack of attention given to measuring the costs of poor data quality. Many respondents make no effort to measure this at all, and then complain that it is hard to make a business case for data quality. “Well, duh”, as Homer Simpson might say. Estimates given by survey respondents seemed very low compared both to our experience and to anecdotes given in the very same survey. One striking example: “Poor data quality and consistency has led to the orphaning of $32 million in stock just sitting in the warehouse that can’t be sold since it’s lost in the system.” This company, at least, has no difficulty in justifying a data quality initiative. The survey had plenty of other interesting insights too.
The full survey and analysis, all 33 pages of it, can be purchased from here.