I’m not sure who had the idea of holding a data quality conference on Halloween, but it was either a lucky coincidence or a truly inspired piece of scheduling. DAMA ran today in London, and continues tomorrow. The timing also suits the festival itself, which originated as a two-day Celtic celebration during which the real world and that of the ghosts temporarily overlapped. Later the Christian church tried to claim it as its own by naming November 1st All Hallows Day, with 31st October becoming All Hallows Eve, which in the fullness of time became Halloween. I will resist the temptation to point out the deterioration in data quality over time that this name change illustrates. The conference is held at the modern Victoria Park Plaza hotel, which is that rare thing in London: a venue that seems vaguely aware of technology. It is rumoured that there is even wireless access here, but perhaps that is just the ghosts whispering.
The usual otherworldly spirits were out on this day: the ghouls of the conference circuit were presenting (such as me), while scary topics like data architecture, metadata repositories and data models had their outings. The master data management monster seemed to be making a bid to take over the conference, with assorted data quality and other vendors who had never heard the phrase a year ago confidently asserting their MDM credentials. You’d have to be a zombie to fall for this, surely? In at least one pitch I heard a truly contorted segue from classic data quality issues into MDM, with a hastily added slide basically saying “and all this name and address matching stuff is really what MDM is about, anyway”. Come on guys, if you are going to try to pep up your data profiling tool with an MDM spin, at least do a little research. One vendor gave a convincing-looking slide about a new real-time data quality tool which I know for a fact has no live customers, but then such horrors are nothing new in the software industry.
The conference itself was quite well attended, with about 170 proper customers, plus the usual hangers-on. Several of the speaker sessions over the conference feature genuine experts in their field, so it seems the conference organisers have managed to minimise the witches’ brew of barely disguised sales pitches by software sales VPs masquerading as independent “experts” that all too often pack conference agendas these days.
Just as the allure of ghosts is undiminished even in our modern age, so the problems around the age-old issue of data quality seem as spritely (sorry, I couldn’t resist that one) as ever. New technologies appear, but data quality in large corporations seems largely impervious to technical solutions. It is a curious thing: given that data quality problems are very, very real, why does no one seem able to make any real money in this market? Trillium is the market leader, and although it is no longer entirely clear what their revenues are, about USD 50M is what I see in my crystal ball. Other independent data quality vendors now swallowed by larger players had revenues in the sub-USD 10M range when they were bought (Dataflux, Similarity Systems, Vality). First Logic was bigger at around USD 50M, but the company went for a song (the USD 65M price tag gives a revenue multiple no one will be celebrating). Perhaps the newer generation of data quality vendors will have more luck. Certainly the business problem is as monstrous as ever.
I am posting this just on the stroke of midnight. Happy Halloween!