My prize for the most creative marketing jargon of the week goes to IBM, who announced that they now consider their offerings to be a “third generation” of business intelligence. Come again? In this view of the world, first generation BI was mainframe batch reporting, while the second generation was data warehousing and associated BI tools like Cognos, Business Objects etc. So, as you wait with bated breath for the other shoe to drop, what is the “new generation”? Well, it would seem that this should include three things:
(a) pre-packaged applications
(b) a focus on the access and delivery of business information to end users, supporting both information providers and information consumers, and
(c) support access to all sorts of information, not just that in a data warehouse.
Well, (a) is certainly a handy definition, since IBM just happens to provide a series of pre-built data models (e.g. their banking data model) and so (surprise) satisfies the first of these criteria. It is in fact by no means clear how useful such packages are outside a few specific sectors that lend themselves to standardisation. Once you take a pre-existing data model and modify it even a little (as you will need to), you immediately create a major problem in supporting the next vendor upgrade. This is indeed a major challenge that customers of the IBM banking model face. Nothing in the paper describes any new way of delivering these models, e.g. any new semantic integration or versioning capability.
Criterion (b) is essentially meaningless, since any self-respecting BI tool could reasonably claim to focus on information consumers. After all, the “universe” of Business Objects was a great example of putting user-defined terminology in front of the customer rather than just presenting tables and columns. Almost any existing data warehouse with a decent reporting tool could claim to satisfy this criterion.
On (c) there is perhaps a kernel of relevance, since there is no denying that some information, e.g. unstructured data, is not kept in a typical data warehouse. Yet IBM itself does not appear to have any new technology here; it merely claims that DB2 Data Joiner allows links to non-DB2 sources. All well and good, but this is not new. They haven’t even done something like OEM an unstructured query product such as Autonomy, which would make sense.
Indeed, all that this “3rd generation” appears to be is a flashy marketing label for IBM’s catalog of existing BI-related products. They have Visual Warehouse, which is a glorified data dictionary (now rather oddly split into two separate physical stores) and scheduling tool, just as it always was. They talk about ETI Extract as an ETL tool partner, which is rather odd given their acquisition of Ascential (after all one of the two pre-eminent ETL tools) and given ETI’s near-disappearance from the market in recent years. They have DB2, which is a good database with support for datatypes other than numbers (just like other databases). They also have assorted other tools such as Vality for data quality.
All well and good, but this is no more and no less than they had before. Moreover, it could well be argued that this list of tools misses several things that could be regarded as important in a “next generation” data warehouse architecture. The paper is silent on the connection between this and master data management, which is peculiar given IBM’s buying spree in this area and its direct relevance to data warehousing and data quality. There is nothing about time-variance capabilities and versioning, which are increasingly important. What about the ability to handle a federation of data warehouses and synchronise them? What about truly business model-based data warehouse generation and maintenance? How about the ability to be embedded into transactional systems via SOA? What about “self-discovery” data quality capabilities, which are starting to appear in some start-ups?
Indeed, IBM’s marketing group would do well to examine Bill Inmon’s DW 2.0 material, which, while not perfect, at least has a decent go at setting out some of the capabilities one might expect from a next generation business intelligence system.
There is no denying that IBM has a lot of technology related to business intelligence and data warehousing (indeed, its buying spree has meant that it has a very broad range indeed). Yet there is not a single thing in this whitepaper that constitutes a true step forward in technology or design. It is simply a self-serving definition of a “3rd generation” that has nothing to do with limitations in current technology or new features that might actually be useful. Instead it just sets out a definition which conveniently fits the menagerie of tools that IBM has developed and acquired in this area. To put together a whitepaper that articulates how a series of acquired technologies fits together is a valid exercise, and in truth that is what this paper is. To claim that it represents some sort of generational breakthrough in an industry is just hubris, and destroys credibility in the eyes of any objective observer. This is by no means unique in the software industry, but it is precisely why software marketing has a bad name amongst customers, who are constantly promised the moon but delivered something a lot more down to earth.
I suppose when presented with the choice of developing new capabilities and product features that people might find useful, or just relabelling what you have lying around already as “next generation”, the latter is a great deal easier. It is not, however, of any use to anyone outside a software sales and marketing team.