Andy on Enterprise Software

The Bulldog gets a housemate

July 21, 2008

Microsoft generally likes to acquire software companies when they are quite small, with a dozen or two employees. In this way they can assimilate the development staff into Redmond and into the Microsoft way of doing things. An example of this came last week, when they decided to acquire a data quality technology vendor. There are literally dozens of data quality vendors out there, most fairly small, and so there was plenty of choice. They opted for Zoomix, a small Israeli company which I first encountered in 2006, though it was founded in 1999. Zoomix had some quite clever marketing, claiming “self-learning” technology as a way of making data profiling in particular more productive. In this way it could be compared to Exeros, although the technology underpinnings are quite different.

In this case the R&D team will move into Microsoft’s existing technology centre in Israel. This is a logical move by Microsoft, which acquired Stratature in order to gain an MDM capability. That product is currently being retooled under the code-name Bulldog, and a data quality offering to complement it is a natural fit. The timing of Bulldog’s release is unclear at this point, as it is folded into the SQL Server release timeframe.

Initiate not going for Initial

June 27, 2008

In what could not be described as a surprise move, Initiate Systems has just pulled its previously planned Initial Public Offering. The turmoil in the capital markets means that it is a difficult time to raise money right now, so it seems sensible to wait for a better moment to go public. This does raise the question of whether Initiate will consider raising money another way (update – it just did a USD 26 million private round), or indeed whether a potential predator might consider this a good time to pounce.

In itself this has little impact, but a lack of exit opportunities is a poor thing for the enterprise software sector in general, as venture capital firms are less likely to invest in earlier-stage firms with one of the two exit routes (the other being a trade sale) closed. Initiate had made excellent market progress with its MDM technology, and it would have been nice to see a pure-play MDM vendor going public.

The dust clears (a bit)

June 5, 2008

I have wondered for some time about Informatica’s intentions in the MDM space. There had been some market rumours about them possibly buying Siperian, but it seems as if their not-so-secret meetings with Siperian were actually to form a partnership, which was just announced.

This is an eminently sensible partnership in my view. In most MDM projects there is going to be a lot of gathering of data from multiple places, and so MDM vendors typically need to rely on integration, or at least data movement, technologies that enterprises have already deployed. Assuming that this relationship will be exclusive (unclear from the press release), Siperian has locked in a relationship with the last remaining independent ETL and integration player of note (Ab Initio is also still out there, but is so secretive that it is part vendor, part cult). Informatica gains a foothold in the fast-growing and important MDM market, so it is a win-win as far as I can see.
Perhaps one day this relationship will lead to something more binding, but for now these two vendors appear to be dating rather than tying the knot.

The Information Difference

May 13, 2008

Today sees the launch of the Information Difference, a boutique market research and analyst firm specialising in the master data management market. This reflects the increasing interest in this fast-growing area. The company has developed detailed profiles of all the vendors in the MDM space, as well as of some of the major and most interesting players in the related data quality space. The company will shortly announce its first piece of primary research (into MDM adoption) and will produce white-papers on key issues in the MDM market.

Its principals are Dave Waddington (ex Chief Architect at Unilever Foods) and myself, with some part-time assistance from a number of other talented individuals. It is nice to see some positive reactions from some serious industry luminaries (see press release).

We hope to bring a more in-depth perspective to this emerging market than is common today, and have some exciting research in preparation.

For more information see the company website.

Tilting at Windmills

April 22, 2008

I wrote recently about likely further consolidation in the MDM market. A further example, albeit on a small scale, happened today as FullTilt, a PIM provider, was bought by QAD. QAD is a public company selling ERP software, with around 1,500 employees; it has been listed since 1997, though its history goes back to 1987. FullTilt was known by industry insiders to be “in play” for many months, and had been openly for sale for some time. It is a relatively small company that has struggled to get scale, and so a deeper-pocketed parent makes some sense for it.

This is another example of companies in more mature markets seeking to get exposure to the fast-growing MDM market. It will not be the last such move.

Clearing a migration path

March 20, 2008

One of the issues often underestimated by new vendors attacking an entrenched competitor is the sheer cost of platform migration. For example, in the database world, if someone comes out with a new, shiny DBMS that is faster and cheaper than the incumbents, why would customers not just switch? After all the new database is ANSI compliant and so is the old one, right? Of course this view may look good in a glossy article in a magazine or the fevered fantasies of a software sales person, but in reality enterprises have considerable switching costs for installed technology. In the case of databases, SQL is just a small part of the story. There are all the proprietary database extensions (stored procedures, triggers etc), all the data definition language scripts, systems tables with assorted business rules implicitly encoded, and the invested time and experience of the database administrators, a naturally conservative bunch. I know, as I was a DBA a long time ago; there is nothing like the prospect of being phoned up in the middle of the night to be told the payroll database is down and asked how many minutes it will take you to bring it back up, to give you a sceptical perspective on life. New and exciting technology is all well and good, but if that involves rewriting a large suite of production batch jobs that you have just spent months getting settled, you tend to just push that brochure back across the table of the software sales person. Installed enterprise software is notoriously “sticky”.
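
To put a number on that stickiness, a first step is simply to inventory how much vendor-specific code is actually installed. The sketch below is purely my own illustration (the script name and the list of constructs are hypothetical, not from any real migration tool), but it shows the kind of crude audit that quickly punctures the “it’s all ANSI SQL anyway” argument:

import re

# Hypothetical sketch: count vendor-specific constructs in an installed SQL
# code base before believing that "switching databases" is a simple matter.
PROPRIETARY_CONSTRUCTS = {
    "PL/SQL block":       r"\bDECLARE\b|\bBEGIN\b",
    "Trigger definition": r"\bCREATE\s+(OR\s+REPLACE\s+)?TRIGGER\b",
    "CONNECT BY query":   r"\bCONNECT\s+BY\b",
    "Stored procedure":   r"\bCREATE\s+(OR\s+REPLACE\s+)?PROCEDURE\b",
}

def migration_inventory(sql_text):
    """Return how often each proprietary construct appears in the scripts."""
    return {name: len(re.findall(pattern, sql_text, flags=re.IGNORECASE))
            for name, pattern in PROPRIETARY_CONSTRUCTS.items()}

with open("nightly_batch.sql") as f:      # hypothetical script name
    print(migration_inventory(f.read()))  # counts per construct; rarely zero in real life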

Hence when attacking something that is already in production, you have to go further than just saying “ours is x times cheaper and faster”. An example of this is with DATAllegro and their assault on the mountainous summit that is the Teradata installed base. They have just, very sensibly, brought out a suite of tools that will actually help convert an existing Teradata account, rather than just hoping someone is going to buy into the speed and cost story. This new suite of utilities will:

– convert BTEQ production jobs with the DATAllegro DASQL batch client
– convert DDL from Teradata to DATAllegro DDL
– connect to the Teradata environment and extract table structures (schema) and data and import them into DATAllegro.
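
To give a flavour of what the DDL conversion involves, here is a deliberately crude sketch of my own (not DATAllegro’s actual utility): Teradata table definitions carry vendor-specific clauses such as MULTISET, NO FALLBACK and PRIMARY INDEX, which have to be stripped or remapped before the DDL will run anywhere else. A real converter also has to handle data type mappings, partitioning, compression clauses and much more.

import re

# Illustrative only: a toy Teradata-DDL cleaner, nothing like a full converter.
def strip_teradata_clauses(ddl: str) -> str:
    ddl = re.sub(r"\bMULTISET\s+TABLE\b", "TABLE", ddl, flags=re.IGNORECASE)
    ddl = re.sub(r"\s*,?\s*NO\s+FALLBACK\b", "", ddl, flags=re.IGNORECASE)
    # Drop the PRIMARY INDEX clause; the target database chooses its own
    # distribution scheme instead.
    ddl = re.sub(r"\s*PRIMARY\s+INDEX\s*\([^)]*\)", "", ddl, flags=re.IGNORECASE)
    return ddl.strip()

teradata_ddl = """CREATE MULTISET TABLE sales.daily_txn ,NO FALLBACK
(
  txn_id   INTEGER,
  store_id INTEGER,
  amount   DECIMAL(12,2)
)
PRIMARY INDEX (txn_id);"""

print(strip_teradata_clauses(teradata_ddl))
# prints a plain CREATE TABLE statement with the Teradata-specific clauses removed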

This is the right approach to take. What they need to do next is get some public customer references from companies that have actually been through this conversion, and get them to talk about the benefits, and also realistically about the effort involved. If they can do that then they will be in a credible position to start eating away at the Teradata crown jewels, the seriously high-end databases of 100 TB or more.

Psst, want a free business modelling tool?

February 20, 2008

Regular readers of this blog may recall that I mentioned the business modelling tool that came out with Kalido’s new software release. At TDWI Las Vegas yesterday Kalido launched this formally, and made it available for free download. There is also an on-line community set up to support it, in which, as well as discussing the tool, participants can share and collaborate on business models.

This seems a smart move to me, as by making the tool available for free Kalido will get some publicity for the tool that it would otherwise not get, and of course if people get hooked on the tool then they might wonder: “hey, maybe I could try connecting it up and building a warehouse” at which point, as the saying goes, a sales person will call. This follows the well-proven drug-dealer technique of giving away a free hit of something in order to lure you on to something more powerful and even more addictive in due course.

Business modelling does not get the attention it deserves, so the on-line forum could prove very interesting. The ability to share and improve models with others could turn out to be very appealing to those involved with projects of this nature; after all, essentially it is a source of free consultancy if the forum develops.

Visit http://www.kalido.com/bmcf to download a copy of the tool.
To join the community visit http://groups.google.com/group/bmcf

A Sideways Glance

February 19, 2008

Vertica is one of a plethora of vendors which have emerged in the analytics “fast database” space pioneered by Teradata and more recently opened up by Netezza. The various vendors take different approaches. Some (e.g. Netezza) have proprietary hardware, some (e.g. Kognitio, Dataupia) are software only, some (e.g. ParAccel) rely mainly on in-memory techniques, while others simply use designs that differ from those of the traditional mainstream DBMS vendors (Oracle, DB2).

Vertica (whose CTO is Mike Stonebraker of Ingres and Postgres fame) is in the latter camp. Like Sybase IQ (and Sand) it uses a column-oriented design (i.e., it groups data together by column on disk) rather than the usual row-oriented storage used by Oracle and the like. This approach has a number of advantages for query performance. It reduces disk I/O by only having to read the columns referenced by the query and also by aggressively compressing data within columns. Through the use of parallelism across clusters of shared-nothing computers, Vertica databases can scale easily and affordably by adding additional servers to the cluster. Normally the drawback to column-oriented approaches is their relatively slow data load times, but Vertica has some tricks up its sleeve (a mix of in-memory processing which trickle-feeds disk updating) which it claims allow load times comparable to, and sometimes better than, row-oriented databases. Vertica comes with an automated design feature that allows DBAs to provide it with the logical schema, plus training data and queries, which it then uses to come up with a physical structure that organizes, compresses and partitions data across the cluster to best match the workload (though ever-wary DBAs can always override this if they think they are smarter). With a standard SQL interface Vertica can work with existing ETL and business intelligence tools such as Business Objects, and has significantly expanded the list of supported vendors in its upcoming 2.0 release.
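
A toy sketch of my own (nothing to do with Vertica’s actual internals) may help show the two ideas at work: a query that references one column need not read the others, and a repetitive or sorted column run-length-encodes down to almost nothing.

import itertools

# Toy illustration of column-oriented storage, not Vertica code.
rows = [("2008-02-19", "US", 100.0 + i % 7) for i in range(100_000)]

# Row store: a sum over one column still drags every whole row off disk.
row_total = sum(r[2] for r in rows)

# Column store: the same data held column by column; the query touches
# only the "amount" column.
dates, regions, amounts = (list(col) for col in zip(*rows))
col_total = sum(amounts)

# Aggressive compression: run-length encode the repetitive "region" column.
rle_regions = [(value, sum(1 for _ in group))
               for value, group in itertools.groupby(regions)]

print(row_total == col_total)                               # True: same answer, far less I/O
print(len(regions), "values ->", len(rle_regions), "run")   # 100000 values -> 1 run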

With so many competing vendors all claiming tens of times better performance than others, the measure that perhaps matters most is not a lab benchmark but customer take-up. Vertica now has 30 customers such as Comcast, BlueCrest Capital Management, NetworkIP, Sonian Networks and LogiXML, and with its upcoming 2.0 release out on 19/2/2008 is doing joint roadshows with some of these. It has done well in telcos, which have huge data volumes in their call detail record databases. Two deployed Vertica customers have databases approaching 40 TB in size. Another area is financial services, where hedge funds want to backtest their trading algorithms against historical market data. With one year’s worth of US financial markets data taking up over 2 TB, this can quickly add up, and so Vertica has proved popular amongst this community, as well as with marketing companies with large volumes of consumer data to trawl through. Vertica runs on standard Linux servers, and it has a partnership with HP and Red Hat to provide a pre-bundled appliance, which is available from select HP resellers.

With solid VC backing, a glittering advisory board (Jerry Held, Ray Lane, Don Haderle, …) and genuine customer traction in an industry long on technology but short on deployed customers, Vertica should be on the vendor short-list of any company with heavy-duty analytical requirements that currently stretch performance limits and budgets.

A Lively Data Warehouse Appliance

February 15, 2008

DATAllegro was one of the earlier companies to market (2003) in the recent stampede of what I call “fast databases”, which covers appliances and other approaches to speedy analytics (such as in-memory databases or column-oriented databases). Initially DATAllegro had its own hardware stack (like Netezza) but now uses a more open combination of storage from EMC and Dell servers (with Cisco InfiniBand interconnect). It runs on the well-proven Ingres database, which has the advantage of being more “tuneable” than some other open databases like MySQL.

The database technology used means that plugging in business intelligence tools is easy, and the product is certified for the major BI tools such as Cognos and Business Objects, and recently Microstrategy. It can also work with Informatica and Ascential DataStage (now IBM) for ETL. Each fast database vendor has its own angle on why its technology is the best, but there are a couple of differentiators that DATAllegro has. One is that it does well with mixed workloads, where as well as queries there are concurrent loads and even updates happening to the database. Another is its new “grid” technology, which allows customers to deal with the age-old compromise of centralised warehouse vs decentralised data marts. Centralised is simplest to maintain but creates a bottleneck and presents scale challenges, while decentralised marts quickly become unco-ordinated and can lead to a lack of business confidence in the data. The DATAllegro grid utilises node-to-node hardware transfer to allow dependent copies of data marts to be maintained from a central data warehouse. With transfer speeds of up to 1 TB a minute (!) claimed, such a deployment allows companies to have their cake and eat it. This technology is in use at one early customer site, and is just being released.

DATAllegro has set its sights firmly on the very high end of data volumes, those encountered by retailers and telcos. One large customer apparently has a live 470 TB database implementation, though since the company is very coy about naming its customers I cannot validate this. Still, this is enough data to give most DBAs sleepless nights, so it is fair to say that this is at the rarefied end of the data volume spectrum. This is territory firmly occupied by Teradata and Netezza (and to a lesser extent Greenplum). The company is tight-lipped about numbers of customers (and I can find only one named customer on its website), revenues and profitability, making it hard to know what market momentum is being achieved. However its technology seems to me to be based on solid foundations, and it has a large installed base of Teradata customers to attack. Interestingly, Oracle customers can be a harder sell, not because of the technology but because of the weight of stored procedures and triggers that customers have written in Oracle’s proprietary extensions to the SQL standard, making porting a major issue.

If DATAllegro can encourage more customers to go public then it will be able to raise its profile further and avoid being painted as a niche vendor. Being secretive about customer and revenue numbers seems to me self-defeating, as it allows competitors to spread fear, uncertainty and doubt: sunlight is the best disinfectant, as Louis Brandeis so wisely said.

Peeking at Models

February 7, 2008

With the latest release of its data warehouse technology, Kalido has introduced an interesting new twist on business modelling. Previously in a Kalido implementation, as with a custom-built warehouse, the design of the warehouse (the hierarchies, fact tables, relationships etc) was done with business users in a whiteboard-style setting. Usually the business model was captured in Visio diagrams (or perhaps Powerpoint) and then the implementation consultant would take the model and implement it in Kalido using the Kalido GUI configuration environment. There is now a new product, a visual modelling tool that is much more than a drawing tool. The new business modeller allows you to draw out relationships, but like a CASE tool (remember those?) it has rules and intelligence built into the diagrams, validating as rules are added to the model whether the relationships defined in the drawing make sense.
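
To illustrate what “rules built into the diagram” means in practice, here is a generic sketch of my own (the rules and entity names are invented, and this is not Kalido’s implementation): the modeller can reject a drawing the moment a relationship points at an entity that has not been defined, or a level rolls up to more than one parent within the same hierarchy.

# Generic CASE-style validation sketch; rules and names are purely illustrative.
entities = {"Product SKU", "Brand", "Category"}
rollups = [
    ("Product SKU", "Brand"),
    ("Brand", "Category"),
    ("Product SKU", "Colour"),   # references an entity not in the model
]

def validate(entities, rollups):
    errors = []
    parents = {}
    for child, parent in rollups:
        if child not in entities or parent not in entities:
            errors.append(f"{child} -> {parent}: references an undefined entity")
        else:
            parents.setdefault(child, set()).add(parent)
    for child, ps in parents.items():
        if len(ps) > 1:
            errors.append(f"{child} rolls up to more than one parent: {sorted(ps)}")
    return errors

print(validate(entities, rollups))
# ['Product SKU -> Colour: references an undefined entity']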

Once the model is developed and validated, it can be applied directly to a Kalido warehouse, and the necessary physical schemas are built (for example a single entity “Product SKU” will be implemented in staging tables, conformed dimensions and in one or many data marts). There is no intermediate stage of definition required any more. Crucially, this means that there is no longer any need to keep design diagrams in sync with the deployed warehouse; the model is the warehouse, essentially. For existing Kalido customers (at least those on the latest release), the business modeller works in reverse as well: it can read an existing Kalido warehouse and generate a visual model from it. This has been tested on nine of the scariest, most complex use cases deployed at Kalido customers (in some cases involving hundreds of business entities and extremely complex hierarchical structures), and seems to work according to early customers of the tool. Some screenshots can be seen here: http://www.kalido.com/resources-multimedia-center.htm

In addition to the business modeller Kalido has a tool that better automates its linkage to Business Objects and other BI tools. Kalido has for a long time had the ability to generate a Business Objects universe, a useful feature for those who deploy this BI tool, and more recently extended this to Cognos. In the new release it revamps these bridges using technology from Meta Integration. Given the underlying technology, it will now be a simple matter to extend the generation of BI metadata beyond Business Objects and Cognos to other BI tools as needed, and in principle backwards also into the ETL and data modelling world.

The 8.4 release has a lot of core data warehouse enhancements; indeed this is the largest functional release of the core technology for years. There is now automatic staging area management. This simplifies the process of source extract set-up and further minimises the need for ETL technology in Kalido deployments (Kalido always had an ELT, rather than an ETL, philosophy). One neat new feature is the ability to do a “rewind” on a deployed warehouse. As a warehouse is deployed, new data is added and changes may occur to its structure (perhaps new hierarchies). Kalido’s great strength was always its memory of these events, allowing “as is” and “as was” reporting. Version 8.4 goes one step further and allows an administrator to simply roll the warehouse back to a prior date, rather as you would rewind a recording of a movie using your personal video recorder. This includes fully automated rollback of loaded data, structural changes and BI model generation. Don’t try this at home with your custom-built warehouse or SAP BW.
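
For readers who have not come across “as is” versus “as was” reporting, a minimal sketch of the idea (my own illustration, not Kalido’s actual design) is an effective-dated record: every version carries the period for which it was true, so queries can be answered against the hierarchy as it stands today or as it stood on any earlier date, and a rewind amounts to discarding whatever was recorded after the chosen date.

from datetime import date

# Minimal illustration of effective-dated history (not Kalido's actual schema):
# each version of a record keeps the date range for which it was true.
product_versions = [
    # (sku, brand, valid_from, valid_to)
    ("SKU-1", "Old Brand", date(2007, 1, 1), date(2007, 9, 30)),
    ("SKU-1", "New Brand", date(2007, 10, 1), date(9999, 12, 31)),
]

def as_of(versions, on):
    """Return the version of each record that was true on the given date."""
    return [v for v in versions if v[2] <= on <= v[3]]

print(as_of(product_versions, date(2008, 2, 1)))   # "as is": the New Brand version
print(as_of(product_versions, date(2007, 6, 1)))   # "as was": the Old Brand version

# A crude "rewind" to a prior date: drop anything recorded after that date and
# re-open the version that was current then.
def rewind(versions, to):
    kept = [v for v in versions if v[2] <= to]
    return [(sku, b, f, date(9999, 12, 31)) if t >= to else (sku, b, f, t)
            for sku, b, f, t in kept]

print(rewind(product_versions, date(2007, 6, 1)))  # only the Old Brand version survives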

This is a key technology release for Kalido, a company which has a track record of innovative technology that has in the past pleased its customers (I know; I used to do the customer satisfaction survey personally when I worked there) but which has been let down by shifting marketing messages and patchy sales execution. An expanded US sales team now has a terrific set of technology arrows in its quiver; hopefully it will find the target better in 2008 than it has in the past.