Andy on Enterprise Software

Technology Planning

February 13, 2006

I spent much of my career in technology planning, both at Exxon and Shell. The two organizations have quite contrasting cultures despite being in the same industry: Exxon is the uber-centralist, with everything driven from head office and very little room for variation allowed in its subsidiaries. Shell has changed over the years but has traditionally been extremely de-centralized, allowing its subsidiary operating companies a great deal of autonomy; in the mid to late 1990s Shell tried to become more centralized in its approach, though still nothing like as much as Exxon.

What worked best in technology planning in these different companies? Exxon’s strength was its standardization at the infrastructure level. There were rigid standards for operating systems, databases, email, telecoms protocols and even desktop applications, long before most companies even dreamed up their “standard desktop” initiatives. This avoided a lot of the problems that many companies had, e.g. with varying email systems or differing database applications, and critically meant that skills were very transferable between jobs and between Exxon subsidiaries (important in a company where most people move jobs every two to three years). By contrast Shell was fairly anarchic when I joined: it had different email systems, desktops ranging from Windows to UNIX, and every database, TP monitor, 4GL etc that was on the market, though often just one of each. In 1992 I did a survey and recorded 27 different BI/reporting tools, and those are just the ones I could track down. It was perhaps not surprising that Shell spent twice as much on IT as Exxon in the mid 1990s, despite the companies being about the same size (Exxon is now a lot bigger due to the Mobil acquisition). On the other hand Shell had excellent technology research groups, and many collaborative projects between subsidiaries, which helped spread best practice. Also, since operating companies had a lot of autonomy, central decisions that proved unsuitable in certain markets simply never got implemented rather than being rammed down subsidiaries’ throats.

It was certainly a lot more fun being a technology planner in Shell, as there were so many more technologies to tinker with, but it was also like herding cats in terms of trying to get a decision on a new technology recommendation, let alone getting it implemented. In Exxon it was extremely hard, and probably career-limiting, to be in a subsidiary and go against a central product recommendation; in Shell it was almost a badge of honor to do so. Shell’s central technology planners did, though, make some excellent decisions, e.g. they were very early into Windows when the rest of the world was plugging OS/2, and they avoided any significant forays into the object database world at a time when analyst firms were all writing the obituaries of relational databases.

Having worked in both cultures, I believe that the optimum approach for technology planners is to standardize as rigidly as your company will let you on things that have essentially become commoditized. For example, who can really tell the difference between Oracle, DB2 and SQL Server any more? For the vast majority of situations it doesn’t matter, and it is more important to have one common standard than it is to pick the “right” one. On the other hand, in an emerging area it is just self-defeating to try to pick a winner at too early a stage. You do not want to stifle innovation, and the farther up the technology stack you go, the less harm a few bad decisions are likely to do. For example, get the wrong operating system (like OS/2) and it is a major job to rip it out; get the wrong web page design tool and there is less damage done.
Moreover, at the application level there is likely to be a clearly quantifiable cost-benefit case for a new technology, since applications touch the business directly, e.g. a new procurement system or whatever. At the infrastructure level it is much harder to nail down the benefits case, as the benefits are shared and long-term. If your new application has a 9 month payback period, then it matters less if one day it turns out to be “wrong”, but you don’t want to find this out with your middleware message bus product. There are lots of hobbyists in technology, and few products at the infrastructure level are truly innovative, so getting the lower levels of the stack standardized is well worth doing.

Overall, while both companies are clearly highly successful, I think on balance that Exxon’s strong centralization at the infrastructure level is more beneficial from a technology planning viewpoint. Quite apart from procurement costs, the skills transfer advantages are huge: if your DBA or programmer moves from the UK to Thailand, he or she can still use their skills rather than starting from scratch. However, Shell’s greater willingness to try out new technology, especially at the application level, often gave it a very real advantage and rapid payback, even if its overall IT costs were higher.

What is perhaps interesting is how technology planning needs to reflect the culture of the company: a company whose decision making is highly decentralized will struggle greatly to impose a top-down technology planning approach, whatever its merits.

One more time

February 10, 2006

An article from Wayne Eckerson, research director at The Data Warehousing Institute (TDWI), has some sound advice about how to revitalize a data warehouse, based on a case study by Greg Jones of Sprint Nextel Corp. As the article says:

“Many data warehouses are launched with much fanfare and promise but quickly fail to live up to expectations”

and indeed multiple studies have shown high failure rates for data warehouse projects. I was at a Gartner conference earlier this week when an analyst stated that “the vast majority” of business intelligence initiatives fail to deliver tangible value. Yet, as a wise colleague of mine often says:

“There is never time to do it right, but always time to do it again”

By this he means that data warehouse projects cut corners and make simplifying assumptions in their design about how the business works. It is much harder to make the design truly robust to business change, and yet this inability to deal properly with major business change is what eventually leads to problems for most data warehouses. A reorganization occurs, and it takes three months to redesign the star schema, fix up the load routines, modify the data mart production process, test all this, etc. In the meantime the business is getting no up-to-date information. What do they do? They knock up a few spreadsheets or perhaps something quick in MS Access, “just for now”. Then another change happens two months later: the company buys another company, which of course has different product codes, customer segmentation, cost allocation rules etc to the parent. Putting this new data into the warehouse is added to the task list of the data warehouse team, who have yet to finish adapting to the earlier reorganization. The business users need to see the whole business picture right now, so they extend their “temporary” spreadsheet or MS Access systems a little bit more. Since they have control of these, they start to do more with them, and after a time it hardly seems as if the data warehouse is really that necessary any more. Of course they let the IT people get on with it (it is not their budget, after all) but usage declines, and they give up telling the data warehouse team about the next major new requirement, as they never seem to see results in time anyway. Eventually the data warehouse falls into disuse. Then a new manager comes in, finds the spreadsheet and MS Access mess unmanageable, and a new budget is found to have another go, either from scratch or by rewriting the old warehouse. And so the cycle begins again.

Sound familiar? The overriding issue is the need to reflect business change in the warehouse quickly, in time for the business customers to make use of it, and before they start reverting to skunkworks spreadsheets and side solutions that they can get a contractor to knock up quickly.
Until the industry starts adopting more robust, high quality modeling and design approaches, such as those based on generic modeling, this tale will repeat itself time and time again. The average data warehouse has annual maintenance costs of around 72% of its build cost, i.e. if it costs USD 3M to build, it will cost over USD 2M to maintain, every year. This is an unsustainable figure. Still, there will always be a new financial year, and new project budgets to start again from scratch…
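To put that maintenance ratio in concrete terms, here is a tiny back-of-the-envelope sketch in Python using the figures quoted above; the five-year horizon is an assumption of mine, purely for illustration.

```python
# Back-of-the-envelope illustration of the maintenance figures quoted above.
# From the post: USD 3M build cost, maintenance of roughly 72% of that per year.
# The five-year horizon is an assumption, purely for illustration.

build_cost_usd = 3_000_000
annual_maintenance_ratio = 0.72
years = 5

annual_maintenance = build_cost_usd * annual_maintenance_ratio
total_cost = build_cost_usd + annual_maintenance * years

print(f"Annual maintenance: USD {annual_maintenance:,.0f}")  # USD 2,160,000
print(f"Total over {years} years: USD {total_cost:,.0f}")    # USD 13,800,000
```

On those assumptions, maintenance alone exceeds the original build cost within two years, which is why the figure is so hard to sustain.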

Another one bites the dust

February 9, 2006

I wrote quite recently about the consolidation occurring in the data quality industry. The pace picked up today: First Logic, having escaped the clutches of Pitney Bowes, now looks as if it will be acquired by Business Objects.

This acquirer is a lot more logical for First Logic than Pitney Bowes. Data quality is a major component of any BI/data warehouse implementation, and indeed Business Objects has already been reselling First Logic for over a year, so the two companies already know each other. The bargain basement price (USD 69M for a company with revenues of over USD 50M) tells you all you need to know about the health of the data quality market.

This move supports my thesis that data quality as an independent market is essentially disappearing, with the capabilities being baked into broader offerings. I believe the same fate awaits the ETL market; more on this later.

The India syndrome

February 3, 2006

One of the interesting effects of the rise and rise of India as an offshore location for technology staff is the impact it is having on the prices that IT consulting firms can charge. We had a situation at Kalido where four well-known systems integration firms bid on a project, and in the end the customer chose none of them, going in-house instead with significant input from staff in India. I also encountered a situation last year where a very well known firm was charging just USD 650 a day on a large project for its junior staff, a rate that would have been unthinkably low in 2001, when nearly double that would have been the going rate even for IROCs (idiots right out of college).

If you look at Accenture, perhaps the leading IT consulting firm other than IBM, you will see that its overall business is still growing, but there are in fact two trends: consulting has declined a lot, while outsourcing has risen to take its place. Even Accenture has been unable to protect its premium consulting pricing across the board under the onslaught of lower prices from Indian firms like TCS, Wipro and Infosys. The large consulting firms have responded by setting up large operations in India themselves (“if you can’t beat ’em, join ’em”), but while this has no doubt helped, the daily rates that consultants can charge in the US and Europe are still affected by this downward pricing pressure. It’s not that the Indian companies are giving it away: Wipro’s profit margins are twice those of IBM.

Of course not every job can be done remotely. Support call centers and programming and testing to specifications are clearly the easiest to do, while projects that require a high degree of iteration e.g. web sites, user interface design, or reporting systems are much less suitable. Still, companies are now moving more complex work overseas e.g. accounting and financial research, so the list of “safe” jobs in IT in the developed world is gradually being eroded away.

Of course this movement has caused wage inflation in India, with 20% pay increases common for hot skills, and turnover rates of over 20% in Bangalore being normal (these can hit 50% for call center jobs). Nonetheless, there is a long way to go. A top programmer with five to ten years of C++ experience in the UK or the US might earn well over USD 100k (more in Silicon Valley), but the equivalent in Bangalore is still around USD 15k, and less again in Chennai or Pune. It is going to be a long time before inflation brings Indian wages up to anything like US levels.
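As a rough sketch of just how long that gap could take to close, the snippet below compounds the 20% annual raises mentioned above against the USD 15k and USD 100k figures; the assumption that the US salary stays flat is mine, purely for illustration.

```python
# Rough sketch: how many years of 20% annual raises would it take for a
# USD 15k Bangalore salary to catch a USD 100k US salary?
# Assumes (purely for illustration) that the US salary stays flat.

indian_salary = 15_000.0
us_salary = 100_000.0
annual_raise = 0.20

years = 0
while indian_salary < us_salary:
    indian_salary *= 1 + annual_raise
    years += 1

print(f"About {years} years of 20% raises to close the gap")  # about 11 years
```

Even under those generous assumptions it takes roughly a decade of 20% raises to close the gap, which is the point: wage convergence is a slow process.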

This structural price deflation effect still has a long way to play out, with off-shoring growing steadily but still by no means universal, and large companies exploiting the lower prices to push down consulting rates in the US and Europe. For the Western consulting firms, it isn’t going to get prettier any time soon. I was in India last week, and the sense of momentum and progress is tangible.

Missing the Boat

February 2, 2006

One of the things that has bewildered me over the last year or two (and there are plenty of things that bewilder me) is how the data quality vendors have seemed oblivious to the emerging trend of master data management (MDM). On the face of it, there are few sectors more in need of a fillip. Data quality, which involves a lot of human issues such as data governance and getting business people involved, is a hard sell. Rooting out errors in data is hardly the sexiest area to be working in, and as the solution is only partially provided by technology, projects and initiatives here are prone to failure (human nature being what it is). The space has seen significant consolidation in recent years: Avellino was bought by Trillium, Evoke by Similarity Systems, Vality by Ascential (now IBM), and Group 1 by Pitney Bowes, which also made an abortive attempt to buy First Logic (if you can figure that strategy out, answers would be gratefully received), while Trillium is owned by Harte-Hanks. Now Similarity Systems has in turn been acquired by Informatica. Not the sign of a flourishing sector.

Surely, then, data quality vendors should have seized on MDM as a drowning man would a life-raft? Data quality issues are a significant element of master data management, and while having software that can match up disparate name and address files is a long way from having a true MDM offering, remember that this is the tinseltown world of high-tech marketing, where a product can morph into another field with just a wave of a PowerPoint wand. Data quality vendors certainly ought to have grasped that matching up disparate definitions of things like “product” and “customer” was at least related to what their existing offerings did, and could have launched new MDM-flavored offerings to ride the coat-tails of the nascent but burgeoning MDM bandwagon. Instead there hasn’t been a peep, and vendors have resigned themselves to being picked off by, in some cases, somewhat odd acquirers (Pitney Bowes, for example, is a direct mail firm; does it really grasp what it takes to be an enterprise software vendor?). Having avoided the clutches of Pitney Bowes, First Logic is now making progress in talking about MDM, but it is not perceived by the market as an MDM vendor. Elsewhere in the data quality industry, the silence around MDM is deafening.

As the data quality market essentially disappears into the portfolios of integration companies like Ascential (now IBM) and Informatica (which at least make logical sense as buyers), and assorted others, the executives of some of these companies surely must be wondering whether they missed a trick.