Andy on Enterprise Software

Indian Summer

January 19, 2007

There will be a short intermission in the blog as I am off to India for a couple of weeks. There are many things to love and admire about India, but the quality of their internet connections outside the main urban centres is not one of them, so doing the blog from there will be impractical. I’ll just have to concentrate on the beaches of Goa instead; I expect I’ll cope.

Normal service will be resumed in February.

Impartial Advice?

January 17, 2007

HP continues with its plans for the business intelligence space with an announcement of in-house data warehouse technology:

http://www.computerworld.com:80/action/article.do?command=viewArticleBasic&articleId=9008218&intsrc=news_ts_head

with a new business unit. The offering will be based around HP’s attempt at a “data warehouse appliance”, called Neoview. This is a competitor to Teradata and Netezza, but at this stage it is hard to tell how functional it is, since it is unclear whether there are any deployed customers other than HP itself.

The timing of this announcement is curious given HP’s acquisition of data warehouse consultancy Knightsbridge. Certainly data warehousing is a big market and Teradata is a tempting target – after all, most of the really big data warehouse deployments in retail, telco and retail banking use Teradata. There are lots and lots of juicy services to be provided in implementing an “appliance”, which in fact is no such thing. An appliance implies something that you just plug in, whereas data warehouse appliances are just a fast piece of hardware and a proprietary database, still requiring all the usual integration efforts, but with the added twist of non-standard database technology. Certainly plenty of business for consultants there.

However HP’s home-grown offering will not sit well with its newly acquired Knightsbridge consulting services, which made its reputation through a fiercely vendor-independent culture that always prided itself on choosing the best solution for the customer. People trust independent consultants to give them objective advice, since they are not (or at least they hope they are not) tied to particular vendor offerings. Presumably HP’s consultants will be pushing HP’s data warehouse solution in preference to alternatives, and so can hardly be trusted as impartial observers of the market. An analogy would be with IBM consultants, who, while they may work with non-IBM software, are clearly going to push IBM’s offerings given half a chance.

If you were a truly independent consultant how would you react to a brand new data warehouse appliance with a track record only of one deployment, and that in the vendor itself? Would you immediately be pushing that as your preferred solution, or would you be counseling caution, urging customers to wait and see how the new tool settles down in the market and how early customers get on with it? If you are a Knightsbridge consultant now working for HP, what would your advice be? Would it be any different to the advice you’d have offered in December 2006 before you became part of HP?

This kind of conflict of interest is what makes things difficult for customers when choosing consultants. It is hard to find ones who are truly independent. Of course consultants always have their own agenda, but usually this is about maximising billable hours. If they are tied to a particular solution then that is fine if you are already committed to that solution, but you will need to look elsewhere for objective advice about it.

The mythical software productivity miracle

January 11, 2007

We have got used to Moore’s Law, whereby hardware gets faster at a dizzying rate, though there ought to be a caveat to this that points out that software gets less and less efficient in tandem with this. A neat summary of this situation is “Intel giveth, and Microsoft taketh away”. However when it comes to software development, the situation is very different. Sure, things have become more productive for developers over the years. My first job as a systems programmer involved coding IBM job control language (JCL) decks, which entertainingly behaved pretty much as though they were still on punch cards, with all kinds of quirks (like cunningly ignoring you if you had a continuation of a line too far to the right, beyond a certain column). I just missed Assembler and started with PL/1, but anyone who coded IBM’s ADF will be familiar enough with Assembler. However it is not clear how much things have really moved on since then. In the 1980s “4GLs” were all the rage, but apart from not compiling and hence being slower to run, they were scarcely much advance on Cobol or PL/1. Then there were “CASE” tools like James Martin’s IEF, which promised to do away with coding altogether. Well, we all know what happened to those. Experienced programmers always knew that the key to productivity was to reuse bits of code that actually worked, long before object orientation came along and made this a little easier. Good programmers always had carefully structured code libraries to call on rather than repeating similar code by editing a copy and making minor changes, so I’m not convinced that productivity raced along that much due to OO either.

This is all anecdotal though – what about hard numbers? Software productivity can be measured in lines of code produced in a given time e.g. a day, though this measure has limitations e.g. is more code really better (maybe implying less reuse) and anyway how do we compare different programming languages? A more objective attempt was to measure the number of function points per day or month. This had the big advantage of being independent of programming language, and also works for packages – you could count the number of function points in SAP (if you had the patience). Unfortunately it requires some manual counting, and so has never really caught on widely beyond some diehards who worked in project office roles (like me). Well, we always used to reckon that 15-30 function points per man month was pretty much a good average for commercial programming, and when Shell actually measured such things back in the 1990s this turned out to be pretty true, almost irrespective of whether you were using a 3GL or 4GL, or even a package. Shell Australia measured their SAP implementations carefully and found that the function points per man month delivered was no better (indeed a little worse) than for custom code, which was an unpopular political message at the time but was inconveniently true. Hence, while 3GL productivity definitely was an advance on Assembler, just about every advance since then has had a fairly marginal effect i.e. programmer teams writing code today are only slightly more productive than ones in 1986. By far the most important factor for productivity was size of project: big projects went slowly and small projects went quickly, and that was that.
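To make the arithmetic concrete, here is a minimal sketch (in Python) of the function points per man month calculation; the project figures are invented purely for illustration and are not the Shell measurements referred to above:

def fp_per_man_month(function_points, person_months):
    # Delivered function points divided by total effort in person-months.
    return function_points / person_months

# Hypothetical projects: (name, delivered function points, person-months of effort)
projects = [
    ("small custom 3GL app", 300, 12),
    ("mid-size 4GL system", 1200, 60),
    ("large package rollout", 6000, 400),
]

for name, fp, months in projects:
    print(f"{name}: {fp_per_man_month(fp, months):.1f} FP per man month")

# Output: roughly 25, 20 and 15 FP per man month respectively - all within the
# 15-30 range quoted above, with the largest project at the bottom of the range.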

A new book “Dreaming in Code” by Scott Rosenberg is a timely reminder of why this is. Many of the issues of writing a moderately complex application are not to do with individual programmer productivity and everything to do with human issues like a clear vision, good team communication, teamwork etc. All the faster processors and latest programmer tools in the world can only optimise one part of the software development process. Sadly, the human issues are still there to haunt us, having moved on not one jot. Scott discusses the Chandler open source project and its woes, reminding us that software productivity is only a little bit about technology, and a great deal about human nature.

http://www.amazon.com/Dreaming-Code-Programmers-Transcendent-Software/dp/1400082463/sr=8-1/qid=1168361266/ref=pd_bbs_sr_1/102-6088231-4592927?ie=UTF8&s=books

When I was doing technology planning at Shell I always had a rule: if a problem was constrained by hardware then it would be fixed quicker than you expect, but if the problem was a software issue it would always take longer than you would think. This book tells you why that is not a bad rule.

Teradata steps into the light

January 8, 2007

In a logical move that I would say was overdue, Teradata finally became its own boss. It has long been nestling under the wing of NCR, but there was little obvious synergy between ATMs and data warehouse database software, and so it seems to me eminently sensible for Teradata to stand on its own two feet. Running two quite different businesses within the same company is always a problem, as different business models lead to natural tensions as the company tries to accommodate different needs within the same corporate structure.

Teradata accounts for about USD 1.5 billion of revenue, around one third of NCR. The challenge for Teradata is growth. It has succeeded when others failed in the specialist database market, dominating the high end data warehouse market despite competing with Oracle, IBM and (to a lesser extent) Microsoft. Yet revenues have been pretty flat in the last couple of years, and there is new competition in the form of start-up Netezza, which although tiny compared to Teradata is nonetheless making steady inroads, and causing pricing pressure. Teradata has generally loyal customers and notoriously opaque pricing, which has enabled it to achieve good margins (especially on support), though its finances were never entirely clear as they were wrapped up with NCR’s. Splitting the company out will allow the market to value Teradata on its own merits.

A long journey

January 5, 2007

An Accenture study:

http://www.informationweek.com/research/showArticle.jhtml?articleID=196800921

quantifies how much time middle managers in enterprises waste seeking out information, and comes out at two hours a day i.e. a quarter of an average working day. When they find it, half of the information turns out to be of no use. This sounds about right to me, and illustrates just how far BI really has yet to go in being genuinely useful, and also shows just how bad the true state of information is in large companies.

The issue is not only that technologies are insufficiently intuitive. In my experience there are a number of factors that come into play:

- no culture of sharing information
- inconsistent data definitions
- poor data quality
- inability to locate appropriate data sources
- insufficient understanding of how to use BI tools effectively.

If you set out to produce a useful new report in some area and succeed in doing so, what incentive is there for you to make this easily shared around the company, and to help others find it? In most companies this would be pure altruism, and so people just keep the information on their hard disk, and indeed may gain kudos from the “information is power” syndrome. Overcoming such cultural barriers is hard, and few companies succeed. I should say that Accenture themselves do as good a job as anyone I have seen, where their consultants are actively tasked with documenting project lessons and storing these, with appropriate keywords, in an internal knowledge management system. However I have not seen this in other consultancies to anything like the same extent.

The other problems are all too familiar to people working in BI. Inconsistent data definitions and poor data quality are the heart of what MDM is all about, and we know how immature that is. Yet without fixing this, accurate and easy to obtain information is still elusive. A further problem which some technologies are starting to address is the sheer job of finding an existing report. Ironically there is an excellent chance that if you want some particular report, then someone else did too and has already built it. The trouble is that it may be in an Excel spreadsheet on a hard drive, or sitting on a shared server, but you simply have no easy way of finding that it is there. It is ironic that Google allows us to search the whole internet in moments, yet finding a report within our own company is a much tougher proposition. Enterprise search vendors like Fast and Apptus, as well as Google itself, are beginning to apply smart technology to the problem, but it is still early days.

Finally, most end users either don’t have access to create a new report easily, or are not trained in making best use of BI tools, or simply don’t have time to learn. This is why Excel is so popular; it is familiar and ubiquitous, and so people would rather get data into Excel and play with it there than learn a new BI tool.

I believe that these are mostly quite intractable problems, only some of which lend themselves to new and better technology. So anyone with a magic bullet e.g. “the answer is SOA” is talking nonsense. It is only by addressing the organisational, cultural and data ownership issues in combination with enterprise search and better tool training that a company can improve on those two hours a day per person. It will be a long, hard slog, and buying the latest trendy tool is not enough, whatever the salesman tells you.

New Year

January 2, 2007

Happy New Year to everyone. This period is one which journalists and bloggers dread since there is virtually no news of note (unless someone did something really naughty at the work Christmas party). Fortunately Gartner’s Andreas Bitterer has come to the rescue with the idea of a virtual cocktail party for a few bloggers, who are invited to reveal five things about themselves:

http://bitblueblog.com/2006/12/you-got-tagged.html

This is certainly a fine distraction from coming up with original observations about the software industry, so in the spirit of the New Year here are five things about me:

1. In 2004 I completed visiting every 3 star Michelin restaurant in the world at that time (see www.andyhayler.com for more on this).

2. I have cuddled a full-grown cheetah (in South Africa at a sanctuary where they bring up baby cheetahs and then release them back into the wild; they purr just like a regular cat).

3. I have eaten fugu (the poisonous puffer fish) when in Japan and lived to tell the tale.

4. I was a junior county chess champion (Somerset is a small county).

5. I once sat next to Madonna and Guy Ritchie at a restaurant (Zafferano in London) and didn’t recognise Madonna as she was wearing glasses, but I did recognise Harry Shearer (who does most of the voices on the Simpsons) a few weeks later. This may reveal something about my grasp of popular culture.

Well done Andreas – an inspired rescue idea for bloggers everywhere in search of material this week. Someone go and take someone else over soon please so we can all get back to worthy commentary.