SOA – Sounds-like Objects Again

Judith Hurwitz’s article today on SOA reminded me of at least two previous industry trends. I recall that analysts over a decade ago were predicting that “applets” were the way of the future. These mini-applications would allow customers to pick and choose a pricing routine from one vendor and a cost allocation routine from another, mixing and matching with impunity. This was supposed to let a new range of innovative application vendors bring solutions to market and let a thousand start-ups bloom. What did we get? SAP. For those who think it is different this time, let us all try to remember CORBA, which was another attempt to provide services that were application-neutral and would lead to a new set of standards-based applications. Seen too many of these recently?

The difficulty with such things is that the vision is always dragged down by the detail, and by gaps in the offerings. In SOA, everything sounds good until you realise that there are no services for semantic reconciliation of data from these multiple sources, nor seemingly much in the way of a business intelligence layer. Worse, for people to actually build composite applications based on SOA services, all the current application vendors would have to meekly open up the guts of their products to allow composite apps to call them. Why exactly would they do this? Of course they will make defensive PR noises about being open, but the goal of application vendors is to sell as much of their own software as possible, not to help someone else. Those with the largest entrenched installed base have the most to lose, so expect these vendors to start to offer their own “superior” form of web services, which will allow calls out from their own applications to “legacy” (i.e. anyone but them) applications, but don’t hold your breath for services going the other way. After all, “legacy” is partly a matter of perspective, rather like freedom fighters and terrorists. If you are SAP then Peoplesoft is legacy, but if you are Oracle then it doesn’t look that way.

When looking at industry trends, always ask yourself “who is going to make money here?”. Well, middleware vendors might, which is why IBM is so keen on the idea. As always, hardware vendors win from anything new and complex, and all that extra network traffic will benefit a further set of vendors. Of course the systems integrators will have a field day actually building those composite apps. In summary, lots of camps will make money, so expect the hype to continue apace. Whether customers will see much real benefit is another matter.

Siebel acquisition may give Oracle indigestion

The applications industry saw further consolidation this week with Oracle’s purchase of Siebel. This is a logical step for Oracle, who need to bulk up in order to fuel their struggle with SAP in the applications war, though after Peoplesoft (and hence JD Edwards), Siebel may yet cause some indigestion. It was well known in the industry that Siebel has struggled of late after its meteoric growth in the boom years. Claudia Imhoff amusingly refers to this acquisition as “donkeys can fly” on her blog. I don’t think that she intended that as praise. Siebel’s revenues shrank last year and there has been an exodus of management.

While the idea of customer relationship management is a noble one, Siebel was partly a victim of its own aggressive marketing hype, which promised much more than it delivered. Firstly, given the broad landscape of applications deployed in large companies, it was unrealistic to expect one application to “own” the customer. Worse, Siebel was long on marketing slides and short on well-engineered code. A friend working at a bank who spent two years implementing Siebel described it as a “multi-million-dollar compiler”, since almost every function that they required was missing from the core product and needed lengthy and expensive coding from Siebel consultants.

Another friend working at a giant corporation reckoned that their Siebel rollout cost more than their SAP rollout, and that was not meant as a compliment to SAP. Large companies (though not, of course, systems integrators) had become disillusioned with these massive systems integration projects, while Salesforce.com showed just how much was possible in a relatively simple, hosted environment. When the giant deals dried up after the internet bubble collapsed, Siebel was ill suited to adapt to the more difficult sales climate, and a host of executive changes at the top were just the tip of the iceberg, with an exodus in the last year or so of mid-level staff.

However, just as breeding two elephants rarely produces a gazelle, the disappearance of Siebel into Oracle’s maw does not solve the issue of customer integration in large companies. The definition of “customer” is still spread amongst every application that needs to reference it, which includes sales force systems but also billing systems, marketing systems and support systems. In time there will be one less definition around as Siebel is absorbed, but this barely scratches the surface of the problem of reducing the complexity of dealing with multiple definitions of master data such as “customer”, as there will still be dozens of sources of this data around. As discussed elsewhere, this requires tools that are not built on the assumption that they are the one and only source of the truth.

Oracle – Siebel takeover won’t solve master data mess

Bloor analyst Harriet Fryman’s article “mastering master data” raises some excellent points. She correctly points out that master data, i.e. things other than business transactions (such as “price”, “customer”, “brand” and “product”), is hopelessly fragmented in every organization of any size. Research from Tower Group indicates that a large company has 11 separate systems that think they own the master definition of “product”, and this always seemed to me an optimistic number. In the two global corporations I have worked in, the number would be in the hundreds.

This is partly because even if you have a single ERP vendor, every separate implementation of that ERP system has a slightly different definition of this data, and ERP systems are only some of the many systems that need master data. Since the major application vendors concentrate on turf-grabbing from each other (as we see this week with Oracle’s takeover of the ailing Siebel), it is not in their interests to make it easy for other systems to interact with theirs. Their answer is “buy everything from us”, a wholly impractical proposition since no one vendor (not even SAP) covers more than 30-70% of a company’s business needs (and that is according to SAP’s former CEO). Hence the big application vendors are ill-suited to pick up the crown of the master data kingdom.

Instead, what is needed are applications designed from the outset on the assumption that there are many related versions of the truth, and that software has to be able to deal with this. This was the assumption on which KALIDO was developed at Shell, and any other vendor hoping to gain significant market share in this area needs to be able to deal with this reality too. Paradoxically, because the apps vendors are locked like dinosaurs in their “footprint” wars, I believe it will be the small furry mammal equivalents in software who will be able to produce working solutions here, since they do not have a massive legacy of application code to defend. Roll on the evolution.
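To make the “many related versions of the truth” idea concrete, here is a minimal sketch in Python (the system and attribute names are hypothetical, and this is not a description of how KALIDO or any particular product is built). Instead of forcing one winning value per attribute, each master entity keeps every source system’s version and makes the disagreements visible:

```python
from dataclasses import dataclass, field

# A minimal sketch of "many related versions of the truth": each master
# entity keeps every source system's view rather than a single winner.

@dataclass
class SourceVersion:
    source_system: str   # e.g. "SAP_EU", "Siebel_NA" (hypothetical names)
    attributes: dict     # that system's view of the customer

@dataclass
class MasterCustomer:
    master_id: str
    versions: list = field(default_factory=list)

    def add_version(self, version: SourceVersion) -> None:
        self.versions.append(version)

    def conflicts(self, attribute: str) -> dict:
        """Return each source's value for an attribute, so disagreements
        are visible rather than silently overwritten."""
        return {v.source_system: v.attributes.get(attribute)
                for v in self.versions}

# Usage: two systems disagree about the same customer's credit terms.
acme = MasterCustomer("CUST-001")
acme.add_version(SourceVersion("SAP_EU", {"name": "Acme GmbH", "credit_terms": "30 days"}))
acme.add_version(SourceVersion("Siebel_NA", {"name": "Acme Inc.", "credit_terms": "45 days"}))
print(acme.conflicts("credit_terms"))
# {'SAP_EU': '30 days', 'Siebel_NA': '45 days'}
```

The point of the sketch is simply that reconciliation becomes an explicit, queryable step rather than something done once, destructively, at load time.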

Real Time BI – get real

I permitted myself a wry smile when I first heard the hype about “real time” business intelligence, which is being hyped again this week. The vision sounds appealing enough: as soon as someone in Brazil types in a new sales order, the ultra-swish business intelligence system in central office knows and reacts immediately. Those who have worked in large corporations will be entertained by the naivety of this, since most large companies would be grateful just to know who their most profitable global accounts are.

The mismatch between fantasy and reality is driven by two factors. The first is that business rules and structures (general ledgers, product classifications, asset hierarchies, etc.) are not in fact uniform, but are spread out among many disparate transaction system implementations – one survey found that the average Global 2000 company has 38 different sources of product master data alone. Yes, this is after all that money spent on ERP. Large companies typically have dozens or even hundreds of separate ERP implementations, each with a subtly different set of business structures from the next (plus the few hundred other systems they still have around). The second problem is that the landscape of business structures is itself in constant flux, as groups reorganize, subsidiaries are sold and new companies are acquired.

Today’s business intelligence and data warehouse products try to sweep this reality under the carpet, providing tools to convert the source data into a lowest-common-denominator consistent set that can be loaded into a central data warehouse. This simplification is understandable, but it means that local variations are lost and many types of analysis are not possible. Worse, if the business structures change in the source systems, then the data warehouses and the reports built on top of them are undermined, with changes to the structure of the data warehouse typically taking months to bring about. In those intervening months, what happens to the “real time” business intelligence?
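As a toy illustration of that flattening (the product codes and categories here are invented, not taken from any survey), consider two regional systems that classify the same products at different levels of detail; a warehouse load that keeps only what every source can supply throws the finer local classification away:

```python
# Two regional systems classify the same products at different depths.
region_a = {  # fine-grained local hierarchy
    "P100": ("Lubricants", "Synthetic", "Racing grade"),
    "P200": ("Lubricants", "Mineral", "Marine grade"),
}
region_b = {  # coarser local hierarchy
    "P100": ("Lubricants",),
    "P200": ("Lubricants",),
}

def common_classification(*sources):
    """Keep only the classification levels that every source can provide."""
    merged = {}
    for product in sorted(set().union(*sources)):
        views = [src.get(product, ()) for src in sources]
        depth = min(len(v) for v in views)   # lowest common denominator
        merged[product] = views[0][:depth]
    return merged

print(common_classification(region_a, region_b))
# {'P100': ('Lubricants',), 'P200': ('Lubricants',)} - the grade detail is gone
```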

The problem comes down to a fundamental truth: databases do not like having their structure changed. Adding data is fine, but anything which affects the structure of a database (a major reorganization will usually do the trick) will cause pain. If you doubt this, ask a CFO how long it will take him or her to integrate an acquisition just enough to be able to run the management accounts as one combined entity. For some companies acquisitions are a way of life, with several undertaken each year. Such companies are always chasing their tail in terms of trying to get a complete picture of their business performance. This is not just inconvenient but also costly: one company built a large and well-used conventional data warehouse for USD 4 million. When they properly accounted for all aspects of maintenance, including business user time (which few companies do), they found it was costing USD 3.7 million per year to maintain. There was nothing wrong with the warehouse design; they were operating in a volatile business environment, and 80% of the maintenance cost was caused by dealing with business change.

What is needed, and what the industry has generally failed to deliver, are technology solutions that are comfortable dealing with business change: “smarter” software. Today few IT systems can cope with a change in the structure of the data coming into the system without significant rework. The reason for this lies in the heart of the way that databases are designed. They are usually implemented to reflect how the business is structured today, with relatively little regard for how to deal with future, possibly unpredictable, change. Introductory courses on data modeling show “department” and “employee” with a “one-many” relationship between them, i.e. a department can have many employees, but a person can be in only one department (and must be in one department). This is easy to understand and typical of the way data models are built up, yet even this most basic model is flawed. I have myself been between departments for a time, and at another time was briefly part of two departments simultaneously. Hence the simple model works most of the time, but not all of the time: it is not resilient to exceptional cases, and IT systems built on this model will break and need maintenance when such special cases arise.

This is a trivial example, but it underlies the way in which systems, both custom built and packaged, are generally built today. Of course it is hard (and expensive) to cater for future and hence somewhat unknown change, but without greater “software IQ” we will be forever patching our systems and discovering that each package upgrade is a surprisingly costly process. If you are the CFO of a large company, and you know that it takes years to integrate the IT systems of an acquired company, and yet you are making several acquisitions each year, then getting a complete view of the business performance of your corporation requires teams of analysts with Excel spreadsheets, the modern equivalent of slaughtering a goat and gazing at its entrails for hidden meaning.
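As a sketch of what a more change-resilient design can look like even at this trivial level (a hypothetical schema, not any particular product’s), compare the textbook “one employee, exactly one department” model with one that records membership as a dated fact, so that gaps and overlaps are legal data rather than exceptions that break the system:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Textbook model (breaks on the exceptional cases described above):
#   class Employee:
#       name: str
#       department: str   # rigid: exactly one department, always
#
# Change-resilient sketch: membership is a separate fact with a validity
# period, so zero or several concurrent departments are perfectly legal.

@dataclass
class Membership:
    employee: str
    department: str
    valid_from: date
    valid_to: Optional[date] = None   # None = still current

def departments_on(memberships, employee, as_of):
    """All departments an employee belonged to on a given date (0, 1 or many)."""
    return [m.department for m in memberships
            if m.employee == employee
            and m.valid_from <= as_of
            and (m.valid_to is None or as_of <= m.valid_to)]

history = [
    Membership("Miles", "Finance",  date(2004, 1, 1), date(2004, 6, 30)),
    Membership("Miles", "IT",       date(2004, 9, 1)),                      # gap: between departments
    Membership("Miles", "Planning", date(2005, 1, 1), date(2005, 3, 31)),   # overlaps with IT
]

print(departments_on(history, "Miles", date(2004, 7, 15)))  # [] - between departments
print(departments_on(history, "Miles", date(2005, 2, 1)))   # ['IT', 'Planning'] - two at once
```

The price of this flexibility is a slightly more complicated query, but the schema itself no longer needs to change when the organisation does.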

Some techniques are emerging in software that tackle the problem in a more future-oriented way, but these are the exception today. Unfortunately the vendor community finds it easier to sell appealing dreams than to build software that actually delivers them. “Real-time business intelligence” comes from the same stable as those who brought you the paperless office and executive information systems (remember those?), where the chief executive just touches a screen and the company instantly reacts. Back in reality, where it takes months to reflect a reorganization in the IT systems, and many months more just to upgrade a core ERP system to a new version, “real time” business intelligence remains a pipe dream. As long as people design data models and databases the traditional way, you can forget about true “real-time” business intelligence across an enterprise: the real world gets in the way. It is telling that the only actual customer quoted in the Techworld article, Dr Steve Lerner of Merial, had concluded that weekly data was plenty: “The consensus among the business users was that there was no way they were prepared to make business decisions based on sales other than on a weekly basis”.

On SAP and Zombies

I worked for many years at Exxon and Shell and noticed something curious about large (are there any other kind?) SAP implementations: something odd happens to the people on them. Previously reasonable people would start to view the world entirely through the eyes of SAP, as though by using the software they had joined some secret society or cult. Despite any evidence to the contrary, it was as if these people had their critical faculties removed when discussing SAP applications – which could do no wrong even when there were clear problems or issues. If, for example, you mentioned some issue with the software or project, a glazed look would come into their eyes as if they were extras in “Invasion of the Body Snatchers” (http://www.imdb.com/title/tt0049366/combined).

I noticed this for the first time after being involved in an SAP roll-out at Exxon in the late 1980s, one of the first large-scale SAP projects outside Germany. The project was justified because Esso UK (where I worked) had very old transaction systems that needed replacing with something, and a previous attempt to implement software from Walker had been a fiasco. The business case rested on getting rid of all the accounts clerks who raised invoices and processed orders, the idea being that the rest of us employees would do this instead using SAP. So if you wanted anything from stationery to a new part for a petrol station, you would use the system instead of involving people from finance. Unfortunately SAP at the time was still partly in German, and had a quite complex user interface that involved remembering various esoteric codes to do anything like post an invoice, so although we were all sent on training, things did not go well. After a period of denial, it emerged that Esso’s suppliers were not being paid, to the extent that there were concerns about the company’s credit rating, and so all the old finance clerks (and more) were re-hired to sort out the mess. To save face, they were distributed around the business lines so as not to make it look as though there were now more admin and finance staff than before the system. This was the first instance of denial around SAP that I observed – the project was “too big to fail”.

A little later I moved to Shell UK and was invited to a presentation by a gentleman from Shell Centre who will remain nameless; let’s call him Roger. We were to hear about Shell’s new IT strategy. Flanked by consultants from PWC, Roger proceeded to explain that Shell was going to implement SAP. The bulk of the presentation was done by PWC, and was light on detail, e.g. the only business case seemed to be “er, other big companies are doing it”. When I mentioned that the Esso UK implementation had not delivered its promised benefits there was another zombie-like experience, with Roger saying that he had been on exotic trips to all sorts of companies in warm locations in order to research the area, and there would be no problem. I asked “Have you ever seen an SAP screen?”, to which the reply “I don’t need to” was not the comforting response I had hoped for. If you were about to spend a large but unspecified amount of money on a system and you were in charge of the recommendation, would it not have been prudent to cast a quick glance at what was being bought? Apparently not. Fortunately the consultants from PWC, who at the time got 11% of their worldwide revenue from SAP implementations and so were utterly objective advisors, saw no problem at all either.

The spaced-out stares continued when we were selecting a finance, time-writing and billing system for Shell Services International, the internal IT arm of Shell at the time (1998). Despite the fact that SAP was designed for manufacturing companies rather than services companies, and despite the fact that none of the consultancy firms implementing SAP used it for their own internal tracking, SSI management selected SAP for this purpose. What was required was very simple and could have been done with a dozen commercial packages, or indeed probably an Excel spreadsheet on steroids, but SAP it was. The cost of this, for an organisation with under USD 1 billion in total revenue (and loss-making at that), was estimated at USD 50 million, i.e. 5% of total revenue, which should have set off some alarm bells (most big companies spend less than 2% of their revenue on IT, never mind on one project). Not a bit of it, and the project duly clanked into life. What did it cost? Not a popular question, and eventually someone admitted that it had cost USD 70 million “up to the time they stopped counting”. Did anyone get into trouble over this? Far from it. Were the customers happy with their new billing system? I can assure you they were not.

So what is it that causes such odd behaviour? I can only speculate that once a project reaches a certain size, it simply cannot be seen to fail – too many senior people have their names on the decision to implement, and so problems dare not be acknowledged. The people on the project, comforted by the sheer scale of the thing going on around them and concentrating on delivery, have trouble seeing the wood for the trees. The scary thing was the inability even to raise issues for fear of being labelled “not a team player”. I’m not sure whether this is something that occurs on any very large IT project, as I doubt there is anything peculiar to SAP that induces such a lack of objectivity. However it is an odd experience, seeing those around you whom you formerly trusted seemingly oblivious to all issues and suddenly incapable of critical reasoning. I felt like Kevin McCarthy’s character Miles Bennell at the end of the “Invasion of the Body Snatchers” movie, running around shouting: “You’re next, you’re next!”

Hiring Top Programmers

At Kalido we want to hire the best 1% of programmers. This is for a very good reason: the top 1% of programmers produce ten times as much code as average ones, yet their defect rates are half the average. This is a pretty amazing productivity difference, yet it has been found consistently over the years, e.g. by IBM. In order to try to search out these elusive people, we use a couple of different tests in addition to interviews. Firstly, we use ability tests from a commercial company called SHL. In particular their “DIT5” test, aimed at programming ability, proves to be very useful. We found a very high correlation between the test results and the performance of our existing programming team when we tried it on ourselves, and we now use it for all new recruits. Another is a software design test that we developed ourselves. We find that very few people do a decent version of this, which allows us to screen out a lot of people prior to interview, saving time for all involved.

I actually find it encouraging that some people don’t like having to do such tests, thinking themselves above such things or (more likely) fearing that they won’t do well. This is an excellent screening mechanism in itself – as a company we want the very best, and in my experience talented people enjoy being challenged at interview, rather than being asked bland HR questions like “what are your strengths and weaknesses?” (yeah, yeah, we know, you are too much of a perfectionist and work too hard, yawn). Partly as a result of these tests, as well as detailed technical interviews, we have assembled a top-class programming team.

I am encouraged that a similar view is shared by Joel Spolsky, who writes a fine series of insights into software, “Joel on Software”:

http://www.joelonsoftware.com/articles/HighNotes.html

which I highly recommend.