Technology Planning

I spent much of my career in technology planning, at both Exxon and Shell. Despite being in the same industry, these organizations have quite contrasting cultures: Exxon is the uber-centralist, with everything driven from head office and very little room for variation allowed in its subsidiaries. Shell has changed over the years but was traditionally extremely decentralized, allowing its subsidiary operating companies a great deal of autonomy; in the mid to late 1990s Shell tried to become more centralized in its approach, though still nothing like as much as Exxon.

What worked best in technology planning in these different companies? Exxon’s strength was its standardization at the infrastructure level. There were rigid standards for operating systems, databases, email, telecoms protocols and even desktop applications, long before most companies even dreamed up their “standard desktop” initiatives. This avoided a lot of the problems that many companies had, e.g. with varying email systems or differing database applications, and critically meant that skills were very transferable between jobs and between Exxon subsidiaries (important in a company where most people move jobs every two to three years). By contrast Shell was fairly anarchic when I joined, even having different email systems, desktops running anything from Windows to UNIX, and every database, TP monitor, 4GL etc that was on the market, though often just one of each. In 1992 I did a survey and recorded 27 different BI/reporting tools, and those are just the ones I could track down. It was perhaps not surprising that Shell spent twice as much on IT as Exxon in the mid 1990s, despite the companies being about the same size (Exxon is now a lot bigger due to the Mobil acquisition). On the other hand Shell had excellent technology research groups, and many collaborative projects between subsidiaries, that helped spread best practice. Also, since operating companies had a lot of autonomy, central decisions that proved unsuitable in certain markets simply never got implemented rather than being rammed down subsidiaries’ throats.

It was certainly a lot more fun being a technology planner in Shell, as there were so many more technologies to tinker with, but it was also like herding cats in terms of trying to get decisions on a new technology recommendation, let alone getting it implemented. In Exxon it was extremely hard, and probably career-limiting, to be in a subsidiary and try to go against a central product recommendation; in Shell it was almost a badge of honor to do so. Shell’s central technology planners did, though, make some excellent decisions, e.g. they were very early into Windows when the rest of the world was plugging OS/2, and they avoided any significant forays into the object database world at a time when analyst firms were all writing the obituaries of relational databases.

Having worked in both cultures, I believe that the optimum approach for technology planners is to standardize as rigidly as your company will let you on things that have essentially become commoditized. For example, who really can tell the difference between Oracle, DB2 and SQL Server any more? For the vast majority of situations it doesn’t matter, and it is more important to have one common standard than it is to pick the “right” one. On the other hand, in an emerging area, it is just self-defeating to try and pick a winner at too early a stage. You do not want to stifle innovation, and the further up the technology stack you go, the less harm a few bad decisions are likely to do. For example, get the wrong operating system (like OS/2) and it is a major job to rip it out. Get the wrong web page design tool and there is less damage done.
Moreover, at the application level there is likely to be a clearly quantified cost-benefit case for a new technology, since applications touch the business directly, e.g. a new procurement system. At the infrastructure level it is much harder to nail down the benefits case, as these benefits are shared and long-term. If your new application has a nine-month payback period, then it matters less if one day it turns out to be “wrong”, but you don’t want to find this out with your middleware message bus product. There are lots of hobbyists in technology, and few products at the infrastructure level are truly innovative, so getting the lower levels of the stack standard is well worth doing.

Overall, while both companies are clearly highly successful, I think on balance that Exxon’s strong centralization at the infrastructure level is more beneficial from a technology planning viewpoint. Quite apart from procurement costs, the skills transfer advantages are huge: if your DBA or programmer moves from the UK to Thailand, he or she can still use their skills rather than starting from scratch. However Shell’s greater willingness to try out new technology, especially at the application level, often gave it a very real advantage and rapid payback, even if overall IT costs were higher.

What is perhaps interesting is how the technology planning needs to reflect the culture of the company: a company whose decision making is highly decentralized will struggle greatly to impose a top-down technology planning approach, whatever its merits.

Why are most job interviews so bad?

We have all been to job interviews, but has it struck you how remarkably random the whole process is? Some companies do put effort in, but I recall interviews where the person interviewing me was clearly bored, had been asked to fill in for someone else, didn’t really know what the job was about, etc. If it is a big company they might have a session about the company, e.g. I recall as a graduate going to an interview at Plessey and hearing about the pension plan for an hour; just what every 21 year old is dying to listen to. On the other side of the fence, most of us will have interviewed some surreal people. I had one guy who was clearly on drugs, and one CFO candidate who considered that answering my accounting questions was “beneath him”. My colleague had one candidate for a technical author post who, when asked about his prior work experience, jumped up, grabbed and opened the large suitcase he was carrying, revealing a cloud of dust, a horrible musty smell and two maintenance manuals for the ejector seat of an aircraft, which he proceeded to read out loud.

Does it really have to be this way?

I have been studying this in some depth recently, and was pleased to find that there is at least some science around. If you look at various selection techniques, it turns out that “unstructured interviews”, i.e. the ones most of us are used to, are actually a pretty dismal way to select people. A major 2001 study looked into various selection techniques and tracked performance at selection against later performance, e.g. how well candidates did at interview versus how well they were performing in the job a few years later. It turns out that unstructured interviews manage just a 0.15 correlation between interview results and job success, i.e. only a bit better than random (a correlation of 1 is perfect, zero is random, while -1 is perfect inverse correlation). Highly structured interviews, based on job competencies and evidence-based questions (which can be trained), manage a 0.45 correlation. Ability tests on their own manage a correlation of 0.4, and if combined with structured interviews take the number up to 0.65, which although still not perfect was the best score that had been achieved.
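For readers unfamiliar with how such a correlation would actually be computed, here is a small sketch. The score lists are invented purely for illustration and have nothing to do with the study’s data; the point is only to show the calculation behind figures like 0.15 or 0.45.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    # Pearson correlation: covariance divided by the product of std devs
    mx, my = mean(xs), mean(ys)
    cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (pstdev(xs) * pstdev(ys))

# Invented example: interview ratings vs. job performance ratings years later
interview_scores = [6, 8, 5, 9, 7, 4, 8, 6]
job_performance = [5, 6, 7, 8, 4, 6, 9, 5]

print(round(pearson(interview_scores, job_performance), 2))
```

A result near 0.15, as the study found for unstructured interviews, means the interview ratings tell you very little about who will actually perform well in the job.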

Ability tests take various forms. The best of all are ones that are directly related to the job in hand, e.g. actually getting someone to do a sales pitch if they are a salesman, or to write some code if they are a programmer. For more on one of these see an earlier blog. The most creative of these I heard about was an interviewer for a sales position who asked each candidate to sell him the glass of water in front of him on his desk. One candidate went to the waste-paper bin by the desk, pulled out a match, set fire to the paper inside and then said “how much do you want for the water now?”. Generally less creative approaches are adequate, and at Kalido we use a software design test for all our software developers, which enables us to screen out a lot of less gifted candidates, saving time both for us and for the candidates.

General intelligence tests also score well since, all other things being equal, bright people do better in a job than those less bright; studies show that this applies across all job disciplines (yes, yes, you can always think of some individual exception you know, but we are talking averages here). The 0.4 correlation with job success that these tests provide is a lot better than the 0.15 which most interviewing manages. Personality profiles can be used to supplement these, as for some types of job research shows that certain personality types will find the work more comfortable than others. For example a salesman who hated rejection, didn’t enjoy negotiating, disliked working on his own and was pessimistic might still be a good salesman, but would probably not be a very happy one. You don’t have to invent such profiles and tests: there are several commercially available ones, such as the ones we use at Kalido from SHL.

The cost/benefit case for employing proper interview training and such tests is an easy one to make: the cost of a bad hire is huge just in terms of recruitment fees, never mind the cost of management time in sorting it out, the opportunity cost of the wasted time, etc. Yet still most software companies don’t employ these inexpensive techniques. Perhaps we all like to think our judgment of people is so great that other tools are irrelevant, yet remember that 0.15 correlation score. There may be a few great interviewers out there, but most people are not, and by supplementing interviews with other tools like job-related tests and good interview training we can improve the odds of hiring the best people. I used to work at Shell, who did a superb job of structured interview training, and I recall being trained for several days, including video playback of test interviews, on how to do a half-hour graduate interview. This may sound like a lot of effort, but it is trivial compared to the cost of a bad hire.

Many software companies seem to be missing a trick here. When I applied for jobs as a graduate I recall virtually every large multi-national had an extensive selection process including ability tests, yet in the software industry, where almost all the assets are its people, such things seem rare. I was amused to hear a recruitment agency whining at me for our use of screening tests at Kalido: “but only software companies like Microsoft and Google do tests like that”. I rest my case.

Uncomfortable bedfellows

It is rare to find the words “ethical” and “software company” in the same sentence. The industry has managed to become a byword for snake oil, aggressive pricing and sneaky contract terms. Years ago when working at Exxon I recall one vendor who sold Esso UK some software, rebadged the product as two separate products and then tried to charge Esso for the “other” product, which of course they had already bought. Needless to say I was having none of that, but the very notion that they would try this spoke volumes about their contempt for the customer.

The prize (so far) goes to one of my colleagues, who used to work for a software company that once sold a financial package to a customer on the basis that it had a particular module. The only problem was that the module did not exist. He was asked to set up a “demo” of the software for the customer which sounds like something out of “The Office”. In one room sat the customer at a screen, typing data into the system and requesting reports from an (entirely fictitious) pick list of reports that the vendor was supposed to have built but had not. In the next room sat a programmer. When the customer pressed “enter” the data would appear in a table, and the programmer quickly hand-edited a report format using the customer’s data and sent it off to the printer. A couple of minutes later the report was brought in to the customer, who could then see the new reporting module in action. The slow response time was explained by an “old server”. Lest you think this was some fly-by-night operation, this major provider of financial software had over USD 100 million in revenue back in the early 1990s, when this particular scam was perpetrated. And yes, they closed the deal.

As if to prove that enterprise software companies are still amateurs when it comes to dubious behavior, Sony has just made all the wrong headlines by placing what is essentially a clever virus on its CDs, purportedly to prevent digital copyright violations. The software installs itself into the root directory of your PC and, quite apart from preventing unauthorized copying of music, also broadcasts back to Sony what music you have been listening to. Apparently millions of PCs may have been infected, and only after several refusals has Sony now agreed to stop producing the spyware. Just which corporate manager at Sony thought this was a bright idea that no one would figure out is yet to emerge. However it is safe to say that Sony’s PR agency is not having a quiet run-up to Christmas right now.

I’d be interested to hear about any reader’s experiences of outrageous software company behavior.

Elephants rarely dance

A recent Business Week article gave a good example of a small software company providing an important solution to retailer Circuit City in the face of competition from industry giants. The dot-com madness made CIOs understandably wary of small software companies bearing gifts, yet it is important for these same CIOs to realize that they do their shareholders no favours by adopting an ultra-conservative “buy only from giants” policy. For a start, this option is by no means always safe; it is also a flawed strategy.

Industry giants rarely produce innovative software. For example, the founders of both Siebel and Salesforce were former Oracle executives, but were unable to create what they knew the market wanted at Oracle itself. Large companies inevitably become less fast-moving as they grow, and frequently become more inwardly focused and cease listening to their customers, the very people who made them successful when they were small. In my years as a strategic technology planner I learned that the key to success in software portfolio planning was a twin approach: standardize on commodity infrastructure, yet encourage innovation above this layer. For example, it is pretty clear by now that the major relational databases (Oracle, DB2, SQL Server) are all functionally rich and basically work. Nobody uses more than a fraction of the features they have, and which one you choose is largely a matter of taste. However it is best if you can standardize on one of them, since you then get easy interoperability, and your IT staff build up skills in that technology that transfer when they switch departments. This is an example of a layer of infrastructure that has matured to the extent that the benefits of standardizing outweigh the loss of a few features at the edges.

Yet at the application layer this is by no means the case, except perhaps in finance, where no one is really likely to produce a deeply innovative general ledger system any more. In marketing, sales and many other business areas, exciting applications are still popping up, and companies like Salesforce can radically change an existing application area. Here it makes no sense to try and second-guess the market, where evolution is still working its magic to see which technologies work best. Trying to standardize too quickly in an area that is still evolving will most likely make you look foolish when you get it wrong, and also misses real opportunities to take advantage of new and exciting offerings.

Software behemoths are not the place where innovation flourishes, and the further they get from their core area of competence, the less likely they are to succeed. As a strategic technology planner, your job is to solidify the core infrastructure but to enable your customers to take advantage of innovation in fast-moving or evolving areas. This state of affairs is not going to change in software, where fairly low barriers to entry enable innovation without massive capital investment. Best-of-breed software does indeed live!

Look before you leap, but look in the right place

How should customers go about “due diligence” prior to buying software? Certainly enterprise software is a major, multi-year commitment, and the overall costs of it will be many times the actual purchase price, so it is worth looking carefully before you leap. However many companies seem to look in the wrong place.

Firstly there is an assumption that if you buy software from an industry behemoth then this is “safe”, whereas buying from a smaller vendor is inherently more dangerous. This is not necessarily the case. While the actual finances of an industry giant are rarely in doubt, the question to ask is not the size of the balance sheet, but how committed they are to this particular product. When I was working at Exxon in the 1980s we discovered that the “strategic” 4GL called ADF that IBM sold was to be dropped in favor of another tool they had built called CSP. The fact that we were a big oil company and they were ultra-safe IBM did not help us one bit. Migration? We could hire their consultants to help us rewrite all the applications: thanks a lot. Or consider all the technologies that Oracle has acquired over the years and quietly dropped when they failed to perform. When looking at products from large vendors I believe the key to risk assessment is to see how far the vendor is straying from its core competence. For example, Oracle is hardly likely to abandon its core database product, which still accounts for a huge share of its profits, but just how committed will it be to something a long way from this core area of expertise? SAP has come to dominate the ERP space, but its execution on products away from its core competence has been shaky, to say the least. The most recent example was it dropping its MDM offering after two years, now promising a new product based around an acquisition. Cold comfort to those loyal customers who pioneered SAP MDM thinking that it was the “safe” choice. Vendors tend to misfire the further they stray from their core area of business, and customers should factor this into their risk assessment.

Assuming the software you are interested in is not from an industry giant, how do you assess the risks? Small software vendors always dread the following sentence: “We like your software but we will need to bring in our financial due diligence team before we go any further”. This is partly because large corporations frequently lack experience in understanding how software companies are financed, and end up asking the wrong questions. Financial analysts used to dealing with large, stable public companies are often surprised at how small, and how apparently shaky, the balance sheets of privately held software companies are. This is largely because most are venture funded, and venture capital firms are careful to dole out capital as their portfolio companies need it, rather than investing cash just to bolster balance sheets.

Before looking at the right questions to ask, here is a true story to illustrate the wrong way. When working at Shell I was asked to look at a small company called Dodge Software (the name in itself did not inspire confidence), a general ledger vendor with some innovative technology that a subsidiary of Shell had already purchased. Before making a deeper commitment it was decided that due diligence should be done, so I was teamed up with a banking type with a very posh accent to look at the company. The company was very reluctant to share its accounts, but as it wanted the business it had little choice. There was a clear problem in terms of cash, with the company having less than six months of cash left at the rate it was burning. The finance analyst called the company’s VCs, who, hardly surprisingly, sang its praises and talked up its rosy future – well, what else were they going to do? The banker then met the company CFO, who assured him that everything was fine and that further funds could be raised as needed. This actually comforted the banker, but not me, because I could see that, while the company had built up 16 customers with good names, there was no momentum: there were few recent customer names. This meant that the company was in fact stalling, and so would very likely struggle to raise another round of capital. Even if it did, why would the market situation improve for this company? It was pig-headedly selling a best-of-breed general ledger package at a time when broad integrated finance packages were all the rage, and it seemed unlikely to change this mindset. Consequently I wrote a negative assessment and the banker a guarded but positive one. The company duly folded about six months later, unable to raise new money.

The key here is not to focus entirely on the cash position of the vendor. If the company is growing fast and acquiring prestigious customers at a steady clip then it will very likely be able to raise more cash when it needs it. However there is a saying in venture capital: “raise money when you don’t need it, because it is hard when you do need it”. When things are going well VCs flock around, but when there are problems they stay away in droves.
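The two checks in this story, cash runway and sales momentum, are simple to express in code. This is a minimal sketch with invented figures (none of these numbers relate to Dodge Software):

```python
# Invented figures for illustration only
cash_on_hand = 1_200_000   # dollars in the bank
monthly_burn = 250_000     # net cash consumed per month

runway_months = cash_on_hand / monthly_burn
print(f"Runway: {runway_months:.1f} months")  # well under a six-month comfort line

# Momentum check: new customers signed per year, most recent year last.
# A steadily falling curve is the warning sign, whatever the runway says.
new_customers_per_year = [6, 5, 3, 1]
stalling = all(later <= earlier
               for earlier, later in zip(new_customers_per_year,
                                         new_customers_per_year[1:]))
print("Momentum stalling:", stalling)
```

The point of the sketch is that runway alone is misleading: a vendor with four months of cash but accelerating customer wins will usually raise money, while one with a year of cash and a flat customer curve may well not.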

The message is that there are certainly risks in buying software, but a risk assessment should be carried out even if buying from the largest vendors. For smaller vendors their market momentum is critical, and needs to be assessed just as much as their cash reserves.

Road testing software

Fortune 500 companies have surprisingly varied approaches to procurement of software. Of course the sheer size of the project or deal is an important factor, with the chances of professional procurement people being wheeled in rising as the deal value rises. Having been on both sides of the fence now, I do have some observations.

Some customers use an RFI (request for information) as a way of trying to improve their own understanding of their problem, and this approach can lead to trouble. If you are not quite sure what you need then you can be certain that a software vendor has even less idea. Moreover, if your needs are vague, you can be sure that every vendor’s product will mysteriously fit these vague needs. It is better to sit down with your business customers and get a very firm grasp of the precise business needs, and then plan out how you are going to assess the software, before you speak to a single vendor. You should plan in advance just how you are going to select the product from the “beauty parade” of vendors that you will eventually pick from. It is important that you think about this PRIOR to talking to vendors, or your process will be tainted.

How are you going to pick a provider? Just bringing them in and seeing who presents well is unlikely to be optimal, as you are relying too much on impressions and the skill of the individual sales teams. Do you want the best product, or the one with the slickest salesman? Instead you should define up front the list of functional, technical and commercial criteria that will frame your choice, and agree a set of weightings, i.e. which are most important. You then need to think about how you are going to measure these in each case, e.g. what a score of “8/10” means for a particular criterion. Some things you can just look up, e.g. many commercial criteria can be established from the internet (revenues of public companies) or via things like Dun and Bradstreet risk ratings. Analyst firms can help you short-list options, but be aware that analyst firms take money from vendors as well as customers. A key bit of advice here is not to go mad with the criteria – remember that you are going to have to score these somehow. Moreover, do a light check first to get the “long list” of vendors down to a short list before you delve too deeply. I know of a UK NHS trust who have a project going on right now with literally hundreds of criteria and a “short list” of 22 vendors. How on earth they are planning to score these is a mystery to me. Slowly is presumably the answer. Get it down to three or four vendors via a first pass.
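The weighted-criteria approach is easy to sketch. The criteria, weights and vendor scores below are all invented for illustration; the essential discipline is that the weights are agreed before anyone scores a vendor:

```python
# Hypothetical criteria and weightings, agreed before the beauty parade
weights = {"functional fit": 0.4, "technical fit": 0.3, "commercial": 0.3}

# Each short-listed vendor scored 1-10 against each criterion
scores = {
    "Vendor A": {"functional fit": 8, "technical fit": 6, "commercial": 7},
    "Vendor B": {"functional fit": 6, "technical fit": 9, "commercial": 8},
    "Vendor C": {"functional fit": 7, "technical fit": 7, "commercial": 5},
}

def weighted_score(vendor_scores, weights):
    # Sum of (weight x score) across all criteria
    return sum(weights[c] * s for c, s in vendor_scores.items())

# Rank vendors by their total weighted score, best first
for vendor in sorted(scores, key=lambda v: weighted_score(scores[v], weights),
                     reverse=True):
    print(vendor, round(weighted_score(scores[vendor], weights), 2))
```

Note how the ranking can differ from a gut-feel choice: the vendor with the best functional fit does not necessarily win once the agreed weightings are applied.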

Once you have your short-list, a key part of the process is likely to be getting the vendor to actually try the software out on your own data in your own environment. Just because it all works fine in a different industry, on a different platform and at a different size of company does not mean it will all go smoothly in your environment, so you should conduct a “proof of value” with each of the short-listed vendors. You will learn far more from seeing the software actually operate on your data than from any number of pretty PowerPoint slides and carefully crafted canned demos.

Be reasonable here. A vendor selling software for a six-figure sum will be prepared to put in a day or two of pre-sales effort, but if you expect a massive multi-week evaluation then you should expect to pay for some consulting time, either from the vendor or from a consulting firm deeply experienced in the technology. Buying a piece of enterprise software is a major decision, with costs well beyond the basic purchase price, so investing a little up-front to be sure you have made the right choice is a good idea. If you choose the proof of value carefully, you can get a head start on the real business problem by tackling a small subset of it, and you may well learn something about the real implementation issues along the way. The vendors, one of which will be your future partner after all, will also be happy, since they get to understand your requirements better and can hopefully uncover any technical horrors at this stage rather than much later on. It is amazing how often you encounter “you want it to run on what database?” type of basic issues at this stage. It is in your interest to make sure that the proof of value is realistic, e.g. decent data volumes, using the actual environment that you plan to deploy on. We recently had problems with a project where the customer did all the testing on one web app server (Tomcat) and then deployed into production on an entirely different one, and were surprised when it didn’t work first time (“but both web servers adhere to the same standard so it should work”; yeah, right).

Customer references are more important than they may appear. It may be surprising following a slick sales pitch, but small vendors in particular may have very few real customer implementations, and you can learn a lot about what a vendor is really like from customers who have gone past the pitch and actually implemented the product. Of course the vendor is not going to pick its unhappiest customer to do the reference, but most people are fairly honest about their experiences if you ask. Even large vendors may have very few implementations of this PARTICULAR product, so size is not everything, as with so many things in life. I remember when I was working for a major corporate and a huge computer manufacturer was trying to sell me a big-ticket application, but could not come up with a single reference customer. This told me all I needed to know about the maturity of the technology.

A well structured evaluation process and proof of value does cost some effort up-front, but it will pay dividends in terms of demonstrating whether a technology is likely to actually do what you want it to and deliver value to your project.

Elusive Return on Investment

An article in CIO magazine revealed a fascinating paradox. According to a survey by Cutter, 63% of IT executives claimed that they are required to present a cost justification for IT projects, yet according to Aberdeen Group just 5% of companies actually collect ROI data after the event to see whether the benefits actually appeared. I have to say that my own experience in large companies bears out this “ROI gap”. Most post-implementation reviews occur when a project has gone hideously wrong and a scapegoat is required. There are exceptions – I have been impressed at the way that BP rigorously evaluates its projects, and Shell Australia used to have a world-class project office in which IT productivity was rigorously measured for several years (a sad footnote is that this group was eventually shut down to reduce costs). However, overall I think these apparently contradictory survey findings are right: a lot of IT projects have to produce a cost/benefit case, but hardly ever are these benefits tested.

It is not clear that the failure to do so is an IT problem; rather it is a failure of business process. Surely the corporate finance department should be worried about this lack of accountability – it is hardly IT’s problem if the business doesn’t bother to check whether projects deliver value. It really should not be that hard. Project costs (hardware, software, consultants, personnel) are usually fairly apparent or can be estimated (unless, it would seem, you work in government) while benefits are more slippery. This is mainly because they vary by project and so don’t fit into a neat template. However they will usually fall into the broad categories of improved productivity (e.g. staff savings), improved profitability (e.g. reduced inventory) or, more indirect and woollier, improved customer value (e.g. the falling price of PCs over the years). It should be possible to nail down estimates of these by talking to the business people who will ultimately own the project. Once these benefits have been estimated then it is a simple matter to churn out an IRR and NPV calculation – these are taught in every basic finance class, and Excel conveniently provides formulae to make it easy. Of course there are some IT projects that don’t require a cost-benefit case, regulatory ones being an example (“do this or go to jail”), but the vast majority should be possible to justify.
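The NPV and IRR calculations are short enough to sketch directly. The cash flows here are invented for illustration: a project costing 500,000 up front with three years of estimated benefits. Excel’s NPV and IRR functions do the same job; this just shows what they compute.

```python
# Invented example: year 0 is the project cost, years 1-3 are estimated benefits
cashflows = [-500_000, 250_000, 300_000, 300_000]

def npv(rate, cashflows):
    # Discount each year's cash flow back to today at the given rate
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
    # Simple bisection: find the discount rate at which NPV crosses zero.
    # Works for conventional projects (one up-front cost, later benefits).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"NPV at 10%: {npv(0.10, cashflows):,.0f}")
print(f"IRR: {irr(cashflows):.1%}")
```

A positive NPV at the company’s hurdle rate, or an IRR comfortably above it, is the kind of evidence a cost/benefit case should contain, and the same numbers can then be checked against reality after the event.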

By going through a rigorous analysis of this type, and then checking afterwards to see what really happened, IT departments will build credibility with the business, something that most CIOs could do with more of.

On software and relationships

An article in Intelligent Enterprise asks “why can’t vendors and customers just get along?” after explaining the many issues on which they are usually at loggerheads. Having been both a customer and a software vendor, I think Joshua Greenbaum points to one key point in his article: honesty. As a customer I found that software vendors frequently made patently absurd claims about their software, “this tool will speed up application development by 1000%” being one memorable example. Release dates for software were another issue: vendors fail to grasp that businesses have to plan for change all the time, so a release date slipping by three months is rarely an issue provided that you are told about it in good time. However if you have lined up a load of resources to do an upgrade, the release simply not turning up on the appointed day does cause real cost and annoyance.

Another bugbear is testing, a tricky subject since it is impossible to fully test all but the most trivial software (see the excellent book “The Art of Software Testing”). However vendors vary dramatically in the degree of effort that they put in. At Kalido we have extensive automated test routines which run on every build of the software, which at least means that quite a lot of bugs get picked up automatically, though bugs of course still get through. Yet according to someone who used to work at Oracle, it was policy there to do minimal testing of software, with the testing strategy described as “compile and ship”. This certainly avoids the need for lots of expensive testers, but is hardly what customers expect.

However, customers can be unreasonable too. Their legal departments insist on using sometimes surreal contract templates that were often designed for buying bits of building equipment rather than software, resulting in needless delays in contract negotiation (but in-house corporate lawyers don’t care about this; indeed it helps keep them busy). They can also make absurd demands: we recently lost a contract after refusing to certify that we would fix ANY bug in the software within eight hours, something which patently cannot be guaranteed by anyone, however responsive. A small vendor who won the bid signed up to this demand, and so will presumably be in breach of contract pretty much every day. Quite what the customer thinks they have gained from this is unclear. It is not clear why some customers behave in such ways; perhaps they feel like exacting revenge for previous bad experiences with vendors, or maybe some corporate cultures value aggressive negotiating.

From my experience on both sides of the fence, the best relationships occur when both parties understand that buying enterprise software is a long-term thing, in which it is important that both sides feel they are getting value. Vendors are more inclined to go the extra mile to fix a customer problem if that customer has been doing lots of reference calls for them, and actively participates in beta test programs, for example. As with many things in life, there needs to be a spirit of mutual respect and co-operation between customers and vendors if both are to get the best out of their relationship.

Mastering data

At the 2005 Kalido User Group this week in London, a survey of attendees was carried out regarding their perspectives on master data management. The striking result was that, although around two-thirds of the respondents (and these are serious companies, like BP, Unilever, Philips etc) felt that dealing with their company’s master data was a “top three” priority issue for them, no less than 90% felt that the industry had failed to address it properly. While there are a few software products out there to help tackle customer data integration and product information management, very few address the general issue of managing master data across a global corporation.

Large corporations need to manage not just customers and products, but also other data such as brand, organization, people, price etc, which are scattered throughout a wide range of corporate systems, including multiple instances of ERP systems from the same vendor. The application consolidation that has been occurring in recent years has clearly failed to make inroads into this issue in the eyes of the people that matter: the customers.