Andy on Enterprise Software

James Bond or Johnny English?

August 25, 2006

I really wonder about corporate espionage, the subject of a new book which claims that network technology has meant that industrial espionage is on the rise and easier than ever.  I observe that people who write books or articles about this subject, while no doubt very knowledgeable, also tend to be security consultants selling their services.  I certainly would not want to criticise such fine people, especially as they could no doubt easily find out where I lived.  But, let’s face it, a consultant is hardly about to publish an article “corporate espionage is no big deal really; no need to invest much here”.  My scepticism is prompted by a couple of personal experiences.

In a really big company there is actually very little data that is truly “secret”, and then usually only for a certain period of time e.g. quarterly results just prior to making them public.  Or plans for an acquisition perhaps prior to a bid, or bidding information for large contracts, maybe certain aspects of R&D.  In most cases company executives have enough trouble making sense of their own corporate information.  Let’s face it, if you can’t figure out who your most profitable customers are (a significant problem for most companies, whether they admit it or not), how are your competitors going to work it out by accessing your information systems?  However, as I say, there are some very specific pieces of data with commercial value.  When I used to work in Esso Expro we were contacted by an employee of another oil company (let’s call it Toxico) offering to sell Esso information on their bid for the next round of North Sea acreage.  Now this information was of real value, and appeared to come from one of the bid team, so was genuine.  What did Esso do?  They rang up the Metropolitan police, followed by the security department of Toxico and told them the full story.  There was no debate about this, no hesitation; it was a decision taken in a heartbeat.  Esso has well-grounded ethical principles and was having none of this.

A second personal experience was of a friend who is one of the three smartest people I ever met.  She had a meteoric rise through management in a large corporate that I suppose I should keep nameless, and was promoted to be in charge of their competitive analysis unit.  This unit did spend its time (legally) analysing its competitors and trying to pick up any snippets of competitive information that it could.  After six months my friend recommended that they close her department down.  Why?  Because she could not find a single example of her quite well-funded department’s findings ever actually being acted upon.  In other words management liked to know what was going on, but basically did whatever they were going to do anyway. The company didn’t have the courage to actually follow through on this, and my friend duly moved on to another job (she is now in a very senior position at a major investment bank).

So, at least in these two cases, the work of a major corporate competitive analysis unit was assessed by its own boss to have no tangible value at all, while when someone did actually ring up and offer to sell valuable information, Esso declined and turned the would-be informant over to the police.

While internet hackers can undoubtedly cause a great deal of trouble, I honestly wonder just how realistic the stories of doom and gloom about corporate espionage are, in particular the fears about someone hacking into secret information systems. In most companies, the information really isn’t that exciting.  Only a tiny fraction of information is genuinely valuable/sensitive, and even if someone offered it to you, most ethical companies would do the decent thing as Esso did and turn in the informant.  I am sure there are some exceptions, no doubt involving tales of derring-do, but how many of these are there?

However, stories like mine do not sell security consultancy.  If anyone ever does write a book debunking corporate espionage then I promise to buy it.


Is the customer always king?

August 23, 2006

Christopher Koch highlights some recent research by Wharton School of Business regarding “customer focus”.  The work is certainly right to dig deeper into what the costs are of “customer focus” and whether it is really a good thing.  It is absolutely correct to try and measure the lifetime value of a customer.  I was surprised when someone I knew working at a world famous investment bank admitted that they had little idea of the total amount of business done with a particular corporate client, let alone to what extent the aggregate business was profitable.  The problem in this case was that different parts of the bank had responsibilities for different aspects of the relationship in different countries, with different IT systems supporting those aspects.  Hence taking an aggregate view was surprisingly difficult.  

Understanding this will be much more difficult (and will make more sense) in some industries than others.  A manufacturer like Unilever does not deal with the end consumer, so their customers are really the retailers.  However even in consumer facing industries, small improvements in customer “churn” can have a major effect on profits, since it costs much more to acquire a new customer than to retain and sell new things to an existing customer.  Moreover some customers actually cost a lot to deal with and yet generate little revenue; some may actually end up costing more to deal with than they bring in revenue, in which case it may be better to ease them out towards your competitors. Understanding which customers are actually worth dealing with is consequently important, yet few companies really have a good grasp of this.  Indeed, how many companies even survey their customers regularly and track how happy they are?
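To make the churn point concrete, here is a minimal sketch (not from Koch’s article or the Wharton research) of a crude customer lifetime value calculation; every figure in it is invented purely for illustration.

```python
# Illustrative only: a crude customer lifetime value calculation. All figures
# are invented. The point is that retention (1 - churn) and cost-to-serve
# dominate the result, which is why small churn improvements matter and why
# some customers may be worth easing out towards your competitors.

def lifetime_value(annual_revenue, annual_cost_to_serve, churn_rate,
                   acquisition_cost, discount_rate=0.10, horizon_years=20):
    """Net present value of a customer who stays each year with probability (1 - churn_rate)."""
    margin = annual_revenue - annual_cost_to_serve
    value = -acquisition_cost
    survival = 1.0
    for year in range(1, horizon_years + 1):
        survival *= (1.0 - churn_rate)
        value += margin * survival / ((1 + discount_rate) ** year)
    return value

# A profitable customer versus one who costs more to serve than they bring in.
print(f"{lifetime_value(5000, 2000, churn_rate=0.20, acquisition_cost=4000):,.0f}")
print(f"{lifetime_value(1200, 1500, churn_rate=0.20, acquisition_cost=4000):,.0f}")
```

Even this toy version shows why an aggregate view across all the touch points matters: without the revenue, cost-to-serve and churn figures in one place, the calculation cannot be done at all.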

So far so good.  However the article makes some rather questionable assertions.  In particular it concludes that an enterprise IT strategy is “left out” when companies announce plans to be closer to the customer.  I don’t think this is really the issue.  It is not that the strategy is left out but rather that it turns out to be very difficult to execute.  After all, weren’t ERP systems supposed to create a “single business model” for the enterprise?  You didn’t find that?  Yet I could have sworn that was what those nice people in smart suits at PWC et al told people was the reason for spending all that money in the 1990s.  In that case, surely CRM was the answer?  I mean all that money on Siebel implementation surely must mean that companies now have a single view of the customer?  No?

The problem is only partially an IT one.  Sales people frequently resist “sales force automation” or, if they are forced into it, will do so under sufferance and so create entries that are, how shall we say this politely, of variable data quality.  This happens whether you are using Siebel, Salesforce or whatever system.  If the sales people don’t feel that they get a direct benefit then the system is just an overhead getting in the way of them making more sales calls.  Any data quality initiative that wants to find fertile ground for improvement quickly would do well to start by examining the sales force automation system.

Even if you overcome this issue, either by excellent discipline or somehow incenting the sales people to care about data quality (good luck with that), the next problem is that the sales force system does not control all the customer touch points.  Customers may well have lots of contact with the helpdesk. Yet how often is this seamlessly linked in with the salesforce system?  Similarly direct marketing campaigns to upsell have a real cost, yet how easy is it to assign these costs back to actual customers?  The costs of marketing, of sales and of support typically reside in entirely different systems in different departments, and few companies can consequently add all their costs up, even assuming that all these systems identify each customer in a consistent and unique way.

How bad is the reality?  You only have to examine your own junk mail to get a fair idea.  I get four separate copies of every mailshot from Dell, a well run company, suggesting there are at least four places where my information is held, and that is just in their marketing systems.  Ever tried shifting your address when you move house and telling your various utilities and banks?  How quickly and smoothly did everything get redirected?

The article is right in pointing out that organisational issues are at the heart of what makes things difficult here, yet the survey of company respondents that suggested that “within three years they would be organised by customer” (rather than product, function or geography) seems to me entirely wishful thinking.  What does your organisation chart look like?  How realistic would it be to organise a whole company by something as transient as “customer”?  If you think you get reorganised too often now, just imagine what this picture would be like.

In my view the difficulty of dealing efficiently with customers by assessing their lifetime value is an issue primarily of organisation and company power struggles, not one of technology.  For this reason I don’t expect it to be fixed any time soon.  If Dell stop sending me those multiple identical mailshots then I’ll let you know, but I am not holding my breath.


What’s in a name?

July 27, 2006

In a b-eye network article Bill Inmon reminds us of the importance of patents, especially for small start-up companies.  The current case against Blackberry is indeed a salient reminder that a patent infringement can bring very serious consequences even to a market-leading company.  I would go further than Bill in suggesting that start-ups need to seriously consider all their intellectual property, which, as well as patent applications, extends to trademarks.

Patents can take a long time to sort out – Kalido’s core design patent was applied for in 1998 yet only granted in 2003 in the UK, and in 2005 in the US.  Certainly the US patent office is a curious beast with many quirks, but being fast moving is not one of them.  However once you have a patent application in you can claim “patent pending” and are pretty much protected (unless of course the patent application is rejected).  Start-ups need to consult with a patent lawyer early on if they are to avoid trouble.  For example, if you market something (which can be as little as showing a demo or a beta version of your product to a prospect), then you have 12 months to register for a patent on it or you may invalidate any future patent application.  It is easy to see how someone could fall foul of this kind of legal tripwire.

The name of your company or product is also worth protecting via trademark in the countries that you expect to be marketing in.  This is less troublesome than it used to be thanks to some international reciprocal arrangements that mean you no longer have to file a trademark in every country individually.  However some countries (including the US) are not yet signatories to this accord.  There are several unfortunate cases of products being marketed and then suddenly discovering that they violated a trademark in a key market.  The costs of either withdrawing the launch, rebranding or fighting a court case can be very high, especially compared to the relatively modest costs of registering trademarks in the first place. 

You would hope by now that people would have figured out that registering web addresses is a free-for-all, yet you still see cases in the newspapers of well-known companies finding that the natural internet address of their latest product has just been hijacked by some guy in Oklahoma who would like a large sum of money for it, thank you very much.

All these aspects, web names, trademarks and patents, need to be considered carefully even by the smallest start-up.  There are costs to be incurred, but the alternative can be disastrous, and patents in particular can be a genuine asset down the line.


A marketing tale

June 7, 2006

Marketing is a tricky thing. One lesson that I have begun to learn over time is that simplicity and consistency always seem to triumph over a more comprehensive, but more complex story. Take the case of Tivo in the UK. A couple of my friends bought Tivo when it first appeared in Britain and started to have that kind of scary, glazed expression normally associated with religious fanatics or users of interesting pharmaceutical products. I then saw a cinema ad for Tivo and it seemed great: it would find TV programs for you without you having to know when they were scheduled – how cool was that?! It would learn what programs you liked and record them speculatively for you; you then ranked how much you liked or disliked them and it would get better and better at finding things you enjoyed. You could turn the whole TV experience from being a passive broadcast experience into one where you effectively had your own TV channel, just with all your favorite programs. Oh, and it looked like you could skip past adverts, though of course the Tivo commercial politely glossed over that.

Well, I bought one and I was like a kid in some kind of store. I soon acquired the same crazed look in my eyes as my fellow Tivo owners, and waited smug in the knowledge that I was at the crest of a wave that would revolutionize broadcasting. My friend at the BBC confirmed that every single engineer there was a Tivo fanatic. And then: nothing happened. Those BBC engineers, myself and a few others constituted the entire UK Tivo market – just 30,000 boxes were sold in the UK. Eventually Tivo gave up and, although Tivo is still (just about) supported in the UK, you can’t even buy Tivo 2, or even a new Tivo 1 except on eBay.

What happened? The message was too complex. Years later Sky caught on to the DVR concept and brought out the vastly functionally inferior Sky+. How did they advertise it? They just showed a few exciting clips with the viewer freezing and then replaying: “you can replay live TV” was all that was said. This was a fairly minor option on a Tivo that the Tivo commercial barely mentioned, yet it was simple to understand. Sky+ sales took off, and some BBC sound engineers and I are left with our beloved Tivos, praying that they don’t go wrong. It is another Betamax v VHS story, but this time the issue was a marketing one. Tivo still limps on in the US, growing slowly in subscriber numbers through sheer product brilliance (helped by being boosted on “Sex and the City”), but has clearly not fulfilled its potential.

What this little parable should teach us is that a key to successful marketing is simplicity, stripping everything down to the core thing that represents value to the customer, and then shutting up. With a simple message people can describe the product to their friends or colleagues, and so spread the word. With a complex, multi-part message they get bogged down and so cannot clearly articulate what the product does at its heart. It is so tempting to describe the many things that your product does well, but it is probably a mistake to do so. Find the one core thing that matters to customers, explain this as simply as possible, and repeat as often and as loudly as you can.

Lies, Damned Lies and Surveys

April 10, 2006

A survey sponsored by Oracle (http://www.computing.co.uk/itweek/news/2153695/surveys-show-bi-failing-s) hits a new low in terms of insight. The classic line is: “In Oracle’s survey of 200 UK and Irish IT managers, over half of organisations said they did not have any BI systems, though 69 percent of respondents said BI was important to help senior managers run their business.” Apart from the apparent conclusion that over 20% of the respondents seem to struggle to keep two ideas in their head for more than five minutes, the notion that half of the UK’s companies lack a single BI tool is pretty absurd. We have had well over a decade of Business Objects, Cognos and others pushing BI tools into companies, and even before that there were tools like Focus and Nomad. As a UK IT manager, you would have to be recently returned from the moon not to have encountered a BI software salesman.

I do wonder sometimes about the accuracy of some of these surveys. I recall years ago at a Gartner conference being handed a thick survey, which demanded all kinds of detail in terms of IT budget breakdown, future spending trends by area etc. You needed to return the completed survey in order to get a chance of winning a prize, and I remember saying to a guy next to me who had just finished his “how on earth do you remember all of that budget info for your organisation?” The reply was “are you kidding, I just made it up, but I really want that prize”. Many surveys do make use of incentives to get people to fill them in, and I wonder just how accurate the data really is in many of them as a result.

Separately, a more plausible insight in a different survey is: “Meanwhile, a survey of 1,000 UK business managers at companies with over 250 staff, published by ICS, indicates a widespread need for better BI systems. The study found that over three quarters of respondents were forced to make decisions “blind” due to late or insufficient business information”. By contrast, this is entirely believable, though not for the reason that the article gave. The critical issue is that you can have as many pretty reporting tools and dashboards as you like, but they need to be fed with accurate and timely information, which usually comes from a data warehouse (unless you are one of the few brave souls using EII). The problem is that most data warehouses are entirely unable to keep up with the pace of business change (reorganisations, acquisitions etc) and so are constantly out of date. Consider a data warehouse with just ten source systems. A major change in one of its sources will impact the warehouse schema, and it may take three months to fix the schema, the load routines and the reports that are affected by the change (this is a pretty typical figure in my experience at Shell).

A major change of this type does not happen every day, but it is almost certain to happen once a year to each of these source systems, maybe twice. That is ten separate sets of changes every year, each needing around three months of work on the warehouse. Even assuming that the changes are neatly spread over the year and that you have enough programming resources to work on several in parallel, you still have 15 months of change to fit into 12 months; basically the warehouse can never catch up (the sketch below works through this arithmetic). You may well have more than 10 sources for your data warehouse, so the problem could be even worse than this. This is indeed what happens in reality: the data warehouse is usually out of date, so armies of Excel jockeys in finance get the answers via email and have to manually number-crunch anything really critical, while the warehouse lumbers on with out-of-date information. This situation is not the fault of the BI tools – it is the fault of the data warehouses that feed the BI tools. Until companies admit that the status quo is failing and start abandoning custom-built warehouses this problem will persist. It is like treating alcoholism: the first step is admitting that there is a problem.
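A back-of-the-envelope sketch of that arithmetic follows. The source count and the three months per change come from the post; the number of parallel work streams is an assumption added purely to make the numbers concrete.

```python
# Back-of-the-envelope sketch of the change backlog argument above.
# The source count and effort-per-change come from the post; the number
# of parallel work streams is an ASSUMPTION added for illustration.

sources = 10                 # source systems feeding the warehouse
changes_per_source = 1       # major changes per source per year ("once, maybe twice")
months_per_change = 3        # fixing the schema, load routines and reports

parallel_streams = 2         # ASSUMPTION: teams able to work on changes concurrently

total_effort_months = sources * changes_per_source * months_per_change   # 30
elapsed_months = total_effort_months / parallel_streams                  # 15

print(f"{total_effort_months} months of change work arriving per year")
print(f"{elapsed_months:.0f} elapsed months of work vs. 12 months available")
# Even at one major change per source per year, the warehouse team faces more
# than a calendar year of work every year, so it never catches up.
```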

Information as a service?

March 8, 2006

I see in our customer base the stirrings of a movement to take a more strategic view of corporate information. At present there is rarely a central point of responsibility for a company’s information assets; perhaps finance have a team that owns “the numbers” in terms of high level corporate performance, but information needed in marketing and manufacturing will typically be devolved to analysts in those organizations. Internal IT groups may have a database team that looks after the physical storage of corporate data, but this group rarely have responsibility for even the logical data models used within business applications, let alone how those data models are supposed to interact with one another. Of course things are complicated by the fact that application packages will have their own version of key data, and may be the system of record for some of it. Yet how to take a view across the whole enterprise?

Organizationally, what is needed is a business-led (and not IT-led) group with enough clout to be able to start to get a grip on key corporate data. This team would be responsible for the core definitions of corporate data and its quality, and would be the place that people come to when corporate information is needed. In practice, if this is not to become another incarnation of a 1980s data dictionary team, then this group should also have responsibility for applications that serve up information to multiple applications, and this last point will be an interesting political battle. The reason that such a team may actually succeed this time around is that the technologies now exist to avoid the “repository” (or whatever you want to call it) of master data being merely a passive copy. The advent of EAI tools, enterprise buses, and the more recent master data technologies (from Oracle, Kalido, Siperian, IBM etc.) means that master data can become “live” and be synchronized back to the underlying transaction systems. Pioneers in this area were Shell Lubricants and Unilever, for example.

However, technology is necessary but not sufficient. The team needs to be granted ownership of the data, a notion sometimes called “data stewardship”. Even if this ownership is virtual, it is key that someone can arbitrate disputes over whose definition of gross margin is the “correct” one, and who can drive the implementation of a new product hierarchy (say) despite the fact that such a hierarchy touches a number of different business applications. It is logical that such a group would also own the enterprise data warehouse, since that (if it exists) is the place where much corporate-wide data ends up right now. This combination of owning the data warehouse and the master data hub(s) would allow infrastructure applications to be developed that can serve up the “golden copy” data back to applications that need it. The messaging infrastructure already exists to allow this to happen.
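Purely to illustrate the “live” golden copy idea, here is a toy sketch of a master data hub that owns the agreed record and notifies subscribing systems when it changes. It is a sketch of the concept only, not of how any of the products named above actually work.

```python
# Toy illustration of a "live" golden copy: a master data hub owns the agreed
# record and pushes changes back out to subscribing systems. A sketch of the
# idea only, not of any product named in the post.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MasterDataHub:
    golden: Dict[str, dict] = field(default_factory=dict)        # key -> agreed record
    subscribers: List[Callable[[str, dict], None]] = field(default_factory=list)

    def subscribe(self, callback: Callable[[str, dict], None]) -> None:
        """A downstream system registers to be told when master data changes."""
        self.subscribers.append(callback)

    def update(self, key: str, record: dict) -> None:
        """The stewardship team changes the golden copy; all subscribers are synchronised."""
        self.golden[key] = record
        for notify in self.subscribers:
            notify(key, record)

hub = MasterDataHub()
hub.subscribe(lambda k, r: print(f"ERP updated {k}: {r}"))
hub.subscribe(lambda k, r: print(f"CRM updated {k}: {r}"))
hub.update("PRODUCT-42", {"name": "Premium Lubricant", "hierarchy": "Lubricants/Industrial"})
```

In practice the notification would go over the messaging infrastructure mentioned above rather than in-process callbacks, but the shape of the idea is the same.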

A few companies are establishing such groups now, and I feel it is a very positive thing. It is time that information came out of its back-room closet and moved to centre stage. Given the political hurdles that exist in large companies, the ride will not be smooth, but the goal is a noble one.

Vendor due diligence

February 21, 2006

In a previous blog I gave some general thoughts about vendor evaluation, and expanded on this to give an outline framework for such evaluations. One thing that should be considered in any evaluation is “how stable is the vendor?” i.e. will they still be around in a few years? This question can be surprisingly hard to answer, and in fact the question itself has a flaw. The question should be: “will this product still be further developed in a few years?”. The reason for the distinction is that even the largest vendors sometimes discard technologies due to changes in their product roadmap, internal political issues or because the thing isn’t selling very well.

So, if buying from a very large vendor, it will be easy enough to look at their general health because their finances are public (OK, there are sometimes accounting frauds but you can’t do much about these). The usual ways of analyzing a company’s health can be used here. I highly recommend the Economist’s “Guide to Analysing Companies”, which is clearly written and gives excellent examples of the indicators of company health and also of early warning signs. You should not assume that just because a company is publicly listed it must be OK – ask the customers of Commerce One about that approach, for example.

In such cases you will probably find that the company is fine, so the bulk of the diligence effort should instead be directed to how important this particular product is to the company, and so how likely it is to get ongoing development. Of course the vendor is hardly a reliable source here, but you can seek advice from analysts, and it is also fair to ask the vendor just how many customers of this particular product there are (you should be able to talk to some of them). For example, SAP’s MDM product had seemingly shifted just twenty copies into customer use throughout its 18,000 strong customer base in two years. Given this dismal penetration rate it was perhaps not a shock that they dropped it (their replacement MDME offering is based on an acquisition of PIM vendor A2i). Anything which is not contributing to a vendor’s sales figures on any scale should be considered suspect.

In the case of small vendors you have different problems. You can be pretty sure that the product is important to the vendor, since it is probably the only one that they have. The question is whether the vendor will survive. This is trickier, since the company is probably privately held, and so is not obliged to publish its accounts, at least in the US. In the UK you can get around this by paying a few pounds to Companies House and looking up their accounts. If you are making a large purchase then it is fair game for you to ask for information on the company financials, and you should get nervous if they refuse. One thing that will achieve little is to ask for a letter from the VCs backing the company. They will inevitably sing its praises; they are hardly going to say “ah, this one’s on a bit of a knife-edge, I’d watch out if I were you”. Indeed I knew of one case where a major deal was in progress at a BI vendor, and through a contact I became aware that the entire future financing of the (cash-strapped) company was dependent on this deal going ahead; in such cases you cannot expect objectivity from investors.

So, what can you do? Well, profits are an opinion but cash is not. Hence, assuming you can see some figures, you can get a sense of how much cash the company has, and attempt to work out the “burn rate” i.e. how fast they are burning through this cash (most VC-backed start-ups are unprofitable; if they were profitable then they probably wouldn’t need expensive VC money). However this on its own may give false signals. Due to their IRR-driven instincts, VCs don’t dole out to start-ups more cash than they need; they like to always have the option of pulling out if they need to, so it is rare for a start-up to actually have more than about one year’s cash needs in hand. The question is: will they be able to raise more cash if they need it? This is a complex subject, but essentially you should be able to get a sense of this by talking to analysts familiar with the VC community. For example, companies growing at 50% or more are very likely to be able to raise cash, even if they are very unprofitable. The gross margins in software are commonly 90%, so profits will come eventually if the company can just grow large enough; this is why VCs invest in software companies more than, say, restaurants. So if you cannot find someone knowledgeable to look the figures over for you and make an assessment, then a decent proxy for security is the revenue growth rate. If the company’s growth is stalling (say 10% a year growth for a small-medium software company) then things could be sticky in a future financing round. This is a generalization (and companies with a subscription model, for example, have a much more predictable life than ones selling traditional licenses) but it may be the only real set of figures that you can dig out.
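As a rough illustration of the burn-rate arithmetic described above, here is a small sketch; every figure in it is invented.

```python
# Rough runway arithmetic for a privately held, VC-backed vendor, as described
# above. All figures are invented for illustration.

cash_on_hand      = 6_000_000     # from the accounts, if you can get them
quarterly_revenue = 2_500_000
quarterly_costs   = 4_000_000     # unprofitable, as most VC-backed start-ups are
annual_growth     = 0.50          # year-on-year revenue growth

quarterly_burn = quarterly_costs - quarterly_revenue
runway_months  = cash_on_hand / (quarterly_burn / 3)

print(f"Burning {quarterly_burn:,} per quarter -> roughly {runway_months:.0f} months of cash")

# The post's heuristic: strong growth means a new financing round is likely;
# growth of around 10% a year for a small vendor is a warning sign.
if annual_growth >= 0.5:
    print("Growth strong enough that raising more cash is likely")
elif annual_growth <= 0.1:
    print("Growth stalling - the next financing round could be sticky")
else:
    print("Somewhere in between - dig further before relying on them")
```

With these invented numbers the runway comes out at about a year, which is consistent with the point above that start-ups rarely hold much more than a year of cash.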

Another source of due diligence is other customers, who may well have done exactly the same due diligence exercise as you fairly recently. Of course you have to be careful it was not out of date, and you should check how thorough they really were, but you may be able to save yourself a lot of work. If three Fortune 100 companies recently did detailed due diligence on a vendor and bought its software anyway, this may help you at least feel better.

Remember: the company or product does not have to be around in ten years if your payback case is 13 months. The shorter the payback period for the product, the less you need to agonize over the long-term future of the company, or of the product within the vendor. You did do a proper business case for the purchase, right? Of course you did.

Evaluating Software Vendors – a framework


I have written previously in general about vendor evaluation processes. Over time I will flesh this topic out further, as I feel it is a much-neglected area. As I have said, it is important to define up front your main functional and technical requirements from the software package that you want to buy. It is then important to have a process to take the “long list” of candidates down to a manageable number of two to four vendors to evaluate properly.

So, you have done this, and are down to three vendors. What are the mechanics of the evaluation criteria? I show in the diagram a simplified example to illustrate. It is critical that you decide on what is most important to you by selecting weightings for each criterion before you see any products. Ensure that you group the broad criteria into at least two and perhaps three categories: “functional” should list all the things you actually want the product to do, and you may choose to separate “technical”, which may include things like support for your particular company’s recommended platforms e.g. “must run on DB2”, or whatever. What is sometimes forgotten is the commercial criteria, which are also important. Here you want things like the market share and financial stability of the company, how comprehensive its support is, how good its training is, etc. I would recommend that you exclude price from these criteria. Price can be such a major factor that it can swamp all others, so you may want to consider it as a separate major criterion once you have scored the others. I would recommend that the “functional” weightings total not less than 50%. It is no good buying something from a stable vendor if the thing doesn’t do what you need it to.

An important thing about using a weighting system like this one is that the weights must add up to 100. The point is that it forces you to make trade-offs: you can have an extra functional criterion, but you must reduce the existing weights to make sure that the total still adds to 100. This gives you the discipline to stop everything being “essential”. You assign all the weights before the evaluation begins. You can share this with the vendors if you like. Conveniently, however you assign the weights, the scores will come in out of 1000, so they can easily be expressed as a percentage e.g. vendor B is a 74% match to the criteria in the example, while vendor C is 67%.

The final stage is that you need to score the various criteria that you have laid out. You want this to be as objective as possible, which is why you do not want too many – you want to see evidence for the functional criteria. Just because the salesman says that the product does something is not sufficient to credit a score – you need to see the feature yourself, preferably working against some of your own data rather than a faked-up demo. I recall doing an evaluation of BI tools in 1992 at Shell and having one vendor with quite a new product that, thanks to a stellar analyst recommendation, made it to the short-list. When the pre-sales guy turned up and was presented with a file of our data for him to do the trial on he went white; their whole product was virtually hard-coded around their demo dataset, and it quickly became clear that even the slightest deviation from the data they were expecting caused the product to break.

Score each criterion out of 10. Commercial criteria can be done off-line and in advance; analyst firms can help you with this, as they tend to be up on things like market share (IDC have the most reliable quantified figures, but rough estimates are probably good enough). Financial stability is a subject all in itself, and I will cover this in another blog.

The evaluation then becomes quite mechanical, as you crank out the scores. You can see that in this simplified example vendor B has won, though not by a huge margin. If it turns out that vendor B’s price is twice that of the others then you may decide this difference is not big enough to justify the slightly better scores (we will return to this shortly). Again, you could weight price as a factor if you prefer.
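Since the original diagram is not reproduced here, the sketch below shows the mechanics with invented criteria, weights and scores: the weights sum to 100 and each criterion is scored out of 10, so totals come out of 1000 and read naturally as percentages. The invented scores happen to reproduce the 74% and 67% figures mentioned above.

```python
# Minimal sketch of the weighted scoring mechanics described above. The
# criteria, weights and scores are all invented, since the original diagram
# is not reproduced here.

weights = {                        # must total 100
    "Loads our own data":        20,
    "Handles hierarchy changes": 20,
    "Reporting integration":     15,
    "Runs on DB2":               15,   # example technical criterion
    "Vendor stability":          10,
    "Support and training":      10,
    "Market share":              10,
}
assert sum(weights.values()) == 100

scores = {                         # each criterion scored out of 10 per vendor
    "Vendor A": dict(zip(weights, [6, 5, 7, 8, 7, 6, 7])),
    "Vendor B": dict(zip(weights, [8, 8, 7, 7, 6, 7, 8])),
    "Vendor C": dict(zip(weights, [7, 6, 6, 8, 7, 7, 6])),
}

for vendor, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)   # maximum possible is 1000
    print(f"{vendor}: {total}/1000 = {total / 10:.0f}%")
```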

However, don’t get too hung up on price; as someone who used to do technology procurement, I know price can seem like the be-all and end-all, but it is not. The total cost of a new software package to your company is far greater than the initial license cost. There is maintenance and training over several years, and also the people time and cost of actually implementing the technology, which will usually be several times the cost of the software package. Hence getting a package that is 20% more productive than the next best is worth a lot more than 20% extra in the license price, as the associated costs of people will be multiples of the software cost (people costs being five times package software costs in a project is common, ten times is not unusual). It is sensible to try and consider the likely full lifetime costs of the software in this way (assume, say, five years) since you will then have an idea as to how important the license cost really is. For example if you are about to do a 30 country roll-out budgeted at USD 50 million, making sure that the product you select is the most productive one is a lot more important than if you are doing a single project for USD 500k. Here a product that is 10% more productive to implement than the next one may save you USD 5 million, so haggling to the death over that last USD 10k of license discount may not be so critical. This will give you a true bottom line case for the level of spend you can afford to make.
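A small sketch of that lifetime cost reasoning follows. The 50 million roll-out and the 10% productivity saving come from the example above; the split between license, maintenance, training and people costs is invented.

```python
# Sketch of the full lifetime cost reasoning above, over a five-year horizon.
# The roll-out figure and the 10% productivity gain come from the post; the
# cost split is invented for illustration.

license_cost  = 1_500_000
annual_maint  = 0.20 * license_cost       # an assumed maintenance rate
training      = 500_000
people_costs  = 46_500_000                # implementation effort dominates

five_year_cost = license_cost + 5 * annual_maint + training + people_costs
print(f"Five-year cost: {five_year_cost:,.0f}")            # 50,000,000

productivity_gain = 0.10                  # the more productive product to implement
saving = productivity_gain * people_costs
print(f"Saving from the 10% more productive product: {saving:,.0f}")   # roughly 5 million

extra_discount = 10_000                   # the "last 10k" of license haggling
print(f"...versus {extra_discount:,} from squeezing the license price further")
```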

Taking a structured evaluation approach like this has a number of benefits. Firstly, it reduces the amount of gut feel and “did I like the salesman” mentality that too often creeps in. You’ll probably never see the salesman again unless you want to buy more software, but you will be stuck with the product that you select for years. Secondly, it gives you a documented case for selection that can, if necessary, be used to back up things internally e.g. in the case of an audit, or just to give comfort to senior management that a sound process has been used.

Moreover, given that salesmen get paid on how much they sell you, you’d be surprised at the tactics they can adopt; they will try and go over your head if they think they are going to lose, and make all sorts of accusations about how the process wasn’t fair and how you are about to make a terrible mistake, so having a solid, documented case will make it much easier for your manager to do the right thing and tell the salesmen to take a running jump. I am amazed at how often this tactic was tried when I was procuring software, but I never once had a decision overturned. If you ever find yourself in this situation, remember that revenge is a dish best served cold. After a particularly acrimonious session with one vendor salesman when I was working at Exxon, I was amused to find the same salesman turning up a few years later when I transferred to Shell. He walked into the room, his face fell when he saw me, and he walked back out again. Good professional sales staff know that the world is a small place and that it does not pay to aggravate the customer, but all too few remember this.

In another blog I will return to the subject of assessing the financial stability of vendors.

Not quite dead yet

February 16, 2006

Infoworld has a piece intriguingly titled “The Death of the Software Salesman”. Those of us who have been on the receiving end of high pressure negotiating tactics from software companies may simply want to ask “when, please?”. However rather than being a glorious article about lynch mobs of customers, this article is more mundanely concerned with open source, and how this model may be an alternative to traditional software licensing. It observes that 40% of software company budgets go on sales and marketing, and indeed this estimate is a little on the low side. Enterprise software companies will typically spend 45-55% of their budgets on sales and marketing, with the lion’s share of this going to sales, depending on the stage of the company (obviously this may be lower in very early stage companies which are still mostly in R&D).

It is all a bit ironic. The incremental cost of printing one more CD with a software product on it is less than a dollar, which is why venture capital firms like software companies. However to actually convince anyone to shell out (say) half a million dollars on this CD requires a great deal of expensive sales and marketing effort. It is rather naive of the panelists at the Open Source Business Conference to believe that this is going to change any time soon outside of a few niches. Sure, Linux has done very well, but some of this success is because IBM has put a lot of muscle into it (to avoid Microsoft eating further into the server operating system market). However if you move up a layer or two in the stack, open source is still in the special interest category. MySQL gradually improves, but it is still a long way off being a heavy duty DBMS; Oracle, DB2 and Microsoft are far from quaking in their boots. Higher still, there are very early initiatives in business intelligence like Pentaho and good luck to them, but not even the wildest-eyed open source advocate could accuse them of having made even a dent in the enterprise BI market yet.

Hence, while the idea of running a mostly R&D company and letting the customers come to you may sound appealing to some engineering types, the sad reality is that this is not going to happen. Customers buy through a series of stages. One model of this is called AIDA: awareness -> interest -> desire -> action. Unless people are made aware of your product then they cannot buy it, and so you have to spend money on marketing. Once they have become aware and show interest in the value it offers them, they need to be nudged along, tantalised and have their many objections overcome (“does it really work?”, “how many customers are using it?”, “will it work with my existing technology?” etc). If you make it to this stage then the reality for any enterprise software is that you have what is called the “complex sale”, i.e. there are multiple people who influence the decision, each of whom has to be convinced or at least persuaded not to object. Miller Heiman and others make a living by selling training in sales methodologies that go through this. It is very rare for a six or seven figure software purchase to involve just one person, and that’s where sales come in. The salesman needs to get the proposition in front of the prospect, find out if there is a project that fits, identify the key influencers in the account, see whether they really have a budget or just like talking to salesmen, navigate the client organization and unravel its purchasing process, perhaps deliver a trial or proof of concept, and all this before you get anywhere near negotiating a deal.

I just can’t see all this happening without a lot of marketing and sales effort, except in very specific situations or niches that suit open source, and those are too rare at present to put enough money on the table to pay for all those creative software engineers. I fear that, like Mark Twain’s demise, which was mistakenly reported during his lifetime, the death of the software salesman is being much exaggerated.

Customer satisfaction pays

February 15, 2006

The software industry is full of awards for the fastest growing companies e.g. the “Inc 500” and numerous others, but it is perhaps revealing that the industry is almost silent on a critical measure: customer satisfaction. There seem to be two objective ways of measuring this: renewal rates i.e. do the customers actually pay maintenance, and surveys. The former has the advantage of being wholly objective i.e. they either pay up or they drop the software, with no wiggle room for spin, though of course it is a somewhat crude measure. A McKinsey report I saw recently reckoned that best practice in the software industry is renewal rates of 85%-95%, and indeed SAS Institute has made considerable (and justified) play over renewal rates of over 90% year after year (I am pleased to say that Kalido’s renewal rate is 97%).

The other measure is surveys, which are a richer measure but of course are always subject to the perversion of framing the question e.g. “are you happy with our software?” is not the same question as “are you delighted with our software?”. One company that has had a lot of fun at some companies’ expense is Nucleus Research, who sometimes call up the reference customers of software companies (as featured on their web sites) and ask them how happy they are. An amusing one a few years back was one they did on Siebel, where more than half of the Siebel reference customers they spoke to reported that their projects had cost more than the benefits. It was a similar story when they surveyed i2 reference customers, and an ERP report they produced showed that no ERP vendor could muster even a 50% score in response to the question “would you recommend this software to others?”, which is pretty shocking given that these are the reference customers of these vendors (highest scoring was PeopleSoft at a hardly dazzling 47%).

Given that endless studies have shown how much more expensive it is to sell to new customers than to sell more to existing ones (four times more is one figure often quoted), I would have expected that software companies, with their high sales and marketing costs, would pay more attention to customer satisfaction. However, sad to say, my own experience as a customer for many years taught me that most software companies treat their customers with anything from indifference to outright hostility. Perhaps in the giddy days of the late 1990s software companies could get away with this, but these days the power is definitely back with the buyer, and so it is in the industry’s interest to improve customer satisfaction, if only for purely selfish reasons. Yet how many software companies even survey their customers on a regular basis? Talking to sometimes unhappy customers may not always be a comfortable experience for software executives, but it can teach you a great deal, and showing that you care enough to listen is itself a way of raising customer satisfaction.