Andy on Enterprise Software

SOA deja vu

August 31, 2006

SOA (service oriented architecture) is nothing if not fashionable, but it seems to me that the reality could be a struggle. For a start, the very laudable idea of seamlessly reusing interoperable application objects is not exactly new. Those of us who are a bit long in the tooth will recall the "applets" of the early 1990s, and then there was the whole CORBA thing. Everyone got excited about the idea of being able to mix and match application objects at will, but in practice everyone did the opposite and just bought monolithic suites from SAP, Oracle and other vendors who are mostly now owned by Oracle (JDE, PeopleSoft). What exactly is different this time around?

Surely in order to obtain true reuse you are going to need some standards (which have moved on a bit, but not enough), some way of mapping out business processes, and some way of dealing with the semantic integration of data (if two applications want to trade some data about "customer", who controls what that particular term means?). In addition you need some solid infrastructure to do the heavy lifting, and there are certainly choices here: IBM WebSphere, the immature Fusion from Oracle, NetWeaver from SAP and independent alternatives like BEA. On the process mapping side there are newcomers like Cordys and a host of others. The trouble, it seems to me, is that the more complete the offering, the more credible it is, yet the harder it will be to sell, since enterprises already have a stack of established middleware that they do not want to swap out. If you already have an enterprise service bus from (say) Cape Clear and EAI software from Tibco, plus some NetWeaver because you have SAP applications, then how exactly do you combine these different technologies in a seamless way? The last thing you want to do is introduce something new unless you cannot avoid it.
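
To make the semantic point concrete, here is a minimal Python sketch (the schemas, field names and mapping rules are entirely invented for illustration) of what reconciling "customer" between just two applications involves; even at this trivial scale, someone has to decide whose definition of a customer wins.

```python
# Two applications' notions of "customer" (invented, illustrative schemas).
crm_customer = {
    "cust_id": "C-1001",
    "name": "Acme Ltd",
    "status": "prospect",        # the CRM counts prospects as customers
    "country": "UK",
}

billing_customer = {
    "account_no": "40-2277",
    "legal_name": "ACME LIMITED",
    "active": True,              # billing only knows invoiced accounts
    "country_code": "GB",
}

# A canonical definition has to be agreed somewhere: field names,
# code sets (UK vs GB) and, hardest of all, what "customer" actually means.
COUNTRY_MAP = {"UK": "GB"}

def to_canonical_from_crm(c):
    return {
        "customer_key": c["cust_id"],
        "name": c["name"].upper(),
        "is_customer": c["status"] != "prospect",   # somebody must decide this rule
        "country": COUNTRY_MAP.get(c["country"], c["country"]),
    }

def to_canonical_from_billing(b):
    return {
        "customer_key": b["account_no"],
        "name": b["legal_name"],
        "is_customer": b["active"],
        "country": b["country_code"],
    }

print(to_canonical_from_crm(crm_customer))
print(to_canonical_from_billing(billing_customer))
```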

To make matters worse, there has been little real progress on data integration, particularly when it comes to semantic integration.  The pure plays which have real technology have either been swallowed up (like Unicorn) or are relatively small vendors (e.g. Kalido).  The giant vendors of middleware have little to offer here, and are intent on grabbing mindshare from each other rather than striving to make their technologies genuinely interoperable with those from other vendors.  Master data management is as yet very immature as a mechanism for helping out here, and again the decent technologies live with innovative but smaller vendors.  Some partnerships between the giants and the best pure-plays will presumably be the way to go, but this has yet to really happen.

Finally, you have what may be the trickiest barrier of all, which is human. To get reuse you need to be aware of what is already out there (partly a technical issue) and also really want to take advantage of it (a more human problem). Programmers are by nature "not invented here" types, and this is one thing that made object orientation hard to scale, in terms of reuse, beyond individual programming teams. Whatever happened to those "exchanges" of objects? You can read about early SOA pilots, but I observe that these generally seem limited in scale and usually restricted to one set of middleware. This is a long way from the "plug and play" nirvana that is promised.

To be sure, SOA has a most appealing end-goal, and so will most likely run and run, yet to me the barriers in its path are considerable and some of the necessary components have yet to fully mature.


Culturing BI projects

August 30, 2006

In a beye network article Gabriel Fuchs raises the issue of culture when it comes to business intelligence projects. I think the issue is valid, but it goes far beyond the rather crude generalisations about nationalities that he makes in the article. In my experience corporate culture is every bit as important as the general culture of the country where you are doing the implementation. Take two contrasting examples from the oil industry (I choose this because I have worked for both these companies): Exxon has a highly centralised culture driven from central office in the US. Shell, by contrast, is decentralised in nature and much more consensus oriented.

This is highly relevant when implementing a BI project, because in a company with a decentralised culture you need to take into account the needs of the local subsidiaries far more than in a centralised one. In Shell, if something was decided centrally then this was a bit like traffic signals in Manila: a starting point for negotiation. Someone in central office could define some new reporting need or technical standard, but the subsidiaries had a high degree of leeway to ignore or subvert the recommendation (Shell is less decentralised now than it was, but even so it is still highly decentralised compared to Exxon). In such situations it was important to get buy-in from all the potential influencers on the project; for example it was wise to produce reports or BI capabilities that the subsidiaries found useful, not just ones that suited the notional project sponsors in central office. By contrast, in Exxon, while it is sensible practice to get buy-in, if you were in a hurry or things were intractable then central office could ram decisions down the throats of the subsidiaries without much resistance. Incidentally, both cultures have advantages and disadvantages, and both companies are highly successful, so it would be a mistake to think that one culture is inherently better than the other.

In BI projects such issues come up a lot when discussing things like data models, and when agreeing on international coding structures v regional or local ones. Sometimes the projects that did not go well were ones where these inherent cultures were ignored. For example Shell spent a lot of time and money trying to ram together five European SAP implementations, and ultimately failed to do so, ending up with five slightly different implementations. There was no technical reason why this could not be done, nor in truth any real business reason, but it went against the culture of the company, and so it encountered resistance at every level.

In my view such company cultural issues are very important to consider when carrying out enterprise BI projects, and are often ignored by external consultants or systems integrators who blindly follow what is in the project methodology handbook. 

James Bond or Johnny English?

August 25, 2006

I really wonder about corporate espionage, the subject of a new book which claims that network technology has meant that industrial espionage is on the rise and easier than ever.  I observe that people who write books or articles about this subject, while no doubt very knowledgeable, also tend to be security consultants selling their services.  I certainly would not want to criticise such fine people, especially as they could no doubt easily find out where I lived.  But, let’s face it, a consultant is hardly about to publish an article “corporate espionage is no big deal really; no need to invest much here”.  My scepticism is prompted by a couple of personal experiences.

In a really big company there is actually very little data that is truly "secret", and then usually only for a certain period of time, e.g. quarterly results just prior to making them public. Or perhaps plans for an acquisition prior to a bid, or bidding information for large contracts, or maybe certain aspects of R&D. In most cases company executives have enough trouble making sense of their own corporate information. Let's face it, if you can't figure out who your most profitable customers are (a significant problem for most companies, whether they admit it or not), how are your competitors going to work it out by accessing your information systems? However, as I say, there are some very specific pieces of data with commercial value. When I used to work in Esso Expro we were contacted by an employee of another oil company (let's call it Toxico) offering to sell Esso information on their bid for the next round of North Sea acreage. Now this information was of real value, and appeared to come from one of the bid team, so was genuine. What did Esso do? They rang up the Metropolitan Police, followed by the security department of Toxico, and told them the full story. There was no debate about this, no hesitation; it was a decision taken in a heartbeat. Esso has well-grounded ethical principles and was having none of this.

A second personal experience involves a friend who is one of the three smartest people I have ever met. She had a meteoric rise through management in a large corporate that I suppose I should keep nameless, and was promoted to be in charge of its competitive analysis unit. This unit did spend its time (legally) analysing competitors and trying to pick up any snippets of competitive information that it could. After six months my friend recommended that they close her department down. Why? Because she could not find a single example of her quite well-funded department's findings ever actually being acted upon. In other words management liked to know what was going on, but basically did whatever they were going to do anyway. The company didn't have the courage to actually follow through on this, and my friend duly moved on to another job (she is now in a very senior position at a major investment bank).

So, at least in these two cases, the work of a major corporate competitive analysis unit was assessed by its own boss to have no tangible value at all, while when someone did actually ring up and offer to sell valuable information, Esso declined and turned the would-be informant over to the police.

While internet hackers can undoubtedly cause a great deal of trouble, I honestly wonder just how realistic are the stories of doom and gloom on corporate espionage, in particular the fears about someone hacking into secret information systems. In most companies, the information really isn’t that exciting.  Only a tiny fraction of information is genuinely valuable/sensitive, and even if you got hold of it most ethical companies would do the decent thing as Esso did and turn in the informant.  I am sure there are some exceptions, no doubt involving tales of derring-do, but how many of these are there?

However, stories like mine do not sell security consultancy.  If anyone ever does write a book debunking corporate espionage then I promise to buy it.


Is the customer always king?

August 23, 2006

Christopher Koch highlights some recent research by Wharton School of Business regarding “customer focus”.  The work is certainly right to dig deeper into what the costs are of “customer focus” and whether it is really a good thing.  It is absolutely correct to try and measure the lifetime value of a customer.  I was surprised when someone I knew working at a world famous investment bank admitted that they had little idea of the total amount of business done with a particular corporate client, let alone to what extent the aggregate business was profitable.  The problem in this case was that different parts of the bank had responsibilities for different aspects of the relationship in different countries, with different IT systems supporting those aspects.  Hence taking an aggregate view was surprisingly difficult.  

Understanding this will be much more difficult (and will make more sense) in some industries than others. A manufacturer like Unilever does not deal with the end consumer, so their customers are really the retailers. However, even in consumer-facing industries, small improvements in customer "churn" can have a major effect on profits, since it costs much more to acquire a new customer than to retain and sell new things to an existing one. Moreover some customers actually cost a lot to deal with and yet generate little revenue; some may actually end up costing more to deal with than they bring in, in which case it may be better to ease them out towards your competitors. Understanding which customers are actually worth dealing with is consequently important, yet few companies really have a good grasp of this. Indeed, how many companies even survey their customers regularly and track how happy they are?
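
As a rough illustration of why churn matters so much, here is a back-of-the-envelope lifetime value calculation in Python; the numbers are invented purely to show the arithmetic.

```python
# Invented numbers, purely to illustrate the arithmetic of churn and lifetime value.
def lifetime_value(annual_margin, annual_churn_rate, acquisition_cost):
    """Expected lifetime = 1 / churn rate; LTV = margin over that lifetime minus acquisition cost."""
    expected_years = 1.0 / annual_churn_rate
    return annual_margin * expected_years - acquisition_cost

for churn in (0.25, 0.20):
    ltv = lifetime_value(annual_margin=100.0, annual_churn_rate=churn, acquisition_cost=150.0)
    print(f"churn {churn:.0%}: lifetime value = {ltv:.0f}")

# Cutting churn from 25% to 20% lifts the expected customer lifetime from 4 to 5 years,
# i.e. roughly 40% more lifetime value on these invented numbers (250 -> 350).
```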

So far so good. However the article makes some rather questionable assertions. In particular it concludes that an enterprise IT strategy is "left out" when companies announce plans to be closer to the customer. I don't think this is really the issue. It is not that the strategy is left out, but rather that it turns out to be very difficult to execute. After all, weren't ERP systems supposed to create a "single business model" for the enterprise? You didn't find that? Yet I could have sworn that was what those nice people in smart suits at PWC et al told people was the reason for spending all that money in the 1990s. In that case, surely CRM was the answer? I mean, all that money on Siebel implementations surely must mean that companies now have a single view of the customer? No?

The problem is only partially an IT one.  Sales people frequently resist “sales force automation” or, if they are forced into it, will do so under sufferance and so create entries that are, how shall we say this politely, of variable data quality.  This happens whether you are using Siebel, Salesforce or whatever system.  If the sales people don’t feel that they get a direct benefit then the system is just an overhead getting in the way of them making more sales calls.  Any data quality initiative would do well to start by examining the sales force automation system if you quickly want to find fertile ground for improvement.

Even if you overcome this issue, either through excellent discipline or by somehow incenting the sales people to care about data quality (good luck with that), the next problem is that the sales force system does not control all the customer touch points. Customers may well have lots of contact with the helpdesk, yet how often is this seamlessly linked in with the salesforce system? Similarly, direct marketing campaigns to upsell have a real cost, yet how easy is it to assign these costs back to individual customers? The costs of marketing, of sales and of support typically reside in entirely different systems in different departments, and few companies can consequently add all their costs up, even assuming that all these systems identify each customer in a consistent and unique way.
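
A small, purely hypothetical Python sketch of the problem: three departmental systems each identify the same customer differently, so the costs cannot be added up without some kind of cross-reference, which is exactly the piece most companies lack.

```python
# Illustrative only: invented records from three departmental systems that each
# identify the same customer in a different way.
sales_costs     = [{"crm_id": "C-1001", "cost": 1200.0}]
marketing_costs = [{"campaign_contact": "acme ltd", "cost": 300.0}]
support_costs   = [{"ticket_account": "40-2277", "cost": 450.0}]

# Without a cross-reference like this (usually built by hand, and usually incomplete),
# the three cost streams simply cannot be added up per customer.
customer_xref = {
    "C-1001": "ACME",
    "acme ltd": "ACME",
    "40-2277": "ACME",
}

def total_cost_per_customer():
    totals = {}
    for records, key in ((sales_costs, "crm_id"),
                         (marketing_costs, "campaign_contact"),
                         (support_costs, "ticket_account")):
        for rec in records:
            customer = customer_xref[rec[key]]
            totals[customer] = totals.get(customer, 0.0) + rec["cost"]
    return totals

print(total_cost_per_customer())   # {'ACME': 1950.0}
```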

How bad is the reality? You only have to examine your own junk mail to get a fair idea. I get four separate copies of every mailshot from Dell, a well-run company, suggesting there are at least four places where my information is held, and that is just in their marketing systems. Ever tried changing your address when you move house and telling your various utilities and banks? How quickly and smoothly did everything get redirected?

The article is right in pointing out that organisational issues are at the heart of what makes things difficult here, yet the survey of company respondents suggesting that "within three years they would be organised by customer" (rather than by product, function or geography) seems to me entirely wishful thinking. What does your organisation chart look like? How realistic would it be to organise a whole company by something as transient as "customer"? If you think you get reorganised too often now, just imagine what this picture would be like.

In my view the difficulty of dealing efficiently with customers by assessing their lifetime value is an issue primarily of organisation and company power struggles, not one of technology.  For this reason I don’t expect it to be fixed any time soon.  If Dell stop sending me those multiple identical mailshots then I’ll let you know, but I am not holding my breath.


Method in its madness

August 22, 2006

Webmethods joined the metadata/master data party by acquiring the assets of Cerebra, a company that brought "active metadata" technology to market in 2001 but had struggled to make much impact. As one of the pioneers of EAI technology, Webmethods makes a logical enough home for Cerebra, whose financial results are not known but whose website shows that it last managed a press release in March 2006.

Webmethods itself has managed to stabilise its position after some difficult years. At revenues of USD 201M it is a large company, but over the last five years it has averaged a net loss of over 12% of revenues. Even in its last year, when it managed a small profit, revenue shrank by nearly 4% over the prior year. The stock market has not been impressed, marking down the share price of Webmethods by 11% over the last three months.

Still, in principle Webmethods ought to be able to make good use of the Cerebra technology, since active discovery of corporate metadata is quite relevant to EAI projects. Given Tibco's entry into the area some time ago, it is perhaps only surprising how long this has taken. Whether it will be enough to revive Webmethods' fortunes remains to be seen.

Diverse data warehouse approaches

August 18, 2006

There seem to be a few debates going on about data warehouse architectures at present, e.g. one on William McKnight's blog. I think that the growing number of alternative approaches is actually a sign of two things. One is that the problem data warehousing seeks to address has by no means been solved: people do not have access to the high quality information they need in a consistent or timely fashion. The second is that there is increasing innovation in the area: witness the rise of packaged data warehouses, EII tools and data warehouse appliances in recent years. It is all a lot more complicated than Inmon v Kimball.

So what have we learnt? Firstly, there are some approaches that just don't work well. For a while in the 1990s there was a school of thought that data marts were sufficient without a central warehouse, and this seems to have been pretty well debunked. Just joining up point-to-point transaction system data via specific data marts results in a potentially vast set of unmanaged data marts, which do nothing to resolve inconsistency between systems at the enterprise level. Related to this, selling "analytic apps", which are basically data marts with a specific data model hard-wired in and some reports on top, does not work either. The data model always needs modifying to the specifics of the customer, and as soon as you do that you no longer have a package but a series of custom-built (or at least custom-modified) marts again. Informatica found this out the hard way before sensibly withdrawing from this flawed approach.

I think it is also clear that EII only has a limited place in a BI architecture.  The pioneer here, Metamatrix, had flat (and modest) sales last year and, moreover, half its customers use it only against one data source: hardly a wild success.  EII does not address issues of data quality or storing data historically, so at best can be only a partial solution.

Within the data warehouse approach I feel that it is important to understand the different types of usage patterns. In particular, some types of reporting are very operational in nature, and are best served either by reports directly against an individual transaction system (here EII may have a role) or via an operational data store (ODS), essentially a straight copy of data into a separate database. An ODS avoids queries interfering with operational processing, one of the issues with EII. ERP vendors have started to provide ODS solutions, e.g. SAP BW, but don't confuse these with a full-function enterprise warehouse: they do well in ODS roles, less well when dealing with a wide set of data sources. The narrower the scope of the report you need, the better suited it is to an ODS (or EII); the broader the scope (or if it needs historical data), the better suited it is to a data warehouse. Having a series of ODSs feeding into an enterprise warehouse is a sensible approach.
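
As a minimal sketch of the ODS idea (using sqlite3 in-memory databases purely for illustration; the tables are invented), reporting queries run against a periodically refreshed copy rather than against the live transaction system:

```python
# Minimal sketch of an operational data store: reporting runs against a straight
# copy of the data, so it never competes with operational processing.
import sqlite3

operational = sqlite3.connect(":memory:")   # stand-in for the transaction system
ods = sqlite3.connect(":memory:")           # stand-in for the operational data store

operational.execute("CREATE TABLE orders (order_id INTEGER, customer TEXT, amount REAL)")
operational.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                        [(1, "ACME", 100.0), (2, "GLOBEX", 250.0)])

def refresh_ods():
    """Periodically replace the ODS copy with the current operational state."""
    rows = operational.execute("SELECT order_id, customer, amount FROM orders").fetchall()
    ods.execute("DROP TABLE IF EXISTS orders")
    ods.execute("CREATE TABLE orders (order_id INTEGER, customer TEXT, amount REAL)")
    ods.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

refresh_ods()
# Narrow, operational-style reports hit the copy, not the live system.
print(ods.execute("SELECT customer, SUM(amount) FROM orders GROUP BY customer").fetchall())
```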

However, to satisfy reporting needs that span multiple transaction systems, or which deal with historical data, you really need a data warehouse of some sort. The choice here is widening. You can now buy packaged, or at least semi-packaged, data warehouses from a number of sources; see the report from Bloor on this market, which you can download in full here. It has to make sense to buy functionality rather than building it, since it will be quicker and ultimately cheaper. Data marts can still be part of the picture, but should be dependent, i.e. generated from the warehouse; in this way they stay in line with changes in the source systems. If you have a very high volume of data, as happens in some industries like retail, telco and retail banking, then you can now choose from a range of data warehouse appliances, even an open source one, if you don't fancy Teradata, which was the pioneer and is still the leader in this area. An alternative to a single giant warehouse is to have federated data warehouses, each feeding up one or more layers to regional or global warehouses. This approach is offered by Kalido and deployed at companies like Unilever and BP.
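
A minimal sketch of what "dependent" means in practice (again sqlite3 and invented tables, not any particular product): the mart is derived entirely from the warehouse, so regenerating it keeps it in line automatically.

```python
# Illustrative sketch of a *dependent* data mart: the mart is regenerated from the
# warehouse rather than loaded from source systems directly, so it cannot drift
# out of line with the warehouse.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE warehouse_sales
              (sale_date TEXT, region TEXT, product TEXT, amount REAL)""")
db.executemany("INSERT INTO warehouse_sales VALUES (?, ?, ?, ?)", [
    ("2006-08-01", "EU", "Widget", 100.0),
    ("2006-08-01", "US", "Widget", 200.0),
    ("2006-08-02", "EU", "Gadget", 150.0),
])

def rebuild_regional_mart():
    """Derive the mart entirely from the warehouse; rerun whenever the warehouse changes."""
    db.execute("DROP TABLE IF EXISTS mart_sales_by_region")
    db.execute("""CREATE TABLE mart_sales_by_region AS
                  SELECT region, SUM(amount) AS total_sales
                  FROM warehouse_sales GROUP BY region""")

rebuild_regional_mart()
print(db.execute("SELECT * FROM mart_sales_by_region ORDER BY region").fetchall())
```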

It is also becoming clear that, to make the most of a data warehouse, you will want the master data that feeds it to be of as high quality as possible. A master data repository can act as the hub for improving data quality across the enterprise, and is complementary to the warehouse (indeed, it can be a source for the warehouse, and also feed an enterprise bus in more ambitious deployments). The rise of interest in master data management presents a lifeline to data quality vendors, who have been steadily disappearing. Even here there are new approaches in the form of start-ups like Exeros and Zoomix.
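
To illustrate the hub idea with a deliberately simplified Python sketch (all codes and mappings invented), both the warehouse load and any other consumer resolve their local codes against one agreed master list, and anything that cannot be resolved surfaces as a data quality exception rather than a duplicate:

```python
# Illustrative sketch (all data invented) of a master data repository acting as a hub:
# consumers standardise their codes against one agreed "golden" list instead of each
# keeping its own.
master_products = {
    # golden code -> agreed attributes
    "P-100": {"name": "Widget", "unit": "EA"},
    "P-200": {"name": "Gadget", "unit": "EA"},
}

# Local codes used by individual source systems, mapped to the golden code.
source_code_map = {
    ("ERP_EU", "WID-01"): "P-100",
    ("ERP_US", "1000-W"): "P-100",
    ("ERP_US", "1000-G"): "P-200",
}

def standardise(source_system, local_code):
    """Resolve a source system's local code to the master record, if one is agreed."""
    golden = source_code_map.get((source_system, local_code))
    if golden is None:
        return None  # flag for data quality follow-up rather than loading a duplicate
    return {"product_key": golden, **master_products[golden]}

print(standardise("ERP_EU", "WID-01"))
print(standardise("ERP_US", "9999-X"))   # unknown code -> None, i.e. a quality exception
```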

Finally, data warehouses can become as near to real-time as necessary, given sufficient work. Few BI requirements are truly real-time, but for those that are you can satisfy them by embedding reporting directly in the transaction system, via EII or an ODS, or even by drip-feeding data into an enterprise warehouse. For example Kalido has an interesting deployment of this kind in a financial services setting, where the data appears just ten seconds after changes to the core transaction systems.
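
A tiny sketch of the drip-feed pattern (invented structures, not a description of any vendor's implementation): small incremental loads pick up only what has changed since the previous micro-batch, rather than waiting for a nightly run.

```python
# Minimal sketch of "drip-feeding" a warehouse with frequent micro-batches.
import time

source_changes = []          # stand-in for a change log / queue on the source system
warehouse_rows = []          # stand-in for the warehouse fact table
last_loaded = 0.0

def drip_feed_once():
    """Load any changes that arrived since the previous micro-batch."""
    global last_loaded
    cutoff = time.time()
    new_rows = [r for r in source_changes if last_loaded < r["changed_at"] <= cutoff]
    warehouse_rows.extend(new_rows)
    last_loaded = cutoff
    return len(new_rows)

# Simulate a change arriving, then a micro-batch a moment later.
source_changes.append({"changed_at": time.time(), "order_id": 1, "amount": 99.0})
time.sleep(0.01)
print(drip_feed_once(), "row(s) drip-fed into the warehouse")
```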

The continuing thirst for better information, and a realisation that few companies have got it right yet, is causing increasing innovation in all these areas: packaged warehouses, appliances, EII, MDM, data quality.  This is a long way from a mature market.  

Is enterprise software finished?

August 16, 2006

The OnStartups blog has a very perceptive piece on the state of the enterprise software market. The post-Y2K dotcom bust has had some deep effects on the industry. In a backlash against the vendor bull market of the late 1990s, large companies reined in IT budgets and revised their procurement processes. Deals that previously would have needed one signature now need five, or eight. Buyers have become ultra-conservative, falling back on the giant vendors to the exclusion of purchasing from smaller companies. This can be seen in the net margins of Microsoft (25%) and Oracle (24%) compared to everyone else. Most other enterprise software companies, even the successful ones, have operating margins in the teens. The five-year average net margin of public application software companies is 13.6%. Even a company like Business Objects has a five-year net margin of just 7%.

Venture capital firms, seeing this, have moved on to funding "Web 2.0" ventures, which typically require a lot less capital (Flickr reputedly needed around USD 200k in capital to get going). Why bother funding companies which need millions in R&D and expensive enterprise sales forces when you might find the next Myspace for a bargain?

This is unhealthy, and not just for small enterprise software companies.  As I have written before, innovation rarely comes from industry behemoths, so by creating an environment where companies are buying only from “safe” companies they are in fact damaging the ecosystem which will bring them their next new and exciting software application.  By sticking to the giant software vendors CIOs are creating an environment where smaller companies struggle, which causes VCs to invest less, which means that fewer and fewer enterprise software companies get started at all.  This in turn allows the giant vendors to charge whatever they want in upgrades (witness those margins) as they now lack serious competition.  Cartels are never a healthy thing for customers, and yet in this case the customers are bringing it on themselves by creating the conditions for a cartel to effectively exist.


The lukewarm company awards

August 15, 2006

It is often said in the UK that a kiss of death to a company is to be awarded a "Queen's Award for Industry". It is not quite clear why companies that win industry awards of this type often struggle soon afterwards, but there are plenty of examples to choose from. One might have expected the more commercially savvy US media to be more selective in dishing out "hot company" awards, but it would seem this is not the case. One of the more surreal press releases I have seen for some time involves a magazine called START-IT, which concentrates on the manufacturing industry. Each year they announce 100 companies who are "hot" in their view. Usually such awards focus on start-up companies, but START-IT does not; it regularly includes industry behemoths in its award ranks. Unusual, but fair enough. However even the most generous observer might struggle to grasp their awarding Cognos one of their "hot company" awards in 2006. Cognos is a fine company, but let's just look at their 12-month share price (below).

[Chart: Cognos share price over the last 12 months]

Now call me old-fashioned, but in what way exactly would a company with this share price profile be considered "hot"? I suppose the share price has recovered from its lows recently to be only around a third down from its level last September, but is this what one would think of as a "hot" achievement? It isn't due to stratospheric revenue or profit growth in 2006, as can be determined by a cursory look at the company accounts. Certainly Cognos had a decent first quarter, with revenue up 15% (though the important licence revenues were up just 8%) compared to the same period in 2005. However this was below the expectations that management had set, and you can see the less than wild enthusiasm of the stock market in response in the chart above.

Could it be "hot water" that START-IT meant, given Cognos' unfortunate SEC investigation (from which they seem to have emerged quite well)? Let's review carefully the criteria that START-IT apply:

“Companies selected in this category  are those that have made great strides in recent months and are certainly worth keeping an eye on.”

Hmm. You lost me there. I should emphasise that I am in no way having a dig at Cognos here, who have achieved tremendous success over the years and are still one of the two clear leaders in business intelligence. But a "hot" company? In 2006?

This was not an outlier: the START-IT award judges seem to be using some mystical criteria as to what constitutes "hot". Let's look at the share price of another "hot" company award winner, Dassault, over the last six months:

[Chart: Dassault (DAS) share price over the last six months]

This is another example where the shareholders of the company might take some issue with the START-IT assessment criteria.  Again, Dassault is a fine company with a great history, but “hot”?  

I’m not quite sure what the journalists at START-IT are smoking, but I think I need to get hold of some of it.


Alphabet soup

August 14, 2006

After the first CDI/MDM conferences (arranged by Aaron Zornes, founder of the CDI Institute) we can see further interest in the subject at the CDI Americas Summit held in Boston this week. The conference attracted good case studies such as Intuit, and featured a keynote from Jill Dyche. Jill is one of the founders of Baseline Consulting and has a lot of real-world experience in CDI projects. She has just published a useful book about CDI, which has plentiful case studies and useful discussions about how CDI fits in with other things like EAI. It is well worth reading.

I am firmly of the belief that CDI is a subset of MDM, and that companies embarking on a CDI project need to do so with an eye on the broader context of an MDM initiative.  If not then there is a danger of silos of technology, one for “customer”, then another for “product”, and then yet more for all the other (many hundreds of) types of master data that actually exist. 
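
To illustrate what avoiding those silos might look like, here is a deliberately simplified Python sketch (the class and its structure are invented for the example) of a single registry handling any type of master data, rather than one point solution per domain:

```python
# Purely illustrative: one master data registry covering any entity type
# ("customer", "product" and the rest), instead of a separate silo per domain.
from collections import defaultdict

class MasterDataRegistry:
    def __init__(self):
        # entity type -> golden code -> agreed attributes
        self._records = defaultdict(dict)

    def register(self, entity_type, golden_code, attributes):
        self._records[entity_type][golden_code] = attributes

    def lookup(self, entity_type, golden_code):
        return self._records[entity_type].get(golden_code)

registry = MasterDataRegistry()
registry.register("customer", "CUST-001", {"name": "Acme Ltd", "country": "GB"})
registry.register("product", "P-100", {"name": "Widget", "unit": "EA"})
registry.register("cost_centre", "CC-42", {"name": "EU Marketing"})  # hundreds more types in practice

print(registry.lookup("customer", "CUST-001"))
print(registry.lookup("product", "P-100"))
```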

Even the CDI Institute called its last conference the “CDI MDM” conference, and I would expect to see further blurring of this (in my view artificial) distinction as more companies begin to take a good look at their master data issues.  It would be useful for analysts to help give some structure to the MDM market, since at present there are a wide range of tools being touted as MDM, yet varying widely in scope and capabilities, some of them complementary to one another.  Many customers considering MDM are undoubtedly confused at present by the overlapping slideware that vendors are producing, and the lack of clarity will slow adoption as customers feel they need to take a step back in order to avoid dead-ends.  There are some individual analysts who understand MDM well e.g. Dave Waddington at Ventana, Andrew White at Gartner and Henry Morris at IDC, but the overall market is still nascent and confusing for customers.  The MDM v CDI v PIM discussion simply adds to the smoke, and the quicker an overarching framework gains acceptance the quicker customers will venture into real projects rather than studies and evaluations.  There is an analogy here with consumer electronics: competing formats that muddy the waters in the eyes of customers cause uncertainty and reduce consumer confidence, causing customers to hold back from purchases.  I feel that something similar is happening in MDM, and the faster the industry can get its terminology straight the faster the market will develop.


Patently absurd

August 10, 2006

Another attempt at reforming the US patent system is rumbling its way through the US legislature. One sensible reform is to make patent claims valid only from the date the application is filed, unlike the current lunacy (unique to the US) which makes them valid from whenever the inventor claims he or she invented the thing (good luck verifying that).

More controversially, the draft Senate bill looks at restricting the compensation that can be claimed in cases of infringement to the "novel and non-obvious" value of a product, rather than its overall value. This will irritate genuine patent holders and ambulance-chasing patent attorneys alike, while cheering up technology firms that violate patents. A good overview of the legislation can be found on Wikipedia.

It seems to me that tinkering around at the edges of the system is rather like rearranging the deckchairs on the Titanic. The US patent office needs more root-and-branch reform to restore its reputation following a series of fiascos in recent years. In my limited personal experience of the US patent office the system seemed very slow (eight years to get our patent) and we dealt with patent assessors who seemed barely able to communicate at all, never mind bring relevant knowledge or perspective. We got there in the end, but it was an agonising process, and a lot more painful than the UK equivalent (which was not exactly a racy process either, at five years before the patent was granted).

Given how important intellectual property is, it would seem important to improve the standard of patent reviewers and improve the process itself to avoid some of the crazy patents that have been granted in recent years.  The new legislation does nothing to address this core issue.