FileNet finally goes out to IBM

The consolidation in the ECM market continues with IBM snapping up FileNet. This is a good outcome for FileNet, which had not performed very well and was threatened by changes in the ECM space.

The acquisition price of $35 per FileNet share is only a slight premium over the closing price of FileNet’s stock on Wednesday.

This news comes not long after OpenText got Hummingbird, calling into question the fate of smaller ECM vendors that don’t have a highly differentiated offering. Some of these, like MDY, which recently landed at CA, will become technology acquisitions that fill out larger vendors’ product portfolios. Others will specialize either horizontally or vertically and go after niche markets.

One should not underestimate the effect that Microsoft SharePoint is having in the ECM market. SharePoint 2.0 has effectively commoditized basic Web-based collaboration in enterprises. SharePoint 3.0 adds significant document management capabilities. The combination of free SharePoint Services and the reasonably scalable SharePoint Portal Server will have a significant impact on the way current ECM vendors do business. They’ll be forced up-market and will have to deliver value through integration of their ECM suites into broader infrastructure and applications, which is the core driver behind the IBM acquisition of FileNet. Even then, the larger ECM vendors won’t be able to ignore Microsoft’s presence in the market. IBM’s services group is already the biggest MS integrator. EMC smartly bought Internosis, in one shot giving itself a response to Microsoft’s increasing market share and IBM’s services advantage.

My own investment in the ECM space, Meridio, is riding the SharePoint wave by helping very large enterprises and government agencies deploy scalable and secure records management (RM). Meridio recently won the largest RM contract in the world, numbering in the hundreds of thousands of seats. IBM’s purchase of FileNet is good news for the company–there are fewer competitors who can satisfy the needs of very large enterprises and even fewer who can do so while leveraging existing investments in Microsoft Windows, Office and SharePoint.


The trouble with Ruby

Yakov Fain, a Java & Flash/Flex guru, notes that Ruby is climbing up the TIOBE Index and spends some time looking into why this is the case.

I like Ruby but I don’t see it becoming a mainstream language soon. The biggest strength of Ruby–the OO nature of the language and some of its cooler constructs–is also its greatest weakness. Consider continuations, for example. How many people in the world would know how to implement something with continuations without screwing up?
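
To make the point concrete, here is a minimal sketch (my example, not from Yakov’s post) of the classic callcc loop. Note that even this tiny example takes care to make progress by mutating an array rather than by rebinding a local variable; getting that subtlety wrong is exactly the kind of trap that catches average developers.

```ruby
require 'continuation'  # callcc was built into Ruby 1.8; 1.9+ needs this require

# callcc captures "the rest of the program" as a Continuation object.
# Calling that object later jumps execution back to the point where
# callcc returned, so the three lines below form a loop with no loop keyword.
names = ["Freddie", "Herbie", "Ron", "Max", "Ringo"]

callcc { |cc| $loop_back = cc }        # execution resumes here on every $loop_back.call
puts(name = names.shift)               # progress happens by mutating the array
$loop_back.call unless name == "Max"   # jump back up and take the next name
```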

By definition, the vast majority of developers out there have average skills. They need tools and programming models that are safe more than they are powerful. We learned this in spades at Allaire. ColdFusion became one of the most widely used Web development platforms because it created a rubber room where hackers, non-professional programmers and many others could build apps without thinking too hard. Were they the best architected, most scalable apps? Absolutely not. But they came out quickly and they worked. (Hey, MySpace was built on ColdFusion initially and it served them well.)

If Ruby becomes more popular, it’ll be in the way C++ came to power. Lots of C developers started playing with C++. Initially, they wrote bad code and made all the mistakes of beginning OO programmers. Expect to see lots of people say they are Ruby programmers w/o actually taking advantage of what makes Ruby a great programming language.


The Web at 15, the PC at 25

From Peter O’Kelly’s Reality Check, pointers to interesting BBC and Economist articles about the history of the Web and the PC.

One key date is 6 August 1991 – the day on which links to the fledgling computer code for the www were put on the alt.hypertext discussion group so others could download it and play with it.

On that day the web went world wide.


Office apps on-demand?

eWeek has a good analysis of office apps that links into a deeper look at Office 2007 Beta 2. Having installed and played with Beta 2 on a Vista machine, I do agree that MS will have a long upgrade cycle. Most people who should use Office, certainly in the US, already have it. Therefore, given the high cost, Office upgrades will likely be tied to PC upgrades. Further, I expect Enterprise Agreement (EA) customers to push back on Office upgrades due to the potential helpdesk hit. The new user experience is neat but there are many incompatibilities with Office 2003, which will be a problem. I’ve used Word since v3.0 on DOS and it took me 15+ seconds to figure out how to zoom a page. Not good.

Does this mean that non-MS alternatives have an opportunity to increase market share? Not in a meaningful way in the US, due to market saturation. The rest of the world is a different story–too complex to unravel in this post. I’m especially negative on Web-based tools as replacements for MS Office in enterprises for the following reasons:

  • Quality: the ones I’ve used tend to be buggy.
  • Reliability: because of the way browser processes are handled by the OS, when your browser crashes you can lose your work.
  • Connectivity: AJAX apps haven’t figured out how to deal with occasionally-connected scenarios. It is bizarre how long it has taken the world to learn from Lotus Notes (and Groove) about the power of auto-synchronization and the ability to work on- and off-line. This is especially important for business travelers.
  • Security: to handle loosely-connected scenarios you need local storage, which requires appropriate handling of security. This gives an edge to Flash-based applications (compared to DHTML apps), both of which use AJAX to communicate with servers. Flash has some local storage capabilities. You can also do local storage from browser scripts, but you need applets or ActiveX controls, which will generate security pop-ups, etc. (What’s old is new. I built an app like that back in 1998 using WDDX and the MS file access ActiveX controls.)

Strangely enough, the criteria above suggest that MS has a chance to solve the on-demand office app problem better than anyone else. They have Ray Ozzie of Notes & Groove fame. They have control of the OS & browser, which would allow them to solve the connectivity & security issues transparently.

I’m all for simpler, on-demand productivity apps for the home but the enterprise will go with MS for a while longer.


Databases move closer to commoditization

EnterpriseDB got a nice $20M Series B. Congratulations & good luck to CEO Andy Astor, who’s a friend from the early days of XML and Web services.

The underlying story behind the increased traction MySQL and EnterpriseDB are seeing is that SQL databases are getting commoditized at an increasing pace. Oracle, IBM and MS may not like this but that’s where the world is headed. It’s about time–SQL has been around for 30+ years. On top of this, application frameworks have advanced to the point where some of the heavy lifting that went into databases for added functionality and scalability has moved up into the application runtime tier. Comparatively speaking, much less code is database-specific nowadays than ten years ago.
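
One hedged illustration of that shift (mine, not from the post; the schema and connection details are hypothetical): with an ORM such as Rails’ ActiveRecord, the model and query code below stays the same whether it runs against MySQL, PostgreSQL/EnterpriseDB or another supported engine; the adapter setting is essentially the only database-specific line.

```ruby
require 'active_record'

# The adapter name is, in the simple case, the only vendor-specific detail;
# swapping 'mysql2' for 'postgresql' leaves the rest of the code untouched.
ActiveRecord::Base.establish_connection(
  :adapter  => 'mysql2',
  :host     => 'localhost',
  :database => 'crm',
  :username => 'app',
  :password => 'secret'
)

class Customer < ActiveRecord::Base
  has_many :orders
end

class Order < ActiveRecord::Base
  belongs_to :customer
end

# The framework generates the vendor-appropriate SQL for this query.
big_spenders = Customer.joins(:orders).where('orders.total > ?', 1000).distinct
```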

To sell databases today, you either have to show significantly reduced TCO, which is the open-source startup way, or add significant functionality. The big guys are taking the predictable fatware path–adding more features such as advanced XML processing and high-end BI for ever-smaller user audiences. That’s not a bad way to go when most of the revenue growth comes from up-selling into existing accounts.

Startups have an additional opportunity to identify large market niches and build special-purpose systems that provide 10+x advantages compared to traditional approaches. Here are some examples:

  • Doing well
    • Netezza, which goes after high-end data warehouses with complex BI queries.
  • Execution problems & small exits
    • The XML database startups: Neocore, which is having a second life as Xpriori; Ipedo; … Larger XML DB/server players such as TigerLogic at Raining Data and Software AG’s Tamino are also facing increased pressure from the native XML features in the Big Three.
  • Big plays, too early to tell
    • StreamBase, which focuses on processing high-volume real-time information streams.
    • DataGrid, going after semantic information processing.
    • Dataupia, a very cool startup by the founder of Netezza that we recently invested in.

Ruby favored by startups

I’m starting to see more startups using Ruby and Ruby-on-Rails for three typical types of projects:

  1. Simple web sites, e.g., the 37signals properties.
  2. Web 2.0ish sites where XML will be passed back’n’forth for both AJAX and site integration purposes (see the sketch after this list).
  3. Internal use around flexible scripting, product extensions, release engineering, etc. For example, one of my startups still in stealth is going this way.
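
As a sketch of what (2) typically looks like in Rails (hypothetical controller and model names, not taken from any of the companies mentioned), the respond_to block lets the same action serve HTML to browsers and XML to AJAX and integration clients:

```ruby
# app/controllers/photos_controller.rb (illustrative names)
class PhotosController < ApplicationController
  # GET /photos/1 returns HTML for browsers and XML for AJAX/site-integration calls
  def show
    @photo = Photo.find(params[:id])
    respond_to do |format|
      format.html                                 # renders show.html.erb
      format.xml  { render :xml => @photo.to_xml }
    end
  end
end
```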

Uses (1) and (3) are right down Ruby’s and Ruby on Rails’ alley. My personal experience is that Ruby and Rails are very productive–I really enjoy them.

I thought Tabblo was headed towards (2) but it turns out they decided to not use Ruby on Rails (see comment from founder Antonio Rodriguez).

I’d be a little concerned about the risk of doing (2) without an absolutely top-notch team that has connections into the Ruby community. The risk is three-fold. First, the framework APIs are fluid and the documentation is poor. Second, as noted by Tim Bray, Ruby’s XML/Web services support leaves a lot to be desired. Third, the collection of Ruby libraries wrapping 3rd party systems & services is still immature. That’s why I see Ruby as a great choice primarily for self-contained apps right now.


Bubble 2.0: rising valuations

Inflated valuations by themselves are not an indication of a bubble brewing. Still, they make you think twice about what M&A and IPO outcomes are likely to be several years ahead.

Pre-money valuations are not the best measure of whether VCs and entrepreneurs are getting good/bad deals. Some cool Web 2.0 companies notwithstanding, on average, companies these days are consuming more capital (due to more aggressive execution and more competition, amongst other reasons) than pre-bubble companies. Therefore, you’d expect pre-money valuations to drift up as well to account for the dilution that comes with financings.

A better measure I like to use is the ratio of the pre-money valuation to the amount raised. This morning I analyzed self-reported Series A data for some of the sectors I’m interested in. The histogram below was generated by Excel’s Data Analysis pack using automatic binning. For the period 2005-6, the predominant value is around 1.5x. That’s a noticeable increase from 2-3 years ago. Good news for entrepreneurs, who are suffering less dilution.
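
To make the dilution link explicit (illustrative numbers, not from my data set): at the predominant 1.5x ratio, a company raising $4M on a $6M pre-money gives up

$$\text{dilution} = \frac{\text{raise}}{\text{pre-money} + \text{raise}} = \frac{4}{6 + 4} = 40\%,$$

whereas the same $4M raise at a 1.0x ratio ($4M pre-money) would mean 50% dilution.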

[Image: 2005-6 Financing Histogram]

I haven’t had the chance to compare against pre- and during-Bubble 1.0 data.


Cyworld vs. MySpace?

Good piece in Business 2.0 on Cyworld’s entry into the US market. It’ll be interesting to see how much the user experience and business model have to be adjusted for US tastes. When will buying virtual currency become cool?

The bulk of Cyworld revenue comes from the sale of virtual items worth nearly $300,000 a day, or more than $7 per user per year. By comparison, ad-heavy MySpace makes an estimated $2.17 per user per year.
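
Back-of-the-envelope arithmetic from those two quoted figures (my math, rounded): roughly $300,000 a day is about $110M a year, which at a bit over $7 per user per year implies an active base in the neighborhood of 15 million users.

$$\frac{\$300{,}000 \times 365}{\$7 \text{ per user per year}} \approx 15.6 \text{ million users}$$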


Metcalfe’s Law: more misunderstood than wrong?

The industry is at it again–trying to figure out what to make of Metcalfe’s Law. This time it’s IEEE Spectrum with the controversially titled “Metcalfe’s Law is Wrong”. The main thrust of the argument is that the value of a network grows as O(n log n) as opposed to O(n²). Unfortunately, the authors’ O(n log n) suggestion is no more accurate or insightful than the original proposal.

There are three issues to consider:

  • The difference between what Bob Metcalfe claimed and what ended up becoming Metcalfe’s Law
  • The units of measurement
  • What happens with large networks

The typical statement of the law is “the value of a network increases proportionately with the square of the number of its users.” That’s what you’ll find at the Wikipedia link above. It happens to not be what Bob Metcalfe claimed in the first place. These days I work with Bob at Polaris Venture Partners. I have seen a copy of the original (circa 1980) transparency that Bob created to communicate his idea. IEEE Spectrum has a good reproduction, shown here.

[Image: the original Metcalfe’s Law graph]

The unit of measurement along the X-axis is “compatibly communicating devices”, not users. The credit for the “users” formulation goes to George Gilder who wrote about Metcalfe’s Law in Forbes ASAP on September 13, 1993. However, Gilder’s article talks about machines and not users. Anyway, both the “users” and “machines” formulations miss the subtlety imposed by the “compatibly communicating” qualifier, which is the key to understanding the concept.

Bob, who invented Ethernet, was addressing small LANs where machines are visible to one another and share services such as discovery, email, etc. He recalls that his goal was to have companies install networks with at least three nodes. Now, that’s a far cry from the Internet, which is huge, where most machines cannot see one another and/or have nothing to communicate about… So, if you’re talking about a smallish network where indeed nodes are “compatibly communicating”, I’d argue that the original suggestion holds pretty well.

The authors of the IEEE article take the “users” formulation and suggest that the value of a network should grow as O(n log n) rather than O(n²). Are they correct? It depends. Is their proposal a meaningful improvement on the original idea? No.

To justify the logn factor, the authors apply Zipf’s Law to large networks. Again, the issue I have is with the unit of measurement. Zipf’s Law applies to homogeneous populations (the original research was on natural language). You can apply it to books, movies and songs. It’s meaningless to apply it to the population of books, movies and songs put together or, for that matter, to the Internet, which is perhaps the most heterogeneous collection of nodes, people, communities, interests, etc. one can point to. For the same reason, you cannot apply it to MySpace, which is a group of sub-communities hosted on the same online community infrastructure (OCI), or to the Cingular / AT&T Wireless merger.
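
For readers who want the missing step, here is the gist of the Zipf-based derivation as I read it (a sketch, not the authors’ exact notation): if the k-th most valuable connection for a given user is worth roughly 1/k, then each user’s total value is about the n-th harmonic number, and the network as a whole scales as

$$\sum_{k=1}^{n} \frac{1}{k} \approx \ln n \quad\Longrightarrow\quad V(n) \sim n \ln n.$$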

The main point of Metcalfe’s Law is that the value of networks exhibits super-linear growth. If you measure the size of networks in users, the value definitely does not grow as O(n²), but I’m not sure O(n log n) is a significantly better approximation, especially for large networks. A better approximation of value would be something along the lines of O(Σ_{c∈C} m_c log m_c), where C is the set of homogeneous sub-networks/communities and m_c is the size of the particular sub-community/network. Since the same user can be a member of multiple social networks, and since |C| is a function of n (there are more communities in larger networks), it’s not clear what the total value will end up being. That’s a Long Tail argument if you want one…
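
A quick illustration of why the clustering matters (the numbers are purely hypothetical): a million users organized as 10,000 disjoint communities of 100 members each would give

$$\sum_{c\in C} m_c \log_2 m_c = 10{,}000 \times \bigl(100 \log_2 100\bigr) \approx 6.6 \times 10^{6},$$

versus n log₂ n ≈ 2 × 10⁷ and n² = 10¹² for the same million users. How users cluster, and how many communities each one joins, dominates the estimate.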

Very large networks pose a further problem. Size introduces friction and complicates connectivity, discovery, identity management, trust provisioning, etc. Does this mean that at some point the value of a network starts going down (as another good illustration from the IEEE article shows)? It depends on infrastructure. Clients and servers play different roles in networks. (For more on this in the context of Metcalfe’s Law, see Integration is the Killer App, an article I wrote for XML Journal in 2003, back when I had spent less time thinking about the problem ;-)). P2P sharing, search engines and portals, anti-spam tools and federated identity management schemes are just a few examples of the myriad technologies that have come about to address scaling problems on the Internet. MySpace and LinkedIn have very different rules of engagement and policing schemes. These communities will grow and increase in value very differently. That’s another argument for the value of a network aggregating across a myriad of sub-networks.

Bottom line, the article attacks Metcalfe’s Law but fails to propose a meaningful alternative.


Bezos invests in 37signals

I wasn’t one of the 30+ VCs who tried to invest in 37signals. It’s not that I don’t think that what they are doing is cool, just the opposite. I, as well as some of the companies I’ve invested in, use their software, especially Basecamp, on a regular basis. It’s not that I don’t think they have a chance to become a big company. Simple, easy-to-use applications have the best chance to make it big through a software-as-a-service (SaaS) model and the guys at 37signals understand simplicity and design very well. Jason and company had made their feelings about institutional investors well known. Anyone in the community who knew friends of theirs would have understood they were looking for something other than a typical VC.

That’s neither good nor bad. Some entrepreneurs like working with VCs and others don’t. Needless to say, most VCs prefer to work with the former. However, as in dating, rejection sometimes only brings on a stronger desire to pursue the deal, to win. If this hot pursuit leads to a forced partnership, there may be trouble in the future.

I don’t buy the S3 connection, BTW. There are ways to establish a relationship there w/o any investment.
