MAXimum Burn

I haven’t posted in nearly two weeks because I’m still recovering from Adobe MAX 2006. Yes, it was in Vegas and, yes, the Macromedians (at least) know how to party till very early in the morning. Plus, two of my companies are fundraising and I’m looking at a couple of very cool new startups in depth. There just wasn’t enough time for ego enlargement through self-publishing…

MAX had great energy. The combination of Macromedia’s product momentum and Adobe’s design sensibilities made the keynotes worth seeing. Kevin Lynch’s quiet credibility worked especially well. Of course, there weren’t any Steve Jobs-style mega-announcements, but that’s the difference between a consumer play (where you keep everything secret till the last second) and a developer/enterprise play (where the Labs concept works great).

I have too many notes from the conference so here’s just a flavor of what’s important:

  • Flash Player. The new Flash runtime is ridiculously fast, thanks in large part to the efforts of one-time JRunners Edwin Smith and Tom Reilly and to JIT-compiled code running on a new VM. Unofficial numbers put it at 1/3 the speed of natively-compiled Java. The good news is that the team has a few additional optimizations up its sleeve. The even better news is that in the future, these types of radical performance improvements should make their way into Flash Lite, where they’ll matter even more than on souped-up PCs.
  • Apollo. After a false start with Central, the company has regrouped and solved the basic problem of cross-OS installable applications with access to local resources. Don’t know what this means? Check this video out. Two of my startups at the conference were quite interested in the technology–it saves a lot of time and offers online apps a simple way to have a desktop presence and deeper integration with local resources. eBay had built a cool demo (can’t find a link to it, for the life of me).
  • Tools. The real power of the MM/Adobe merger is in streamlining workflow for web developers & designers. This is great for people who live in the tools. Notable is the push towards better mobile content publishing. Video tools have gotten better and Adobe is for the first time getting into audio (for video pros as opposed to audio pros) with SoundBooth. The Builder Eclipse add-on for Flex is starting to look pretty good.
  • Servers. Flex 2.0 is maturing rapidly–discussions I overheard at the conference were sophisticated. People are building real apps. Lots of stories about pain in getting DHTML to work just right cross-browser. With the Flex SDK selling for $0 and that message spreading in the industry, I expect to see a lot more Flex-powered apps next year. The combination of Flex and Apollo is particularly powerful. The ColdFusion team is continuing to innovate on the ease-of-use front, both with new server features and with great wizards/frameworks that integrate key technologies across products into solutions. It’s great to see that kind of passion on the team of an eleven-year-old product. LiveCycle is now in the same BU as ColdFusion and Flex. Expect to see more Web-PDF integration and multi-channel deployment of PDF forms.
  • Mobile. Adobe is really starting to get mobile. They are expanding their focus away from OEMs to operators through FlashCast (good) and are also now starting to leverage the developer community more (great). They have hired a head of developer relations for mobile, a great step. The ecosystem around Adobe Mobile is growing. The Wednesday keynote featured John Stratton (on video) and Peggy Johnson. The biggest news is that Flash Lite apps distributed through a select set of aggregators don’t have to go through a separate certification process. This is a big help for smaller mobile ISVs and content shops and a step in the right direction. Adobe can do much more, though. They have to push to clarify the economic model and simplify the business negotiations with aggregators and carriers on behalf of publishers.
  • Strategy. It seems like the post-acquisition integration is going well. I heard only a few meaningful complaints from various teams and, on balance, many more positive comments. As a friend of mine put it “Adobe has been lucky that the world waited for them to get their act together.” While the post-Vista assault from Microsoft will be intense, the company has a great base on the design side, a fantastic reach to the desktop and the theoretically best technology for mobile experiences. I bet there is a lot of thinking going on about SaaS and getting deeper into the applications business (based on the success of products such as Breeze).
  • Ecosystem. Adobe announced $100M available for distribution through Adobe Ventures to help build the ecosystem in critical areas. In talking to John Leckrone (head of Adobe Ventures) and John Brennan (SVP corpdev) about it, I got the sense that they have a solid yet flexible model in mind that combines the cash with real value add rooted in Adobe’s reach and industry influence. Update: Adobe took a $30M piece of MobiTV’s $100M Series C. This is about getting into the Flash video ecosystem.
  • Clubs. The sampling included Tao, V, Mix, Pure and Tabu. Pure was voted the clear favorite.

Topics to think/write more about:

  • Apollo + X = Revolution. X = ?
  • What’s the tipping point for Flash Lite?
  • What’s Adobe’s SaaS strategy?
  • Does the company have a Web 2.0 or Enterprise 2.0 play? Update: word is that Adobe will make a big announcement of sorts at the Web 2.0 Conference next week.
Posted in Adobe, Flex, Mobile, SaaS | 4 Comments

Headed to MAX

I’m heading to Macromedia/Adobe MAX 2006 in Las Vegas (with a short detour to SF to see some cool Web 2.0 startups). Damon Cooper told me there will be 500-600 Adobeans there. That should be fun. The announcement line-up is also quite exciting covering enterprise, online & mobile.

The Las Vegas venue is a mixed blessing–people may be tempted to split up and do their own thing.

Posted in Adobe | Leave a comment

DMCA and Mashups: The GoogTube & MySpace Perspective

Just got the latest Bambi Francisco NetSense email in my box. The topic is how copyright holders will treat GoogTube. After some discussion on strategies for getting the most money out of Google, the discussion moves to the DMCA and then mashups.

The biggest reason the media giants don’t have much legal ground to stand on is the protection of the Digital Millennium Copyright Act (DMCA) safe harbor, which offers ample protection to YouTube and, frankly, everyone on the Internet, according to Fred von Lohmann, senior intellectual property attorney at the Electronic Frontier Foundation.

Translation: The DMCA essentially says that even if a company is infringing, they get a free pass.

They would only be found to be willfully infringing if they did not respond to a notice to take down copyrighted material. So, even if I can go to YouTube and see 500 video clips of Seinfeld, YouTube would not be liable under the DMCA, even if it’s not proactive about taking down the material, according to von Lohmann.

This safe harbor protection that Congress granted to Internet companies in 1998 puts the burden on the copyright owner and the infringer, not the technology or service that allows for such material to be delivered or shared, said von Lohmann, who makes an excellent point about how the entire Internet ecosystem would be in trouble if it weren’t for the DMCA.

After all, if the DMCA didn’t work, Google might be held responsible for linking to a page that has copyrighted content, he said. The DMCA has to provide protection or else, as von Lohmann puts it: “The Internet would be sued out of business.”

Not only would the Internet be sued out of business, some potentially expressive and interesting content might not ever emerge. About 19% of Internet-using teenagers and 18% of Internet-using adults have mashed up their own content with content owned by someone else, according to Pew Internet Research.

And, where is that mashed-up content? Well, everywhere, and very likely on the No. 1 hangout on the Web, called MySpace.

Von Lohmann pointed out that News Corp’s MySpace has a significant amount of copyright material too. So the company might be setting a precedent that would adversely affect them if they were to sue Google. So, will there be lawsuits from big media against Google and YouTube? There’s a low probability that the big guys will sue Google, according to von Lohmann. But there will be lawsuits, he predicted. They just won’t be aimed at YouTube, but rather the lower hanging fruit.

Posted in Digital Media | Leave a comment

Botnets, Herders and Money Mules

The current issue of eWeek has a good story on the underground economy around botnets and their masters, and a related story on the money mules that help them cash in on phishing fraud.

Windows is, of course, the target platform of choice but that has more to do with market share than security, IMO. I haven’t been able to find any good info on whether Vista will put a dent in this or not. I can’t imagine it will–after all, there are plenty of exploits against *nixes and they’ve had a better security model for decades.

The trend is towards more organization/centralization of the criminal elements combined with active decentralization of the technology infrastructure to evade detection/shutdown. It raises the question of what type of entity can go after these businesses.

From an investment standpoint, the real opportunities are in approaches that would circumvent the problem of hijacked machines through a combination of trusted computing and virtualization. vThere (from our portfolio company Sentillion) is a step in the right direction.

Posted in startups | Leave a comment

Mashups and DMCA

Copyright and intellectual property questions are starting to pop up around mashups. For example, Denise Howell recently wrote in Lawgarithms about mashups and patentability.

Someone asked me today whether they could file a patent application concerning their Google Maps mashup.  Not being a patent lawyer, I haven’t the foggiest, but it’s an interesting question.

An equally interesting question that came up in a conversation today is whether the Digital Millennium Copyright Act can be invoked to shut down mashups.

If the mashup is going against published APIs then the answer would depend on the legal restrictions in the API license agreement (if any). Typically, since a Web services API assumes no presentation, I can’t see how DMCA can be invoked. The real question was whether scraping of HTML and/or deep linking may allow the site owner to be able to make a claim under DMCA that the intended presentation is being circumvented.

Update:

I reached out to a few experts on the issue. The answer seems to be no. DMCA would typically apply only in the cases where DRM is circumvented. Other than that, there is a standard copyright question.

Jason Schultz, an EFF attorney and the leader of the Patent Busting Project, comments:

Do you mean the notice and takedown provisions of the DMCA under Section 512 (used to remove unauthorized copyrighted material from the web) or the anti-circumvention provisions under section 1201 (used to shut down devices that unlock encrypted content or circumvent DRM/secure content systems)? They’re two separate parts of the same law.

To the extent that mashups like Google Maps import copyrighted content (like map panels) in violation of the API terms, one could argue that this is a copyright violation and can be taken down under Section 512. But to the extent the mashup simply instructs the browser to import the content directly via the API (e.g. directly from Google’s servers), I can’t see it being used that way.

As for Section 1201, some have argued that exceeding the limits of a license agreement qualifies as circumventing a technological protection measure under the DMCA, but I don’t think this is the majority view. If there is a technical restriction that’s one thing, but a legal restriction is not what the DMCA was intended to enforce, IMO.

There has always been a controversy about “deep linking” and in-line linking on the web. So far the courts have generally ruled in favor of the deep-linkers unless there is an express password or gateway they have to circumvent or they misrepresent the linked-to content as their own.

Steve Frank, a great IP attorney and the author of Intellectual Property for Managers and Investors: A Guide to Evaluating, Protecting and Exploiting IP, makes the following points:

I don’t think using a site in an unintended manner ordinarily rises to the level of a DMCA violation — at least not unless a DRM feature is somehow circumvented. Rather, the issue strikes me as one of copyright and whether some sort of implied license is exceeded. (If you have a license but stray outside what’s permitted, you’re an infringer.) That depends on a lot of factors and there isn’t a lot of law. More generally, if you violate an explicit user agreement or somehow misuse a service so as to cast a burden onto the server owner, you can be liable under various theories. If you link to or use someone’s site in a manner that passes the content off as your own, you’re also liable. But if you merely combine someone else’s site features with features of your own, you make the origins of the components clear, use them as intended, and don’t have an agreement with the site owner, you’re probably safe.

I’m not aware of any case holding that deep linking violates anyone’s rights. I can’t see how it could, given your premise that the linked-to content is properly attributed. How explicit must the attribution be? Good question! More visibility = less risk. Hopefully there is some way to at least indicate that the content is linked rather than owned.

Posted in Web 2.0 | Leave a comment

Venture Capital Video

I was playing with the Magnify.net beta and created a simple site with VC related videos–VCVideo. I haven’t tracked down Borat’s VC pitch yet…

Posted in Digital Media, VC, Venture Capital, Web 2.0 | Leave a comment

The End of PCs as We Know Them

I’m on a long plane ride to Phoenix. Outlook corrupted its .ost file so I can’t do email. I don’t want to see another Keanu Reeves and Sandra Bullock movie. My books are in the checked luggage. Time to listen to music and think about trends.

Andy Grove famously said that only the paranoid survive. It’s not clear to what extent Intel’s past success has been driven by paranoia and continued innovation vs. the x86 compatibility requirement for PCs stemming from the relationship with Microsoft (and, let’s not forget, IBM) from 25+ years ago. Otellini certainly has big challenges to deal with. AMD is winning the server business so Intel has certainly had some innovation challenges. And now, there are macro-level movements that challenge the x86 compatibility requirements for PCs. This doesn’t just affect Intel. It will affect the entire PC industry.

The x86-to-PC binding has been a tight one since the beginning of the industry. Apple tried to get in the game with Motorola and then PowerPC chips with little success. The reason was software–most of it and the best of it was only available on x86-compatible machines. Throughout the history of computing, loose & late binding has trumped tight & early binding. The former is simply more efficient from an economic standpoint, being a super-set of the latter. Hence, there are market forces trying to break the x86-to-PC binding and, in my opinion, the next 5-10 years will bring a major shift in the PC industry. The main driving force is Asia. The main enablers are a host of new technologies and approaches to delivering software & services that are rapidly maturing.

I’m blogging this on a plane so my research capability is limited. If I recall correctly, economists predict that about half a billion (!) new consumers from India and China alone will enter the world market in the next few years. I use “consumer” to denote a person whose disposable income is high enough to enable her to consume goods and services on the world market. The first computing device she’ll likely own (probably already does) is a phone. The phone won’t solve all of her computing needs because of its I/O limitations and, to a secondary extent, storage requirements. (IMO, compute power is not going to be a big issue for much longer.) So, at some point, a good portion of these 500M consumers will want a PC (a laptop, to be precise).

My expectation is that demand will be very elastic. The reason is that income inequality will remain very high in both India and China–most of the net new consumers won’t have very high disposable incomes but a strong desire to own technology that has dual use (entertainment and education/work). PC manufacturers will therefore be incented to offer very low cost machines. What’s the magic price? MIT has been going after the $100 laptop, albeit w/o a disk drive. A team in China is aiming for sub $200 on a MIPS-like chip. HCL in India makes a $200 PC already. Another Indian startup whose name escapes me is doing an even cheaper PC based on a mobile chipset. Walmart and Dell are already selling well-loaded laptops with drives for under $400. I don’t know what the magic price is but I’m sure it’s well within reach in the next few years, especially considering some of the options on the table.

So, how do you get the price of a PC down? The highest-cost (and producer margin) items are the IP-heavy parts, e.g., the microprocessor and the software. Hence, to lower the price of the PC without significantly reducing its capabilities you’d have to throw out the proprietary parts while preserving the key features of the end-user experience. Starting from the top of the stack:

  • Increasingly, the software that consumers use these days is Web-based. AJAX and Flash create desktop-like experiences using SaaS delivery. In addition, major file formats (from Office to PDF) are becoming more open. There goes the need for Microsoft Office and many other downloadable applications. Go Web 2.0 with a few desktop-based tools such as backup, desktop search, etc. Maybe get OpenOffice, though I’m wondering whether it’s too complicated for the target audience. (Videogames will be the hardest to tackle, both because of their compute intensity and because of their tight binding to the hardware and expensive video cards. Virtualization may offer a temporary solution but ultimately new types of videogame experiences may spring up. I expect Asian videogame companies will aggressively target this opportunity. Note how Cyworld doesn’t need the fanciest gamer box.)
  • Why pay for Windows if you don’t need the MS desktop application software? Go with Linux, hacked to your government’s specs. Government specs? Yup.
  • If you’ve gotten this far, then why do you need x86 compatibility? A Web browser with AJAX support can be ported to any HW/OS combo w/o much trouble. Adobe will automatically do the Acrobat Reader and Flash ports for any platform that has significant volume in order to maintain market leadership. The target customer is unlikely to want to buy as many HW add-ons as in the developed world, which will also help.
  • Perhaps you even skip the disk drive, as the MIT approach suggests. This assumes a lot about connectivity but also removes a component with, relatively speaking, high failure rates.

The governments of both India and China have a very strong incentive to push for broad deployment of PC-level computing to build up an educated workforce and a high-tech industry around the consumers of PC content/applications. I’d expect them to use significant leverage–from protectionism to government-level bargaining–to achieve this goal. Given the numbers involved, why shouldn’t China back local companies to build everything from the chip up to the customized OS? At the volumes we’re talking about, all software vendors that have downloadable software will be pushing to port/certify their apps on the new platform. Certainly, I’d expect both countries to push new PC specs using some type of industry consortium approach because specs enable network effects. These fast-growing economies don’t have the time to wait around for markets to figure things out, at least in the beginning. They’d want to focus on innovation first and worry about making the wrong choice later. It’s hard to really screw up, BTW. It’s not like PCs are rocket science.

Intel and Microsoft should be worried. Microsoft ultimately has a way out–turn Windows Live into the best set of network services for consumers and make them dirt cheap in the third world. Go Ray! What’s Intel’s way out? I’m not sure… The breakup of the tight binding between the software stack and the hardware in the PC business may be the end of Intel as we know it.

Another possibility is the merging of game console, set-top box and PC capabilities. The trouble I have with this version of the future is that (a) consumers lose portability (we’ve seen a clear trend towards laptops and away from desktops), (b) costs go up for the initial device and (c) there are usage conflicts–watch TV or watch YouTube or play a game?

The icing on this cake will be the education market in the developed world. Ever anxious to maintain leadership in new technologies, developed countries are starting to experiment with introducing technology earlier in the education cycle, all the way down to one-to-one computing starting around 4th grade. (In the US, Maine was the first state to do this. They went with iBooks. NH and MI are following.) Cost pressures and the blank slate that kids are (they don’t expect to use PowerPoint so why do they need it?) may push states and governments towards third-world-style PCs. Once kids in the developed world grow up with something different than the PC/Mac as we know it, everything will change. So, take the half billion people in India and China and add a few hundred million kids. Still don’t think this is a huge opportunity?

Who makes money? Well, anyone who’s in the media and advertising business will have a ton more eyeballs. On the software side, Adobe is a good bet, provided it sustains Flash momentum online and positions Flash Lite as the best presentation layer on mobile (remember, the Asian consumers will have phones first). Microsoft’s and Google’s success in software will depend on how they evolve their online apps. Then, whoever gets “blessed” as the chip provider for mass-market PCs in India and China may become the new Intel. I really do expect the governments there to play a hand in this… The same applies to chip-level integrators. Finally, there is a stunning services opportunity–think GeekSquad on steroids.

The weakest link in my argument is the implicit assumption about broadband deployment in Asia. I’m comfortable that it is not too outrageous because most of the new consumers will initially be in the cities and the cities are getting good broadband. Last but not least, 3G is big there.

Posted in Microsoft, SaaS, startups | 1 Comment

Was that your VC experience?

One of my entrepreneurs sent this cartoon. Sad but true–some VCs are unwilling to work around the weaknesses of their teams.

[cartoon: wsj-vc.jpg]

No teams are perfect but many teams can significantly increase their odds of success with the right combination of (a) focusing the business to play to their strengths and (b) selectively adding to the team, board of directors and board of advisors. Since VCs cannot (and shouldn’t) run companies, their value-add primarily has to do with how well they can partner with entrepreneurs to help achieve this.

Posted in startups, VC, Venture Capital | 3 Comments

Google & YouTube: a Sign of Weakness?

TechCrunch was there first and then WSJ substantiated the new price in the much-rumored GOOG-YouTube talks. So, the #1 online company captures the #1 online video player to cement its position. $1.6B is not a huge number for a strategic acquisition. No big deal, right?

Perhaps, but there is another interpretation. This is the first big acquisition that GOOG may do and it may signal a marked change in strategy. Yes, there was the $1B to get a stake in AOL, and dMarc Broadcasting has a $1B performance-based payout, but here we’re talking straight M&A.

A while back I was having lunch with a friend from Google corp dev. He outlined the then Google take on M&A the following way:

  • We don’t like to pay much for innovation because it’s likely that the smart people at Google are already working on something similar.
  • We don’t like to pay much for technology because we’ve found out that much of the tech that we acquire has to be rebuilt to operate at Google scale and fit with the rest of our operations.
  • We don’t like to pay much for scale (customers/eyeballs) because anything we launch gets pretty big pretty fast.

Well, in going after YouTube, Google will be paying for scale. It will also be the first time they would be going after a competitor that has trounced them in the market. According to the last comScore data that I checked a couple of weeks back, Google was towards the bottom of the top 10 video sites list with about 10x less streams than YouTube.

Could the acquisition be interpreted as a sign of weakness? Google wasn’t able to go it alone, even though it tried, and is now forced to buy their #1 competitor? If so, where does this end? For example, Facebook has >40x the uniques of Orkut. Buy Facebook? Is large-scale M&A going to play a big role in Google’s future growth?

An equally valid interpretation is that buying YouTube is actually a sign of strength–GOOG stepping out of Not Invented Here mode, acknowledging that they don’t have all the answers and they are not the best all the time (but have a lot of cash with which to fix that by buying whoever is #1 in a strategic area).

Posted in Digital Media, startups | 4 Comments

Ouch, that hurts

Jeff Nolan points out in Venture Chronicles that East Coast VCs just don’t get it when it comes to Web 2.0. Harsh words but then he asks “where are the Web 2.0 companies on the East Coast?” Is it case closed? To me (an East Coast VC who was at the same conference Jeff went to), there are a few issues here that need to be parceled out.

There has always been more consumer software activity in CA than anywhere else. It’s natural for the Web 2.0 buzz to be loudest in the Valley and SF.

Historically, East Coast startups have been less aggressive (and, if you look at aggregate numbers, less successful) at marketing than West Coast ones. I know of a number of companies out here doing very cool things with user-generated content of all types, RSS and Web services that for some reason or other have decided that waving the Web 2.0 banner is not the most important thing they should be doing. I spend the majority of my time in Boston yet I’ve seen several West Coast companies’ pitches that go along the lines of “Web 2.0 is huge. We do that. ‘Nuf said.” I have heard only one such pitch in Boston for an Enterprise 2.0 company. Nothing new under the sun. East Coast startups are too slow to ride the hype wave and their left coast counterparts are more than eager to do so.

I was at Web 1.0 companies (Allaire and later Macromedia) and helped lay the groundwork for a number of the Web 2.0 technologies, from XML to Web services to AJAX & RIAs. At a recent conference in SF, I was struck by (a) how old I felt compared to the Web 2.0 entrepreneurs (I’m 33) and (b) to what extent they saw themselves as doing “completely new stuff,” as one guy put it. True, there is lots of innovation but I also see tons of re-spins of old ideas with better UI and the benefits of some new standards. (Mike Arrington at TechCrunch had a slide on this in a presentation he gave recently in DC.) I also see some examples of brilliant branding with little net new innovation. (No, AJAX is not new. Lots of people were building AJAX-style apps back in 1998 but they never took off because cross-browser DHTML support sucked back then.) My point is not to gripe about the “young generation” but simply to point out that Web 2.0 (and SOA for that matter, but that’s the topic of a much longer discussion) is a mixture of reality and spin. I’ve seen startups with ratios from 9 : 1 to 1 : 9. I like the former and I’ve noticed that many of them care more about building great products/services which delight their customers than the label du jour that’s attached to them.
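For readers too young to remember what “cross-browser DHTML support sucked” meant in practice: a minimal sketch of the feature detection every pre-2005 AJAX-style app had to carry, since IE 5/6 only exposed the request object through ActiveX under various MSXML ProgIDs. (The helper name `createXHR` is illustrative; the ProgIDs are the real ones.)

```javascript
// Return a cross-browser XML HTTP request object, or null if none exists.
function createXHR() {
  // Mozilla, Safari, Opera (and, later, IE 7+) expose a native constructor.
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();
  }
  // IE 5/6 only expose it via ActiveX, under different ProgIDs depending
  // on which MSXML version is installed, so we probe newest-first.
  if (typeof ActiveXObject !== "undefined") {
    const progIds = ["Msxml2.XMLHTTP", "Microsoft.XMLHTTP"];
    for (const id of progIds) {
      try {
        return new ActiveXObject(id);
      } catch (e) {
        // This ProgID isn't registered; try the next one.
      }
    }
  }
  return null; // no XHR support at all
}
```

And that was just getting the object; wiring up `onreadystatechange` and working around per-browser DOM quirks was more of the same, which is exactly the boilerplate that kept the 1998-era versions of these apps from taking off.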

Personally, I believe in the investment promise of Web 2.0, Enterprise 2.0, E-commerce 2.0 and Mobile 2.0 because (a) there are fresh approaches there to building, using, marketing & selling great software and services and (b) kidding aside, the industry would just love to have Bubble 2.0. Polaris already has investments in three of the four categories (two of which are mine) and we are always looking for great new entrepreneurs to partner with. (Automattic/WordPress and Allurent are the ones you can find lots of info on. 8th Ring is still in stealth.) Which brings me to my final point: what startups are there to invest in out East and what East Coast investors invest in are not one and the same. I just wish Jeff had stopped by to chat while at the conference.

Update:

  • Barry Briggs introduced me to Jeff’s post. Thanks, Barry.
  • Jeff Nolan responds that I missed the point of his post. No, I didn’t. His post has a few great points about Enterprise 1.0 vs. Enterprise 2.0. (More on this here and here.) I’m only responding to his generalizations with some generalizations of my own. 😉
Posted in Automattic, SaaS, startups, VC, Venture Capital, Web 2.0, WordPress | 3 Comments