Jeff Bezos on Amazon Web Services (Mechanical Turk, S3 and EC2)

I’m at the MIT Emerging Technologies Conference today.

Jeff Bezos was the first keynote speaker. His talk focused on Amazon Web Services (AWS), in particular, Mechanical Turk, S3 and EC2. Jeff didn’t talk about the Amazon Simple Queue Service (SQS), which is my favorite of the lot. After his talk he told me he omitted SQS due to time constraints.

Ultimately, the goal of Amazon Web Services is laudable–to lower the cost of experimentation and shorten the time from idea to final product. Jeff talked about eliminating the need for “undifferentiated heavy lifting”, a.k.a., “muck”.

  • Mechanical Turk provides infrastructure to aggregate human intelligence by creating a market for humans to answer discrete questions that software has a hard time with, e.g., “is there a human in this picture?”. CastingWords uses Mechanical Turk for podcast transcription. The main issue with the service as it stands is that it doesn’t guarantee the availability/quality of answers. I expect we’ll see an interesting set of services emerge that leverage Mechanical Turk’s unique cost/accuracy/reliability/throughput characteristics. For example, an interesting area to look at would be batch speech-to-text processing. Another would be machine learning, where distributed human effort can play a role in adaptation, e.g., as the fitness function in genetic algorithms/genetic programming (GA/GP).
  • Amazon Simple Storage Service (S3) provides handle-based storage, similar to content-addressed storage (CAS), which EMC Centera pioneered. It’s great bit storage in the cloud, and I expect Amazon can take it much further. I have an investment in the digital archiving space, Archivas, which takes a more comprehensive approach–managing both content and its metadata and adding a number of extended services such as indexing & search. An Archivas-type solution in the cloud would be able to off-load much more “muck” than S3 as it stands. It’s still cool, of course, and getting increased usage. For example, SmugMug is using S3 for professional storage and sharing.
  • Amazon Elastic Compute Cloud (EC2) allows developers to create a boot image and store it on EC2. When you need it, you can start a machine with this image. You control your machine instances, and you can scale up and down in minutes. Amazon charges 10c/CPU/hour (about $70/CPU/month) and 20c/GB of transfer. The main advantage is flexibility–you can run one CPU for a month or 700 CPUs for an hour. That’s great for compute-intensive tasks and for scalability testing. EC2 has been in beta for just a few weeks but it’s already getting strong buzz. I wish it had been around when we were doing scalability testing of our application servers at Allaire/Macromedia.
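To make the Mechanical Turk quality issue above concrete: a common workaround for the lack of per-answer guarantees is to post the same question to several workers and vote on the responses. Here’s a minimal sketch of that idea in Python–the `aggregate_answers` helper and its parameters are my own illustration, not part of the Mechanical Turk API:

```python
from collections import Counter

def aggregate_answers(answers, min_assignments=3):
    """Majority-vote aggregation over redundant human answers.

    Posting the same question to several workers and voting is one
    way an application can compensate for the service not
    guaranteeing answer quality. Returns the winning answer and the
    fraction of workers who agreed with it.
    """
    if len(answers) < min_assignments:
        raise ValueError("not enough assignments to vote")
    # Counter.most_common(1) yields the (answer, votes) pair with
    # the highest vote count.
    winner, votes = Counter(answers).most_common(1)[0]
    confidence = votes / len(answers)
    return winner, confidence
```

For the “is there a human in this picture?” example, three workers answering `["yes", "yes", "no"]` would yield `"yes"` with 2/3 agreement; a real system would tune the redundancy against the per-answer cost.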
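The handle-based storage comparison in the S3 bullet is easy to show in code. Below is a toy, in-memory Python sketch of the Centera-style CAS variant, where the handle is derived from the content itself–note that the real S3 API uses caller-chosen bucket/key names rather than content hashes:

```python
import hashlib

class HandleStore:
    """Toy in-memory analogue of content-addressed storage (CAS).

    In CAS the handle is a hash of the bytes themselves, so
    identical content always maps to the same handle and is stored
    only once. This is an illustration of the concept, not S3's
    actual interface.
    """

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        # The handle is the SHA-256 digest of the content.
        handle = hashlib.sha256(data).hexdigest()
        self._blobs[handle] = data
        return handle

    def get(self, handle: str) -> bytes:
        return self._blobs[handle]
```

A side effect of content-derived handles is automatic deduplication: writing the same bytes twice returns the same handle and consumes storage once, which matters for archiving workloads like the ones Archivas targets.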
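The arithmetic behind the EC2 flexibility claim is worth spelling out: at $0.10 per CPU-hour, 700 CPUs for one hour cost $70–about the same as one CPU running around the clock for a month. A throwaway helper makes the point (the function name and signature are mine, using the beta rates quoted above):

```python
def ec2_cost(cpus, hours, cpu_hour_rate=0.10,
             gb_transferred=0.0, gb_rate=0.20):
    """Back-of-the-envelope EC2 cost at the 2006 beta prices:
    $0.10 per CPU-hour plus $0.20 per GB transferred."""
    return cpus * hours * cpu_hour_rate + gb_transferred * gb_rate
```

So `ec2_cost(700, 1)` and `ec2_cost(1, 24 * 30)` both come out near $70, which is exactly why the model suits bursty compute-intensive work and scalability testing.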

AWS can definitely remove some muck.

For a long time I’ve talked about the trouble with non-consumer software: innovative core IP must be surrounded with 70-90% undifferentiated code that deals with installation, updates, management, scalability, internationalization, standards support, etc. The net result is that the bar to entry in the software space has gone way up. The capital requirements to ship a quality product have also gone up, and because undifferentiated IP doesn’t bring great exit multiples, returns in these sectors have gone down on average. It’s harder now than ever to build a really big enterprise software company. It requires careful planning about where on the value creation curve one should get off and seek an exit.

Consumer-facing Web applications don’t suffer from many of these maladies but even they have to face the growth pains associated with success. Note Friendster’s struggles with scalability and MySpace’s multi-year re-architecture. Outsourcing some of this work to large players such as Amazon makes sense. For really simple applications, one can imagine an AJAX or Flash frontend that relies almost exclusively on S3 and EC2 as the backend.

Jeff Bezos and I had lunch at PC Forum a couple of years ago where we talked about the future of Amazon Web Services. At the time he said that one of the key strategic issues he was focused on was the interplay between Amazon The Platform and Amazon The Brand. With AWS focused primarily on developers and with Amazon affiliates growing stronger than ever, it seems Jeff will be able to have his cake and eat it too.

About Simeon Simeonov

I'm an entrepreneur, hacker, angel investor and reformed VC. I am currently Founder & CTO of Swoop, a search advertising platform. Through FastIgnite I invest in and work with a few great startups to get more done with less. Learn more, follow @simeons on Twitter and connect with me on LinkedIn.
This entry was posted in amazon web services, Digital Media, SaaS, startups, VC, Venture Capital, Web 2.0.

5 Responses to Jeff Bezos on Amazon Web Services (Mechanical Turk, S3 and EC2)

  1. Pingback: Amazon Web Services & Standards? « HighContrast

  2. Pingback: Cliff Reeves

  3. Pingback: » Three Trends Influencing Enterprise 2.0 Naked Open Source

  4. PuReWebDev says:

    Thanks for posting about Amazon. I do some development myself with their e-commerce api and even started messing around with probably the first ever Amazon Associates Video Podcast http://www.youtube.com/user/PuReWebDev

    thanks,
    PuReWebDev

  5. Pingback: Amazon Web Services fera plus d’1 milliard de dollars en 2012 : Mistra Blog
