Barbara and I went down to the Tattered Cover on Sunday for some reading time. It was a beautiful day, and I like the TC for a couple of reasons:
Big bookstores ("Tattered Cover", "Powell's" in Portland, the "Harvard Coop" in Cambridge) have a charm all their own.
Chain bookstores (the modern Barnes and Noble, the almost-extinct Borders) all seem to carry the same books at every location. What's the chance of finding something new and different?
Anyway, the TC didn't disappoint, and I came away with three books, each with a neat angle on modern computing.
The Annotated Turing
First up is The Annotated Turing, by Charles Petzold. This is one of the most remarkable books I've ever read, in that it takes one of the crowning intellectual efforts of the 20th century, Alan Turing's "On Computable Numbers, with an Application to the Entscheidungsproblem", and illuminates it with mathematical, historical, and cultural annotations.
You have to admire any book that tries, in its first two chapters, to explain number theory to its readers. Talk about a "thankless task" but Petzold does a nice job of it.
By the time you get to page 33's "and at some point the Heisenberg Uncertainty Principle kicks in, and then we really can't be sure of anything any more", you're hooked. Awesome and fun tell-all of the founding document of computer science.
Viral Spiral
Next up is Viral Spiral, by David Bollier. Tim Berners-Lee originally created what would become the World Wide Web as a sharing platform for academic researchers. Some sharing -- by 2003 the Internet was being used by 600 million people worldwide. Viral Spiral describes the Creative Commons and open-source worlds, focused around what Bollier calls the "Great Value Shift" in how valuable things are created for commerce and culture.
In commercial software development this cultural shift is still taking place; the ability to scale at almost zero marginal cost is probably the core driver behind what's been called "Web 2.0." Hard to do better than R. Buckminster Fuller, who said:
You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.
Developing Facebook Platform Applications with Rails
Facebook matters, and even more than Google and Amazon, Facebook memes dominate 21st century software development thinking right now:
400,000 registered developers?
24,000 applications?
140 new applications per day?
Numbers like that talk, and Michael Mangino's book DFPAwR goes under the hood of this technology phenomenon and describes the building of applications on Facebook's platform.
Facebook is something remarkable; successful, global Level-3 computing platforms generally come along only every couple of decades (the IBM 360, Microsoft Windows, and the Internet are probably the only three in the last 50 years), so for Facebook to have made the progress they've made since 2007 is mindbending.
My interest here is pragmatic -- how could it be possible to build such a widespread platform in so little time? I'm also reading with the hope of figuring out how much Level 2 / Level 3 platform-ness we can build into Magellan.
Platform birth is rare, but it's compelling when it happens. I'm open to ideas, and as we come up with things I'll keep you posted...
Since a new year beckons, and this is a blog, it follows that predictions for the new year must follow. Hey, rules are rules. And so, with no further ado, here are my predictions for 2009:
1. Browser time of transition Part 1 - IE6 finally heads to the boneyard
A whole cottage industry has arisen to beat up on Internet Explorer 6, and IE6 support for standards is questionable (see the Google Browser Security Handbook for all the brutal facts), but with a 10-year history behind it IE6 is one of the most successful software rollouts of all time.
The real deal here is that IE has had a majority of browser usage for the past 10 years, and it has come to define standard web behavior (iehacks and all). A clydesdale like that won't be replaced by some better kind of horse -- for any kind of transition you need a motorcar. "Web 1.0" and "Web 2.0" didn't require a motorcar, but "Web 3.0" will. Read on for a bit of what "Web 3.0" motorcar-ness looks like...
2. Browser time of transition Part 2 - Firefox++ invents a new web experience for the masses
Firefox has always been a nice browser, awesome for development and great for home use, but not much doing on the PC at work. The world hasn't needed another browser, or even a better browser. To get the world's attention you need nothing less than a whole different kind of application.
ForecastFox was a nice start -- a cool weather add-in that made FFox a weather machine. Firebug was even better -- it turns your FFox into a developers' console. Flock goes all the way -- "A one-stop app for staying connected with your online world." All good, but still only the stuff a 15% share is made of.
Plug-in functionality has long been one of Firefox's most compelling features. In 2009, this capability starts turning Firefox into a whole new application.
3. Browser time of transition Part 3 - Google Chrome heralds a new web experience for the enterprise
Google made some noise in 2008 with the release of a new browser, Google Chrome. The most remarkable thing about Chrome isn't that it was released with a comic book for documentation (though that was cool -- very graphic novel) -- what's remarkable about Chrome is that it solves the fundamental have-enough-tabs-and-you're-begging-for-a-crash problem.
With Chrome, each tab is a separate process, so that if (when) any tab process crashes, it only brings down that tab -- not the whole browser. This is a big deal as browser use becomes more mission-critical, and more tabbed. Think of the old '80s movie War Games, but with WWIII launch sequences simulated on a buggy browser and you'll understand why I'm grateful and why I expect big things from Chrome-like advances in 2009.
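The process-per-tab idea is easy to sketch. Here's a toy illustration in Python -- the tab names and "renderer" snippets are made up, and a real browser does vastly more -- but it shows the core trick: each tab runs as a separate OS process, so one crash can't bring down the rest.

```python
import subprocess
import sys

def open_tab(code):
    """Render a "tab" in its own OS process, Chrome-style.
    A crash in the child cannot take down this parent process."""
    proc = subprocess.run([sys.executable, "-c", code])
    return "ok" if proc.returncode == 0 else "crashed"

def run_browser(tabs):
    # tabs maps a tab name to the (made-up) code that "renders" it.
    return {name: open_tab(code) for name, code in tabs.items()}

# One well-behaved tab and one whose renderer blows up:
results = run_browser({
    "mail": "print('rendered mail')",
    "flaky": "raise RuntimeError('renderer crashed')",
})
```

The "flaky" tab dies with a nonzero exit code, but the parent process (and the "mail" tab) carry on -- which is exactly the isolation story Chrome tells.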
4. Cloud computing - the "great unknown" gets a price tag
The real, 2009-immediate value of cloud computing is that it provides the first good estimates for the cost of running real computers in real data centers. Hosting costs have long been the "great unknown" in SMB-enterprise computing budgeting, and the cloud providers give the first decent formulas for guesstimating what "starting the next eBay" will cost.
What should your next "petstore.com" cost per year? We'll start with one wild swag -- a small site takes 3 servers to run reliably, a medium site takes 6, and a large site 12 servers. Typical cloud costs (Amazon, in this case) run $0.10/hour for a small server, $0.40/hour for a medium server, and $0.80/hour for a large one.
Swagging again and adding some bandwidth, a small site then costs about $240/month to run, a medium site about $500/month, and even a large site only about $2000/month -- hosted. Keep in mind that these are hardware costs only; software costs must be added on top (or not, in the case of an open-source stack).
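The arithmetic behind those swags fits in a few lines of Python. The hourly rates come from the figures above; the flat $25/month bandwidth number is just one more swag, not an Amazon quote.

```python
# Back-of-envelope hosting cost from hourly cloud rates.
HOURS_PER_MONTH = 24 * 30  # ~720 always-on hours

def monthly_cost(servers, rate_per_hour, bandwidth=25.0):
    """Hardware cost per month for N always-on instances,
    plus a flat (swagged) bandwidth charge."""
    return servers * rate_per_hour * HOURS_PER_MONTH + bandwidth

# A small site: 3 small servers at $0.10/hour...
small_site = monthly_cost(3, 0.10)  # about $241/month
```

Plug in your own server counts and rates and you get a first-cut hosting budget in seconds -- which is exactly the planning value the cloud vendors are providing.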
Anyway, the cloud vendors are doing a great service by setting a "standard price" for computing, and this will really help planning in 2009.
5. Web 1.0 revisited - the rise of JavaScript and HTML5
I can remember, back in days of old, competing in -- and winning, better living through technology :-) -- an advertising contest at Stanford by writing and filming a commercial, and showing it on a pre-release version of QuickTime, courtesy of relationships at Apple.
Those were the days, but the problem (then as now) is that my video really didn't live anywhere -- you could show it on a screen, but it didn't integrate with any applications very well. And so it has been in all the time since: neat graphic technologies like Flash, Silverlight, and embedded video are resident in web pages, but don't really LIVE in them.
YouTube gets part of their $1.65 billion acquisition fee for driving an integration technology (embedding) at just the time network video reached its tipping point, but the picture is still grim. Finally, all that is changing, from the coincidence of three remarkable technical evolutions:
1. Cheap, ubiquitous graphics hardware. My MacBook Pro has 2 cores of Pentium, and 16 cores of NVIDIA on the GPU. The next generation of the Mac OS will be able to work with all those cores. Om nom nom.
"We just did some benchmark runs today," Bak says a couple of weeks before the launch. Indeed, V8 processes JavaScript 10 times faster than Firefox or Safari. And how does it compare in those same benchmarks to the market-share leader, Microsoft's IE 7? Fifty-six times faster. "
3. The rise of HTML 5. Sure, it's been coming in pieces, but adding native, plugin-free video and audio presentation to HTML has to put a cold chill into Adobe (Flash), Microsoft (Silverlight), and Apple (QuickTime).
Google keeps smiling all the way to the bank.
6. REST - Godzilla of APIs
Prediction: APIs rule, and REST will increasingly rule APIs in 2009.
Back in 2005, Google Maps ignited the mash-up concept in Web 1.0, and changed the face of web apps we see today.
Google is wise and omnipresent, but they aren't omniscient. Back in 2002, when the current round of API design decisions was made, Google had the choice of creating an API in the well-known media darling SOAP, or the little-known academic paper-protocol REST. They chose SOAP.
SOAP (originally the Simple Object Access Protocol) was a neat idea—to replace the bulk and complexity of integration schemes such as CORBA with a simple combination of XML and HTTP. Great idea, but to provide fully-functional enterprise integration SOAP had to expand, eventually absorbing much of the complexity of the protocols it was meant to replace.
Enter REST. Representational State Transfer is the brainchild and 2000 PhD dissertation of Roy Fielding. Fielding observed that one of the great advantages of the HTTP specification (to which he was also a contributor) was that the client-server, stateless, cacheable, and layered design made access and architecture for the specification straightforward. REST extends these concepts to application-application communication. Very broadly, REST maps the basic CRUD operations (create, retrieve, update, and delete) to familiar HTTP operations (POST, GET, PUT, and DELETE). As an additional conceptual benefit, these operations also map analogously to the database operations INSERT, SELECT, UPDATE, and DELETE.
Boiled down, the idea is to have applications interact through conceptually simple HTTP calls for exchange of resources—remote resources, as opposed to remote procedure calls. API creation is then a breeze, because the access methods are already broadly familiar, and the receiving applications need only be ready to respond to requests based on the request information in the HTTP header - say for HTML (web pages), JavaScript (Ajax requests) or XML (application requests).
Basically, then, you collect up all the publishable "nouns" you have, and most development platforms (.NET, Ruby, Django) can "RESTify" the collection and do all the rest -- provide authentication, standard URLs to access the collection, and XML and JSON support to respond to HTTP requests made by a machine rather than an (HTML-loving) person. BINGO! All the access you could ask for!
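That CRUD-to-verb mapping is simple enough to sketch directly. Here's a toy in-memory "pets" resource in Python -- the resource name and the hand-rolled dispatcher are purely illustrative (any real framework does this routing for you), but it shows how the four HTTP verbs line up with the four CRUD/database operations.

```python
# A toy in-memory "pets" collection, dispatched REST-style.
pets = {}          # id -> record
next_id = 1

def handle(method, path, body=None):
    """Dispatch an HTTP-style request against the /pets collection."""
    global next_id
    parts = path.strip("/").split("/")
    if parts[0] != "pets":
        return 404, None
    pet_id = int(parts[1]) if len(parts) > 1 else None

    if method == "POST" and pet_id is None:      # create   -> INSERT
        pets[next_id] = body
        next_id += 1
        return 201, next_id - 1
    if method == "GET":                          # retrieve -> SELECT
        if pet_id is None:
            return 200, list(pets.values())
        return (200, pets[pet_id]) if pet_id in pets else (404, None)
    if method == "PUT" and pet_id in pets:       # update   -> UPDATE
        pets[pet_id] = body
        return 200, body
    if method == "DELETE" and pet_id in pets:    # delete   -> DELETE
        del pets[pet_id]
        return 204, None
    return 405, None
```

A client then POSTs to `/pets` to create, GETs `/pets/1` to read, PUTs to update, and DELETEs to remove -- no new vocabulary to learn, which is the whole point.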
The "operating system" keeps moving to the web, and the desktop continues its transition to a large, relatively immobile handset
General-purpose computing moves to the cloud - Start a doc of any kind on any device, save to the cloud, access later from any other device attached to the cloud
Lots of good bits this week. Since we’ve just entered a new year, there are lots of “2008 Year in Review” articles about, and a similar number of “2009 Predictions” articles too.
Here’s a bit of the best of the rest…
Top Social Media Sites of 2008. It’s hard to know just what to make of the ComScore figures, but this graph is fascinating:
** Has Facebook really overtaken MySpace that completely?
** Is MySpace really flat at 120K uniques?
** Is Windows Live Space really down about 20% in the past year?
Web 1.0 — I can push a page to you, if you can find me
Web 2.0 — You can find me (thanks Google and Yahoo), and I can create applications for you and your friends
Web 3.0 — I have content and a community that are interesting, and you can create the applications you like by yourself, mostly without programming.
The clearest immediate needs for Web 3.0 are:
You need to be able to clearly establish identity — who are you?
You want to establish location — where are you?
You want to establish community — what is your community?
These things really aren't complete by any means, and Web 3.0 has now progressed far enough that the communities themselves are creating the applications. That leads to my favorite bit this week:
The Ubiquity Firefox Plug-in : It Doesn’t Have to be This Way. Ubiquity was released back in August, but with Firefox 3.1 now in its second beta (and FF3 pretty firmly established) there’s ample reason to check this neat plug-in out. By all means watch the Ubiquity video, which gives the use-case of setting a meeting, where you can find the meeting place, map it, and create and send the email with the map embedded, all from within Firefox.
This is really cool, because Ubiquity is easy to use and contains a large number of rich commands that let users and communities create their own mashups. Now THAT is “Web 3.0”.
Next up: predictions for 2009… (really ... next time)
Happy New Year
Busy holiday and weekend, but there were still some techie stocking stuffers this week.
The first is an interesting blog post from Coding Horror: Hardware is Cheap, Programmers are Expensive. The article has some interesting data and graphs, and makes two loosely-related points:
The payback on great hardware for development teams is quicker than you might think
“the fastest hardware in the world can’t save you from bad code”
Rally Software gets a nice mention, and the old waterfall model has taken so many lumps that it's hard to believe there is still debate about agile vs. waterfall.
The third bit is on web analytics, specifically a neat approach that ties Google Analytics to a Google docs spreadsheet as an approach to campaign tracking: No more shooting in the dark- track your marketing campaigns. I like this posting because it takes the wonderful (but kinda context-free) Google Analytics, and turns a cool tool into an interesting solution approach. You can find examples that take the idea further here: No Google Analytics API? No Problem! and here: Homebrew Google Analytics API.
Social Search Categories
This week’s final bit takes on Google, Facebook, and “social search” in The Future Of Social Search. Social (or any kind of search with context – social, geocoded, personal, etc.) search is an interesting idea — Google gives nice general search by comparing your search string with the whole web, but there are a lot of specific cases where it’s really true that less is more.
I’ve been reading the book Planet Google by Randall Stross.
The book is a pretty standard business tome on an interesting technology company, but it has one part particularly worth recommending: Chapter 1: Open and Closed, which describes the “open” web world that Google endorses, and the threat that a company (in this case Facebook) with a large enough Walled Garden presents.
I wouldn’t necessarily have thought that Facebook was even in competition with Google (much less a real threat), but the chapter describes the growing competition and gives a framework for understanding OpenSocial.
I don’t know yet if the whole book is worth a read, but the first chapter sure is — if nothing else, as a means of describing some of the competition and tactics in the post-“2.0” web.