Thought for the day

The tl;dr version: Arguably all interesting advances in computer science and software engineering occur when a resource that was previously scarce or expensive becomes cheap and plentiful.
The longer version:
This particular thought was provoked by a series of exchanges on blogs and on Twitter yesterday. It started with a piece at InformationWeek in which Joe Emison bemoaned the fact that Netflix was holding back progress in cloud computing. The Clouderati jumped all over this, and Adrian put together a detailed response which he also posted to his blog. By the time I got around to responding, IW had closed comments on the original piece, so I followed up on Adrian’s blog.
Joe’s criticism was based on two points:

Netflix’s cloud architecture[…] is fundamentally (a) so intertwined with AWS as to be essentially inseparable, and (b) significantly behind the best *general* open options for configuration management and orchestration.

Point (a) is pretty silly: Netflix is a business, not a charity. Of course they’re going to work with the best of breed. But it was Joe’s second point that really bugged me. I responded (and here’s where the “Thought for the day” comes in):

Amazon and Netflix are dramatically ahead of the curve, not behind it. The configuration management pattern you seem to prefer – just-in-time customization using Chef or Puppet – was pretty old school when Sun acquired CenterRun and built out N1 and Grid Engine. It’s incredibly inefficient compared with early-bound EBS-backed AMIs.
Arguably all interesting advances in computer science and software engineering occur when a resource that was previously scarce or expensive becomes cheap and plentiful. We’ve seen it with graphical user interfaces, interpreted languages, distributed storage, and SOA. Traditional late-bound configuration management treats machine images and VM instances as expensive; AWS and Netflix invite you to imagine the possibilities if they’re effectively free. Welcome to the real Cloud 2.0…
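To make the difference concrete, here’s a rough sketch of the two patterns using the boto EC2 API. The AMI IDs, instance type, and bootstrap script are placeholders for illustration, not anything Amazon or Netflix actually ships:

```python
# Rough sketch: late-bound vs. early-bound provisioning with boto.
# The AMI IDs and the bootstrap script are placeholders.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Late-bound: launch a generic base image and customize it at boot time.
# Every new instance repeats the package installs and the Chef run, so
# scale-out is slow and depends on package repos and the Chef server
# being reachable at exactly the wrong moment.
bootstrap = """#!/bin/bash
apt-get update && apt-get install -y chef
chef-client --once   # hand-waving: a real Chef bootstrap needs keys, config, etc.
"""
conn.run_instances("ami-11111111", instance_type="m1.large",
                   user_data=bootstrap)

# Early-bound: bake the application into an EBS-backed AMI once, then
# treat fully configured instances as cheap and plentiful. Twenty more
# servers is a single API call, with nothing left to install at boot.
conn.run_instances("ami-22222222", min_count=20, max_count=20,
                   instance_type="m1.large")
```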

In a subsequent Twitter exchange, I said:

@adrianco We used to talk about “specific excess MIPS” driving change. Now it’s “specific excess VMs”

… to which Adrian replied:

@geoffarnold with SSD excess IOPS can be used in interesting ways

Must-read piece on Massively Scalable Data Center networking

Like Ivan Pepelnjak, I think the must-read piece of the moment is Brad Hedlund’s Emergence of the Massively Scalable Data Center. Yes, it’s more about the questions than the answers, but even that’s a step forward. And as a bonus, it led me to browse Ivan’s blog, where I came across this excellent piece on how we got into the present L2 v. L3 mess (including the impact of what Ivan calls “the elephant in the data center”).

Quote of the day

I believe the true future of cloud computing for developers is to not think about servers at all. It is now time to focus on the Application and new levels of abstraction that allow folks to use the computing resources in easier and easier ways.

Ezra Zygmuntowicz, as part of his blog post on leaving Engine Yard, the Ruby-on-Rails PaaS company he founded.

A collection of thought-provoking posts related to cloud computing

From the last few days…

Spelunking CPUs

As the masthead on my blog says, I’m a Mac lover. I’ve used pretty much only Macs since those days in the 1990s when I was working on hush-hush corporate collaboration schemes between Sun and Apple. Of course at both Amazon.com and Huawei I’ve been required to use Windows laptops for corporate stuff – locked-down beasts, centrally managed, with Microsoft Outlook and all of the trappings of the Redmond monoculture. But I always had Macs for my personal use.
Then I bought my little netbook, an Asus EeePC. OK, that doesn’t really count – it’s like that smartphone that I used to have, which ran Windows Mobile. And pretty soon I replaced Windows XP on the netbook with Ubuntu Netbook Remix, so cosmic balance was restored.
But this week, I decided that I needed a machine for hacking. Something to play with Xen and Eucalyptus and Open Nebula and all of the cool Cloud stuff that’s coming down. Something to write a little Groovy on. And not a big developer workstation, but something I could take along with me on my travels.
Wouldn’t the netbook do? Not really. I had this idea that I could set up a dual-boot configuration in which I could either run Ubuntu to do my coding, or boot into Xen and load several VMs to let me simulate a network configuration. Perhaps I could combine them: do my coding in a guest VM under Xen, build a new OVF package on the fly, and launch it in a new VM. In any case, I’d really need a multicore CPU with a decent amount of RAM, and enough disk to manage a number of guest OS images. And ideally the CPU should support virtualization, just for efficiency. But I didn’t want to spend a lot of money: blowing over $1200 for a MacBook was not an option.
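For concreteness, here’s roughly what one of those guest definitions might look like: a minimal sketch of a Xen xm config (which is really just Python-style assignments). The kernel and initrd paths, disk image, and bridge name are placeholders for whatever the eventual Ubuntu/Xen install provides:

```python
# Minimal sketch of a Xen guest (domU) config for one node in a
# simulated network; all paths and names below are placeholders.
name    = "node1"
memory  = 512                                  # MB of RAM for this guest
vcpus   = 1
kernel  = "/boot/vmlinuz-2.6-xen"              # paravirt kernel (placeholder)
ramdisk = "/boot/initrd-2.6-xen.img"           # matching initrd (placeholder)
disk    = ["file:/srv/xen/node1.img,xvda,w"]   # file-backed root disk
vif     = ["bridge=xenbr0"]                    # attach to a software bridge
on_reboot = "restart"
on_crash  = "destroy"
```

Clone that per node with a different name and disk image, point them all at the same bridge, and `xm create node1.cfg` (and friends) gives you a little virtual network to play with.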

So I spent an evening at Fry’s and Best Buy, looking at my choices. There were plenty of really cool, and amazingly cheap, laptops. But the frustrating thing was trying to find one with CPU virtualization. There are so many different Intel and AMD CPUs out there, and even though there are only a few brand names – Core Solo, Core Duo, Athlon, and so on – the different model numbers hide a vast divergence in capabilities. Fortunately I had my iPhone handy, and I quickly got into the rhythm of checking the “System” Control Panel info on each unit and then searching the web for chip features. I found Ed Bott’s useful table, but that came out in May, and by now there were several new chips. I started to see a pattern – most cheap Intel chips did not have virtualization, while all AMD CPUs did. That held until I bumped into the AMD Athlon Neo, which doesn’t have virtualization. Sh!t…
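In hindsight, there’s a much quicker test than squinting at the System control panel, provided you can boot the box into Linux (say, from a live USB stick): Intel VT-x shows up as the vmx flag in /proc/cpuinfo, and AMD-V as svm. A quick sketch:

```python
#!/usr/bin/env python
# Check /proc/cpuinfo for the hardware virtualization flags:
# "vmx" means Intel VT-x, "svm" means AMD-V.

def hw_virt(cpuinfo="/proc/cpuinfo"):
    with open(cpuinfo) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
    return None

if __name__ == "__main__":
    print(hw_virt() or "no hardware virtualization flag found")
```

(One caveat: as I understand it, a BIOS can still ship with the feature switched off even when the flag is there, so it’s worth checking the setup screen too.)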
The other thing that I noticed was that over the last 18 months or so the AMD:Intel ratio has shifted decisively in favour of Intel. There were relatively few AMD-powered laptops around, and even fewer in the thin-and-light category. Market forces, or market distortion? Hmmm.
Eventually I found what I was looking for at Best Buy: an HP dv4-2045dx with an AMD Turion II (dual-core M500 at 2.2GHz), a 14.1″ screen, 4GB RAM (expandable to 8GB), and a 320GB HD. Yes, the battery life isn’t all that great, and it’s a bit too thick, and the swoopy-dots-on-white design makes it look as if it’s been keyed in the parking lot, but otherwise it’s perfect. And at $575, it was almost exactly half the price of a 13″ Apple MacBook Pro.
Yes, it’s a Windows machine. Or it was – I just loaded Ubuntu 9.10 onto it… Next step, Xen.
Did I really need to go through all of that? And what about non-geeks? After all, Windows 7 requires hardware virtualization in order to run Windows XP mode. There are plenty of stories surfacing of frustrated PC customers who find that they can’t run some favourite application on their brand new Windows 7 machine. If it was so much work for me, how could the average buyer be expected to get it right? Microsoft really screwed up on this one – and Intel too, I think.

Apples in the cloud: an epiphany

I’m processing HD video from this weekend’s visits to my grandchildren in Lynn. All of the projects are about the same size. I copy the raw clips from my MacBook Air to Merry’s new 13″ MacBook Pro, and fire up iMovie on each machine. On my machine, iMovie says that it will take 59 minutes (which turns out to be 90+). On hers: 23. My first reaction is the typical geek’s knee-jerk response: it’s time to upgrade my laptop to something more powerful. My second reaction: that’s absurd. Most of the time, my MacBook Air is quite fast enough. What I really want is a Mac Pro MB535LL/A in the cloud, available on demand…. (Well, that and easier batch support in iMovie.)