Quis Custodiet Ipsos Custodes?

El Reg just reported a major cross-platform flaw in 30 of Symantec’s security products, including Norton AntiVirus 2004, corporate anti-virus apps and Brightmail spam filters. Of course the root cause is a system architecture so broken that it requires antivirus software, which in turn is so tightly integrated that it becomes a potential source of compromise itself.

I’ve always thought that I understood the history – or at least the mythology – of how this came about. Cutler and crew knew (from their VMS days) how to make NT secure, but chip support, backward compatibility and performance “optimizations” did them in. They could have used Win31/DOS VMs to cope with the legacy crud, but it wouldn’t have been fast enough. We’re all living with the results today (even if we don’t run Windows).

I wonder how close this mythology is to reality….

On Apple, BMW, and Minis

Like many Mac users, I’ve dismissed talk of Apple’s minuscule share of the personal computer market by (a) pointing out that many of those PCs are just glorified 3270s/VT100s/Wang word processors/cash registers, and (b) invoking the “BMW argument”: what market share does BMW have – and does that stop them from being a really important, cool, desirable brand? So now Apple goes and releases a couple of down-market products, and various people are asking, understandably, “is Apple blowing its BMW model?”. Frank Steele has a nice response: “Perhaps BMW could create (or purchase) a second brand that sold cars that were not quite so expensive. Maybe comparable in price to other cars, but maybe a little smaller, and fun. […] But what could BMW possibly call such a company?”

(Via Oren.)

More Java cores than you know what to do with

After publishing a skeptical and rather petulant piece about Azul last October, El Reg decided to give Azul’s CMO, Shahin Khan, his own soapbox this week. He certainly waxed lyrical: “If you could count CPUs the same way that you count memory, some problems would simply become uninteresting and others would transform in a qualitative way. And completely new possibilities would emerge. […] No need to plan capacity for each individual application. Let all of your users share a huge compute pool and plan capacity across many applications.”

Well, maybe. Remember that Azul is planning to ship up to 1,200 cores in a single rack, but these cores will be specialized Java™ engines. Now I’d love to see Java take over the world and remove the need for any other kind of operating environment, but for the next few years, while we’re waiting for this brave new world, systems like Azul’s are going to have to coexist with mundane Solaris and Linux boxes. In other words, it’s a co-processor, an “applications accelerator”. And ever since the days of “intelligent Ethernet cards” (anyone remember the 3C505?) I’ve observed that such co-processors are doomed to be overtaken by general-purpose processors. The only obvious exception is in the area of graphics. Not only are the specialized processors not that much faster than their general-purpose brethren; the cost and complexity of the software needed to manage the co-processor usually eats up all of the savings. In the case of the 3C505, I remember that the host driver to manage the on-board TCP/IP stack was roughly as complex as a TCP/IP stack!

Don’t get me wrong – I think that multiple cores are absolutely the way to go. Various companies – Sun, IBM, even Intel – are realizing that the best way forward is to simplify their pipelines to reduce the size and complexity of their cores so that they can stuff more cores on a chip. Designing around Java byte-codes rather than RISC ops doesn’t save all that much.

Will Azul prove me wrong? I’m not holding my breath….

A blast from the past: CaveBear (a.k.a. Karl Auerbach)

Back in the late 80s and early 90s, when I was working on NFS, Windows Sockets, and other TCP/IP related stuff, I would often run into Karl Auerbach, network tools wizard and latterly ICANN member-at-large. Many of our encounters took place in the NOC during that strange, timeless period the night before the opening of each Networld+Interop show. The first Interop took place in San Jose, but as it grew, and merged with Networld, it moved to the Moscone Center in San Francisco and eventually headed off into the desert at Las Vegas. Over time it went the way of all trade shows, and puffed itself up into a content-free carnival, whereupon I stopped attending and lost touch with Karl.

Today a serendipitous blog chain led me to the following gem, reproduced in full:

CaveBear Blog: Sartre meets ICANN
I notice that ICANN issued a press release with the title:
ICANN successfully concludes Cape Town Meetings
Which makes me wonder: What would an unsuccessful conclusion be? Would the ICANN board and staff have to be trapped forever in the meeting room like the characters in Sartre’s play No Exit?

It’s an attractive proposition, isn’t it?

An embarrassment of riches…

One of the joys (and problems) with all of this cool stuff that we have at Sun is figuring out how it all fits together… or doesn’t. Case in point: I was reading John Clingan’s piece about Zones on an E25K, and I started to think about how one might use such a beast. Suppose one was running a horizontally-scaled load-balanced Sun Java System Application Server Enterprise Edition 7 2004Q2 (surely there must be a simpler name) configuration on a cluster of V880s. Can I rehost this in a collection of zones on an E25K? What works? What breaks? How much of my administrative model carries over, and how much has to change? (Everybody talks about ABI compatibility, but compatibility of administrative models is just as important. It’s one of the major issues with Linux today, and it’s bound to affect how we run Linux apps in Solaris x86.)

And that got me thinking about clustered databases (we use the Clustra technology to support App Server failover), and from that to storage and file systems. (I’m an old NFS guy.) One of Sun’s hidden gems is QFS (OK lawyers, Sun StorEdge QFS software), a massively scalable high performance file system. Although designed for (and mostly used in) high performance technical computing, it’s getting a lot of attention in other applications, due in part to the symbiosis with SAM-FS (Sun StorEdge SAM-FS software), a policy-based archiving system. (Think SarbOx. Think Infinite Mailbox.) Do QFS and SAM-FS work in zones? I turn to the on-line documentation: Solaris Containers-Resource Management and Solaris Zones: “Mounting File Systems in Zones: Options for mounting file systems in non-global zones are described in the following table. Procedures for these mounting alternatives are provided in Configuring, Verifying, and Committing a Zone and Mounting File Systems in Running Non-Global Zones.” Followed by a long table, which doesn’t include SAM-FS or QFS. Hmmm. Can’t tell from this. More reading required, I guess. And so it goes.
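For what it’s worth, the mechanics of handing a global-zone file system to a non-global zone look roughly like this (a sketch only: the zone name and paths are invented, and whether QFS or SAM-FS semantics would survive a loopback mount is precisely the open question):

```
# Global zone, Solaris 10: assume /qfs1 is an already-mounted QFS file
# system and appzone is an existing non-global zone (both names invented).
zonecfg -z appzone
zonecfg:appzone> add fs
zonecfg:appzone:fs> set dir=/qfs1        # mount point seen inside the zone
zonecfg:appzone:fs> set special=/qfs1    # path in the global zone
zonecfg:appzone:fs> set type=lofs        # loopback mount, not a native mount
zonecfg:appzone:fs> end
zonecfg:appzone> commit
zonecfg:appzone> exit
```

A lofs mount sidesteps the “does this file system type work in a zone?” table entirely, at the price of hiding the file system’s native mount options from the zone – which may or may not matter for SAM-FS’s archiving policies.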

There’s nothing wrong with this. It’s just an inevitable combinatorial explosion, exacerbated by our commitment to preserve backward compatibility. (In other words, you can never take a feature out of Solaris.) The challenge is in managing unrealistic expectations. (It isn’t all going to work together seamlessly from day one; in fact some combinations may never work together. It all depends on the business case.) The upside lies in the opportunities for serendipitous synergy. (Or should that be synergistic serendipity?)

At the Jini Community Meeting

newmarch.jpg

Live at the 8th Jini Community Meeting at The Brewery in London; listening to Jan Newmarch of Monash University talking about a variety of Jini-based projects at Monash.

(Sign of the times: 90% of the laptops here are Macs….)

Bizarre stuff: hearing references to Geoff Arnold that resolve not to me but to the other Geoff Arnold (who’s not here).

Bob Scheifler is now presenting the changes and new features for the next Porter release of Jini. Cool stuff.

reedy.jpg

Update: After the coffee break, Dennis Reedy is talking about Rio, the policy-based service provisioning framework based on Jini.

Searching for the perfect Linux laptop

Quite a few of my friends and colleagues are running Linux on their laptops, but it seems that each of them reports that something doesn’t work quite right – WiFi, or sleep mode, or power management. (And the Web seems to be filled with horror stories, hacks, and half-baked solutions.) I’m curious if this is a universal truth, or whether someone has managed to achieve The Perfect Linux Laptop configuration. I’m thinking of things like:

  • sleep to RAM works
  • everything works correctly after waking from sleep (even if you’ve unplugged a USB or FireWire device while sleeping)
  • WiFi automatically connects to known and public networks, and reconnects after sleep
  • power settings (screen brightness, CPU speed) automatically adjust when you unplug from the mains
  • able to play, read and write CDs and DVDs
  • automatically switch to mirrored or multiple screens if an external monitor or projector is plugged in
  • etc.

I can’t believe it’s really that hard – is it? (And does the Tecra M2 on CAMS fit the bill?)

Tedium is…

Tedium is installing Windows XP SP2 over a dial-up link, on a machine that’s not up-to-date with security patches. Updating the Software Update libraries took an hour; downloading SP2 took five hours (overnight). Having 6 Mb/s cable modem service at home has spoiled me….
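The arithmetic is brutal (a back-of-the-envelope sketch; the ~100 MB payload is my guess at the size of the SP2 download, and real modem throughput adds overhead on top):

```python
# Back-of-the-envelope: time to pull ~100 MB over a dial-up modem.
payload_mb = 100          # assumed size of the SP2 download (my guess)
link_kbps = 53            # realistic throughput of a "56k" modem
seconds = payload_mb * 8 * 1000 / link_kbps
print(f"{seconds / 3600:.1f} hours")  # prints "4.2 hours"
```

Add protocol overhead and the odd retrain, and five hours sounds about right.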

Musings on standards

I’ve been involved in standards work since the late 1970s, and I’ve always viewed the primary objective as interoperability. Interoperability demands unambiguous specifications (as much as humanly possible) and verifiable conformance – preferably machine-verifiable. A standards body creates a spec and defines conformance criteria; people implement that spec and test their implementations for conformance. That’s it. (I like to think in OO terms: a standard is an object with one method, conforms(), which takes an implementation and returns true or false.) When someone proposes that a standards body address a particular issue, I always ask myself “how does this affect the spec?” and “what are the conformance criteria?”
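That one-method view fits in a few lines (a toy sketch; the “spec” here – identifiers must be lowercase ASCII letters – is invented purely for illustration):

```python
# A standard reduced to a single method, conforms(), per the OO view above.
class Standard:
    def __init__(self, predicate):
        # The conformance criteria, as a machine-verifiable predicate.
        self.predicate = predicate

    def conforms(self, implementation_output):
        return self.predicate(implementation_output)

# An invented toy spec: identifiers must be lowercase ASCII letters.
lowercase_spec = Standard(lambda s: s.isascii() and s.isalpha() and s.islower())
print(lowercase_spec.conforms("widget"))  # True
print(lowercase_spec.conforms("Widget"))  # False
```

Anything that can’t be phrased as such a predicate – a glossary entry, say – isn’t really part of the standard.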

About a year ago, this came up in a certain web services group: there was a great flurry of activity to try to develop glossary entries for the terms synchronous and asynchronous, even though the terms were not used in the standard. Everybody had their own pet definition, usually in terms of some (irrelevant) implementation behaviour. I tried to apply my usual thinking to the issue, and I got stuck. I generally find that this is a good reason not to act. (Of course such self-restraint is hard for a standards group: like fishes, lack of forward movement usually presages death….)

Travel plans (slightly updated)

JCM8_join1.jpg

In a couple of weeks I’ll be heading back to my birthplace for a Jini Community meeting. It should be a lot of fun….

Nit-pickers will notice that although the graphic shows West End tube stations, the earlier, misleading graphic has been updated. The original version is still to be found on Jini.org. Graphics notwithstanding, the Jini Community Meeting itself will be at The Brewery in the City of London, near Moorgate and the Barbican.