As the calendar clicks around, I’m reminded of an odd anniversary. Roughly 40 years ago – maybe late 1968, perhaps early 1969 – I wrote my first serious piece of software: a real application, used by real people, and constructed as part of my paid employment. I thought it might be worth revisiting that event.
The first thing you have to understand is that I’d had no computer-related education at all. The closest I came at the Royal Grammar School, High Wycombe, was an after-school seminar in the School Library, when somebody delivered a talk on computers. I’ve forgotten the content of the presentation completely; I only remember that the speaker passed around a core memory module for us to look at. (Hands up those who don’t know what “core memory” is, or how it works.) In the spring of 1968 I applied to Essex University to read Economics, and that summer I took GCE A Levels in Economics, Maths (A+S), and Physics. However I had already decided that it would be useful to spend what is now termed a “gap year” before going to university, in order to get some experience of the real world. Fortune (or nepotism) was in my favor, and I was accepted at the UKAEA Harwell to spend a year as a “Mathematics Assistant”.
I started in September 1968, and lived in a hostel (a barracks, really) in Abingdon. I was working for the Programmes Analysis Unit (PAU), a group that was trying to understand the economic impact of government-sponsored research and development initiatives. We were interested in how quickly innovation spread through a marketplace, and what the return on investment looked like. I was the only assistant in a team of a couple of dozen eminent scientists and economists. They understood the policy issues, and most understood the mathematics. The challenge was gathering the data and interpreting it.
I started out on issues related to ROI. The models typically involved calculating the year-by-year impact of an investment, with each annual contribution discounted due to monetary deflation and substitution. I worked up a family of models of increasing complexity; for each one, I planned to accumulate the discounted annual contributions until the marginal return was less than some epsilon. But how to run them?
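In modern terms, each model boiled down to something like this Python sketch. It's hypothetical, of course: the flat annual return and the single discount rate are stand-ins for the real inputs, not the PAU's actual models.

def npv_until_convergence(annual_return, rate, epsilon=1e-6, max_years=500):
    # Accumulate discounted annual contributions until the marginal
    # return drops below epsilon (or we give up after max_years).
    total = 0.0
    for year in range(1, max_years + 1):
        contribution = annual_return(year) / (1.0 + rate) ** year
        total += contribution
        if contribution < epsilon:
            break
    return total

# A flat 1000-unit annual return, discounted at 8%, sums to ~12500.
print(npv_until_convergence(lambda year: 1000.0, 0.08))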
I was put in charge of the department’s Wang Programmable Calculator. The programming model was similar to more recent programmable calculators from TI and HP. The program memory essentially stored keystrokes, which were executed just as if you’d pressed them. Keystroke steps were numbered, and there were conditional and unconditional branch operations. For the Wang, the “program memory” was a pre-scored card, from which “chads” were punched out with a stylus; the card was then “read” in a device that looked like a small toaster. The output display used Nixie tubes…
I programmed up my first model. It ran to completion in 5 minutes. My “second order” model took 30 minutes to finish. The “third order” model ran for four hours. When the “fourth order” model had not converged after an overnight run, I knew that I needed some better technology. My team leader, a physicist who had never recovered from the fleshpots of Cairo during the 8th Army campaign of 1942, directed me to the computing centre. There a rather startled young man with a huge red beard thrust a copy of “McCracken on Fortran” into my hand, created an account for me on the IBM 360/65, and showed me where the card punches were. Two days later, I’d completed all of the ROI calculations, and I was hooked.
In those first programs I used the 360 as a glorified version of the Wang calculator. I didn’t have to manage data sets, or design complex algorithms, or do anything for output beyond printing a single number. But the next job was different. Several PAU teams were interested in how technologies were taken up by a marketplace, and then (as now) it was assumed that adoption tended to follow an S-curve. Today, curve-fitting is a standard feature of every maths library, but in 1968 we were making it up as we went along. Furthermore we weren’t simply throwing a best-fit curve through a bunch of points: we had a number of exogenous constraints that we had to respect.
One of my colleagues came up with a nice set of linear transformations for the primary equations (Sigmoid and Gompertz), which meant that I could vary one parameter (usually the asymptote, which was constrained anyway) and use a linear fit to generate the other values. I demonstrated experimentally that graphing the residual errors against the asymptotes had a single minimum, so I was able to use a simple bisection approach to find the best fit. Some of the data sets were too big to fit in memory, so I added a buffered input reader to stream the data from the disk (or was it a drum?).
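In modern terms, the approach might look something like this Python sketch. The linearisation shown is for the logistic (Sigmoid) case; the Gompertz transformation is analogous, and the bracket-shrinking search here merely stands in for the bisection I described:

import math

def fit_logistic_given_asymptote(t, y, a):
    # For a fixed asymptote a, the logistic y = a/(1 + exp(b - c*t))
    # linearises as ln(a/y - 1) = b - c*t, so an ordinary least-squares
    # line through (t, ln(a/y - 1)) recovers b and c directly.
    z = [math.log(a / yi - 1.0) for yi in y]
    n = len(t)
    tbar, zbar = sum(t) / n, sum(z) / n
    slope = (sum((ti - tbar) * (zi - zbar) for ti, zi in zip(t, z))
             / sum((ti - tbar) ** 2 for ti in t))
    return zbar - slope * tbar, -slope   # b, c

def residual(t, y, a):
    b, c = fit_logistic_given_asymptote(t, y, a)
    return sum((a / (1.0 + math.exp(b - c * ti)) - yi) ** 2
               for ti, yi in zip(t, y))

def best_asymptote(t, y, lo, hi, tol=1e-6):
    # The residual has a single minimum in a, so repeatedly shrinking
    # the bracket [lo, hi] converges on the best fit. Note that lo
    # must exceed max(y), or ln(a/y - 1) is undefined.
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if residual(t, y, m1) < residual(t, y, m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

Fixing the asymptote is what makes the inner step a simple linear fit; all the hard work is pushed into a one-dimensional search.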
My first version of the program simply output the parameters of the S-curve and the residual errors. This was OK for the mathematicians, but unsatisfactory for the policy wonks. I made friends with the red-bearded guy in the computer centre (who would later be my lecturer at Essex University!), and discovered that the IBM 360/65 was equipped for COM, or Computer Output on Microfilm. I cut-and-pasted some code from the COM system documentation, and augmented my application with full graphical output, showing the original data points (or bucketed samples thereof) and the various S-curves that corresponded to the different constraints.
By this point, I was more or less lost to the PAU. While I kept doing minor tasks for them, I spent 80% of my time in the computer centre, and by the time I left in June, 1969, I was helping teams from all over Harwell with their applications. I’d also moved on from punched cards to a teletype-based RJE system, which was only one step away from being a real interactive system. (For that, I had to wait until I encountered the PDP-10 in 1970.)
Meanwhile my application was used for a number of years. When I returned to a different branch of Harwell in the summer of 1971, I was asked by my old team to make several small enhancements. Naturally, I looked at the code I had written, and was mortified at how primitive it was. But it was my first, and self-taught to boot, so I cut myself some slack and fixed it.
Open source
Open source is the altruistic synchronisation of self interests.
Simon Phipps replying on Twitter.
(Via Adriana.)
AWS and Ruby
The book of the moment is James Murty’s “Programming Amazon Web Services: S3, EC2, SQS, FPS, and SimpleDB”. It’s a really nicely-written introduction and tutorial for our utility computing services, with plenty of sample code that just works. Highly recommended.
Murty chose to write his examples in Ruby, which pushed one of my buttons. I have a love-hate relationship with Ruby, and it’s getting to the point where I’d love to find an alternative ((And that doesn’t include PHP or Python, or even Groovy.)). On the one hand, Ruby offers Smalltalk with instant gratification. On the other, we have a syntax replete with ad hoc short-cuts, looping constructs with inconsistent scope rules, and ASCII rather than UTF-8.
My friend Jon Irving agreed:
Hahah, yes – I love it, although the things which I love are the things that make it horrific for any large app. Re-opening class defs, awesome, except when you’re trying to find where a method is defined. Monkey-patching, awesome, except when you suddenly find that for *no perceptible reason* a core API has been changed by some library you’re using.

And rails, oh rails. What is this “thread safety” of which you speak? Srsly, it’s like it’s 1995 all over again. But much prettier, and this time smalltalk won!
Writing Ruby is great fun; reading someone else’s Ruby application (particularly anything substantial) is deeply frustrating. In other words, Ruby is a candidate write-only language. And that’s a shame.
First post from my XO
This is the first blog post from my XO, the green and white creation of the One Laptop Per Child program. It took a long time, but it’s here at last. Now I have to figure out how to configure it the way I want it…
The truth about Linux
Some of the youngest, brightest minds have been trapped in a 1970s intellectual framework because they are hypnotized into accepting old software designs as if they were facts of nature. Linux is a superbly polished copy of an antique, shinier than the original, perhaps, but still defined by it.
However the prevailing cult of OSS is so dominant that even the most obviously proprietary projects have to pretend to be open source. (The fact that all of the individuals with “commit” privileges happen to work for a single company is purely coincidental.) And try telling any OSS enthusiast that they ought to be “open” to a world with multiple open source operating systems…
Anyway, by picking out the most provocative paragraph, I’m doing an injustice to Jaron. It really is an interesting piece, especially what it has to say about the importance of speciation. Check it out.
XO sighted
One of my colleagues has received his XO laptop and brought it in to show us. Now I’m even more impatient for mine to arrive – it’s quite fascinating (in the xkcd sense).
Give One, Get One
Tim just did this, and so did I. How could I not? We’re talking about the program to “donate an XO laptop to a random Third-World kid and get one for yourself”. It looks like this is US only right now ((And certainly the tax-deductible bit will only impress the IRS.)), but check the comments on Tim’s piece about how you participate from overseas.
Remembering the CDC 6600
Ah, nostalgia, nostalgia! El Reg just published a piece about a classic:
Control Data Corporation 6600
Released: 1964
Price: ~$6m-$10m
OS: COS, SCOPE, MACE, KRONOS
Processor: One 60-bit CPU, ten shared-logic 12-bit peripheral I/O processors
Memory: 128K 60-bit words
Display: Printer, plotter and dual video display console
Storage: 2MB extended core storage, magnetic disk, magnetic drum
I was a systems programmer on one of these beautiful beasts for a year, from July ’72 until August ’73. It was my first job out of school, at the University of London Computer Centre. We had a 6600 and a 6400, and while I was there we took delivery of a 7600. The 6600 had daily scheduled maintenance from 1 PM to 2 PM, and although we usually needed all of it (OS patches, replacing hardware modules, etc.) there were times when there was nothing to do… so we had the most powerful computer in the country ((Well, maybe. We kept hearing about this rather non-standard IBM mainframe up at Daresbury…)) at our disposal, for whatever we wanted! I had this simulation framework that I’d written to explore the behaviour of different paging algorithms, and it ran really nicely on a dedicated 6600!
One of the more idiosyncratic features of the CDC 6600 was the way in which applications issued system calls. In the early versions of the SCOPE OS, all of the operating system functions ran in the PPUs ((Peripheral processing units.)). ((Later on, they introduced some CPU-resident OS services, which I always thought was a real hack.)) So how does a CPU-resident application issue a system call? There were no dedicated instructions for the purpose: no traps, gates, software interrupts, or anything like that. The technique was simple:
- Construct a request block in memory, including the function code, buffer pointers, etc.
- Clear a flag bit in the request block.
- Store the address of the request block in location 1 of the application’s address space. (This location was referred to as the RA, or Reference Address, plus 1.) [Thanks to Peter Schow for the correction.]
- Poll the flag bit until it is set.
In practice, the busy waiting didn’t last long; as soon as the OS noticed the presence of a (non-zero) address in RA+1, it would switch the CPU to another task.
All this leaves only one question: how do you access memory location RA+1 from a Fortran program? We used a little library function, IADDR(), which returned the address of any variable. And then we wrote code like this:
C     MEM is overlaid on the program's address space, so indexing it
C     with an RA-relative offset can reach any location in our field.
      DIMENSION MEM(0)
      INTEGER RA
C     IORB is the 16-word request block handed to the OS.
      DIMENSION IORB(16)
C     Bias RA so that MEM(RA+n) resolves to location n.
      RA = -IADDR(MEM)
      …
C     Storing the request block's address into RA+1 issues the call;
C     the OS sets the flag bit in the request block when it's done.
      MEM(RA+1) = IADDR(IORB)
All of this looks rather primitive now. However, it’s worth pointing out that it looked quite primitive then. I came to the 6600 from the PDP-10, which was a powerful CISC design with a very rich instruction set ((The most expressive programming language was assembler – at least, until LISP and POP-2 arrived.)), and Seymour Cray’s minimalist design took me by surprise. In many ways the 6600 was the first RISC machine: a clean memory model, specialized register files, and simple instructions that got the most out of the technology of the day. It used ones-complement arithmetic, which meant that we had to cope with two zeros – positive (all 0’s) and negative (all 1’s).
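For anyone who hasn’t met ones-complement before: negation is simply a bitwise NOT, which is exactly where the two zeros come from. A quick Python illustration with a 60-bit word:

WORD = 60
MASK = (1 << WORD) - 1

def negate(x):
    # Ones-complement negation is just bitwise NOT of the 60-bit word.
    return ~x & MASK

print(format(0, '060b'))          # +0: all 0's
print(format(negate(0), '060b'))  # -0: all 1's, and also "zero"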
What a gorgeous piece of machinery.
MarkCC tackles Erlang
This should be fun. Mark Chu-Carroll, best known as the author of Good Math, Bad Math, is “going to start writing a series of tutorial articles on Erlang”, probably the hottest language around right now. ((Ruby is so 2005, you know.)) I suspect we’re going to hear some fairly strong opinions…. Don’t miss ’em!
Leopard: the complete review
Or at any rate, the closest thing to a complete review that you’re going to find outside a $20 book. We’re talking about Mac OS X 10.5 Leopard: the Ars Technica review, by John Siracusa:
These two views of Leopard, the interface and the internals, lead to two very different assessments. Somewhere in between lie the features themselves, judged not by the technology they’re based on or the interface provided for them, but by what they can actually do for the user.
In other words, it’s both comprehensive and balanced, covering everything from the apps and the UI to the developer frameworks and kernel features. ((Siracusa’s description of DTrace verges on the ecstatic.)) It’s a must-read for all Mac users as well as system software aficionados.