Laptop update

Five years ago I was just starting my new (and final) gig, Chief Open Source Officer for a startup, the (now defunct) Lacuna Technologies. I’d been working at Verizon, where my work laptop was a MacBook, but I knew that Lacuna would be a clean sheet. Hugh Martin (CEO) and I wanted to use Microsoft Office as the basis for the productivity environment, in part because we’d had mixed experiences with Google. I knew I was going to need a new work laptop, and that I’d be doing a fair amount of traveling, and so I picked up a Microsoft Surface Pro X, a 2-in-1 with an ARM CPU and a slot for an LTE SIM. It was risky, because Windows on ARM was still fairly experimental, but Microsoft was making all the right noises.

As it turned out, most of the people we hired into Lacuna wanted to use Google apps, so we abandoned Microsoft Office. And although the Surface Pro X was wonderfully light, I never found the 2-in-1 form factor very convenient. (It’s hard to use on your lap, unless you go for full tablet mode.) Also the reliance on x86 emulation meant that it was slow, and Microsoft did a lousy job of pumping up the Windows-on-ARM ecosystem. And finally the pandemic meant that I was working from home, and didn’t need mobile connectivity beyond what I could get from my phone.

When I retired from Lacuna, I stuck the Surface Pro X on the shelf, and replaced it with a maxed-out Dell laptop, which has served me well for the last two years. But I’ve always been an early adopter of new technology (remember Google Glass?), and so this week I decided to replace the Dell with something lighter. And I chose… an ARM-based Windows laptop! Specifically, I chose the 14-inch version of the Samsung Galaxy Book4 Edge, with a Snapdragon X Elite chip. It weighs 1.17 kg, compared with 1.07 kg for the Surface Pro X and 1.61 kg for the Dell, and it has a gorgeous 120 Hz OLED screen. (I’ve always loved Samsung’s displays.) Best of all, it was a great deal: $899 from Best Buy, $450 less than at the Samsung store. (Most of the current crop of so-called Copilot+ laptops are over $1,000, and, oddly, it’s not available from Amazon….)

I hesitated before buying, however, because I’d seen some conflicting reviews of the product concerning performance and battery life. It turns out that the culprit is Microsoft, which has unnecessarily complicated the configuration of power modes in Windows 11. This excellent video makes it…. well, not easy, but comprehensible.

So far I’m really enjoying the Book4 Edge. As usual, I’ve joined the Windows Insider program, so I’m curious to see how this generation of Windows laptops evolves. More anon.

Project frustration

I need to vent….

I’m working on what I hoped would be an easy little project. I fell in love with the replica computers created by Obsolescence Guaranteed, and decided to get a PiDP-11. Yes, I had worked with PDP-8s and a PDP-10 long before I got my hands on an 11, but there’s a lot more you can do with an 11, including running Unix V6 and V7! I knew my limitations, however (I’ve never been very good at soldering), and so I bought a pre-assembled model. All I needed to do was add a Raspberry Pi, and I’d be in business….

While I was waiting for the PiDP-11 to arrive, I decided to familiarise myself with the SIMH software that underpins this and many other computer system simulations. I grabbed the nearest hacking laptop (which happened to have Fedora installed on it), and immediately ran into the tangle of issues that plagues so much open source. First, I learned about the split between SIMH and OpenSIMH. Then I found that the latest OpenSIMH on GitHub wouldn’t build (and of course you have to build from source – this is FOSS 🙄) because of… something to do with CMake on Fedora. And everything seemed to depend on a bunch of shell scripts that looked as if they’d only been tested on Ubuntu and didn’t work on Fedora. Sigh. I looked for pre-built binaries for Windows or Mac, failed, and decided to wait for the hardware.

My assembled PiDP-11 arrived, complete with a preloaded microSD card for the Raspberry Pi. I hooked up a nice new Pi 4 Model B to a screen and keyboard, booted it, and everything worked just fine. But before I installed the Pi into the PiDP-11 case, there was one more thing I needed to set up. I wasn’t planning to leave everything hooked up to an HDMI display and USB keyboard, so I needed a way to log in to the system via ssh over WiFi. So I created two new accounts on the Pi: one for remote admin, and one as a virtual terminal for the PiDP-11. I enabled WiFi and ssh on the Pi, booted it up, and successfully ssh‘d in from a Windows laptop. (I discovered that Windows Terminal now includes an ssh client – nice!) To find out the IP address of the Pi, I logged in to my Netgear Nighthawk router and looked at the list of connected devices. Excellent. All set. Except….
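(Incidentally, you don’t have to trawl the router’s client list to find the Pi. A few lines of Python will do it – just a sketch, not what I actually did, and it assumes the Pi still answers to the default raspberrypi.local hostname and that your resolver handles mDNS:)

  # Resolve the Pi by its mDNS name and check that ssh (port 22) is reachable.
  # Assumptions: default "raspberrypi.local" hostname, ssh on the standard port.
  import socket

  HOSTNAME = "raspberrypi.local"
  SSH_PORT = 22

  try:
      ip = socket.gethostbyname(HOSTNAME)      # needs mDNS (.local) support in the resolver
      print(f"{HOSTNAME} is at {ip}")
      with socket.create_connection((ip, SSH_PORT), timeout=3):
          print("ssh port is open")
  except OSError as err:
      print(f"could not reach {HOSTNAME}: {err}")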

How could I be sure that the Pi would always get that IP address? DHCP hands out addresses on a first come, first served basis. Clearly I needed to assign a static address to the Pi. So I logged in to the Nighthawk, navigated to LAN Configuration, tried to assign an address, and failed with “Bad MAC address”. I consulted the Internet…. and discovered that the Nighthawk firmware has been broken for over a year, and the community has been yelling at Netgear to fix the Address Assignment bug, to no avail. 🤬

I really didn’t want to have to buy a new WiFi AP/router. But I think I’ll have to.

UPDATE: I decided to use the opportunity to upgrade our home WiFi to support WiFi 7. Future-proofing, I hope…..

A project in lieu of coursework

My friend Tom Lyon just dug up a piece put out by the University of Essex in 1970 to describe their new PDP-10. It also mentions that they were acquiring a PDP-15 for “research and teaching”. And that reminded me of an interesting project.

I applied to Essex in the spring of 1968, intending to study economics under Richard Lipsey. Lipsey was something of an enfant terrible in the field of “math-econ”, and I was really interested in his work on mathematical modelling. So I ignored the advice of my teachers at RGS to try for Cambridge, and was accepted at Essex. Rather than going straight up, I decided to do a “gap year” working at AERE Harwell, which introduced me to what was going to be my life-long work: computing. This meant that when I arrived at Essex in September 1969, I wasn’t too disappointed when I learned that Lipsey had resigned and decamped to his native Canada.

The academic structure at Essex was interesting. During my first year, I divided my time between mathematics, economics, and computer science. At the end of that year, I was required to pick a major, which would dictate my studies over the remaining two years. Obviously, I chose computing, which led to another decision. The computer science program included an “industrial experience” segment, which meant that the second year would comprise two terms at Essex followed by five months of work in “industry” (which would be graded).

In September 1970, I returned from my summer job at the Ministry of Technology to begin my second year. As I wrote elsewhere, there were several of us students who had almost as much experience as the academic staff, and in several of the courses it was going to be difficult to accommodate the wide range of abilities and experiences. Creativity ensued.

As mentioned above, Essex had just acquired a PDP-15. This was, perhaps, a poor choice. The department had several PDP-8s, and probably saw the PDP-15 as a nice step up from the limitations of a 12 bit architecture. In practice, the PDP-15 turned out to be a dead end. It was a TTL-based evolution of the family that included the PDP-1, PDP-4, PDP-7 and PDP-9. (IIRC, the electrical engineering department had a PDP-7 or PDP-9.) However, in 1970 DEC introduced the PDP-11, which spelt the end of the 18-bit architecture.

Nevertheless, the PDP-15 was chosen as the design which would be used to teach hardware architecture and machine language programming to undergraduates. And almost immediately a problem arose. The number of students studying computer science was growing rapidly, and it was going to be difficult for them all to get hands-on time on the PDP-15. So one of the staff (probably Dave Lyons) cornered several of us more experienced students and made a proposal. Rather than taking the machine language programming and hardware architecture courses, we could work together to develop a PDP-15 emulator to run on the PDP-10, so that future students would be able to use a virtual PDP-15 for their studies. Naturally, we would need accounts on the PDP-10 to do this work, and we’d be the only undergraduates with full access.

Four of us agreed enthusiastically. We carved up the project – TTY-based console UI, hardware emulation, resource management, and so forth. Everything was written in MACRO-10. I implemented the instruction set emulation, using the PDP-10’s elegant byte manipulation mechanisms to decode PDP-15 instructions. I don’t remember all of the details, but I think we hit our goal of booting up the DECsys operating system on the emulator in February 1971.
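(For flavour, here’s a toy sketch in Python – not the MACRO-10 we actually wrote – of the basic field-splitting involved. It assumes the simple memory-reference layout of a 4-bit opcode, a defer (indirect) bit, and a 13-bit address, and ignores the PDP-15’s index-register wrinkles:)

  # Toy illustration: decode an 18-bit PDP-15 memory-reference word into fields.
  # Assumed layout (simplified): bits 0-3 opcode, bit 4 indirect/defer, bits 5-17 address.
  def decode(word: int) -> dict:
      word &= 0o777777                      # keep 18 bits
      return {
          "opcode":   (word >> 14) & 0o17,  # top 4 bits
          "indirect": (word >> 13) & 0o1,   # defer bit
          "address":  word & 0o17777,       # low 13 bits
      }

  print(decode(0o220100))   # {'opcode': 4, 'indirect': 1, 'address': 64}

On the PDP-10, each of those shift-and-mask pairs collapsed into a single LDB through a suitably constructed byte pointer, which is what made it so pleasant.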

In April 1971, we all headed off to our “industrial experience” gigs. (I returned to Harwell, but that’s another story.) I think a couple of grad students were given the job of turning our project into a usable service for students. I hope it served them well.

What I’ve done

My friend Deirdré Straughan recently posted a short rant on FB in which she said:

My career – the entire span of it – is something to be proud of. I have never stopped learning and innovating, and I have worked hard to make things better for customers since long before Amazon came along.

So I have updated my resumé to include ALL my experience. With dates. This is who I am and what I’ve done.

https://www.beginningwithi.com/resume-deirdre-straughan/

This motivated me to finish off a piece that I’ve been kicking around for a few months and publish it here on my blog. I’ve chosen a narrative style rather than a resumé, and I’ll probably revise it occasionally to add more details. Anyway, this is my career, this is what I’ve done over more than fifty years, and like Deirdré I’m really proud of it.

Update on moving in

As we continue unpacking and getting set up, I’ve found one system that didn’t handle the move too well.
About four years ago, my storage system (for music, photos, videos, backup, VM images, and so forth) consisted of two FireWire 800 boxes: a WD My Book Studio (2TB) and an 8TB RAID enclosure. Since my latest iMac didn’t support FireWire, I used a Thunderbolt-to-FireWire adapter. This mostly worked, although the RAID enclosure usually needed power cycling after I rebooted the iMac. Performance was decent.
When I got around to unpacking everything and setting things up on Saturday afternoon, I was met with a lot of noise from the RAID enclosure. Some of it was a failing fan, but there were other worrying undertones. And about 10 minutes after booting, the RAID system simply went away. Ouch.
Fortunately, all of the really important stuff was on the Studio. The RAID was mostly used for local backups and staging to Backblaze (my cloud backup), and I knew I could retrieve or recreate everything on it. (And I was willing to retire a few hundred gigabytes of VM images from OpenStack and CloudFoundry testing.) But I knew I would need more space than the 2TB on the Studio. After lunch, I headed over to the nearest Best Buy and picked up a new Seagate Backup Plus Hub. 8TB for around $200. Who would have thought it, eh?
So today everything has been consolidated on the Seagate, and both of the older units have been retired. I’ve reconfigured my Backblaze setup, and all of the laptops and other devices are happily backing up to the iMac again. And FireWire is history…

Expectations

I was looking over a business plan for a software startup, and I was struck (more accurately, startled) by the fact that it did not mention “open source” anywhere.
Once I’d got past my surprise, I realized that I couldn’t immediately tell which of two explanations was correct:

  1. No reference to open source because the project was not going to involve any open source activities.
  2. No reference to open source because, well, OF COURSE they were doing open source – how could anyone assume anything else?

Desktop system update: Mini -> iMac

Back in November 2008, I acquired a Mac Mini to use as my home desktop computer. At the time, I raved about the little machine with its 1.83 GHz Intel Core 2 Duo CPU, 1 GB of RAM, and an 80 GB Hard Drive.
Over the next year and a half, I became less enchanted. The 80GB disk was far too small, and I resorted to a variety of external USB drives to hold my music, photos, and videos. (Right now I’m using the 1TB drive that was originally installed in my ill-fated Time Capsule.) More serious was the 1GB of RAM. Over the last year, the footprint of application and OS software seems to have exploded, and multitasking even a few major apps has become incredibly frustrating. Alec posted an excellent analysis of the brain-dead paging and swapping strategy in OS X, but I couldn’t bring myself to try the radical surgery he proposed. And so I soldiered on, resigned to the appearance of the spinning beach-ball (or pizza?) whenever I tried to switch from iTunes to Safari, and to the fact that MS Word would take minutes to load if Safari was busy. Etcetera.
What made all this worse was that my other Mac is an original MacBook Air: wonderfully light, but with the smallest and slowest hard disk known to mankind. It’s a sealed unit, with no way to upgrade anything. I briefly considered adding some memory to the Mac Mini, but watching a video of the procedure persuaded me that I shouldn’t even try. I had to face the fact that I didn’t have a usably fast Mac. I did have the cursed (replacement) HP laptop, which showed what an Intel i5 with 4GB could do, but that beast runs Windows 7 and is mostly used for games and various experiments using VirtualBox.
A few months ago, I decided that I’d had enough. I was either going to buy a new Mac Mini (and upgrade it to 4GB RAM), or get one of the new iMacs. I went back and forth, and procrastinated, and eventually decided to take the plunge. There wasn’t much price difference between the Mini (plus RAM) and iMac, but the relatively low resolution of my existing LCD display finally tipped it. I would buy an iMac, my first. “Obviously” I was going to buy it from Amazon, taking advantage of free shipping and avoiding sales tax. I waited for Amazon to show that it had units in stock…
…and then Sarah Palin changed my mind. I watched that ignorant poseur rolling her eyes at the teacher in Alaska, and read the attacks on teachers by the Republican “Party of No” legislators over the last few days, and decided that I wanted to pay my sales tax. Maybe a few bucks from the $110.91 tax would make its way into a teacher’s paycheck. So yesterday evening, I headed over to the Apple store in Palo Alto, and bought myself an Apple iMac MC508LL/A 21.5-Inch Desktop, with an Intel i3, 4GB RAM, and a 500GB 7200rpm HD. I brought it home, unpacked it, plugged it in, and pushed the power button.
Nothing. Repeatedly, nothing.
You know what’s going to come next, don’t you? Today I packed it up, took it back to the store, and it booted up just fine. So I made a “Genius Bar” appointment for Saturday morning (just in case), came home, and set the machine up. I planned to transfer the data and apps from the Mini using FireWire, but I found that I didn’t have a suitable cable. So I wound up doing it over WiFi, which took about 8 hours.
The machine is sweet. Very fast, a beautiful 1920×1080 display, nice wireless keyboard and mouse. I launched a dozen tabs in Safari and started sync’ing my iPad, and then fired up MS Word. It opened even faster than on my Windows 7 machine.
For now, I’m simply replacing the Mini with the iMac. All of the peripherals from the Mini are plugged in to the iMac, and it’s acting as print and scan server for all our computers. Eventually I plan to run the Mini, headless, as a print and media server, but I’ll take the opportunity to do a clean reinstall of OS X beforehand. And with any luck I’ll be cancelling that date with the “Genius Bar”.

My computer is rather busy today (AAC->MP3)

I have several road trips coming up in the next few weeks (LA, Sacramento), and I wanted to burn myself some MP3 CDs with good driving music. Unfortunately, when I ripped the bulk of my CDs into iTunes, I did so using AAC encoding. I guess I was trying to optimize space. Oops.
Anyway, yesterday I created a “Smart Playlist” in iTunes with the following properties:

  • “Kind” includes “AAC”
  • “Kind” does not include “protected”

This gave me a playlist with 7,467 entries. I selected the whole list, and told iTunes to convert them from AAC to MP3. This will create 7,467 new items, of course, so when everything checks out I’ll delete the AAC files.
So far it’s been running for about 14 hours on my 1.83GHz Mac Mini, and the iTunes folder on my external 1TB drive has grown to 155GB. Conversion speed seems to be 45-50x…
(And yes, I did back up my iTunes library!)
UPDATE: The final glitch was tricky: how could I use the smart playlist to remove tracks from my iTunes library? “Delete” simply removes the tracks from the playlist! The answer was to select all tracks in the playlist, right-click, choose “Get Info“, and change the “Artist” for every track to “Zzzzzzzzzzz“. This took a while to run… Then I went back to my library, chose “Zzzzzzzzzzz” from the Artist list, selected all of the corresponding tracks, and deleted them. There has to be an easier way… such as an Automator script?
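(What I was really wishing for was something at the file level. A hypothetical Python sketch along these lines would at least list the candidates – note that it only inspects files on disk and doesn’t touch the iTunes library database, so the tracks would still need to be removed in iTunes itself, and the library path is made up:)

  # List AAC rips (.m4a) that now have an MP3 sibling, as candidates for cleanup.
  from pathlib import Path

  MUSIC_ROOT = Path("/Volumes/Media/iTunes/iTunes Music")   # made-up path: point at your library

  for aac in MUSIC_ROOT.rglob("*.m4a"):
      if aac.with_suffix(".mp3").exists():
          print(f"converted, AAC can go: {aac}")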

Weird Apple pricing

One of the side effects of switching digital cameras has been that stuff takes longer. More pixels per picture (and new modes that generate more images) means that it takes a lot more time to do even basic photo management. And I’m not actually very well equipped to handle this: for perfectly good reasons, it turns out that although I have quite a few computers, they are all pretty puny by current standards. I have a Mac Mini and a MacBook Air, both with CPUs in the 1.6GHz range, both with fairly slow disks. The MacBook Air has 2GB of RAM, the Mini just 1GB. (The fastest machine I own, my accursed HP DV4-2045DX laptop, just went back for service – AGAIN!)
So naturally my thoughts have been turning to getting some horsepower. A Mac, of course – that HP has cured me of any interest in Windows. I figured that I wanted something like this:

  • At least 3GHz 2+ core CPU
  • 4GB RAM
  • 500GB HDD
  • SuperDrive

My first impulse was to simply get a new Mac Mini. However, after maxing out all of the options, I got:

  • 2.66GHz Core 2 Duo
  • 4GB RAM
  • 500GB HDD
  • SuperDrive
  • Wireless Mouse and Keyboard
  • Total price: $1187

That felt quite a bit more expensive (and slower) than I’d expected. Out of curiosity, I looked at the minimum configuration iMac:

  • 3.06GHz Core 2 Duo
  • 4GB RAM
  • 500GB HDD
  • SuperDrive
  • Wireless Mouse and Keyboard
  • 21.5 inch LCD
  • Total price: $1199

So instead of buying a Mac Mini I can spend an extra $12 and get an iMac with a 15% faster CPU and a stunning 21.5 inch LCD. Something doesn’t make sense here….