Tim Bray just posted a delightful piece on the state of LLMs (large language model AI), and it got me thinking.
I think that one reason people have been dismissive of AI (both the current efforts and the very possibility) is that we humans tend to be convinced that our consciousness and capacity for reasoning are SPECIAL. Some of it’s religious (things like “souls”, being “in g*d’s image”, etc.) and some is based on the fact that we tend to be awfully impressed by complex things we can’t understand. (I studied the philosophy of mind for a few years, and became increasingly frustrated by the way people would talk about “The Hard Problem” of consciousness.) But of course a lot of what we do and think is really mundane, and wouldn’t be much of a challenge to an orang-utan or a dolphin.
Funnily enough, I think that systems like GPT could well be useful deflators of our collective self-importance. People like Tim and me who have worked with Really Large Scale Systems are probably slightly ahead of the curve on this…