Jim Waldo expressed an understandable skepticism about my inclusion of autonomous agents in my piece on the future of software; I thought I’d share my reply to him:
I think that autonomous agents may be an ever-receding category, a bit like AI. Remember how every advance in AI would provoke the retort, “But that’s just X; that’s not AI”? Every time we add a bit more autonomous capability to a software component (advertisement, peer-group discovery, self-monitoring against an SLO, negotiating over a shared security context, …) people will say, “Oh, yes, I see how you can do that, but that’s not REAL agent behaviour.” Some people will insist that we expose the inner workings, and if there isn’t an obviously “AI-like” mechanism such as multi-level planning or BDI (belief-desire-intention) they’ll deny that it’s an agent.

Whatever. [shrug] I just want software systems that are a bit more robust, a bit more tolerant of version skew, a bit better at proactive resource management and self-diagnosis, and a bit more flexible about how they organize themselves with their peers.
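To make that concrete, here’s a minimal sketch of the sort of mundane, agent-ish behaviour I have in mind: a component that watches its own latency against an SLO and sheds low-priority work when it’s degrading. All the names here are hypothetical and the policy is deliberately crude; the point is that there’s no planner or BDI machinery anywhere, just self-monitoring and a simple reaction.

```python
import random
from collections import deque


class SelfMonitoringService:
    """Hypothetical component that polices its own latency SLO."""

    def __init__(self, slo_ms: float = 100.0, window: int = 50):
        self.slo_ms = slo_ms                 # latency target promised to peers
        self.samples = deque(maxlen=window)  # sliding window of observed latencies
        self.shedding = False                # currently refusing low-priority work?

    def _p95(self) -> float:
        """95th-percentile latency over the recent window."""
        ordered = sorted(self.samples)
        return ordered[int(0.95 * (len(ordered) - 1))]

    def handle(self, cost_ms: float, priority: str = "normal") -> str:
        # Self-diagnosis: if we're blowing the SLO, start shedding low-priority
        # work rather than waiting for an operator to notice.
        if len(self.samples) >= 10 and self._p95() > self.slo_ms:
            self.shedding = True
        elif self.samples and self._p95() < 0.8 * self.slo_ms:
            self.shedding = False  # recovered; accept everything again

        if self.shedding and priority == "low":
            return "rejected (shedding load to protect the SLO)"

        self.samples.append(cost_ms)  # record the latency we just observed
        return "ok"


if __name__ == "__main__":
    svc = SelfMonitoringService(slo_ms=100.0)
    for i in range(120):
        # Simulate a load spike partway through the run.
        mean = 150.0 if 40 <= i < 70 else 60.0
        cost = max(random.gauss(mean, 20.0), 1.0)
        result = svc.handle(cost, priority="low" if i % 3 else "normal")
        if result != "ok":
            print(f"request {i}: {result}")
```

Nobody would call that an agent, and that’s exactly the point: stack up enough of these small self-regulating behaviours and you get the robustness I’m after, whether or not anyone grants it the label.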