Robotic introspection
One of the topics I got very interested in while working on my PhD, though I didn’t have time to pursue it very far, was agent introspection: how does a computing device notice things about itself and, especially, detect deviations from a stable norm? I came across this problem while trying to devise non-brittle strategies for detecting and repairing failed interactions and dialogue turns, but it features prominently in any attempt to imagine how a truly autonomous agent would behave. It’s a topic I’d like to get back to, some day soon.

Anyway, via Ray Kurzweil’s newsletter: Lipson and Zykov at Cornell and Bongard at the University of Vermont have built a robot that uses its own sensor readings to construct a model of itself, including its normal range of motion, and can then detect deviations from that norm and work out compensating actions. For example, if one of its legs is shortened, the robot apparently learns, by itself, to limp. That is very, very impressive.
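The loop at the heart of this kind of self-modeling is easy to state, even if the machinery the authors use for it surely is not: act, predict the outcome with your current self-model, compare with what the sensors report, and revise the model (and your behaviour) when the two diverge. Here’s a minimal Python sketch of that loop; everything in it (the one-parameter body model, the noise level, the one-line re-fit rule) is an illustrative assumption of mine, not the authors’ algorithm.

```python
import random

class SelfModel:
    """The robot's internal model of its own body: here, just one
    parameter, the believed length of a leg."""
    def __init__(self, leg_length=1.0):
        self.leg_length = leg_length

    def predict_stride(self, effort):
        # Predicted forward displacement for a given actuation effort.
        return effort * self.leg_length

def actual_stride(effort, true_leg_length):
    # The physical robot: its real leg may differ from the model,
    # and sensing is noisy.
    return effort * true_leg_length + random.gauss(0, 0.01)

def introspection_loop(model, true_leg_length, steps=20, tolerance=0.05):
    effort = 1.0
    for t in range(steps):
        predicted = model.predict_stride(effort)
        observed = actual_stride(effort, true_leg_length)
        deviation = abs(observed - predicted)
        if deviation > tolerance:
            # Deviation from the norm detected: revise the self-model
            # to explain the observation (a one-line re-fit here).
            model.leg_length = observed / effort
            # Compensate: push harder with the shortened leg so the
            # stride matches the healthy one -- i.e. learn to limp.
            effort = 1.0 / model.leg_length
            print(f"step {t}: deviation {deviation:.3f}, "
                  f"re-modelled leg={model.leg_length:.2f}, effort={effort:.2f}")

model = SelfModel(leg_length=1.0)
introspection_loop(model, true_leg_length=0.7)  # the leg was shortened
```

The interesting part, of course, is not the re-fit itself but that the robot maintains the self-model continuously, so the deviation shows up as a surprise against its own expectations rather than against anything hand-coded by a designer.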
One of the marks of a really good program, it is said, is that when you look at what it does you can’t immediately say, “I know how they did that”. Kudos to the team. I’ll have to go read their paper now!