Machines of Loving Grace

Another salon.

“Killer robots will come whether we like it or not, programmed with advanced recognition systems… sure, ‘in the loop’ is better than ‘on the loop’, but it will be generally recognised that anything operating at human speed is just too slow…”

“We need the superpowers, with their capability for advanced research and huge arsenals, to remain dominant… as in the Cold War, open conflict is just too dangerous. It’s the rogue terrorist we should be scared of…”

“Meanwhile we must advance our own intelligence and aim for a good transition… our challenge is runaway AI.”

Alan is dismissive of the campaign by Elon Musk and others to ban LAWS (that’s Lethal Autonomous Weapons Systems). “There are risks in such campaigns, because it just means everyone wants them…”

The “salon” was considering a paper from the people at DeepMind on how to design in a “red button” for use if an operator gets concerned about the behaviour of an intelligent device. The paper uses the term “Safely Interruptible Agents” and offers a way of stopping such an agent from “acting dangerously”. The paper is heavy on math.

The speaker was trying to explain the paper in a way that did not require the math. I did not understand it. His approach seems to rely on the theory that an intelligent agent being trained to do something useful by reinforcement learning, i.e. being rewarded for good outcomes, should in theory be ready to try anything. Since some learning strategies could be dangerous, someone would be on hand to interrupt the agent, i.e. switch it off temporarily. But the approach also seems to rely on the learning not being distorted by the interruptions.
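For readers who want the intuition without the paper’s math: the usual illustration (my own toy construction, not DeepMind’s code; the corridor world, function names, and parameters are all invented for this sketch) is that off-policy learners like Q-learning are already “safely interruptible”, because the update rule bootstraps from the best action in the next state rather than the action actually taken. So an operator can override the agent’s choices during training without biasing what it ultimately learns:

```python
import random

# Toy sketch: a 5-state corridor with the goal at the right end.
# An "operator" sometimes interrupts the agent and forces the safe
# action (step left). Because Q-learning's update uses max over the
# next state's actions, the interruptions do not change the policy
# the agent converges to.

N_STATES = 5          # states 0..4, goal at state 4
ACTIONS = [-1, +1]    # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def train(interrupt_prob, episodes=2000, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy choice by the agent
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            # operator interruption: override with the "safe" action
            if rng.random() < interrupt_prob:
                a = -1
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # off-policy update: bootstraps from the best next action,
            # regardless of who actually chose the behaviour action
            best_next = max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2
    return Q

q_free = train(interrupt_prob=0.0)
q_intr = train(interrupt_prob=0.3)
# In both runs the greedy policy at every non-goal state is "step right":
for s in range(N_STATES - 1):
    assert max(ACTIONS, key=lambda a: q_free[(s, a)]) == +1
    assert max(ACTIONS, key=lambda a: q_intr[(s, a)]) == +1
```

The paper’s contribution is essentially a formal version of this observation, plus conditions under which other learning schemes can be made interruptible in the same sense.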

My question was: if the intelligent agent was working at computer speed, how would the human be ready to intervene in time?

While it was not treated as a stupid question, I was pretty sure I had misunderstood the explanation. I certainly would not be able to write it up.

But Alan seemed to agree.

Alan thinks that while the transition phase will be critical, super-intelligent agents will eventually operate so fast that we humans will seem like vegetables to them. They will be the future of intelligence, maybe the only intelligence in the universe. What role they will find for humans is hard to guess.

“Maybe they will wish to give us a role in maintaining the ecology of the planet…?”

After the meeting, Alan reverted to his earlier theme: “Super-intelligent agents would have little time for the superpower games of human beings and, at some point, would be able to paralyse the killer robots. Perhaps that’s our only hope… Machines of Loving Grace….”

That phrase comes from the poem All Watched Over by Machines of Loving Grace by Richard Brautigan.