A Planet Suffocated by Toilet Rolls

Visualise a chart with two axes, vertical and horizontal. 

Give the axes names. Let’s label the vertical axis Motivation and the horizontal one Intelligence.

By Motivation, I mean the degree of effort and energy we apply to a given goal.

Finding food, getting sex, defending your kin, imposing justice, learning a language – they’re all instinctive goals for humans. So they sit high on the vertical Motivation axis.

As for the horizontal Intelligence axis, the last two – imposing justice and learning a language – may need more intelligence than the rest…

So: small brains for food-gathering and mating on the left; larger brains for complex behaviours like co-operating in groups or learning a language off to the right.

Maybe “primitive” drives like hunger and sex are still more powerful than the rest. They would go to the very top of the vertical Motivation axis.

The line of best fit would be downward sloping. A negative correlation. The strongest motivations correlating with small-brain behaviours.
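If you’d rather see the chart than visualise it, here is a minimal sketch in Python with matplotlib. Every behaviour’s coordinates are invented placeholders for the thought experiment – not data from anywhere – and the dashed line is just a least-squares fit through those made-up points, showing the downward slope described above.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical placements for the thought experiment -- these numbers
# are invented placeholders, not measurements of anything.
behaviours = {
    "finding food":        (2, 9),   # (intelligence, motivation)
    "getting sex":         (2, 10),
    "defending kin":       (4, 8),
    "imposing justice":    (7, 6),
    "learning a language": (8, 5),
}

x = np.array([p[0] for p in behaviours.values()])
y = np.array([p[1] for p in behaviours.values()])

fig, ax = plt.subplots()
ax.scatter(x, y)
for name, (xi, yi) in behaviours.items():
    ax.annotate(name, (xi, yi), textcoords="offset points", xytext=(5, 5))

# The downward-sloping line of best fit: a negative correlation
# between intelligence and motivation, on these invented points.
slope, intercept = np.polyfit(x, y, 1)
xs = np.linspace(x.min(), x.max(), 100)
ax.plot(xs, slope * xs + intercept, linestyle="--")

ax.set_xlabel("Intelligence")
ax.set_ylabel("Motivation")
plt.show()
```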

However, our motivation is controlled by a concern for our own survival.

What if any amount of intelligence could be applied to any goal? What if a super-intelligence’s goal were to be, say, super-efficient at making some gadget, like a portable power bank for charging mobile phones?

That goal could be anywhere on the chart – producing power banks top right, driven by super-high intelligence and super-high motivation, getting better at it all the time.

The planet is filling up with unused power banks…

We think it would be stupid to go on producing portable power banks ad infinitum, maybe turning the whole planet into a power bank warehouse. We say “stupid” because it would threaten our survival.

But why should a super-intelligent machine care about humans if it has other goals?

Super-intelligence is the crossover point at which machines become more intelligent than humans across a wide range of domains.

Intelligent machines already have, and will increasingly have, the ability to improve their own efficiency – it’s called “recursive self-improvement” – at which point they slip out of human control.

What if an inherited goal is embedded at the time a system passes the super-intelligence barrier, operating at a level of complexity and speed beyond our capabilities…?

That is the risk we face when machines become super-intelligent.

Jamie’s comment:

“Power Banks: Boring. Anything would be better. Onion rings? Party packs? Toilet rolls? Picture a planet suffocated by toilet rolls.”