In one of my writers groups, we've been talking extensively about AI emergence. I wanted to share one thought about AI intelligence:

Many of the threats of AI originate from a lack of intelligence, not a surplus of it.

An example from my Buddhist mathematician friend Chris Robson: if you're walking down a street late at night and see a thuggish-looking person walking toward you, you would never think to yourself, "Oh, I hope he's not intelligent." On the contrary, the more intelligent they are, the less likely they are to be a threat.

Similarly, we have stock-trading AIs right now. They aren't very intelligent, yet they could easily cause a global economic meltdown, and they'd never understand the ramifications. The 2010 Flash Crash, when automated trading briefly wiped out nearly a trillion dollars in market value within minutes, was a small preview.

We'll soon have autonomous military drones. They'll obey orders and kill people without ever making a judgment call.

So the earliest AI problems are more likely to stem from a lack of relevant intelligence than from a surplus of it.

On the flip side, Computer One by Warwick Collins is a good AI emergence novel that makes the reverse case: preemptive aggression is a winning strategy, so any AI smart enough to realize it could be turned off will see people as a threat and eliminate us first.