Recently people have been saying nice things about my writing.

First, there was a review of Avogadro Corp on Amazon that was titled “Good, but not Stephenson-good.” My first thought was “Hey, I’m being compared to Neal Stephenson. That’s cool.”
Then there was the Brad Feld post. Brad Feld is a world-renowned venture capitalist, one of the founders of Techstars, and a managing director of The Foundry Group. With 100,000 followers on Twitter, he’s clearly an influential person.
He had given a talk called Resistance is Futile, and during the talk, he spoke about Avogadro Corp:

But then I mentioned a book I’d just read called Avogadro Corp. While it’s obviously a play on words with Google, it’s a tremendous book that a number of friends had recommended to me. In the vein of Daniel Suarez’s great books Daemon and Freedom (TM), it is science fiction that has a five year aperture – describing issues, in solid technical detail, that we are dealing with today that will impact us by 2015, if not sooner. 

There are very few people who appreciate how quickly this is accelerating. The combination of software, the Internet, and the machines is completely transforming society and the human experience as we know it. As I stood overlooking Park City from the patio of a magnificent hotel, I thought that we really don’t have any idea what things are going to be like in twenty years. And that excites me to no end while simultaneously blowing my mind. 

You can read his full blog post. (Thank you, Brad.)

While I loved the endorsement, what really got me excited is that Brad appreciated the book for exactly the reasons I hoped he would. Yes, it’s a fun technothriller, but at its heart it’s a tale of how the advent of strong, self-driven, independent artificial intelligence is both very near and very consequential. Everything from the corporate setting to the technology used is meant to reinforce that this could be happening today.

I first got interested in predicting the path of technology in 1998. That was the year I built a spreadsheet listing every computer I had owned over the course of twelve years, tracking processor speed, hard drive capacity, memory size, and Internet connection speed.

The spreadsheet turned out to be darn good at predicting when music sharing would take off (1999, Napster) and when video streaming would follow (2005, YouTube). It also predicts when the last magnetic-platter hard drive will be manufactured (2016), and when we should expect strong artificial intelligence to emerge.
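
If you’re curious what that kind of extrapolation looks like mechanically, here’s a minimal sketch in Python. The data points below are made up for illustration; they are not the values from my actual spreadsheet. The technique is just a least-squares fit of log(capacity) against year, which turns exponential growth into a straight line you can extend forward.

    import math

    # Illustrative data points only -- not the actual spreadsheet values.
    # (year, hard drive capacity in GB)
    history = [(1986, 0.02), (1990, 0.1), (1994, 0.5), (1998, 4.0)]

    # Ordinary least squares on log(capacity) vs. year: the slope b is the
    # log of the annual growth factor.
    n = len(history)
    mean_x = sum(year for year, _ in history) / n
    mean_y = sum(math.log(gb) for _, gb in history) / n
    b = sum((yr - mean_x) * (math.log(gb) - mean_y) for yr, gb in history) \
        / sum((yr - mean_x) ** 2 for yr, _ in history)
    a = mean_y - b * mean_x

    def crossing_year(target_gb):
        """Solve a + b * year = log(target_gb) for the year."""
        return (math.log(target_gb) - a) / b

    print(f"annual growth: {math.exp(b):.2f}x")          # ~1.55x per year
    print(f"1 TB drives arrive around {crossing_year(1000):.0f}")  # ~2011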

There are lots of different ways to talk about artificial intelligence, so let me briefly define what I’m concerned with: general-purpose, self-motivated, independently acting intelligence, roughly equal in cognitive capacity to human intelligence.

Lots of other kinds of artificial intelligence are interesting, but they aren’t exactly issues to worry about. Canine-level artificial intelligence might make for great robot helpers, similar to guide dogs, but just as we haven’t seen a canine uprising, we’re not likely to see an A.I. uprising from beings at that level of intelligence.

So how do we predict when we’ll see human-grade A.I.? There is a range of estimates for how computationally difficult it is to simulate the human brain. One approach, for example, starts from the portion of the brain we use for image analysis and compares it to the amount of computational power it takes to replicate that function in software. Here are the estimates I like to work with:

Easy (Ray Kurzweil’s estimate #1 from The Singularity Is Near): 10^14 instructions per second. Extrapolated from the weight of the portion of the brain responsible for image processing, compared to the amount of computation necessary to recreate that processing in software.

Medium (Ray Kurzweil’s estimate #2 from The Singularity Is Near): 10^15 instructions per second. Based on the human brain containing 10^11 neurons, with roughly 10^4 instructions per second needed to simulate each neuron.

Hard (my worst-case scenario, a brute-force simulation of every neuron): 10^18 instructions per second. 10^11 neurons, each with 10^4 synapses, firing up to 10^3 times per second.

(Just for the sake of completeness, there is yet another estimate that includes glial cells, which may play a role in cognition and which outnumber neurons roughly ten to one. We can guess that this might push the requirement to about 10^19 instructions per second.)
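
The arithmetic behind these estimates is simple enough to check yourself. A quick sketch, using the same figures as above:

    # Worked arithmetic for the estimates above.
    neurons  = 1e11   # neurons in the human brain
    synapses = 1e4    # synapses per neuron
    max_rate = 1e3    # maximum firing rate, in firings per second

    # Medium: ~1e4 instructions per second per neuron.
    medium = neurons * 1e4                 # 1e15 instructions/second

    # Hard: brute force, one instruction per synapse firing.
    hard = neurons * synapses * max_rate   # 1e18 instructions/second

    # Glial cells outnumber neurons ~10 to 1, adding an order of magnitude.
    with_glia = hard * 10                  # ~1e19 instructions/second

    print(f"{medium:.0e} {hard:.0e} {with_glia:.0e}")  # 1e+15 1e+18 1e+19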

The growth in computer processing power has been following a very steady curve for a very long time. Since the mid-1980s, when I started following technology news, scientists have been saying things along the lines of “We’re approaching the fundamental limits of computation. We can’t possibly go any faster or smaller.” Then we find some way around the limitation, whether it’s new materials science, new manufacturing techniques, or parallelism.

So if we take the growth in computing power (a 47% increase in MIPS per year) and plot it out over time, we get this very nice 3×3 matrix, in which we can look at the three estimates of complexity against three possibilities for the number of computers available to work with:

Number of    Easy Simulation   Medium Simulation*   Difficult Simulation
Computers    (10^14 ips)       (10^16 ips)          (10^18 ips)

10,000       now               2016                 2028
100          2016              2028                 2040
1            2028              2040                 2052

*Modified from Kurzweil’s 10^15 estimate, only to give us a more middle-of-the-road prediction.
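
For the curious, here’s a minimal sketch of the arithmetic behind the matrix. The 47%-per-year growth rate is the figure quoted above; the baseline (one computer delivering roughly 10^10 instructions per second around 2004) is an assumption I’m making for this sketch so the numbers line up with the table, not a figure from the original spreadsheet.

    import math

    GROWTH    = 1.47   # ~47% increase in MIPS per year, as quoted above
    BASE_YEAR = 2004   # assumption: one computer managed ~1e10 ips around 2004
    BASE_IPS  = 1e10   # assumed per-computer capacity in BASE_YEAR

    def year_feasible(required_ips, n_computers):
        """Year when n_computers, at 47%/year growth, together reach required_ips."""
        ratio = required_ips / (BASE_IPS * n_computers)
        return BASE_YEAR + max(0, math.ceil(math.log(ratio) / math.log(GROWTH)))

    for n in (10_000, 100, 1):
        print(f"{n:>5}: {[year_feasible(ips, n) for ips in (1e14, 1e16, 1e18)]}")

    # Prints:
    # 10000: [2004, 2016, 2028]
    #   100: [2016, 2028, 2040]
    #     1: [2028, 2040, 2052]

The 2004 in the first cell is the “now” in the table: by this arithmetic, 10,000 computers have been sufficient for the easy case for several years already.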

As we can see from this chart, if it were easy to simulate a human brain, the people with access to 10,000 computers would already be doing it, so we’re clearly not quite there yet. Still, some of the developments most suggestive of strong A.I., like IBM’s Watson and Google’s self-driving cars, are happening first inside large organizations with access to loads of raw computational power.

But even in the difficult-simulation case, by 2040 it will be within the reach of any dedicated person to assemble a hundred computers and start developing strong A.I.

It’s when we reach this hobbyist level that we really need to be concerned. Thousands of hobbyists will likely advance A.I. development far faster than a few small research labs. We saw this happen with the Netflix Prize, where the community of contestants quickly equaled and then outpaced Netflix’s own recommendation algorithm.

Strong A.I. is an issue we should be thinking about in the same way we discuss the other defining issues of our time: peak oil, water shortages, and climate change. It’s going to happen in the near term, and it’s going to affect us all.

We’re entering a period where, for the first time, the probability of strong A.I. emerging is non-zero. That probability will increase with each year that passes, and by 2052, it’s going to be an absolute certainty.

By the way: If you find this stuff interesting, researcher Chris Robson, author Daniel H. Wilson, and I will be discussing this very topic at SXSW Interactive on Tuesday, March 13th at 9:30 AM.