Recently people have been saying nice things about my writing.

First, there was a review of Avogadro Corp on Amazon that was titled “Good, but not Stephenson-good.” My first thought was “Hey, I’m being compared to Neal Stephenson. That’s cool.”
Then there was the Brad Feld post. Brad Feld is a world-renowned venture capitalist, one of the founders of Techstars, the managing director of The Foundry Group, and, with 100,000 followers on Twitter, clearly an influential person.
He had given a talk called Resistance is Futile, and during the talk, he spoke about Avogadro Corp:

But then I mentioned a book I’d just read called Avogadro Corp. While it’s obviously a play on words with Google, it’s a tremendous book that a number of friends had recommended to me. In the vein of Daniel Suarez’s great books Daemon and Freedom (TM), it is science fiction that has a five year aperture – describing issues, in solid technical detail, that we are dealing with today that will impact us by 2015, if not sooner. 

There are very few people who appreciate how quickly this is accelerating. The combination of software, the Internet, and the machines is completely transforming society and the human experience as we know it. As I stood overlooking Park City from the patio of a magnificent hotel, I thought that we really don’t have any idea what things are going to be like in twenty years. And that excites me to no end while simultaneously blowing my mind. 

You can read his full blog post. (Thank you, Brad.)

While I loved the endorsement, what really got me excited is that Brad appreciated the book for exactly the reasons I hoped. Yes, it’s a fun technothriller, but really it’s a tale of how the advent of strong, self-driven, independent artificial intelligence is both very near and very significant in its impact. Everything from the corporate setting to the technology used should reinforce the fact that this could be happening today.

I first got interested in predicting the path of technology in 1998. That was the year I made a spreadsheet with every computer I had owned over the course of twelve years, tracking the processor speed, the hard drive capacity, the memory size, and the Internet connection speed.

The spreadsheet was darn good at predicting when music sharing would take off (1999, Napster) and when video streaming would (2005, YouTube). It also predicts when the last magnetic-platter hard drive will be manufactured (2016), and when we should expect strong artificial intelligence to emerge.

There are many different ways to talk about artificial intelligence, so let me briefly summarize what I’m concerned about: general-purpose, self-motivated, independently acting intelligence, roughly equal in cognitive capacity to human intelligence.

Lots of other kinds of artificial intelligence are interesting, but they aren’t exactly issues to worry about. Canine-level artificial intelligence might make for great robot helpers, similar to guide dogs, but just as we haven’t seen a canine uprising, we’re not likely to see an A.I. uprising from beings of that level of intelligence.

So how do we predict when we’ll see human-grade A.I.? There’s a range of estimates for how computationally difficult it is to simulate the human brain. One estimate compares the portion of our brain we use for image analysis to the amount of computational power it takes to replicate that function in software. Here are the estimates I like to work with:

  • Easy (Ray Kurzweil’s estimate #1 from The Singularity Is Near): 10^14 instructions/second. Extrapolated from the weight of the portion of the brain responsible for image processing, compared to the computation necessary to recreate that processing in software.
  • Medium (Ray Kurzweil’s estimate #2 from The Singularity Is Near): 10^15 instructions/second. Based on the human brain containing 10^11 neurons, at 10^4 instructions per neuron.
  • Hard (my worst-case scenario: brute-force simulation of every neuron): 10^18 instructions/second. Based on 10^11 neurons, each having 10^4 synapses, firing up to 10^3 times per second.

(Just for the sake of completeness, there is yet another estimate that includes glial cells, which may affect cognition, and of which we have ten times as many as neurons. We can guess that this might be about 10^19.)
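The brute-force numbers above multiply out directly. Here’s a quick sanity check on the arithmetic (the variable names are mine, not from any of the original estimates):

```python
# Sanity check on the "hard" brute-force estimate:
# neurons x synapses per neuron x maximum firing rate.
neurons = 10**11             # neurons in the human brain
synapses_per_neuron = 10**4  # synapses per neuron
firings_per_second = 10**3   # maximum firing rate

ips = neurons * synapses_per_neuron * firings_per_second
print(f"{ips:.0e} instructions/second")  # 1e+18 instructions/second

# Including glial cells (ten times as many as neurons) adds roughly
# another factor of ten, giving the ~10^19 figure above.
```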

The growth in computer processing power has been following a very steady curve for a very long time. Since the mid 1980s when I started listening to technology news, scientists have been saying things along the lines of “We’re approaching the fundamental limits of computation. We can’t possibly go any faster or smaller.” Then we find some way around the limitation, whether it’s new materials science, new manufacturing techniques, or parallelism.

So if we take the growth in computing power (a 47% increase in MIPS per year) and plot that out over time, we get this very nice 3×3 matrix, in which we can look at the three estimates of complexity against three ranges for the number of available computers:

Number of Computers   Easy Simulation   Medium Simulation*   Difficult Simulation
                      (10^14 ips)       (10^16 ips)          (10^18 ips)
10,000                now               2016                 2028
100                   2016              2028                 2040
1                     2028              2040                 2052

*Modified from Kurzweil’s 10^15 estimate, only to give us a more middle-of-the-road prediction.
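The 12-year spacing between adjacent cells falls straight out of the 47% growth rate: each step in the matrix, whether 100× fewer computers or a 10^2-harder simulation, requires a factor of 100 more per-computer power. A quick sketch of that arithmetic (my own back-of-the-envelope code, not the original spreadsheet):

```python
import math

ANNUAL_GROWTH = 1.47  # 47% increase in MIPS per year

def years_to_grow(factor):
    """Years for per-computer processing power to grow by `factor`
    at a compound rate of 47% per year."""
    return math.log(factor) / math.log(ANNUAL_GROWTH)

# Adjacent cells in the matrix differ by a factor of 100 in required
# per-computer power, so each step takes about 12 years.
print(round(years_to_grow(100)))    # 12
# A full 10^4 jump (e.g. easy -> difficult) takes about 24 years.
print(round(years_to_grow(10**4)))  # 24
```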
As we can see from this chart, if it were easy to simulate a human brain, the people with access to 10,000 computers would already be doing it. So we’re not quite there yet. Clearly, though, some of the developments most suggestive of strong A.I., like IBM’s Watson and Google’s self-driving cars, are happening first in large organizations with access to loads of raw computational power.

But even in the difficult simulation case, by 2040 it will be within the reach of any dedicated person to assemble a hundred computers and start developing strong A.I.

It’s when we reach this hobbyist level that we really need to be concerned. Thousands of hobbyists will likely advance A.I. development far faster than a few small research labs. We saw this happen with the Netflix Prize, where the community of contestants quickly equaled and then outpaced Netflix’s own recommendation algorithms.

Strong A.I. is an issue we should be thinking about in the same way we discuss the other defining issues of our time: peak oil, water shortages, and climate change. It’s going to happen in the near term, and it’s going to affect us all.

We’re entering a period where the probability of strong A.I. emerging is non-zero for the first time. It’s going to increase with each year that passes, and by 2052, it’s going to be an absolute certainty.

By the way: If you find this stuff interesting, researcher Chris Robson, author Daniel H. Wilson, and I will be discussing this very topic at SXSW Interactive on Tuesday, March 13th at 9:30 AM.

A few days ago, Jason Glaspey, a prominent member of Portland’s tech and startup community and the man behind PaleoPlan, approached me and said he would be doing a review of Avogadro Corp: The Singularity Is Closer Than It Appears on Silicon Florist.

Avogadro Corp is my first novel. It’s a techno-thriller about the accidental creation of an artificial intelligence at the world’s largest Internet company and the subsequent race to contain it, as it starts to manipulate people, transfer funds, and arm itself.

It’s set almost entirely in Portland, Oregon. Readers have enjoyed the references to Portland’s coffee scene, imagining a 10,000-employee tech company in downtown Portland, and the realistic portrayal of AI emergence. Some early feedback includes:

  • “jaw-dropping tale about how something as innocuous as email can subvert an entire organization”
  • “a terrific, and stunningly believable, account of how the first sentient artificial intelligence might accidentally arise”
  • “HAL, the self-aware CPU from 2001: A Space Odyssey, is a kitten compared to ELOPe”
  • “a startling, feasible examination of the emergence of artificial intelligence”

It’s available in paperback, for the Kindle, and in epub format for a variety of other e-readers. And so far it’s doing great – averaging 5-star reviews on Amazon.

Jason knew I had been offering a Kindle Fire and some Amazon gift certificates in exchange for help promoting Avogadro Corp. He asked if I would keep it running a little longer until his review came out. That didn’t seem quite fair to people who had already done so much to help get the word out.

So instead I’m going to give away a second Kindle Fire.

Here’s the deal:

  1. Spread the word in the next week! Send people to this blog post or the Avogadro Corp page on Amazon. Here are some ideas: Facebook “like”, Facebook sharing, retweets, Twitter, e-mail, e-mail signature, blog posts, or a review if you’ve already read it. You can sing about it from street corners too, but this may get you funny looks. (Please stick to appropriate sharing to audiences who will appreciate learning about a good book. I don’t want to encourage spammy behavior.)
  2. By 9am PST on Dec. 31 (ya know, the last day of the year), leave a comment on this blog post telling me what you did. If possible, quantify the impact (clicks, page views, etc.).

I’ll consider the first 20 submissions, if I get that many, and from the 3 that I think did the best job (subjective, I know), I’ll pick one to receive the Kindle Fire. The 2 runners-up will receive a $25 Amazon gift card. Void where prohibited, robots and artificial intelligences under 21 not allowed, no prize awarded if the AI apocalypse occurs before the contest ends, etc., etc. Recipients will be announced within a few days after the 31st. (If you don’t want the Kindle Fire, you can donate it to a school or non-profit.)

Most of all, I hope you enjoy Avogadro Corp.

Thanks,
Will

For those of you who haven’t heard: after a two-year journey, my novel Avogadro Corp: The Singularity Is Closer Than It Appears is published!

Avogadro Corp is a techno-thriller about the accidental creation of an artificial intelligence at the world’s largest Internet company, and the subsequent race to contain it, as it starts to manipulate people, transfer funds, and arm itself.

It’s available in paperback, for the Kindle, and in epub format for a variety of other e-readers.

If you’ve already bought a copy – THANK YOU! It means so much to me. 

If not, I hope you’ll buy a copy and enjoy it, or consider giving it as a gift to someone who loves techno-thrillers or science fiction.

The Next Step

Writing Avogadro Corp was incredibly fun, and the path to publication was a great learning experience. But now that it’s published, the next challenge I face is to help it rise above the noise of thousands of other books. 

Here are just a few of the things that help a book get noticed: sharing it on Facebook or Twitter, buying it or giving it as a gift, reviewing it on Amazon, blog posts that link to it, and emails to friends about it.

Anything you can do to help support my book would be tremendous!

Bonus: A Free Kindle Fire

If you don’t yet have a Kindle Fire and would like one for free, I’m giving one away. This is a thank you for all the feedback and help I received over the last six months. (As usual, I was inspired by Tim Ferriss to do this, and in fact won the Kindle Fire from Tim in his own book promotion contest.)
Here’s the deal:
  1. Spread the word in the next 7 days! Send people to this blog post or the Avogadro Corp page on Amazon. Here are some ideas: Facebook “like”, Facebook sharing, retweets, Twitter, e-mail, e-mail signature, blog posts, or a review if you’ve already read it. You can sing about it from street corners too, but this may get you funny looks.
  2. By 9am PST on Dec. 18 (next Sunday), leave a comment on this blog post telling me what you did. If possible, quantify the impact (clicks, page views, etc.).
I’ll consider the first 50 submissions, if I get that many, and from the 5 that I think did the best job (subjective, I know), I’ll pick one to receive the Kindle Fire. The 4 runners-up will receive a $25 Amazon gift card. Void where prohibited, robots and artificial intelligences under 21 not allowed, no prize awarded if the AI apocalypse occurs before the contest ends, etc., etc. Winners will be announced next week.

Again, even if you don’t want the Kindle Fire, anything you can do to help promote Avogadro Corp is still awesome!

Resources

If you take this on, here are a few links that might help:
Happy holidays!

The Singularity Institute has posted a comprehensive list of all Singularity Summit talks with videos of each talk. A few that I’m particularly interested in:

The Ethics of Artificial Intelligence
Petra Mitchell, Amy Thomson, Daniel H. Wilson, David W. Goldman
OryCon 33
  • Do we have the right to turn off an artificial intelligence?
  • Amy: The Buddhist definition of sentience: is it able to suffer?
  • Mitchell: Can you turn it off and turn it back on again? Humans can’t be turned on.
  • Wilson: If you’ve got an AI in a box, and it’s giving off signs that it’s alive, then it’s going to tell you what it wants.
  • In Star Trek, Data has an on/off switch, but he doesn’t want people to know about it.
  • If IBM spends 3/4 of a billion dollars making an AI, can they do anything they want with it?
  • Parents have a lot of rights over their children, but a lot of restrictions too.
  • AI isn’t simply going to arise on the internet. IBM is building the most powerful supercomputer on the planet to achieve it. It’s not going to be random bits. 
  • Evolutionary pressure is one way to evolve an artificial intelligence.
  • We can use genetic algorithms. We’re going to have algorithms compete, we’re going to kill the losers, we’re going to force the winners to procreate. If we were talking about humans, this would be highly unethical.
    • So where to draw the boundary?
    • Daniel: You are being God. You’ve defined the world, you raise a generation, they generate their answers, they’ve lived their life.
  • There may be people who become attached to illusory intelligences – like Siri – that aren’t really intelligent. This will happen long before anything truly intelligent emerges.
  • Turing Tests
    • The [something] prize happens every year. Each year they get a little closer. A higher percentage of people believe the chatbot is human. 
    • It’s reasonable to believe that in ten years, it won’t be possible to distinguish.
  • Other ethical issues besides suffering:
    • If you have an AI, and you clone it, and then you have two that are conscious, then you shut one off – did you kill it?
  • How do you build robots that will behave ethically?
    • Not how do we treat them, but how do they treat us?
  • Now we have issues of robots that are armed and operating autonomously. Unmanned Aerial Vehicles. 
  • We already have autonomous robots on military ships that defend against incoming planes and missiles. And back in 1988, such a system shot down an Iranian passenger jet, killing 290 passengers.
  • When the stock market tanks, whose fault is it? The AI’s? The humans’? It happens faster than the humans can react.
  • Neural nets are black boxes. Decision trees are more obvious.
  • Asimov spent most of his time writing stories about how defining laws didn’t work.
  • We can’t simply say “Don’t kill humans”.
  • We have dog attacks, but we don’t ban dogs.
  • Tens of thousands die every year in car accidents, but we don’t eliminate cars.
  • We’ll put up with a lot of loss if we get benefit.
  • Japan is desperately trying to come up with robotic nursing aides because they don’t have enough people to do it.
    • Thomson: A robot nursing aide is an abomination. These people are lonely.
    • Wilson: But what if the alternative is no care, or inferior care?
    • What happens when someone leaves their robot a million dollars?
  • What happens when the robot butlers of the world, incredibly successful, and deployed everywhere, all go on strike?
    • Wilson: you design the hardware so it can’t do that.
  • If you are designing a robotic pet dog, you have an obligation to design it so it responds like a dog and inspires moral behavior because you don’t want kids to grow up thinking you can mistreat your dog, stick it in the microwave, etc.
  • Questions
    • Q: The internet developing sentience. How would we recognize that it is sentient?
      • It’ll be exhibiting obvious behavior before we get there.
    • Q: The factory of Jeeves. What if we have a factory of lovebots. And one lovebot says “I don’t want to do this anymore.”
      • There were a huge number of women in the South who objected to slavery because their husbands slept with the slaves. There will be lots of opposition to lovebots.
      • It would be a great story to have a lovebot show up at a battered women’s shelter.
    • Q: The benefits accrued to a different party: the nursing robots may not be loved by the patients, but they will be loved by the administrators. 
    • Q: You have billions of dollars being poured into autonomous trading systems. They are turning them over every year. Evolutionary pressure to make better and better systems. 

Playing God: Apocalyptic Storytelling
EE Knight, Daniel H. Wilson, Victoria Blake
OryCon 33
  • Panel
    • Daniel – background in robotics. Wrote How to Survive a Robot Uprising, then Robopocalypse.
    • Victoria – publisher of Underland Press. Science fiction, fantasy, and horror. We haven’t yet published a straight apocalypse novel.
    • EE Knight – vampire series, post-apocalypse; dragon series. The last book of the dragon series is apocalyptic.
  • Apocalypse
    • definition: revelation
  • Favorite scenarios
    • Daniel: as a society, we’re totally enmeshed in technology. if you take the technology away, or if you turn the technology against us, that’s really fun. it tears the world around. it explores how we depend on it, and what we get out of it.
    • Victoria: apocalypse stories have to choose where they are situated: a year after the event, 50 years after the event. that choice interests me. 
    • One generation’s luxuries become another generation’s necessities, and then for the next generation they become unneeded again. E.g., a post-apocalyptic society wouldn’t say “we need electricity for lights” – they would just go to bed when it gets dark.
    • apocalypse is just a 5 minute event, and then it’s over.
  • anything done to death?
    • No… it just needs to be done well.
    • zombie apocalypse is done to death… and yet, we still love to read them.
  • Daniel
    • when I read it, I want to know who the bad guys are.
    • apocalypse scenarios show us what people are made of. people stand up. heroes are forged.
    • but when you get into things like people roasting babies on spits… that’s too contrived and too pointless. 
    • i want redemption. the apocalypse is about starting over clean. are we inherently good or bad? do we hunt and kill each other or work together
  • When you blow up whole worlds – movies like to do that – you still need to show the few people, which is what people care about.
    • You can focus too much on one person, or zoom out too far.
  • Zone One
    • literary apocalypse novel
    • no plot
    • three days
    • the end happens at the end.
  • The interesting part is seeing how people survive the scenarios.
  • The little details are what make stuff. 
  • Knight: I really like On The Beach. I do like the scenarios where everyone dies.
    • It’s a resonant book about a whole society where everyone knows when they are going to die.
  • Justin Cronin’s The Passage
    • the book is divided into thirds.
    • first third is the moment of the apocalypse
    • second third shoots forward in time dozens of years: rebuilding
    • third third
  • Q: Is it still an apocalypse novel if the apocalypse occurred 50 or 100 or 200 years ago?
    • Seems like yes.
    • Q: So what are the defining points of an apocalypse story?
    • Daniel: You need to establish the context. You have to have the world. You don’t need to do that if it’s Poland in WW2, because we know it. But if it’s a different world, you have to establish it. Then once you build it, you rip it apart.
    • Knight: It’s the story of who lives and who dies. People have to make a choice of whether to live or die. 
  • It’s almost like running a simulation: let’s run five people through the simulation and see who lives and who dies.
  • There are lots of real stories about apocalypses: every culture that died as a result of English colonization had an apocalypse. Maybe when you are living through one, you don’t need to write about it.
  • Books discussed
  • If you want to research this stuff, you can live in places that give that feel. Go to South Africa.
  • Most Indian Reservations are post-apocalyptic societies.
  • Daniel: Used Native American background. Loved having cowboys and robots.
  • Q: Does a slow-motion apocalypse qualify? E.g. water wars, rising water levels. Is that an apocalypse, or just science fiction?
    • thermodynamics says everything is in growth or decline.
  • Seems like two types of stories:
    • the actual apocalypse: surviving
    • the post-apocalypse: rebuilding
  • In Orwell’s 1984, you have a protagonist trying to figure out what it was like before.
  • If you have an apocalypse scenario, what is the long story arc? To get across the street? To get to the hospital? Then what? There’s less to explore.
  • Apocalypse novels are exploration of current society’s fears: environment, radiation, government, cold war, robots.
  • Mockingbird – about robots that feed humans birth control drugs.
  • Kurt Vonnegut has a short story in which all people carry these little radios that tell them what to do all the time.
  • Ray Bradbury’s 
  • When you take away all the people, you take away all the meaning.
  • The Sparrow – unintended consequences of an anthropological mission.
  • Robopocalypse: 
    • We barely give each other any rights. We’re highly unlikely to give robots any rights. We’ve never had to deal with another sentient species, let alone a superior sentience.
  • Q: To Victoria Blake: What are you looking for in an apocalypse novel in as a publisher?
    • Blake: It just has to be done well.
  • Q: What’s going to be the next big apocalypse theme?
    • Technology
    • Robots
    • Home mortgages
    • Society collapsing under its own weight
      • The domino effect, the effect of only 3 days of food in new york city.
    • Drought

Great article on slate.com about the risks of robots:

President Obama visited Carnegie Mellon University’s National Robotics Engineering Center to announce up to $70 million to fund the National Robotics Initiative. In his remarks, Obama quipped, “You might not know this, but one of my responsibilities as commander-in-chief is to keep an eye on robots. And I’m pleased to report that the robots you manufacture here seem peaceful—at least for now.”

We all love a good robot-apocalypse joke. After IBM’s Jeopardy!-playing computer Watson beat the game show’s reigning human champions, Fox News declared, “Our robot overlord isn’t named HAL or SkyNET—it’s Watson.” The same jokes cropped up in 2007 when robots began to take the place of child jockeys on the camel-racing circuit, learned to juggle, panhandle, and buy scones. But not many people actually believe this is a threat. For all their advances, robots are still generally able to execute only those tasks that they are specifically programmed to carry out. But as the speed of robot advances increases—and with Obama’s new National Robotics Initiative, the developments will, he hopes, come that much quicker—there are genuine robot-safety discussions that we need to have—not about them working too well and taking over civilization, but about them not working well enough.

Hop over and read the whole article.

From a new website called Intelligence Explosion (launched by the Singularity Institute):

Every year, computers surpass human abilities in new ways. Machines can now prove theorems, detect underwater mines, play chess and Jeopardy, and even do original science.

One day, we may design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. This could continue in a positive feedback loop such that the machine quickly becomes vastly more intelligent than the smartest human being on Earth: an ‘intelligence explosion’ resulting in a machine superintelligence.

With great intelligence comes great power. It is not superior strength or perception that led humans to dominate this planet, but superior intelligence. A machine that is more intelligent than all of humanity would have unprecedented powers to reshape reality in pursuit of its goals, whatever they are. 

The site provides both scholarly and popular articles to help explain the intelligence explosion, popularly known as the technological singularity.

Please vote for my panel on Artificial Intelligence at SXSW: Terminator or Wall-E: Predicting the Rise of AI

The questions we’ll be tackling: When and how will human-level artificial intelligence emerge? Will it kill us all like the Terminator, or protect the planet like Wall-E? Daniel Wilson, author of Robopocalypse (being filmed by Steven Spielberg), will be on the panel, along with the brilliant Chris Robson.

Your votes help decide which of the over 3,000 proposed talks get accepted. Last year you helped my talk on innovation get accepted. Will you please vote again this year?

Thanks,
William Hertling

Here’s how the future robots, er, workers, um… citizens of the world will be trained. In the video below, robots are learning how to pick up objects by being fed a continuous series of unfamiliar objects on a conveyor belt. The robot must evaluate different strategies for picking up the objects.