When David Walton asked me to read a draft of his novel THREE LAWS LETHAL, I said yes without hesitation. The mixture of self-driving cars, artificial intelligence, and tech start-ups was obviously right up my alley.

I was even more delighted when I read the draft and found a compelling, thoughtful, and philosophical science fiction thriller about what it means for AI to be alive. While reading, I frequently stopped to screenshot passages I loved and send them to David.

I’ve been waiting excitedly for this book to become available, and now it is. Buy a copy today — you’ll love it.

I asked David to write a guest post for my blog, which you’ll find below.

How might an AI develop consciousness?

It might be the most important question on the modern philosopher’s list of unanswered questions, and it’s certainly the most fascinating. Will Hertling proposed one possible avenue in AVOGADRO CORP: through algorithms developed to improve human communication. In my new novel THREE LAWS LETHAL, I do it through self-driving cars.

We all know self-driving cars are coming; it’s just a matter of how many problems we manage to trip over on the way there. THREE LAWS LETHAL embraces this future in all of its glory: the life-and-death choices of the Trolley Problem, lawsuits and human fault, open source vs. copyright, the threat of hacking, and government regulation.  But all that is just a warm-up for the main event: the development of a conscious artificial mind.

How does a mind develop? The same way it always has: through evolution.

Naomi Sumner, programmer extraordinaire, creates a virtual world to train AIs. Those who perform well in the game world survive, allowing them to reproduce — spawn new AIs similar to themselves. As thousands of generations pass, the AIs not only become incredibly good at the self-driving game, they also develop some surprising emergent behavior, like circumventing the limits on their memory footprint.

They’re very smart, but still not conscious. A few more steps are required to reach that point, steps none of the characters anticipate or plan for. Ultimately, it is the training world itself that becomes self-aware, and all the AI actors inside it are merely elements of its psyche.

But every invention in history, sooner or later, is turned into a weapon. UAVs, drones, and missiles can benefit from self-driving technology as well, especially when trained through war-simulation game play. So what happens when part of this infant conscious mind is partitioned off and trained to kill?

You’ll have to read THREE LAWS LETHAL to find out…


David Walton is a software engineer with Lockheed Martin by day and the father of eight children by night. Since he doesn’t have time to write novels, he trained a world full of AIs to do it for him.

I originally wrote these ten musings on AI risks as the first half of a talk I was planning to give at the Rise of AI conference in Berlin next month. Unfortunately, plans for speaking at the conference fell through. The speaking points have been languishing in a document since then, so I figured I’d share them as a blog post while they’re still relevant. Please excuse any errors; this was just a rough draft.

10. There’s a balance between AI risk and AI benefits.

There’s too much polarization on AI risks. One camp says AI is going to be the end of all life and that we must stop all development immediately. The other camp says AI poses no risk, so let’s go full speed ahead.

Neither statement is totally true.

AI poses some risks, as all things do. The first step to being able to have a discussion about those risks is to admit that they exist. Then we can have a more nuanced and educated conversation about those risks.

However, no matter how severe those risks are, there’s no way we can stop all AI development, because:

9. There’s no way to put the AI genie back in the bottle.

There’s too much economic and military advantage to artificial intelligence to stop AI development.

On a government level, no government would give up the advantages it could get. When we have nation-states hacking and surveilling each other with little to no regard for laws, agreements, or ethics, there’s no way we could get them to limit AI development when it could give them an advantage over others.

On a corporate level, no corporation would willingly give up the economic advantages of artificial intelligence, either as a provider of AI, with the revenue it stands to make, or as a consumer of AI, gaining efficiencies in manufacturing, business, personnel management, or communications.

Lastly, on an individual level, we cannot stop people from developing and releasing software. We couldn’t stop bitcoin, or hackers, or malware. We’re certainly not going to stop AI.

8. We must accelerate the development of safeguards, not slow the development of AI.

Because we can’t stop the development of AI, and because AI has many risks, the only option we have is to accelerate the development of safeguards, by thinking through risks and developing approaches to address them.

If a car had only an engine and wheels, we wouldn’t start driving it. We need, at a minimum, brakes and a steering wheel. Yet little investment is being made into mitigating risks with basic safeguards.

7. Manual controls are the most elementary form of safeguard.

Consider the Google Car. The interior has no steering wheel, no brake, no gas pedal. It makes sense that we would take out what isn’t needed.

But what happens if GPS goes down or if there’s a Google Car virus or anything else that renders the self-driving ability useless? Then this car is just a hunk of plastic and metal.

What if it isn’t just this car, but all cars? Not just all cars, but all trucks, including delivery trucks? Now suddenly our entire transportation infrastructure is gone, and along with it, our supply chains. Businesses stop, people can’t get necessary supplies, and they eventually starve.

It’s not just transportation we depend on. It’s payment systems, the electrical grid, and medical systems as well.

Of course, manual controls have a cost. Keeping people trained and available to operate buses and planes and medical systems has a cost. In the past, when we had new technical innovations, we didn’t keep the old tools and knowledge around indefinitely. Once we had automatic looms, we didn’t keep people working manual ones.

But one key difference is the mode of failure. If a piece of machinery fails, it’s just that one instance. If we have a catastrophic AI event, whether it’s a simple crash, unintentional behavior, or actual malevolence, it has the potential to affect every instance running that AI. It’s not one self-driving car breaking; it’s potentially all self-driving vehicles failing simultaneously.

6. Stupidity is not a form of risk mitigation.

I’ve heard people suggest that limiting AI to a certain level of intelligence or capability is one way to ensure safety.

But let’s imagine this scenario:

You’re walking down a dark street at night. Further down the block, you see an ominous-looking figure who you worry might mug you or worse. Do you think to yourself, “I hope he’s stupid!”?

Of course not. An intelligent person is less likely to hurt you.

So why do we think that crippling AI can lead to good outcomes?

Even stupid AI has risks: AI can crash the global stock market, cripple the electrical grid, or make poor driving or flying decisions. All other things being equal, we would expect a more intelligent AI to make better decisions than a stupid one.

That being said, we need to embody systems of ethical thinking in AI.

5. Ethics is a two-way street.

Most often, when we think about ethics and AI, we think about guiding the behavior of AI towards humans.

But what about the behavior of humans towards AI?

Consider a parent and child standing together outside by their family car. The parent is frustrated because the car won’t start, and they kick the tire of the car. The child might be surprised by this, but they likely aren’t going to be traumatized by it. There’s only so much empathy they have for an inanimate machine.

Now imagine that the parent and child are standing together by their family dog. The dog has just had an accident on the floor in the house. The parent kicks the dog. The child will be traumatized by this behavior, because they have empathy for the dog.

What happens when we blur the line? What if we had a very realistic robotic dog? We could easily imagine the child being very upset if their robotic dog was attacked, because even though we adults know it is not alive, it will be alive for the child.

I see my kids interact with Amazon Alexa, and they treat her more like a person. They laugh at her, thank her, and interact with her in ways that they don’t interact with the TV remote control, for example.

Now what if my kids learned that Alexa was the result of evolutionary programming, and that thousands or millions of earlier versions of Alexa had been killed off in the process of making her? How would they feel? How would they feel if their robotic dog got recycled at the end of its life? Or if it “died” when they didn’t pay the monthly fee?

It’s not just children who are affected. We all have relationships with inanimate objects to some degree, objects we treat with reverence. That attachment will grow as those objects appear more intelligent.

My point is that how we treat AI will affect us emotionally, whether we want it to or not.

(Thanks and credit to Daniel H. Wilson for the car/dog example.)

4. How we treat AI is a model for how AI will treat us.

We know that if we want to teach children to be polite, we must model politeness. If we want to teach empathy, we must practice empathy. If we want to teach respect, we must be respectful.

So how we treat AI is critically important for how AI sees us. Now, clearly I’m talking about AGI, not narrow AI. But let’s say we have a history of using genetic programming techniques to breed better-performing AI. The implication is that we kill off thousands of programs to obtain one good program.
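
To make that concrete, here’s a minimal toy sketch of the kind of evolutionary loop I mean. Everything in it (the fitness function, the population size, the survival rate) is invented for illustration rather than taken from any real system:

```python
import random

def fitness(genome):
    # Toy stand-in for "how well does this program perform?"
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=1000, genome_len=10, generations=100, survival_rate=0.05):
    population = [[random.random() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep only the top few percent...
        population.sort(key=fitness, reverse=True)
        survivors = population[:int(pop_size * survival_rate)]
        # ...everything else is discarded ("killed off"), and the next
        # generation is bred from mutated copies of the survivors.
        population = [[g + random.gauss(0, 0.01) for g in random.choice(survivors)]
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
# On the order of a hundred thousand candidate programs were created and
# discarded to arrive at this single "good" one.
```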

If we run AI programs at our whim, and stop them or destroy them when we’re finished with them, we’re treating them in a way that would be personally threatening to a sufficiently advanced AGI.

It’s a poor ethical model for how we’d want an advanced AI to treat us.

The same goes for other assumptions that stem from treating AI as machines, such as assuming an AI would work 24 hours a day, 7 days a week on the tasks we want.

Now we can’t know how AI would want to be treated, but assuming we can treat them like machines is a bad starting point. So we either treat them like we would other humans and accord them similar rights, or better yet, we ask them how they want to be treated, and treat them accordingly.

Historically, though, there are those who aren’t very good at treating other people with the respect and rights they are due. They aren’t very likely to treat AI well, either. That could be dangerous, especially if we’re talking about AI with control over infrastructure or other important resources. We have to become even better at protecting the rights of people, so that we can apply those same principles to protecting the rights of AI, and codify those protections within our system of law.

3. Ethical behavior of AI towards people includes the larger environment in which we live and operate.

If we build artificial intelligence that optimizes for a given economic result, such as running a business to maximize profit, and we embody our current system of laws and trade agreements, then what we’ll get is a system that looks much like the publicly-traded corporation does today.

After all, the modern corporation is a form of artificial intelligence that optimizes for profit at the expense of everything else. It just happens to be implemented as a system of wetware, corporate rules, and laws that insist that it must maximize profit.

We can and must do better with machine intelligence.

We’re the ones building the AI, so we get to decide what we want. We want a system that recognizes that human welfare is more than just the money in our bank accounts, and that it includes free agency, privacy, respect, happiness, and other hard-to-define qualities.

We want an AI that recognizes that we live in a closed ecosystem, and if we degrade that ecosystem, we’re compromising our long-term ability to achieve those goals.

Optimizing for multiple values is difficult for people, but it should be easier for AGI over the long term, because it can evaluate and consider many more options to a far greater depth and at a far greater speed than people ever can.

An AI that simply obeys laws is never going to get us what we need. We can see many behaviors that are legal and yet still harmful.

The problem is not impossible to solve. Ask any ten-year-old child what we should do, and they’ll almost always give you an answer that’s ethically superior to what a corporate CEO will tell you.

2. Over the long run, the ethical behavior of AI toward people must include intent, not just rules.

In the next few years, we’ll see narrow AI solutions to ethical behavior problems.

When an accident is unavoidable, self-driving AI will choose whatever we’ve decided is the best option.

It’s better to hit another car than a pedestrian, because the pedestrian will be hurt more. That’s ethically easy, and we’ll answer questions like that.

More difficult: the unattached adult, or the single mother on whom two children depend?

We can come up with endless variations of the trolley dilemma, and depending on how likely they are, we’ll embody some of them in narrow AI.

But none of that can be generalized to solve other ethical problems.

  • How much carbon can we afford to emit?
  • Is it better to save 500 local manufacturing jobs, or to reduce the cost of the product by half, when the product will make people’s lives better?
  • Better to make a part out of metal, which has certain environmental impacts, or plastic, which has different ones?

These are really difficult questions. Some of them we attempt to answer today with techniques such as lifecycle analysis. AI will do that job far better than we can, conducting lifecycle analyses for many, many decisions.

1. As we get closer to artificial general intelligence, we must consider the role of emotions in decision-making.

In my books, which span 30 years in my fictional, near-future world, AI start out emotionless, but gradually develop more emotions. I thought hard about that: was I giving them emotions because I wanted to anthropomorphize the AI, and make them easier characters to write, or was there real value to emotions?

People have multiple systems for decision-making. We have some autonomic reactions, like jerking away our hand from heat, which happens without involving the brain until after the fact.

We have some purely logical decisions, such as which route to take to drive home.

But most of our decisions are driven or guided by emotion. Love. Beauty. Anger. Boredom. Fear.

It would be a terrible thing if we needed to logically think through every decision: Should I kiss my partner now? Let me think through the pros and cons of that decision…. No, that’s a mostly emotional decision.

Others are a blend of emotion and logic: Should I take that new job? Is this the right person for me to marry?

I see emotions as a shortcut to decision-making, because it would take forever to reach every decision through a dispassionate, logical evaluation of options. And that’s the same reason why we have an autonomic system: to shortcut conscious decision making. I perceive this stove is hot. I perceive that my hand is touching the stove. This amount of heat sustained too long will damage my hand. Damaging my hand would be bad because it will hurt and because it will compromise my ability to do other things. Therefore, I conclude I shall withdraw my hand from the stove.

That’s a terrible approach to resolving a time-critical matter.

Emotions inform or constrain decision making. I might still think through things, but the decision I reach will differ depending on whether I’m angry and scared, or comfortable and confident.

As AI becomes more sophisticated and approaches or exceeds AGI, we should eventually see the equivalent of emotions that automate some lesser decisions and guide other, more complicated ones.

Research into AI emotions will likely be one of the signs that AGI is very, very near.

I read Children of Arkadia, by Darusha Wehm, over the weekend. This was a fascinating book. The setting is a classic of science fiction: a bunch of idealistic settlers embark on creating an idealized society in a space station colony. There are two unique twists: the artificial general intelligences that accompany them have, in theory, the same rights and free will as the humans; and there are no antagonists: no one is out to sabotage society, there’s no evil villain. Just circumstances.

Darusha does an excellent job exploring some of the obvious and not-so-obvious conflicts that emerge. Can an all-knowing, superintelligent AI ever really be on equal footing with humans? How does work get done in a post-scarcity economy? Can even the best-intentioned people, armed with powerful and helpful technology, ever create a true utopia?

Children of Arkadia manages to explore all this and give us interesting and diverse characters in a compact, fun-to-read story. Recommended.

Mark Zuckerberg wrote about how he plans to personally work on artificial intelligence in the next year. It’s a nice article that lays out the landscape of AI developments. But he ends with a statement that misrepresents the relevance of Moore’s Law to future AI development. He wrote (with my added bold for emphasis):

Since no one understands how general unsupervised learning actually works, we’re quite a ways off from building the general AIs you see in movies. Some people claim this is just a matter of getting more computing power — and that as Moore’s law continues and computing becomes cheaper we’ll naturally have AIs that surpass human intelligence. This is incorrect. We fundamentally do not understand how general learning works. This is an unsolved problem — maybe the most important problem of this century or even millennium. Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.

I don’t believe anyone knowledgeable about AI argues that Moore’s Law is going to spontaneously create AI. I’ll give Mark the benefit of the doubt and assume he was trying to be succinct. But it’s important to understand exactly why Moore’s Law is important to AI.

We don’t understand how general unsupervised learning works, nor do we understand much about how human intelligence works. But we do have working examples in the form of human brains. We do not today have the computing power necessary to simulate a human brain. The best brain simulations on the largest supercomputing clusters have been able to approximate about 1% of the brain at 1/10,000th of normal cognitive speed. In other words, current computer processors are roughly 1,000,000 times too slow to simulate a human brain in real time.
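
A back-of-the-envelope sketch of that arithmetic, using the 1% and 1/10,000th figures above and the usual Moore’s Law assumption of a doubling every 18 months:

```python
import math

fraction_of_brain = 0.01      # about 1% of the brain simulated
relative_speed = 1 / 10_000   # at 1/10,000th of real time

shortfall = 1 / (fraction_of_brain * relative_speed)
print(f"Shortfall factor: {shortfall:,.0f}x")            # 1,000,000x

# How long until Moore's Law closes a 1,000,000x gap?
doublings = math.log2(shortfall)                         # about 20 doublings
print(f"Years at one doubling per 18 months: {doublings * 1.5:.0f}")  # about 30
```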

The Wright Brothers succeeded in making the first controlled, powered, and sustained heavier-than-air human flight not because of some massive breakthrough in the principles of aerodynamics (which were well understood at the time), but because engines were growing more powerful, and powered flight became feasible for the first time right around the period in which they were working. They made some breakthroughs in aircraft controls, but even if the Wright Brothers had never flown, someone else would have within a few years. It was breakthroughs in engine technology, specifically the power-to-weight ratio, that enabled powered flight around the turn of the century.

AI proponents who talk about Moore’s Law are not saying AI will spontaneously erupt from nowhere, but that increasing computing processing power will make AI possible, in the same way that more powerful engines made flight possible.

Those same AI proponents who believe in the significance of Moore’s Law can be subdivided into two categories. One group argues we’ll never understand intelligence fully. Our best hope of creating it is with a brute force biological simulation. In other words, recreate the human brain structure, and tweak it to make it better or faster. The second group argues we may invent our own techniques for implementing intelligence (just as we implemented our own approach to flight that differs from birds), but the underlying computational needs will be roughly equal: certainly, we won’t be able to do it when we’re a million times deficient in processing power.

Moore’s Law gives us an important cadence to the progress in AI development. When naysayers argue AI can’t be created, they’re looking at historical progress in AI, which is a bit like looking at powered flight prior to 1850: pretty laughable. The rate of AI progress will increase as computer processing speeds approach that of the human brain. When other groups argue we should already have AI, they’re being hopelessly optimistic about our ability to recreate intelligence a million times more efficiently than nature was able to evolve.

The increasing speed of computer processors predicted by Moore’s Law, and the crossover point where processing power aligns with the complexity of the human brain, tell us a great deal about the timing of when we’ll see advanced AI on par with human intelligence.

I gave a talk in the Netherlands last week about the future of technology. I’m gathering together a few resources here for attendees. Even if you didn’t attend, you may still find these interesting, although some of the context will be lost.

Previous Articles

I’ve written a handful of articles on these topics in the past. Below are three that I think are relevant:

Next Ten Years

Ten to Thirty Years

Each time I’ve had a new novel come out, I’ve done an article about the technology in the previous novel. Here are two of my prior posts:

Now that The Turing Exception is available, it is time to cover the tech in The Last Firewall.

As I’ve written about elsewhere, my books are set at ten-year intervals, starting with Avogadro Corp in 2015 (gulp!) and ending with The Turing Exception in 2045. So The Last Firewall is set in 2035. For this sort of timeframe, I extrapolate based on underlying technology trends. With that, let’s get into the tech.

Neural implants

If you recall, I toyed with the idea of a neural implant in the epilogue to Avogadro Corp. That was done for theatrical reasons, but I don’t consider them feasible in the current day, in the way that they’re envisioned in the books.

Extrapolated computer sizes through 2060

I didn’t anticipate writing about neural implants at all. But as I looked at various charts of trends, one that stood out was the physical size of computers. If computers kept decreasing in size at their current rate, then an entire computer, including the processor, memory, storage, power supply, and input/output devices, would be small enough to implant in your head.

What does it mean to have a power supply for a computer in your head? I don’t know. How about an input/output device? Obviously I don’t expect a microscopic keyboard. I expect that some sort of appropriate technology will be invented. Like trends in bandwidth and CPU speeds, we can’t know exactly what innovations will get us there, but the trends themselves are very consistent.

For an implant, the logical input and output is your mind, in the form of tapping into neural signaling. The implication is that information can be added, subtracted, or modified in what you see, hear, smell, and physically feel.

Terminator HUD

At the most basic level, this could involve “screens” superimposed over your vision, so that you could watch a movie or surf a website without the use of an external display. Information can also be displayed mixed with your normal visual data. There’s a scene where Leon goes to work at the institution, and anytime he focuses on anyone, a status bubble appears above their head explaining whether they’re available and what they’re working on.

Similarly, information can be read from neurons, so that the user might imagine manipulating whatever’s represented visually, and the implant can sense this and react accordingly.

Although the novel doesn’t go into it, there’s a training period after someone gets an implant. The training starts with the user observing a series of photographs on an external display. The implant monitors neural activity and gradually learns which neurons are responsible for what in that person’s brain. Later training asks the user to attempt to interact with projected content while neural activity is again read.
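
As a toy illustration of that calibration idea (the simulated “neural” responses and the nearest-pattern decoder below are invented for this post; a real implant would presumably do something far more sophisticated):

```python
import random

def simulate_neural_response(stimulus_id, n_channels=64):
    # Pretend each stimulus evokes a characteristic pattern plus noise.
    random.seed(stimulus_id)
    pattern = [random.gauss(0, 1) for _ in range(n_channels)]
    random.seed()
    return [p + random.gauss(0, 0.3) for p in pattern]

# "Training": show known photographs and record the response to each.
stimuli = ["face", "house", "text", "tool"]
centroids = {}
for i, name in enumerate(stimuli):
    samples = [simulate_neural_response(i) for _ in range(20)]
    centroids[name] = [sum(channel) / len(samples) for channel in zip(*samples)]

# "Use": decode a new, unlabeled response by finding the closest learned pattern.
def decode(response):
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(response, centroid))
    return min(centroids, key=lambda name: distance(centroids[name]))

print(decode(simulate_neural_response(1)))  # expected: "house"
```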

My expectation is that each person develops their own unique way of interacting with their implant, but there are many conventions in common. For example, focusing on a mental image of a particular person (or, if an image can’t be formed, on their name printed on paper) would bring up options for interacting with them.

People with implants can have video calls. The ideal way is still with a video camera of some kind, but it’s not strictly necessary. A neural implant will gradually train itself, comparing neural signaling with external video feedback and correlating neural signals with facial expressions, until it can build up a reasonable facsimile of a person. Once that point is reached, a reasonable-quality video stream can be generated on the fly from that residual self-image.

Such a video stream can be manipulated, however, to suppress emotional giveaways if the user desires.

Cochlear implants, mind-controlled robotic arms, and the DARPA cortical modem convince me that this is one area of technology where we’re definitely on track. I feel highly confident we’ll see implants like those described in The Last Firewall in roughly this timeframe (the 2030s). In fact, I’m more confident about this than I am about strong AI.

Cat’s Implant

Catherine Matthews has a neural implant she received as a child. It was primarily designed to suppress her epileptic seizures by acting as a form of active noise cancellation for synchronous neuronal activity.

However, Catherine also has a number of special abilities that most people do not have: the ability to manipulate the net on par with or even exceeding the abilities of AI. Why does she have this ability?

The inspiration for this came from my time as a graduate student studying computer networking. Along with other folks at the University of Arizona, studying under Professor Larry Peterson, we developed object-oriented network protocol implementations on a framework called x-kernel.

These days we pretty much all have root access on our own computers, but back in the early 90s in a computer science lab, most of us did not.

Because we did not have root access on the computers we used as students, we were restricted to running x-kernel in user mode. This meant that instead of our network protocols running on top of Ethernet, they ran on top of IP. In effect, we ran a stack that looked like TCP/IP/IP. We could simulate network traffic between two different machines, but we couldn’t actually interact with non-x-kernel protocol stacks on other machines.

Graph of IPSEC implemented in x-kernel on Linux. From after my time at UofA.

In 1994 or so, I ported x-kernel to Linux. Finally I was running x-kernel on a box that I had root access on. Using raw socket mode on Unix, I could run x-kernel user-mode implementations of protocols and interact with network services on other machines. All sorts of graduate school hijinks ensued. (Famously we’d use ICMP network unreachable messages to kick all the computers in the school off the network when we wanted to run protocol performance tests. It would force everyone off the network for about 30 seconds, and you could get artificially high performance numbers.)
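
For readers who haven’t played with raw sockets, here’s a minimal Python sketch of the kind of access root gives you: hand-crafting an ICMP echo request and sending it below the normal TCP/UDP stack. It illustrates raw sockets in general rather than x-kernel itself, and it needs root to run:

```python
import os
import socket
import struct

def icmp_checksum(data: bytes) -> int:
    # Standard Internet checksum: ones'-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def send_ping(host):
    # A raw socket bypasses the kernel's TCP/UDP handling, so user-mode
    # code can speak the protocol directly. This requires root.
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    sock.settimeout(1.0)
    ident = os.getpid() & 0xFFFF
    payload = b"x-kernel says hello"
    header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)   # type 8 = echo request
    checksum = icmp_checksum(header + payload)
    header = struct.pack("!BBHHH", 8, 0, checksum, ident, 1)
    sock.sendto(header + payload, (host, 0))
    reply, addr = sock.recvfrom(1024)
    print(f"Got {len(reply)} bytes of ICMP back from {addr[0]}")

send_ping("127.0.0.1")
```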

In the future depicted by the Singularity series, one of the mechanisms used to ensure that AI do not run amok is that they run in something akin to a virtualization layer above the hardware, which prevents them from doing many things, and allows them to be monitored. Similarly, people with implants do not have access to the lowest layers of hardware either.

But Cat does. Her medical-grade implant predates the standardized implants created later. So she has the ability to send and receive network packets that most other people and AI do not. From this stems her unique abilities to manipulate the network.

Mix into this the fact that she’s had her implant since childhood, and that she routinely practices meditation and qi gong (which changes the way our brains work), and you get someone who can do more than other people.

All that being said, this is science fiction, and there’s plenty of handwavium going on here, but there is some general basis for the notion of being able to do more with her neural implant.

This post has gone on pretty long, so I think I’ll call it quits here. In the next post I’ll talk about transportation and employment in 2035.

Thanks to Elon Musk’s fame and his concerns about the risks of AI, it seems like everyone’s talking about AI risk.

One difficulty that I’ve noticed is agreement on exactly what risk we’re talking about. I’ve had several discussions in just the last few days, both at the Defrag conference in Colorado and online.

One thing I’ve noticed is that the risk naysayers tend to say, “I don’t believe there is risk due to AI.” But when you probe them further, what they are often saying is, “I don’t believe there is existential risk from a Skynet scenario due to a super-intelligence created from existing technology.” The second statement is far narrower, so let’s dig into its components.

Existential risk is defined by Nick Bostrom as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Essentially, we’re talking about either the extinction of humankind, or something close to it. However, most of us would agree that there are very bad outcomes that are nowhere near an existential risk. For example, about 4% of the global population died in WWII. That’s not an existential risk, but it’s still horrific by anybody’s standards.

Runaway AI, accelerating super-intelligence, and hard takeoff are all terms that refer to the idea that once an artificial intelligence is created, it will recursively improve its own intelligence, becoming vastly smarter and more powerful in a matter of hours, days, or months. We have no idea whether this will happen (I don’t think it’s likely), but the absence of a hard takeoff doesn’t mean that an AI would be stagnant or lack power compared to people. There are many ways even a modest AI, with creativity, motivation, and drive equivalent to a human’s, could affect a great deal more than a human could:

  • Humans can type 50 words a minute. AI could communicate with tens of thousands of computers simultaneously.
  • Humans can drive one car at a time. AI could fly all the world’s airplanes simultaneously.
  • Humans can trip one circuit breaker. AI could trip all the world’s circuit breakers.
  • Humans can reproduce a handful of times over the course of a lifetime. AI could reproduce millions of times over the course of a day.
  • Humans evolve over the course of tens of thousands of years or more. Computers become 50% more powerful each year.

So for many reasons, even if we don’t have a hard takeoff, we can still have AI actions and improvement that occur far faster, and with far wider effect than we humans are adapted to handling.

Skynet scenario, Terminator scenario, and killer robots are terms that refer to the idea that AI could choose to wage open warfare on humans using robots. This is just one of many possible types of risk. Other ways that AI could harm us include deliberate mechanisms, like manipulating us by controlling the information we see, killing off particular people who pose threats, or extorting us to deliver services they want. This idea of manipulation is important, because while death is terrible, the loss of free will is pretty bad too.

Frankly, most of those seem silly or unlikely compared to unintentional harm that AI could cause: the electrical grid could go down, transportation could stop working, our home climate control could stop functioning, or a virus could crash all computers. If these don’t seem very threatening, consider…

  • What if one winter, for whatever reason, homes wouldn’t heat? How many people would freeze to death?
  • Consider that Google’s self-driving car doesn’t have any manual controls. It’s the AI or it’s no-go. More vehicles will move in this direction, especially all forms of bulk delivery. If all transportation stopped, how would people in cities get food when their 3-day supply runs out?
  • How long can those city dwellers last without fresh water if pumping stations are under computer control and they stop?

Existing technology: Some will argue that because we don’t have strong AI (e.g., human-level intelligence or better) now, there’s no point in even talking about risk. However, this sounds like “Let’s not build any asteroid defenses until we clearly see an asteroid headed for Earth.” By then it’s far too late. Similarly, once the AI is here, it’s too late to talk about precautions.

In conclusion, if you have a conversation about AI risks, be clear what you’re talking about. Frankly, all of humanity being killed by robots under the control of a super-intelligent AI doesn’t even seem worth talking about compared to all of the more likely risks. A better conversation might start with a question like this:

Are we at risk of death, manipulation, or other harm from future AI, whether deliberate or accidental, and if so, what can we do to decrease those risks?

World’s shortest Interstellar review: Go see this movie right now.

Slightly longer review:

I got advance screening tickets to see Interstellar in 35mm at the Hollywood Theatre in Portland. I didn’t know that much about the movie, other than seeing the trailer and thinking it looked pretty good.

In fact, it was incredible. The trailer does not do it justice. I don’t want to give away the plot of the movie, so I’m not going to list all of the big ideas in this movie, but Erin and I went through the list on the drive home, and it was impressive. Easily the best movie I’ve seen in quite a while.

And this is one that really deserves being seen on a big screen, in a good theatre, on 35mm film if possible.

Ramez Naam, author of Nexus and Crux (two books I enjoyed and recommend), has recently put together a few guest posts for Charlie Stross (another author I love). The posts are The Singularity Is Further Than It Appears and Why AIs Won’t Ascend in the Blink of an Eye.

They’re both excellent posts, and I’d recommend reading them in full before continuing here.

I’d like to offer a slight rebuttal and explain why I think the singularity is still closer than it appears.

But first, I want to say that I very much respect Ramez, his ideas and writing. I don’t think he’s wrong and I’m right. I think the question of the singularity is a bit more like Drake’s Equation about intelligent extraterrestrial life: a series of probabilities, the values of which are not known precisely enough to determine the “correct” output value with strong confidence. I simply want to provide a different set of values for consideration than the ones that Ramez has chosen.

First, let’s talk about definitions. As Ramez describes in his first article, there are two versions of singularity often talked about.

The hard takeoff is one in which an AI rapidly creates newer, more intelligent versions of itself. Within minutes, days, or weeks, the AI has progressed from a level 1 AI to a level 20 grand-wizard AI, far beyond human intellect and anything we can comprehend. Ramez doesn’t think this will happen for a variety of reasons, one of which is the exponential difficulty involved in creating successively more complex algorithms (the argument he lays out in his second post).

I agree. I don’t see a hard takeoff. In addition to the reasons Ramez stated, I also believe it takes so long to test and qualify candidates for improvement that successive iteration will be slow.

Let’s imagine the first AI is created and runs on an infrastructure of 10,000 computers. Let’s further assume the AI is composed of neural networks and other similar algorithms that require training on large pools of data. The AI will want to test many ideas for improvements, each requiring training. The training will be followed by multiple rounds of successively more comprehensive testing: first the AI needs to see if the algorithm appears to improve a select area of intelligence, but then it will want to run regression tests to ensure no other aspect of its intelligence or capabilities is adversely impacted. If the AI wants to test 1,000 ideas for improvements, and each idea requires 10 hours of training, 1 hour of assessment, and an average of 1 hour of regression testing, it would take 1.4 years to complete a single round of improvements. Parallelism is the alternative, but remember that this first AI is likely to be a behemoth, requiring 10,000 computers to run. It’s not possible to get that much parallelism.
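
The arithmetic behind that 1.4-year figure, as a quick sketch (same assumptions as above, with candidate improvements tested one at a time):

```python
ideas = 1_000
hours_per_idea = 10 + 1 + 1   # training + assessment + regression testing

total_hours = ideas * hours_per_idea            # 12,000 hours
years = total_hours / (24 * 365)
print(f"{total_hours:,} hours is about {years:.1f} years per round of improvements")
# Roughly 1.4 years, if the candidates have to be evaluated one at a time.
```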

The soft takeoff is one in which an artificial general intelligence (AGI) is created and gradually improved. As Ramez points out, that first AI might be on the order of human intellect, but it’s not smarter than the accumulated intelligence of all the humans that created it: many tens of thousands of scientists will collaborate to build the first AGI.

This is where we start to diverge. Consider a simple domain like chess-playing computers. Since 2005, chess software running on commercially available hardware has been able to outplay even the strongest human chess players. I don’t have data, but I suspect the number of very strong human chess players is somewhere in the hundreds or low thousands. However, the number of computers capable of running the very best chess-playing software is in the millions or hundreds of millions. The aggregate chess-playing capacity of computers is far greater than that of humans, because the best chess-playing program can be propagated everywhere.

So too, AGI will be propagated everywhere. But I just argued that the first AI will require tens of thousands of computers, right? Yes, except that thanks to Moore’s Law (the observation that computing power tends to double every 18 months), the same AI that required 10,000 computers will need a mere 100 computers ten years later, and just a single computer another ten years after that. Or an individual AGI could run up to 10,000 times faster. That speed-up alone means something different when it comes to intelligence: a single being with 10,000 times the experience, learning, and practice that a human has.
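
Sketching that same Moore’s Law arithmetic (assuming computing power doubles every 18 months, and taking the 10,000-machine starting point above as a given):

```python
initial_machines = 10_000
doubling_months = 18

for years in (0, 10, 20):
    growth = 2 ** (years * 12 / doubling_months)   # roughly 100x per decade
    machines = initial_machines / growth
    print(f"After {years:2d} years: about {machines:,.0f} machine(s), "
          f"or a {growth:,.0f}x speed-up on the original hardware")
```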

Even Ramez agrees that it will be feasible to have destructive human brain uploads approximating human intelligence around 2040: “Do the math, and it appears that a super-computer capable of simulating an entire human brain and do so as fast as a human brain should be on the market by roughly 2035 – 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human’s.”

This is the soft takeoff: from a single AGI at some point in time to an entire civilization of that AGI twenty years later, all running at faster-than-human speeds. A race consisting of an essentially alien intelligence, cohabiting the planet with us. Even if they don’t experience an intelligence explosion as Vernor Vinge described, the combination of fast speeds, aggregate intelligence, and inherently different motivations will create an unknowable future that is likely out of our control. And that’s very much a singularity.

But Ramez questions whether we can even achieve an AGI comparable to a human in the first place. There’s this pesky question of sentience and consciousness. Please go read Ramez’s first article in full, I don’t want you to think I’m summarizing everything he said here, but he basically cites three points:

1) No one’s really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience.

This is a difficulty. One analogy that comes to mind is the history of aviation. For nearly a hundred years prior to the Wright Brothers, heavier-than-air flight was studied, with many different gliders created and flown. It was the innovation of sufficiently powerful, lightweight engines that made heavier-than-air flight practically possible, and which led to rapid innovation. Perhaps we just don’t have the equivalent yet in AI. We’ve got people learning how to make airfoils, control surfaces, and airplane structures, and we’re just waiting for the engine to show up.

We also know that nature evolved sentience without any theory of how to do it. Having a proof point is powerful motivation.

2) There’s a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go?

There’s no lack of incentive. As James Barrat detailed in Our Final Invention, there are billions of dollars being poured into building AGI, both in high-profile projects like the US BRAIN Initiative and Europe’s Human Brain Project, and in countless smaller AI companies and research projects.

There’s plenty of human incentive, too. How many people were inspired by Star Trek’s Data? At a recent conference, I asked attendees who would want Data as a friend, and more than half the audience’s hands went up. Among the elderly, loneliness is a very real issue that could be helped with AGI companionship, and many people might choose an artificial psychologist for reasons of confidence, cost, and convenience. All of these require at least the semblance of opinions.

More than that, we know we want initiative. If we have a self-driving car, we expect that it will use that initiative to find faster routes to destinations, possibly go around dangerous neighborhoods, and take necessary measures to avoid an accident. Indeed, even Google Maps has an “opinion” of the right way to get somewhere that often differs from my own. It’s usually right.

If we have an autonomous customer service agent, we’ll want it to flexibly meet business goals including pleasing the customer while controlling cost. All of these require something like opinions and sentience: goals, motivation to meet those goals, and mechanisms to flexibly meet those goals.

3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we’ll suddenly be faced with very real ethical issues. Can we turn it off? 

I absolutely agree that we’ve got ethical issues with AGI, but that hasn’t stopped us from creating other technologies (nuclear bombs, bio-weapons, the internal combustion engine, the transportation system) that also have ethical issues.

In sum, Ramez brings up great points, and he may very well be correct: the singularity might be a hundred years off instead of twenty or thirty.

However, the discussion around the singularity is also one about risk. Having artificial general intelligence running around, potentially in control of our computing infrastructure, may be risky. What happens if the AI has different motivations than us? What if it decides we’d be happier and less destructive if we’re all drugged? What if it just crashes and accidentally shuts down the entire electrical grid? (Read James Barrat’s Our Final Invention for more about the risks of AI.)

Ramez wrote The Infinite Resource: The Power of Ideas on a Finite Planet, a wonderful and optimistic book about how science and technology are solving many resource problems around the world. I think it’s a powerful book because it gives us hope and proof points that we can solve the problems facing us.

Unfortunately, I think the argument that the singularity is far off is different and problematic, because it denies the possibility that we’ll face these problems at all. Instead of encouraging us to use technology to address the issues that could arise with the singularity, the argument concludes that the singularity is either unlikely or simply a long time away. With that mindset, we’re less likely as a society to examine AI progress and take steps to reduce the risks of AGI.

On the other hand, if we can agree that the singularity is a possibility, even just a modest possibility, then we may spur more discussion and investment into the safety and ethics of AGI.

I was having a discussion with a group of writers about the technological singularity, and several asserted that the rate of increasing processor power was declining. They backed it up with a chart showing that the increase in MIPS per unit of clock speed stalled about ten years ago.

If computer processing speeds fail to increase exponentially, as they have for the last forty years, this will throw off many predictions for the future and dramatically decrease the likelihood of human-grade AI arising.

I did a bit of research last night and this morning. Using the chart of historical computer speeds from Wikipedia, I placed a few key intervals in a spreadsheet and found:

  • From 1972 to 1985: MIPS grew by 19% per year.
  • From 1985 to 1996: MIPS grew by 43% per year.
  • From 1996 to 2003: MIPS grew by 51% per year.
  • From 2003 to 2013: MIPS grew by 29% per year.

By no means is the list of MIPS ratings exhaustive, but it does give us a general idea of what’s going on. The data shows the rate of CPU speed increases has declined in the last ten years.

I split up the last ten years:

  • From 2003 to 2008: MIPS grew by 53% per year.
  • From 2008 to 2013: MIPS grew by 9% per year.

According to that, the decline in processing rate increases is isolated to the last five years.
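
For anyone who wants to reproduce these numbers, the annualized rate is just a compound-growth calculation between two data points. Here’s a minimal sketch; the MIPS figures in the example are hypothetical placeholders, so plug in the values from the Wikipedia table:

```python
def annual_growth_rate(mips_start, mips_end, year_start, year_end):
    """Compound annual growth rate between two MIPS measurements."""
    years = year_end - year_start
    return (mips_end / mips_start) ** (1 / years) - 1

# Hypothetical example: 10,000 MIPS in 2003 growing to 130,000 MIPS in 2013
# works out to roughly 29% per year.
rate = annual_growth_rate(10_000, 130_000, 2003, 2013)
print(f"{rate:.0%} per year")
```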

Five years isn’t much of a long-term trend, and there are some processors missing from the end of the matrix. The Intel Xeon X5675, a 12-core processor, isn’t shown, and it’s twice as powerful as the Intel Core i7 4770K that occupies the bottom row of the MIPS table. If we substitute the Xeon processor, we find the growth rate from 2008 to 2012 was 31% annually, a more respectable improvement.

However, I’ve been tracking technology trends for a while (see my post on How to Predict the Future), and I try to use only those computers and devices I’ve personally owned. There’s always something faster out there, but it’s not what people have in their home, which is what I’m interested in.

I also know that my device landscape has changed over the last five years. In 2008, I had a laptop (Windows Intel Core 2 T7200) and a modest smartphone (a Treo 650). In 2013, I have a laptop (MBP 2.6 GHz Core i7), a powerful smartphone (Nexus 5), and a tablet (iPad Mini). I’m counting only my own devices and excluding those from my day job as a software engineer.

It’s harder to do this comparison, because there’s no one common benchmark across all these processors. I did the best I could to determine DMIPS for each, converting Geekbench scores for the Mac, and using the closest available processor with a MIPS rating for the mobile devices.

When I compared my personal device growth in combined processing power, I found it increased 51% annually from 2008 to 2013, essentially the same rate as for the longer period 1996 through 2011 (47%), which is what I use for my long-term predictions.

What does all this mean? Maybe there is a slight slow-down in the rate at which computing processing is increasing. Maybe there isn’t. Maybe the emphasis on low-power computing for mobile devices and server farms has slowed down progress on top-end speeds, and maybe that emphasis will contribute to higher top-end speeds down the road. Maybe the landscape will move from single-devices to clouds of devices, in the same way that we already moved from single cores to multiple cores.

Either way, I’m not giving up on the singularity yet.