I read Children of Arkadia, by Darusha Wehm, over the weekend. This was a fascinating book. The setting is a classic of science fiction: a group of idealistic settlers embarks on creating an idealized society in a space station colony. There are two unique twists: the artificial general intelligences that accompany them have, in theory, the same rights and free will as the humans, and there are no antagonists. No one is out to sabotage society; there's no evil villain. Just circumstances.

Darusha does an excellent job exploring some of the obvious and not-so-obvious conflicts that emerge. Can an all-knowing, superintelligent AI ever really be on equal footing with humans? How does work get done in a post-scarcity economy? Can even the best-intentioned people, armed with powerful and helpful technology, ever create a true utopia?

Children of Arkadia manages to explore all this and give us interesting and diverse characters in a compact, fun-to-read story. Recommended.

 

Mark Zuckerberg wrote about how he plans to personally work on artificial intelligence in the next year. It’s a nice article that lays out the landscape of AI developments. But he ends with a statement that misrepresents the relevance of Moore’s Law to future AI development. He wrote (with my added bold for emphasis):

Since no one understands how general unsupervised learning actually works, we’re quite a ways off from building the general AIs you see in movies. Some people claim this is just a matter of getting more computing power — and that as Moore’s law continues and computing becomes cheaper we’ll naturally have AIs that surpass human intelligence. This is incorrect. We fundamentally do not understand how general learning works. This is an unsolved problem — maybe the most important problem of this century or even millennium. Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.

I don’t believe anyone knowledgeable about AI argues that Moore’s Law is going to spontaneously create AI. I’ll give Mark the benefit of the doubt and assume he was trying to be succinct. But it’s important to understand exactly why Moore’s Law is important to AI.

We don’t understand how general unsupervised learning works, nor do we understand much of how human intelligence works. But we do have working examples in the form of human brains. We do not today have the computing hardware necessary to simulate a human brain. The best brain simulations, run on the largest supercomputing clusters, have been able to approximate 1% of the brain at 1/10,000th of normal cognitive speed. In other words, current computers fall short of real-time whole-brain simulation by a factor of roughly 1,000,000.

The Wright Brothers succeeded in making the first controlled, powered, and sustained heavier-than-air human flight not because of some massive breakthrough in the principles of aerodynamics (which were well understood at the time), but because engines were growing more powerful, and powered flight became feasible for the first time right around the period when they were working. They made some breakthroughs in aircraft controls, but even if the Wright Brothers had never flown, someone else would have within a few years. It was breakthroughs in engine technology, specifically in power-to-weight ratio, that enabled powered flight around the turn of the century.

AI proponents who talk about Moore’s Law are not saying AI will spontaneously erupt from nowhere, but that increasing computing processing power will make AI possible, in the same way that more powerful engines made flight possible.

Those same AI proponents who believe in the significance of Moore’s Law can be subdivided into two categories. One group argues we’ll never understand intelligence fully. Our best hope of creating it is with a brute force biological simulation. In other words, recreate the human brain structure, and tweak it to make it better or faster. The second group argues we may invent our own techniques for implementing intelligence (just as we implemented our own approach to flight that differs from birds), but the underlying computational needs will be roughly equal: certainly, we won’t be able to do it when we’re a million times deficient in processing power.

Moore’s Law gives us an important cadence to the progress in AI development. When naysayers argue AI can’t be created, they’re looking at historical progress in AI, which is a bit like looking at powered flight prior to 1850: pretty laughable. The rate of AI progress will increase as computer processing speeds approach that of the human brain. When other groups argue we should already have AI, they’re being hopelessly optimistic about our ability to recreate intelligence a million times more efficiently than nature was able to evolve.

The increasing speed of computer processors predicted by Moore’s Law, and the crossover point where processing power matches the complexity of the human brain, tell us a great deal about when we’ll see advanced AI on par with human intelligence.
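To make that timing argument concrete, here’s a minimal back-of-the-envelope sketch in Python. Both inputs are the assumptions discussed above, not precise figures: a roughly million-fold shortfall in computing power, and capacity doubling every 18 months.

```python
# Rough crossover estimate: how long does Moore's Law take to close a
# 1,000,000x gap, assuming computing power doubles every 18 months?
import math

SHORTFALL = 1_000_000        # assumed factor by which current hardware falls short
DOUBLING_PERIOD_YEARS = 1.5  # classic Moore's Law cadence, assumed to continue

doublings_needed = math.log2(SHORTFALL)
years_to_parity = doublings_needed * DOUBLING_PERIOD_YEARS

print(f"{doublings_needed:.1f} doublings ≈ {years_to_parity:.0f} years to close the gap")
# -> 19.9 doublings ≈ 30 years
```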

I gave a talk in the Netherlands last week about the future of technology. I’m gathering together a few resources here for attendees. Even if you didn’t attend, you may still find these interesting, although some of the context will be lost.

Previous Articles

I’ve written a handful of articles on these topics in the past. Below are three that I think are relevant:

Next Ten Years

Ten to Thirty Years

 

Each time I’ve had a new novel come out, I’ve done an article about the technology in the previous novel. Here are two of my prior posts:

Now that The Turing Exception is available, it is time to cover the tech in The Last Firewall.

As I’ve written about elsewhere, my books are set at ten-year intervals, starting with Avogadro Corp in 2015 (gulp!) and ending with The Turing Exception in 2045. So The Last Firewall is set in 2035. For this sort of timeframe, I extrapolate based on underlying technology trends. With that, let’s get into the tech.

Neural implants

If you recall, I toyed with the idea of a neural implant in the epilogue to Avogadro Corp. That was done for theatrical reasons; I don’t consider implants, as they’re envisioned in the books, feasible in the current day.


Extrapolated computer sizes through 2060

I didn’t anticipate writing about neural implants at all. But as I looked at various charts of trends, one that stood out was the physical size of computers. If computers kept decreasing in size at their current rate, then an entire computer, including the processor, memory, storage, power supply, and input/output devices, would be small enough to implant in your head.

What does it mean to have a power supply for a computer in your head? I don’t know. How about an input/output device? Obviously I don’t expect a microscopic keyboard. I expect that some sort of appropriate technology will be invented. As with trends in bandwidth and CPU speed, we can’t know exactly which innovations will get us there, but the trends themselves are very consistent.
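As a rough illustration of this kind of extrapolation, here’s a sketch. The starting volume and the shrink rate are assumptions I’ve picked purely for illustration, not the figures behind my actual chart.

```python
# Illustrative size extrapolation: assume the volume of a complete computer
# halves every two years, starting from a ~1 liter desktop-class machine.
HALVING_PERIOD_YEARS = 2.0   # assumed shrink rate
VOLUME_2015_CM3 = 1_000.0    # assumed starting volume (about 1 liter)

def extrapolated_volume(year: int) -> float:
    elapsed = year - 2015
    return VOLUME_2015_CM3 / (2 ** (elapsed / HALVING_PERIOD_YEARS))

for year in (2025, 2035, 2045):
    print(f"{year}: {extrapolated_volume(year):.3f} cm^3")
# Under these assumptions, a complete computer is down to about 1 cm^3 by 2035.
```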

For an implant, the logical input and output is your mind, in the form of tapping into neural signaling. The implication is that information can be added, subtracted, or modified in what you see, hear, smell, and physically feel.


Terminator HUD

At the most basic, this could involve “screens” superimposed over your vision, so that you could watch a movie or surf a website without the use of an external display. Information can also be displayed mixed with your normal visual data. There’s a scene where Leon goes to work at the Institute, and anytime he focuses on anyone, a status bubble appears above their head explaining whether they’re available and what they’re working on.

Similarly, information can be read from neurons, so that the user might imagine manipulating whatever’s represented visually, and the implant can sense this and react accordingly.

Although the novel doesn’t go into it, there’s a training period after someone gets an implant. The training starts with observing a series of photographs on an external display. The implant monitors neural activity and gradually learns which neurons are responsible for what in a given person’s brain. Later training asks the user to attempt to interact with projected content, while neural activity is again read.
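Here’s a toy sketch of that calibration idea, using simulated data: show known stimuli, record activity, average per-stimulus templates, then decode by nearest template. Real neural decoding is far more involved; the channel count, stimuli, and decoding method here are all invented for illustration.

```python
# Toy calibration sketch: learn which activity patterns go with which stimuli,
# then decode new activity by comparing it against the learned templates.
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 64                          # hypothetical number of recorded channels
STIMULI = ["face", "text", "landscape"]

# Simulate calibration: each stimulus evokes a characteristic pattern plus noise.
# In reality this data would come from the implant's sensors.
true_patterns = {s: rng.normal(size=N_CHANNELS) for s in STIMULI}

def record_trial(stimulus: str) -> np.ndarray:
    return true_patterns[stimulus] + rng.normal(scale=0.5, size=N_CHANNELS)

# Learn a per-stimulus "template" by averaging repeated calibration trials.
templates = {
    s: np.mean([record_trial(s) for _ in range(20)], axis=0) for s in STIMULI
}

def decode(activity: np.ndarray) -> str:
    """Nearest-template decoding: which stimulus does this activity resemble?"""
    return min(STIMULI, key=lambda s: np.linalg.norm(activity - templates[s]))

# Later, the implant can decode what the user is focusing on.
print(decode(record_trial("text")))  # -> "text" (with high probability)
```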

My expectation is that each person develops their own unique way of interacting with their implant, but there are many conventions in common. For example, focusing on a mental image of a particular person (or, if an image can’t be formed, imagining their name printed on paper) would bring up options for interacting with them.

People with implants can have video calls. The ideal way is still with a video camera of some kind, but it’s not strictly necessary. A neural implant will gradually train itself, comparing neural signaling with external video feedback, to determine what a person looks like, correlating neural signals with facial expressions, until it can build up a reasonable facsimile of a person. Once that point is reached, a reasonable quality video stream can be created on the fly using residual self-image.

Such a video stream can, however, be manipulated to suppress emotional giveaways, if the user desires.

Cochlear implants, mind-controlled robotic arms, and the DARPA cortical modem convince me that this is one area of technology where we’re definitely on track. I feel highly confident we’ll see implants like those described in The Last Firewall in roughly this timeframe (the 2030s). In fact, I’m more confident about this than I am about strong AI.

Cat’s Implant

Catherine Matthews has a neural implant she received as a child. It was primarily designed to suppress her epileptic seizures by acting as a form of active noise cancellation for synchronous neuronal activity.
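For the technically curious, here’s a toy illustration of the noise-cancellation principle being invoked: measure the unwanted oscillation and emit its inverse so the two cancel. Real seizure-suppression devices are vastly more sophisticated; this only shows the underlying idea of destructive interference.

```python
# Toy active-cancellation sketch: an unwanted rhythmic signal plus its inverse
# sums to (nearly) nothing.
import math

def synchronous_activity(t: float) -> float:
    """Stand-in for pathological, rhythmic neural activity (a pure 3 Hz sine)."""
    return math.sin(2 * math.pi * 3.0 * t)

def cancellation_signal(t: float) -> float:
    """The implant's output: the measured oscillation, inverted."""
    return -synchronous_activity(t)

residual = [synchronous_activity(t / 100) + cancellation_signal(t / 100)
            for t in range(100)]
print(max(abs(r) for r in residual))  # -> 0.0
```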

However, Catherine also has a number of special abilities that most people do not have: the ability to manipulate the net on par with or even exceeding the abilities of AI. Why does she have this ability?

The inspiration for this came from my time as a graduate student studying computer networking. Along with other folks at the University of Arizona, studying under Professor Larry Peterson, I developed object-oriented network protocol implementations on a framework called x-kernel.

These days we pretty much all have root access on our own computers, but back in the early 90s in a computer science lab, most of us did not.

Because we did not have root access on the computers we used as students, we were restricted to running x-kernel in user mode. This meant that instead of our network protocols running on top of Ethernet, they ran on top of IP: in effect, a stack that looked like TCP/IP/IP. We could simulate network traffic between two different machines, but we couldn’t actually interact with non-x-kernel protocol stacks on other machines.


Graph of IPSEC implemented in x-kernel on Linux. From after my time at UofA.

In 1994 or so, I ported x-kernel to Linux. Finally I was running x-kernel on a box that I had root access on. Using raw socket mode on Unix, I could run x-kernel user-mode implementations of protocols and interact with network services on other machines. All sorts of graduate school hijinks ensued. (Famously we’d use ICMP network unreachable messages to kick all the computers in the school off the network when we wanted to run protocol performance tests. It would force everyone off the network for about 30 seconds, and you could get artificially high performance numbers.)
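For flavor, here’s a minimal modern sketch of the kind of access root unlocks: opening a raw socket and hand-building an ICMP echo request. This is illustrative Python, not the actual x-kernel code, and it must be run as root on most Unix systems.

```python
# Minimal raw-socket sketch: build an ICMP echo request by hand and send it.
import socket
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum (RFC 1071) over the ICMP message."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def send_ping(dest_ip: str) -> None:
    # Type 8 (echo request), code 0, checksum placeholder, identifier, sequence.
    header = struct.pack("!BBHHH", 8, 0, 0, 0x1234, 1)
    payload = b"x-kernel says hello"
    checksum = icmp_checksum(header + payload)
    header = struct.pack("!BBHHH", 8, 0, checksum, 0x1234, 1)

    # SOCK_RAW + IPPROTO_ICMP lets a user-space program speak ICMP directly,
    # the capability that having root unlocked in the story above.
    with socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP) as s:
        s.sendto(header + payload, (dest_ip, 0))

if __name__ == "__main__":
    send_ping("127.0.0.1")  # requires root / CAP_NET_RAW
```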

In the future depicted by the Singularity series, one of the mechanisms used to ensure that AI do not run amok is that they run in something akin to a virtualization layer above the hardware, which prevents them from doing many things, and allows them to be monitored. Similarly, people with implants do not have access to the lowest layers of hardware either.
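To make the idea of a mediating layer concrete, here’s a toy sketch. The class, policy, and method names are all invented for illustration; the books don’t specify an implementation.

```python
# Toy "virtualization layer" sketch: AIs and standard implants reach the
# network only through a mediator that can deny and log every operation.
from datetime import datetime

class MediationLayer:
    ALLOWED_PORTS = {80, 443}  # hypothetical policy: only well-known services

    def __init__(self):
        self.audit_log = []

    def send_packet(self, dest: str, port: int, payload: bytes) -> bool:
        allowed = port in self.ALLOWED_PORTS
        self.audit_log.append((datetime.utcnow(), dest, port, allowed))
        if not allowed:
            return False  # raw, low-level access simply isn't available
        # ... hand the packet to the real network stack here ...
        return True

# A standard implant or AI goes through the layer; in the story, Cat's older
# medical implant talks to the hardware directly and skips this mediation.
layer = MediationLayer()
print(layer.send_packet("example.com", 443, b"hello"))  # True, and logged
print(layer.send_packet("example.com", 9, b"raw"))      # False, and logged
```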

But Cat does. Her medical-grade implant predates the standardized implants created later. So she has the ability to send and receive network packets that most other people and AI do not. From this stems her unique abilities to manipulate the network.

Mix into this the fact that she’s had her implant since childhood, and that she routinely practices meditation and qi gong (which change the way our brains work), and you get someone who can do more than other people.

All that being said, this is science fiction, and there’s plenty of handwavium going on here, but there is some general basis for the notion of being able to do more with her neural implant.

This post has gone on pretty long, so I think I’ll call it quits here. In the next post I’ll talk about transportation and employment in 2035.

Thanks to Elon Musk’s fame and his concerns about the risks of AI, it seems like everyone’s talking about AI risk.

One difficulty is getting agreement on exactly what risk we’re talking about. I’ve had several discussions in just the last few days, both at the Defrag conference in Colorado and online.

One thing I’ve noticed is that the risk naysayers tend to say “I don’t believe there is risk due to AI.” But when you probe further, what they are often saying is “I don’t believe there is existential risk from a Skynet scenario due to a super-intelligence created from existing technology.” The second statement is far narrower, so let’s dig into its components.

Existential risk is defined by Nick Bostrom as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Essentially, we’re talking about either the extinction of humankind, or something close to it. However, most of us would agree that there are very bad outcomes that are nowhere near an existential risk. For example, about 4% of the global population died in WWII. That’s not an existential risk, but it’s still horrific by anybody’s standards.

Runaway AI, accelerating super-intelligence, and hard takeoff are all terms that refer to the idea that once an artificial intelligence is created, it will recursively improve its own intelligence, becoming vastly smarter and more powerful in a matter of hours, days, or months. We have no idea if this will happen (I don’t think it’s likely), but simply because we don’t have a hard takeoff doesn’t mean that an AI would be stagnant or lack power compared to people. There are many ways that even a modest AI, with creativity, motivation, and drive equivalent to a human’s, could affect far more than a human could:

  • Humans can type 50 words a minute. AI could communicate with tens of thousands of computers simultaneously.
  • Humans can drive one car at a time. AI could fly all the world’s airplanes simultaneously.
  • Humans can trip one circuit breaker. AI could trip all the world’s circuit breakers.
  • Humans can reproduce a handful of times over the course of a lifetime. AI could reproduce millions of times over the course of a day.
  • Humans evolve over the course of tens of thousands of years or more. Computers become 50% more powerful each year.

So for many reasons, even if we don’t have a hard takeoff, we can still have AI actions and improvements that occur far faster, and with far wider effect, than we humans are adapted to handling.

Skynet scenario, terminator scenario, and killer robots are terms that refer to the idea that AI could choose to wage open warfare on humans using robots. This is just one of many possible risks. Other ways that AI could harm us include deliberate mechanisms, like manipulating us by controlling the information we see, killing off particular people who pose threats, or extorting us to deliver services they want. This idea of manipulation is important, because while death is terrible, the loss of free will is pretty bad too.

Frankly, most of those seem silly or unlikely compared to unintentional harm that AI could cause: the electrical grid could go down, transportation could stop working, our home climate control could stop functioning, or a virus could crash all computers. If these don’t seem very threatening, consider…

  • What if one winter, for whatever reason, homes wouldn’t heat? How many people would freeze to death?
  • Consider that Google’s self-driving car doesn’t have any manual controls. It’s the AI or it’s no-go. More vehicles will move in this direction, especially all forms of bulk delivery. If all transportation stopped, how would people in cities get food when their 3-day supply runs out?
  • How long can those city dwellers last without fresh water if pumping stations are under computer control and they stop?

Existing technology: Some will argue that because we don’t have strong AI (i.e., human-level intelligence or better) now, there’s no point in even talking about risk. However, this sounds like “Let’s not build any asteroid defenses until we clearly see an asteroid headed for Earth.” It’s far too late by then. Similarly, once the AI is here, it’s too late to talk about precautions.

In conclusion, if you have a conversation about AI risks, be clear what you’re talking about. Frankly, all of humanity being killed by robots under the control of a super-intelligent AI doesn’t even seem worth talking about compared to the more likely risks. A better conversation might start with a question like this:

Are we at risk of death, manipulation, or other harm from future AI, whether deliberate or accidental, and if so, what can we do to decrease those risks?

World’s shortest Interstellar review: Go see this movie right now.

Slightly longer review:

I got advance screening tickets to see Interstellar in 35mm at the Hollywood Theatre in Portland. I didn’t know that much about the movie, other than seeing the trailer and thinking it looked pretty good.

In fact, it was incredible. The trailer does not do it justice. I don’t want to give away the plot of the movie, so I’m not going to list all of the big ideas in this movie, but Erin and I went through the list on the drive home, and it was impressive. Easily the best movie I’ve seen in quite a while.

And this is one that really deserves being seen on a big screen, in a good theatre, on 35mm film if possible.


Ramez Naam, author of Nexus and Crux (two books I enjoyed and recommend), has recently put together a few guest posts for Charlie Stross (another author I love). The posts are The Singularity Is Further Than It Appears and Why AIs Won’t Ascend in the Blink of an Eye.

They’re both excellent posts, and I’d recommend reading them in full before continuing here.

I’d like to offer a slight rebuttal and explain why I think the singularity is still closer than it appears.

But first, I want to say that I very much respect Ramez, his ideas and writing. I don’t think he’s wrong and I’m right. I think the question of the singularity is a bit more like Drake’s Equation about intelligent extraterrestrial life: a series of probabilities, the values of which are not known precisely enough to determine the “correct” output value with strong confidence. I simply want to provide a different set of values for consideration than the ones that Ramez has chosen.

First, let’s talk about definitions. As Ramez describes in his first article, there are two versions of singularity often talked about.

The hard takeoff is one in which an AI rapidly creates newer, more intelligent versions of itself. Within minutes, days, or weeks, the AI has progressed from a level 1 AI to a level 20 grand-wizard AI, far beyond human intellect and anything we can comprehend. Ramez doesn’t think this will happen, for a variety of reasons, one of which is the exponential difficulty involved in creating successively more complex algorithms (the argument he lays out in his second post).

I agree. I don’t see a hard takeoff. In addition to the reasons Ramez stated, I also believe it takes so long to test and qualify candidates for improvement that successive iteration will be slow.

Let’s imagine the first AI is created and runs on an infrastructure of 10,000 computers. Let’s further assume the AI is composed of neural networks and other similar algorithms that require training on large pools of data. The AI will want to test many ideas for improvements, each requiring training. The training will be followed by multiple rounds of successively more comprehensive testing: first the AI needs to see if the algorithm appears to improve a select area of intelligence, but then it will want to run regression tests to ensure no other aspect of its intelligence or capabilities is adversely impacted. If the AI wants to test 1,000 ideas for improvements, and each idea requires 10 hours of training, 1 hour of assessment, and an average of 1 hour of regression testing, it would take about 1.4 years to complete a single round of improvements. Parallelism is the alternative, but remember that the first AI is likely to be a behemoth, requiring 10,000 computers to run. It’s not possible to get that much parallelism.
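Here’s the back-of-the-envelope math, using the same assumed numbers:

```python
# Improvement-cycle arithmetic: 1,000 candidate ideas, each needing 10 hours
# of training, 1 hour of assessment, and 1 hour of regression testing, serially.
IDEAS = 1_000
HOURS_PER_IDEA = 10 + 1 + 1  # training + assessment + regression testing

total_hours = IDEAS * HOURS_PER_IDEA
total_years = total_hours / (24 * 365)

print(f"{total_hours:,} hours ≈ {total_years:.1f} years per improvement round")
# -> 12,000 hours ≈ 1.4 years
```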

The soft takeoff is one in which an artificial general intelligence (AGI) is created and gradually improved. As Ramez points out, that first AI might be on the order of human intellect, but it’s not smarter than the accumulated intelligence of all the humans that created it: many tens of thousands of scientists will collaborate to build the first AGI.

This is where we start to diverge. Consider a simple domain like chess-playing computers. Since 2005, chess software running on commercially available hardware has been able to outplay even the strongest human chess players. I don’t have data, but I suspect the number of very strong human chess players is somewhere in the hundreds or low thousands. However, the number of computers capable of running the very best chess-playing software is in the millions or hundreds of millions. The aggregate chess-playing capacity of computers is far greater than that of humans, because the best chess-playing program can be propagated everywhere.

So too, AGI will be propagated everywhere. But I just argued that the first AIs will require tens of thousands of computers, right? Yes, except that thanks to Moore’s Law (the observation that computing power tends to double every 18 months), the same AI that required 10,000 computers will need a mere 100 computers ten years later, and just a single computer another ten years after that. Or an individual AGI could run up to 10,000 times faster. That speed-up alone means something different when it comes to intelligence: a single being with 10,000 times the experience, learning, and practice that a human has.
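A quick sketch of that arithmetic, with the 18-month doubling period and the 10,000-machine starting point as the stated assumptions:

```python
# Moore's Law arithmetic: how much faster hardware gets, and how many machines
# the same AGI workload needs, after 10 and 20 years.
DOUBLING_PERIOD_YEARS = 1.5      # assumed Moore's Law cadence
INITIAL_MACHINES = 10_000        # assumed size of the first AGI's cluster

def growth_factor(years: float) -> float:
    """How much more powerful a computer of fixed cost becomes after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 20):
    factor = growth_factor(years)
    print(f"after {years} years: {factor:,.0f}x faster, "
          f"~{INITIAL_MACHINES / factor:,.0f} machine(s) needed")
# Roughly: 10 years -> ~100x faster (~100 machines); 20 years -> ~10,000x (~1 machine)
```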

Even Ramez agrees that it will be feasible to have destructive human brain uploads approximating human intelligence around 2040: “Do the math, and it appears that a super-computer capable of simulating an entire human brain and do so as fast as a human brain should be on the market by roughly 2035 – 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human’s.”

This is the soft takeoff: from a single AGI at some point in time to an entire civilization of that AGI twenty years later, all running at faster-than-human speeds. A race consisting of an essentially alien intelligence, cohabiting the planet with us. Even if they don’t experience an intelligence explosion as Vernor Vinge described, the combination of fast speeds, aggregate intelligence, and inherently different motivations will create an unknowable future that is likely out of our control. And that’s very much a singularity.

But Ramez questions whether we can even achieve an AGI comparable to a human in the first place. There’s this pesky question of sentience and consciousness. Please go read Ramez’s first article in full; I don’t want you to think I’m summarizing everything he said here. But he basically cites three points:

1) No one’s really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience.

This is a difficulty. One analogy that comes to mind is the history of aviation. For nearly a hundred years prior to the Wright Brothers, heavier-than-air flight was being studied, with many different gliders created and flown. It was the innovation of powered engines that made heavier-than-air flight practically possible, and which led to rapid innovation. Perhaps we just don’t have the equivalent yet in AI. We’ve got people learning how to make airfoils, control surfaces, and airplane structures, and we’re just waiting for the engine to show up.

We also know that nature evolved sentience without any theory of how to do it. Having a proof point is powerful motivation.

2) There’s a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go?

There’s no lack of incentive. As James Barrat detailed in Our Final Invention, there are billions of dollars being poured into building AGI, both in high-profile projects like the US BRAIN Initiative and Europe’s Human Brain Project, as well as in countless smaller AI companies and research projects.

There’s plenty of human incentive, too. How many people were inspired by Star Trek’s Data? At a recent conference, I asked attendees who would want Data as a friend, and more than half the audience’s hands went up. Among the elderly, loneliness is a very real issue that could be helped with AGI companionship, and many people might choose an artificial psychologist for reasons of confidence, cost, and convenience. All of these require at least the semblance of opinions.

More than that, we know we want initiative. If we have a self-driving car, we expect that it will use that initiative to find faster routes to destinations, possibly go around dangerous neighborhoods, and take necessary measures to avoid an accident. Indeed, even Google Maps has an “opinion” of the right way to get somewhere that often differs from my own. It’s usually right.

If we have an autonomous customer service agent, we’ll want it to flexibly meet business goals including pleasing the customer while controlling cost. All of these require something like opinions and sentience: goals, motivation to meet those goals, and mechanisms to flexibly meet those goals.

3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we’ll suddenly be faced with very real ethical issues. Can we turn it off? 

I absolutely agree that we’ve got ethical issues with AGI, but that hasn’t stopped us from creating other technologies (nuclear bombs, bio-weapons, the internal combustion engine, the transportation system) that also raise ethical issues.

In sum, Ramez brings up great points, and he may very well be correct: the singularity might be a hundred years off instead of twenty or thirty.

However, the discussion around the singularity is also one about risk. Having artificial general intelligence running around, potentially in control of our computing infrastructure, may be risky. What happens if the AI has different motivations than us? What if it decides we’d be happier and less destructive if we’re all drugged? What if it just crashes and accidentally shuts down the entire electrical grid? (Read James Barrat’s Our Final Invention for more about the risks of AI.)

Ramez wrote The Infinite Resource: The Power of Ideas on a Finite Planet, a wonderful and optimistic book about how science and technology are solving many resource problems around the world. I think it’s a powerful book because it gives us hope and proof points that we can solve the problems facing us.

Unfortunately, I think the argument that the singularity is far off is different and problematic, because it denies the possibility of the problems facing us. Instead of encouraging us to use technology to address the issues that could arise with the singularity, the argument concludes the singularity is either unlikely or simply a long time away. With that mindset, we’re less likely as a society to examine AI progress and to take steps to reduce the risks of AGI.

On the other hand, if we can agree that the singularity is a possibility, even just a modest possibility, then we may spur more discussion and investment into the safety and ethics of AGI.

I was having a discussion with a group of writers about the technological singularity, and several asserted that the rate of increasing processor power was declining. They backed it up with a chart showing that the increase in MIPS per unit of clock speed stalled about ten years ago.

If computer processing speeds fail to increase exponentially, as they have for the last forty years, this will throw off many different predictions for the future, and dramatically decrease the likelihood of human-grade AI arising.

I did a bit of research last night and this morning. Using the chart of historical computer speeds from Wikipedia, I placed a few key intervals in a spreadsheet and found:

  • From 1972 to 1985: MIPS grew by 19% per year.
  • From 1985 to 1996: MIPS grew by 43% per year.
  • From 1996 to 2003: MIPS grew by 51% per year.
  • From 2003 to 2013: MIPS grew by 29% per year.

By no means is the list of MIPS ratings exhaustive, but it does give us a general idea of what’s going on. The data shows the rate of CPU speed increases has declined in the last ten years.

I split up the last ten years:

  • From 2003 to 2008: MIPS grew by 53% per year.
  • From 2008 to 2013: MIPS grew by 9% per year.

According to that, the decline in processing rate increases is isolated to the last five years.
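For reference, here’s the kind of calculation behind those percentages, a simple compound annual growth rate; the MIPS values in the example are placeholders for illustration, not the actual figures from the Wikipedia table.

```python
# Compound annual growth rate (CAGR) between two (year, MIPS) data points.
def annual_growth_rate(start_year: int, start_mips: float,
                       end_year: int, end_mips: float) -> float:
    years = end_year - start_year
    return (end_mips / start_mips) ** (1 / years) - 1

# Hypothetical example: a tenfold MIPS increase over ten years is roughly
# 26% per year.
print(f"{annual_growth_rate(2003, 10_000, 2013, 100_000):.0%} per year")
```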

Five years isn’t much of a long-term trend, and there are some processors missing from the end of the matrix. The Intel Xeon X5675, a 6-core (12-thread) processor, isn’t shown, and it’s twice as powerful as the Intel Core i7-4770K that’s the bottom row on the MIPS table. If we substitute the Xeon processor, we find the growth rate from 2008 to 2012 was 31% annually, a more respectable improvement.

However, I’ve been tracking technology trends for a while (see my post on How to Predict the Future), and I try to use only those computers and devices I’ve personally owned. There’s always something faster out there, but it’s not what people have in their home, which is what I’m interested in.

I also know that my device landscape has changed over the last five years. In 2008, I had a laptop (Windows Intel Core 2 T7200) and a modest smartphone (a Treo 650). In 2013, I have a laptop (MBP 2.6 GHz Core i7), a powerful smartphone (Nexus 5), and a tablet (iPad Mini). I’m counting only my own devices and excluding those from my day job as a software engineer.

It’s harder to do this comparison, because there’s no one common benchmark across all these processors. I did the best I could to determine DMIPS for each, converting Geekbench scores for the Mac, and using the closest available processor with a MIPS rating for the mobile devices.

When I compared my personal device growth in combined processing power, I found it increased 51% annually from 2008 to 2013, essentially the same rate as for the longer period 1996 through 2011 (47%), which is what I use for my long-term predictions.

What does all this mean? Maybe there is a slight slow-down in the rate at which computing processing is increasing. Maybe there isn’t. Maybe the emphasis on low-power computing for mobile devices and server farms has slowed down progress on top-end speeds, and maybe that emphasis will contribute to higher top-end speeds down the road. Maybe the landscape will move from single-devices to clouds of devices, in the same way that we already moved from single cores to multiple cores.

Either way, I’m not giving up on the singularity yet.

The Pentagon’s research arm, DARPA, wants to crowdsource a fully automated cyber defense system, and they’re offering a two million dollar prize:

The so-called “Cyber Grand Challenge” will take place over the next three years, which seems like plenty of time to write a few lines of code. But DARPA’s not just asking for any old cyber defense system. They want one “with reasoning abilities exceeding those of human experts” that “will create its own knowledge.” They want it to deflect cyberattacks, not in a matter of days—which is how the Pentagon currently works—but in a matter of hours or even seconds. That’s profoundly difficult.

On the one hand, this is brilliant. I can easily imagine some huge leaps forward made as a result of the contest. The Netflix Prize advanced recommendation algorithms, while the DARPA Grand Challenge gave us autonomous cars. Clearly competitions work, especially in this domain where the barrier to entry is low.

On the other hand, this is scary. They’re asking competitors to marry artificial intelligence with cyber defense systems. Cyber defense requires a solid understanding of cyber offense, and aggressive defensive capabilities could be nearly as destructive as offensive capabilities. Cyber defense software could decide to block a threatening virus with a counter-virus, or shut down parts of the Internet to stop or slow infection.

Artificial intelligence has taken over stock trading, and look where that’s gotten us. Trading AI has become so sophisticated that it is described in terms of submarine warfare, with offensive and defensive capabilities.

I don’t doubt that the competition will advance cyber defense. But the side effect will be a radical increase in cyber offense, as well as a system in which both sides operate at algorithmic speeds.

Full information about the Cyber Grand Challenge, including rules and registration, is available on DARPA’s website.

I’d like to announce that The Last Firewall is available!

In the year 2035, robots, artificial intelligences, and neural implants have become commonplace. The Institute for Applied Ethics keeps the peace, using social reputation to ensure that robots and humans don’t harm society or each other. But a powerful AI named Adam has found a way around the restrictions. 

Catherine Matthews, nineteen years old, has a unique gift: the ability to manipulate the net with her neural implant. Yanked out of her perfectly ordinary life, Catherine becomes the last firewall standing between Adam and his quest for world domination. 

This novel was two-plus years in the making, and I’m just so excited to finally release it. As with my other novels, I explore themes of what life will be like with artificial intelligence, how we deal with the inevitable man-vs-machine struggle, and the repercussions of using online social reputation as a form of governmental control.

The Last Firewall joins its siblings. 
Buy it now: Amazon Kindle, in paperback, and Kobo eReader.
(Other retailers coming soon.)

I hope you enjoy it! Here is some of the early praise for the book:

“Awesome near-term science fiction.” – Brad Feld, Foundry Group managing director

“An insightful and adrenaline-inducing tale of what humanity could become and the machines we could spawn.” – Ben Huh, CEO of Cheezburger

“A fun read and tantalizing study of the future of technology: both inviting and alarming.” – Harper Reed, former CTO of Obama for America, Threadless

“A fascinating and prescient take on what the world will look like once computers become smarter than people. Highly recommended.” – Mat Ellis, Founder & CEO Cloudability

“A phenomenal ride through a post-scarcity world where humans are caught between rogue AIs. If you like having your mind blown, read this book!” – Gene Kim, author of The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

“The Last Firewall is like William Gibson had a baby with Tom Clancy and let Walter Jon Williams teach it karate. Superbly done.” – Jake F. Simons, author of Wingman and Train Wreck