I’ll be at the Conference on World Affairs in Boulder, Colorado next week, from April 9th to 13th. I’ll be speaking on topics including artificial intelligence, social media, data ownership and privacy, writing science fiction, and the role of science fiction in technological progress.

If you’ll be at CWA or in Boulder, and would like to meet up, chat, or get a book signed, please reach out to me. You can find me on Twitter as @hertling, or by email at william dot hertling at gmail dot com.


How does it affect time travel if you start with the assumption that reality as we know it is a computer simulation?

In this case, time travel has nothing to do with physics, and everything to do with software simulations.

Time travel backward would require that the program saves all previous states (or at least checkpoints at fine enough granularity to be useful for time travel), plus the ability to insert logic and data from the present into past states of the program. Seems feasible.

Time travel forward would consist of removing the time traveling person from the program, running the program forward until reaching the future destination, then reinserting the person.

Forward time travel is relatively cheap (because you’d be running the program forward anyhow), but backward time travel is expensive, because you keep having to roll the universe back, slowing the forward progress of time. In fact, one person could mount a denial-of-service attack on reality simply by traveling to the distant past over and over: each time they arrived back in the present, they’d immediately depart for the past again.
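To make the cost asymmetry concrete, here’s a toy Python sketch of how a simulation could implement both directions. The class, state format, and checkpoint scheme are all invented for illustration, not a claim about how any real (or hypothetical) simulator would work.

```python
# Toy sketch of simulated time travel (all names and the state format are
# invented for illustration; this is not a real simulator design).
import copy

class Universe:
    def __init__(self):
        self.tick = 0
        self.people = {}        # person_id -> person state
        self.checkpoints = {}   # tick -> saved copy of all person state

    def step(self):
        # Save a checkpoint, then advance one tick of "physics".
        self.checkpoints[self.tick] = copy.deepcopy(self.people)
        self.tick += 1

    def travel_back(self, person_id, target_tick):
        # Expensive: restore an old checkpoint with the traveler inserted,
        # then the whole universe has to be re-simulated forward from there.
        traveler = self.people[person_id]
        self.people = copy.deepcopy(self.checkpoints[target_tick])
        self.people[person_id] = traveler
        self.tick = target_tick

    def travel_forward(self, person_id, target_tick):
        # Cheap: remove the traveler, run forward as usual, reinsert them.
        traveler = self.people.pop(person_id)
        while self.tick < target_tick:
            self.step()
        self.people[person_id] = traveler
```

Backward travel forces a rollback plus a full re-simulation of everything after the target tick; forward travel is just the work the simulation was going to do anyway, minus one person.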

I read Children of Arkadia, by Darusha Wehm, over the weekend. This was a fascinating book. The setting is a classic of science fiction: a bunch of idealistic settlers embark on creating an idealized society in a space station colony. There are two unique twists: the artificial general intelligences that accompany them have, in theory, the same rights and free will as the humans. And there are no antagonists: no one is out to sabotage society, there’s no evil villain. Just circumstances.

Darusha does an excellent job exploring some of the obvious and not-so-obvious conflicts that emerge. Can an all-knowing, superintelligent AI ever really be on equal footing with humans? How does work get done in a post-scarcity economy? Can even the best-intentioned people, armed with powerful and helpful technology, ever create a true utopia?

Children of Arkadia manages to explore all this and give us interesting and diverse characters in a compact, fun-to-read story. Recommended.


Mark Zuckerberg wrote about how he plans to personally work on artificial intelligence in the next year. It’s a nice article that lays out the landscape of AI developments. But he ends with a statement that misrepresents the relevance of Moore’s Law to future AI development. He wrote (with my added bold for emphasis):

Since no one understands how general unsupervised learning actually works, we’re quite a ways off from building the general AIs you see in movies. Some people claim this is just a matter of getting more computing power — and that as Moore’s law continues and computing becomes cheaper we’ll naturally have AIs that surpass human intelligence. This is incorrect. We fundamentally do not understand how general learning works. This is an unsolved problem — maybe the most important problem of this century or even millennium. Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.

I don’t believe anyone knowledgeable about AI argues that Moore’s Law is going to spontaneously create AI. I’ll give Mark the benefit of the doubt and assume he was trying to be succinct. But it’s important to understand exactly why Moore’s Law is important to AI.

We don’t understand how general unsupervised learning works, nor do we understand much of how human intelligence works. But we do have working examples in the form of human brains. We do not today have the computing hardware necessary to simulate a human brain. The best brain simulations by the largest supercomputing clusters have been able to approximate about 1% of the brain at 1/10,000th of normal cognitive speed. In other words (100 × 10,000), current computer processors are roughly 1,000,000 times too slow to simulate a human brain.

The Wright Brothers succeeded in making the first controlled, powered, and sustained heavier-than-air human flight not because of some massive breakthrough in the principles of aerodynamics (which were well understood at the time), but because engines were growing more powerful, and powered flight first became feasible around the time they were working. They made some breakthroughs in aircraft controls, but even if the Wright Brothers had never flown, someone else would have within a few years. It was breakthroughs in engine technology, specifically in power-to-weight ratio, that enabled powered flight around the turn of the century.

AI proponents who talk about Moore’s Law are not saying AI will spontaneously erupt from nowhere, but that increasing computing processing power will make AI possible, in the same way that more powerful engines made flight possible.

Those same AI proponents who believe in the significance of Moore’s Law can be subdivided into two categories. One group argues that we’ll never fully understand intelligence, so our best hope of creating it is a brute-force biological simulation: recreate the human brain structure, and tweak it to make it better or faster. The second group argues that we may invent our own techniques for implementing intelligence (just as we implemented our own approach to flight that differs from that of birds), but that the underlying computational needs will be roughly equal: certainly, we won’t be able to do it while we’re a million times deficient in processing power.

Moore’s Law gives an important cadence to progress in AI development. When naysayers argue AI can’t be created, they’re looking at historical progress in AI, which is a bit like looking at powered flight prior to 1850: pretty laughable. The rate of AI progress will increase as computer processing speeds approach that of the human brain. When other groups argue we should already have AI, they’re being hopelessly optimistic about our ability to recreate intelligence a million times more efficiently than nature was able to evolve it.

The increasing speed of computer processors predicted by Moore’s Law, and the crossover point where processing power matches the complexity of the human brain, tell us a great deal about when we’ll see advanced AI on par with human intelligence.
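As a rough illustration of what that timing argument looks like, here’s a back-of-the-envelope sketch. The millionfold shortfall comes from the brain-simulation figures above; the two-year doubling period is the standard Moore’s Law assumption, not a prediction of mine.

```python
# Back-of-the-envelope: how long until a ~1,000,000x shortfall in processing
# power closes, if capability doubles roughly every two years (Moore's Law).
import math

shortfall = 1_000_000          # from the 1%-of-brain at 1/10,000th-speed figure
doubling_period_years = 2.0    # classic Moore's Law cadence (an assumption)

doublings = math.log2(shortfall)           # ~19.9 doublings
years = doublings * doubling_period_years  # ~40 years
print(f"{doublings:.1f} doublings -> roughly {years:.0f} years")
```

Roughly twenty doublings at two years each is the kind of cadence the argument is about: not a date, but a trajectory.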

Google announced a new AI feature that analyzes incoming emails to determine their content, then proposes a list of short, likely replies for the user to pick from. It’s not exactly ELOPe, but it’s definitely getting closer all the time.



I gave a talk in the Netherlands last week about the future of technology. I’m gathering together a few resources here for attendees. Even if you didn’t attend, you may still find these interesting, although some of the context will be lost.

Previous Articles

I’ve written a handful of articles on these topics in the past. Below are three that I think are relevant:

Next Ten Years

Ten to Thirty Years


From Huffington Post:

A new app called Crystal calls itself “the biggest improvement to email since spell-check.” Its goal is to help you write emails with empathy. How? By analyzing people’s personalities.

Crystal, which launched on Wednesday, exists in the form of a website and a Chrome extension, which integrates the service with your Gmail…

With the personality profile, you’ll see advice on how to speak to the person, email them, work with them and sell to them. You’ll even be told what comes naturally to them and what does not…

Here’s a screenshot:

Each time I’ve had a new novel come out, I’ve done an article about the technology in the previous novel. Here are two of my prior posts:

Now that The Turing Exception is available, it is time to cover the tech in The Last Firewall.

As I’ve written about elsewhere, my books are set at ten-year intervals, starting with Avogadro Corp in 2015 (gulp!) and The Turing Exception in 2045. So The Last Firewall is set in 2035. For this sort of timeframe, I extrapolate based on underlying technology trends. With that, let’s get into the tech.

Neural implants

If you recall, I toyed with the idea of a neural implant in the epilogue to Avogadro Corp. That was done for theatrical reasons; I don’t consider neural implants feasible in the current day, at least not in the way they’re envisioned in the books.


Extrapolated computer sizes through 2060

I didn’t anticipate writing about neural implants at all. But as I looked at various charts of trends, one that stood out was the physical size of computers. If computers kept decreasing in size at their current rate, then an entire computer, including the processor, memory, storage, power supply, and input/output devices, would be small enough to implant in your head.
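For a sense of how that kind of extrapolation works, here’s a rough sketch. The starting volume and the halving period are illustrative assumptions I’m making up for the example, not numbers taken from the chart.

```python
# Back-of-the-envelope extrapolation of computer size. The starting volume and
# the halving period are illustrative assumptions, not numbers from the chart.
start_year = 2015
volume_cm3 = 50.0            # assumed volume of a complete small computer today
halving_period_years = 2.0   # assumed rate of shrinkage

for year in range(start_year, 2061, 5):
    v = volume_cm3 * 0.5 ** ((year - start_year) / halving_period_years)
    print(f"{year}: ~{v:.5f} cm^3")
```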

What does it mean to have a power supply for a computer in your head? I don’t know. How about an input/output device? Obviously I don’t expect a microscopic keyboard. I expect that some sort of appropriate technology will be invented. As with trends in bandwidth and CPU speed, we can’t know exactly which innovations will get us there, but the trends themselves are very consistent.

For an implant, the logical input and output is your mind, in the form of tapping into neural signaling. The implication is that information can be added, subtracted, or modified in what you see, hear, smell, and physically feel.

Terminator HUD

At the most basic level, this could involve “screens” superimposed over your vision, so that you could watch a movie or surf a website without the use of an external display. Information can also be displayed mixed in with your normal visual data. There’s a scene where Leon goes to work in the institution, and anytime he focuses on someone, a status bubble appears above their head explaining whether they’re available and what they’re working on.

Similarly, information can be read from neurons, so that the user might imagine manipulating whatever’s represented visually, and the implant can sense this and react accordingly.

Although the novel doesn’t go into it, there’s a training period after someone gets an implant. The training starts with the person observing a series of photographs on an external display. The implant monitors neural activity and gradually learns which neurons are responsible for what in that person’s brain. Later training asks the user to attempt to interact with projected content, while neural activity is again read.
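Purely to illustrate the idea, here’s a toy sketch of that calibration loop using synthetic data. None of this reflects real neural interfaces; the channel count, noise model, and nearest-template decoder are all stand-ins.

```python
# Toy sketch of the calibration idea: show known stimuli, record (synthetic)
# neural activity, and learn which activity pattern goes with which stimulus.
# Everything here is made up; real neural data is vastly messier.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 64                      # assumed number of recording sites
stimuli = ["face", "text", "landscape"]

# Each stimulus evokes a characteristic (synthetic) pattern plus noise.
true_patterns = {s: rng.normal(size=n_channels) for s in stimuli}

def record(stimulus):
    """Simulate one noisy neural recording while the stimulus is shown."""
    return true_patterns[stimulus] + rng.normal(scale=0.5, size=n_channels)

# Calibration phase: average recordings per stimulus to learn a template.
templates = {s: np.mean([record(s) for _ in range(100)], axis=0) for s in stimuli}

# Decoding phase: match a new recording to the nearest learned template.
def decode(activity):
    return min(templates, key=lambda s: np.linalg.norm(activity - templates[s]))

print(decode(record("text")))   # should print "text"
```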

My expectation is that each person develops their own unique way of interacting with their implant, but that there are many conventions in common. For example, focusing on a mental image of a particular person (or, if an image can’t be formed, imagining their name printed on paper) would bring up options for interacting with them.

People with implants can have video calls. The ideal way is still with a video camera of some kind, but it’s not strictly necessary. A neural implant will gradually train itself, comparing neural signaling with external video feedback, to determine what a person looks like, correlating neural signals with facial expressions, until it can build up a reasonable facsimile of the person. Once that point is reached, a reasonable-quality video stream can be created on the fly from the person’s residual self-image.

Such a video stream can, however, be manipulated to suppress emotional giveaways, if the user desires.

Cochlear implants, mind-controlled robotic arms and the DARPA cortical modem convince me that this is one area of technology where we’re definitely on track. I feel highly confident we’ll see implants like those described in The Last Firewall, in roughly this timeframe (2030s). In fact, I’m more confident about this than I am in strong AI.

Cat’s Implant

Catherine Matthews has a neural implant she received as a child. It was primarily designed to suppress her epileptic seizures by acting as a form of active noise cancellation for synchronous neuronal activity.

However, Catherine also has a special ability that most people do not: she can manipulate the net on par with, or even exceeding, the abilities of AI. Why does she have this ability?

The inspiration for this came from my time as a graduate student studying computer networking at the University of Arizona under Professor Larry Peterson, where, along with other folks, we developed object-oriented network protocol implementations on a framework called x-kernel.

These days we pretty much all have root access on our own computers, but back in the early 90s in a computer science lab, most of us did not.

Because we did not have root access on the computers we used as students, we were restricted to running x-kernel in user mode. This meant that instead of our network protocols running on top of Ethernet, they ran on top of IP; in effect, we ran a stack that looked like TCP/IP/IP. We could simulate network traffic between two different machines, but we couldn’t actually interact with non-x-kernel protocol stacks on other machines.

Graph of IPSEC implemented in x-kernel on Linux. From after my time at UofA.

In 1994 or so, I ported x-kernel to Linux. Finally I was running x-kernel on a box I had root access on. Using raw socket mode on Unix, I could run x-kernel user-mode implementations of protocols and interact with network services on other machines. All sorts of graduate school hijinks ensued. (Famously, we’d use ICMP network-unreachable messages to kick all the computers in the school off the network when we wanted to run protocol performance tests. It would force everyone off the network for about 30 seconds, and you could get artificially high performance numbers.)
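For readers who haven’t played with raw sockets, here’s a minimal modern sketch of the underlying idea: sending a custom protocol directly on top of IP from user space. It’s not x-kernel code; the protocol number and destination address are placeholders, and it needs root, which is exactly the point of the story.

```python
# Minimal raw-socket sketch: with root, a user-space program can put its own
# protocol directly on top of IP, bypassing the kernel's TCP/UDP stacks.
# Protocol 253 is reserved for experimentation; 192.0.2.1 is a documentation
# address. Neither is meaningful here beyond illustration.
import socket

EXPERIMENTAL_PROTO = 253

sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, EXPERIMENTAL_PROTO)
sock.sendto(b"hello from a user-mode protocol stack", ("192.0.2.1", 0))
sock.close()
```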

In the future depicted by the Singularity series, one of the mechanisms used to ensure that AI do not run amok is that they run in something akin to a virtualization layer above the hardware, which prevents them from doing many things, and allows them to be monitored. Similarly, people with implants do not have access to the lowest layers of hardware either.

But Cat does. Her medical-grade implant predates the standardized implants created later. So she has the ability to send and receive network packets that most other people and AI do not. From this stems her unique abilities to manipulate the network.

Mix into this the fact that she’s had her implant since childhood, and that she routinely practices meditation and qi gong (which change the way our brains work), and you get someone who can do more than other people.

All that being said, this is science fiction, and there’s plenty of handwavium going on here, but there is some general basis for the notion of being able to do more with her neural implant.

This post has gone on pretty long, so I think I’ll call it quits here. In the next post I’ll talk about transportation and employment in 2035.

DARPA backing cortical modem:

The first Program Manager to present, Phillip Alvelda, opened the event with his mind blowing project to develop a working “cortical modem”. What is a cortical modem you ask? Quite simply it is a direct neural interface that will allow for the visual display of information without the use of glasses or goggles. I was largely at this event to learn about this project and I wasn’t disappointed.

Leveraging the work of Karl Deisseroth in the area of optogenetics, the cortical modem project aims to build a low cost neural interface based display device. The short term goal of the project is the development of a device about the size of two stacked nickels with a cost of goods on the order of $10 which would enable a simple visual display via a direct interface to the visual cortex with the visual fidelity of something like an early LED digital clock.

The implications of this project are astounding.