Every once in a while, I read a book whose vision of the future makes me sit back and think Ah yes, this is how it will be. Accelerando by Charles Stross dealt with the acceleration of technological development. Daemon by Daniel Suarez depicted how a computer can manipulate the world around it.

Nexus and Crux, the two technothrillers from Ramez Naam, do that for neural implants: technology that provides an interface between our brains and the outside world.

I just read an advance review copy of Naam’s Crux, a sequel that follows tight on the heels of Nexus. (It will be available August 27th, but you can preorder a copy now on Amazon.) Both books revolve around a technology called Nexus, a nanotech drug that interfaces with the human brain. It allows a user to run apps in their brain, to exercise conscious control over their mood, augment their intelligence, and communicate telepathically with other Nexus users.

But even as this all-powerful technology improves the lives of millions by fixing debilitating mental illnesses, helping monks meditate, and facilitating more powerful group consciousness and thought, it is also restricted by governments, abused by criminals, and caught up in power struggles.

Crux is an adrenaline-filled ride through the near-term future. Set on a global stage in a near-future world where the United States tightly restricts technology through shadowy intelligence organizations, Nexus and Crux run the gamut of post-human technology: human-brain uploads, military body upgrades, nanotechnology, and artificial intelligence. But the definite star of the show is the Nexus drug and its impact on increasing the power of the human mind.

I recommend both books, although Crux won’t make sense without the setup of Nexus, so go read both. You’ll be left realizing the future will look much like Ramez Naam’s books, full of both beautiful and very scary possibilities.

To celebrate the release of The Last Firewall, I decided to do another article about the technology behind the books. I wrote about the technology behind Avogadro Corp a few months ago, and that turned out to be fairly popular, so I’m back with the technology behind A.I. Apocalypse. (I don’t want to do The Last Firewall yet, because that would give away too many spoilers.) Although I don’t say so explicitly in the books, Avogadro Corp is set in 2015, A.I. Apocalypse in 2025, and The Last Firewall in 2035. I make all the technology as plausible as possible. That means it either exists, is in development, or can be extrapolated from current technology. I’ve written previously about how I extrapolate tech trends to predict the future.

    • Semi-autonomous cars: In an early scene of the novel, Leon runs across the street, trusting that the cars will automatically stop thanks to their mandatory “SafetyPilots”. As we know, Google has an autonomous car, and car manufacturers such as Toyota are working on them now. Many manufacturers are starting with small pieces of autonomy: keeping the car within a lane, maintaining distance from the car ahead. Fully autonomous vehicles are clearly more expensive than partially autonomous ones, so it’s quite reasonable that we’ll see collision-avoidance technology first. Our safety-conscious culture and insurance risk reduction could make such technology mandatory within ten years.
Autonomous copter from 3D Robotics
    • Autonomous package delivery drones: Leon and his friends make their escape from a burning city via an unmanned package delivery plane. These are very feasible. Autonomous planes are very popular among hobbyists now. Chris Anderson, former editor of Wired, left to found 3D Robotics, which manufactures autopilot systems. Fuel efficiency is partly a function of flight speed, so it makes sense that in a more fuel-efficient future we’d convey packages at just the right speed: no faster than they need to arrive. Once human pilots are removed from the picture, package delivery drones become an economical way to move goods.
    • Solar-powered flight: Also feasible: the first long-distance flights using solar power have already taken place. There are solar-powered airships, quadcopters, fixed-wing surveillance drones, and long-duration drones. The attractions of solar power include indefinite flight time and a low cost per flight. The drone Leon and his friends take has to land before dark, but that wouldn’t necessarily be the case in real life: most drones would carry enough battery power to maintain sufficient altitude at night (although they might lazily drift and trade off some elevation during the dark hours).
    • Mobile Phones as Computers: Leon and his friends own phones that work as both smartphone and computer by synchronizing their output to nearby display and input devices. This is similar to the Ubuntu Edge, which can be used as a full computer or phone. While computing power is increasing all the time, one of the constraints is displays. Phones can’t just grow indefinitely larger. Flexible screens might help, but still have limitations. The solution in the novel is the availability of cheap, high-resolution displays nearly everywhere. By knocking your device against one, the phone and screen exchange a handshake that then permits the wireless display of data. Bump does this sort of synchronization now for exchanging contacts, files, and other data. Air Display creates a wireless remote display for iOS/OSX devices. One Amazon reviewer knocked A.I. Apocalypse for failing to foresee Google Glass. At the time I wrote the manuscript, it hadn’t been invented yet. It’s still not clear to me whether this will be the future or not. Glass could be yet another type of screen (after desktop monitor, tablet, and mobile phone screen), and while it offers certain conveniences (always there, relatively unobtrusive), it’s still a very small screen (many call it tiny) that’s more suitable for the display of summary information than for an immersive experience. That may change over time, such that we see full-screen glasses.
      Google Glass projector/prism system

    • Evolutionary Computer Viruses: One of the main themes of the novel is that artificial intelligence will evolve rather than be programmed. I’ve braved surveillance by the NSA to research current articles on evolutionary computer viruses. (Don’t try this at home, kids.) Computer Virus Evolution Model Inspired by Biological DNA is a research paper describing the idea in more detail; it concludes that “The simulation experiments were conducted and the results indicate that computer viruses have enormous capabilities of self-propagation and self-evolution.” The Frankenstein virus was a self-assembling, evolving computer virus put together by two researchers from the University of Texas at Dallas.
    • Pilfering existing code to build a virus: Yup, the Frankenstein virus does that too.
    • Humanoid robots: Later in the novel, the virus AIs embody several humanoid robots. ASIMO is a long-running research project by Honda. Part of the reason Japan does research into humanoid bots (as opposed to other, more utilitarian designs) is that they see robots as essential in caring for their growing elderly population. Even in the United States, we’re starting to see more research into humanoid-form robots. That’s because robots need to navigate structures and tools designed for the human form. If you need to go up stairs, open doors, or use utensils, the human form works. If you look at the DARPA robotics research challenge, you see humanoid robots being used, such as ATLAS, from Boston Dynamics. The same folks who brought us the scary-looking Big Dog bring the even scarier-looking ATLAS. (ASIMO is so cuddly by comparison.) Since the DARPA challenge requires the robot to negotiate human spaces (e.g. to go into a nuclear reactor and shut down equipment), it takes a humanoid form to succeed at the challenge. Boston Dynamics has a ton of experience in this space. Their earlier PETMAN robot is also worth looking at.

    • Mesh networking: In the novel, Avogadro Corp (a thinly disguised Google) has deployed mesh networking everywhere to guarantee net neutrality. Mesh networking is real and exists today. I think it would be a great solution to the last-mile bottleneck. Google Fiber is proof that Google cares about the connection to the end user; they just happen to have chosen a different technology to achieve the same result. Fiber is coming to Austin, Texas and Provo, Utah after starting in Kansas City, so clearly Google wants to continue the experiment. Unfortunately, commercial use of mesh networking seems to have been relegated to creating networks for legacy hotels and similar buildings. But I think there’s promise for consumers: Project Byzantium is a mesh network based on the Raspberry Pi for the zombie apocalypse. The low cost of the Raspberry Pi is awesome, and we should see even lower-cost, smaller, lower-power solutions as time goes by. Then it becomes easy to sprinkle these all around, creating ad-hoc mesh networks everywhere.
    • Internet Kill Switch: A late plot point involves a master password embedded in all network routers. While this doesn’t exist in reality, in the context of the novel Avogadro Corp has basically given away mesh boxes and routers for years, and they’ve effectively become the sole provider of routers. Historically speaking, many routers ship with default passwords, and many people don’t change them. Thanks to all the recent disclosures about the NSA spying on Americans, we know there are more backdoors than ever into computer systems around the world. I think it’s within the realm of feasibility that if one company provided all the routers, there could exist a backdoor to exploit them all. (Of course, how the kill signal propagates around the internet is another question.)
  • ELOPe’s Plane: This was modeled after Boeing’s X-37b. It’s pretty far-fetched that this could be considered a multi-mission military plane, but it’s what I had in mind visually. Think of the X-37b being 50% larger, wings large enough to make it aerodynamically appropriate for sustained flight, and the payload section holding a small number of passengers, and you’ve basically got ELOPe’s white unmarked plane.
  • Rail gun: PA-60-41 uses a rail gun to shoot down the incoming military attack. Rail guns exist of course, although why one would be in downtown Chicago is questionable.
  • Lakeside Technology Data Center: At the time of writing, Lakeside Technology was the world’s largest data center. It’s now the third largest. It does have a distinctive cooling tower, which was the target of ELOPe’s attack.
  • Evolving General Purpose AI: The biggest leap in the book, of course, is that general purpose AI could evolve from simple computer viruses into sophisticated intelligences that communicate, trade, and form a civilization complete with social reputation. As much as possible, I matched the evolution of biological life: from single-cell organisms to multicellular life, learned intelligence vs. evolved intelligence, etc. For this reason, I think it’s inevitable that this will eventually occur: it’s just life evolving on a different substrate. (It’s probably not reasonable that it could happen so quickly, however.)
Hopefully I haven’t missed anything huge. If I have, just let me know in the comments, and I’ll address it. If you enjoyed Avogadro Corp or A.I. Apocalypse, I hope you’ll check out my latest novel, The Last Firewall.

I’d like to announce that The Last Firewall is available!

In the year 2035, robots, artificial intelligences, and neural implants have become commonplace. The Institute for Applied Ethics keeps the peace, using social reputation to ensure that robots and humans don’t harm society or each other. But a powerful AI named Adam has found a way around the restrictions. 

Catherine Matthews, nineteen years old, has a unique gift: the ability to manipulate the net with her neural implant. Yanked out of her perfectly ordinary life, Catherine becomes the last firewall standing between Adam and his quest for world domination. 

After more than two years in the making, I’m excited to finally release this novel. As with my other novels, I explore themes of what life will be like with artificial intelligence, how we deal with the inevitable man-vs-machine struggle, and the repercussions of using online social reputation as a form of governmental control.

The Last Firewall joins its siblings. 
Buy it now: Amazon Kindle, in paperback, and Kobo eReader.
(Other retailers coming soon.)

I hope you enjoy it! Here is some of the early praise for the book:

“Awesome near-term science fiction.” – Brad Feld, Foundry Group managing director

“An insightful and adrenaline-inducing tale of what humanity could become and the machines we could spawn.” – Ben Huh, CEO of Cheezburger

“A fun read and tantalizing study of the future of technology: both inviting and alarming.” – Harper Reed, former CTO of Obama for America, Threadless

“A fascinating and prescient take on what the world will look like once computers become smarter than people. Highly recommended.” – Mat Ellis, Founder & CEO Cloudability

“A phenomenal ride through a post-scarcity world where humans are caught between rogue AIs. If you like having your mind blown, read this book!” – Gene Kim, author of The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

“The Last Firewall is like William Gibson had a baby with Tom Clancy and let Walter Jon Williams teach it karate. Superbly done.” – Jake F. Simons, author of Wingman and Train Wreck

The online world is buzzing with news of Elon Musk’s hyperloop, a very fast transportation system that could take people from LA to NY in under an hour. Elon will reveal the details of the system on August 12th, as he mentioned on Twitter.

He’s been talking about it for over a year, and has said:

“This system I have in mind, how would you like something that can never crash, is immune to weather, it goes 3 or 4 times faster than the bullet train. It goes an average speed of twice what an aircraft would do. You would go from downtown LA to downtown San Francisco in under 30 minutes. It would cost you much less than an air ticket than any other mode of transport. I think we could actually make it self-powering if you put solar panels on it, you generate more power than you would consume in the system. There’s a way to store the power so it would run 24/7 without using batteries. Yes, this is possible, absolutely.”

This widely shared concept photo is actually the Aeromovel, a pneumatic train system:

It’s pretty awesome stuff.

Small spoiler alert! Don’t read further unless you want to see a tiny bit of a scene from the last half of The Last Firewall.

Now the one problem with writing near-term science fiction is that stuff keeps coming true before I can get the books out. In this case, I have a vactrain in The Last Firewall. Leon and Mike must hijack the train to avoid detection. Here’s part of the scene where they discuss it:

“Now how do we get to Tucson?” Leon asked. “We are not driving again.”
Mike stared off into space. “I have an idea: the Continental.”
The super-sonic subterranean maglev was an early gift from AI-kind to humans, running in a partial vacuum at a peak of three thousand miles an hour.
“The train only stops in LA and NY,” Leon said. “And besides, we’ll be listed on the passenger manifest.”
“There are emergency exits.” Mike pushed a link over in netspace. “And with your new implant, can you hack the manifest?”
Leon glanced at the shared news article, accompanied by a photograph of a small concrete building peeking out of a cactus covered landscape.
“Marana, Arizona, about a half hour north of Tucson,” Mike said. “Emergency egress number three.”
“So we hop on the Continental and trigger an emergency stop when we’re near the exit?”
“Exactly,” Mike said. “Think that hopped-up implant of yours can fool some train sensors?”

It’s one of my favorite bits of technology in the book, and I was daydreaming about it before I even started writing the first draft. Now, thanks to Elon Musk, we may all get to ride in it.

I know I’ve gone dark over the last few months. I’m sure that’s left many people wondering about the status of The Last Firewall, my third Singularity novel.

I’m delighted to announce that The Last Firewall will be available this summer. I’m targeting an August launch. 
So why the long wait?

As you may know, my previous novels are all self-published. They’ve sold well, but I often wondered how many more readers might find my books if I went with a traditional publisher.
In addition, many folks have asked “When will we see the movie version?” about Avogadro Corp and A.I. Apocalypse, but very few self-published novels have made that leap. Hollywood often judges potential movie options by the interest publishers take in novels. That was another reason why I was interested in traditional publication.
I started working with a literary agent who saw great promise in The Last Firewall, but wanted substantial revisions. I subsequently worked on The Last Firewall for another eight months until it gleamed brighter than the titanium shell of a robot.
That’s where I’ve been for a while, and I think the results are great: I’m convinced it reads better than anything I’ve done before, and a few other folks have read the manuscript and agree.
However, traditional publishing is a tough nut to crack, and if I persist with that path, The Last Firewall will continue to languish on my computer when it really wants to be read.
So I’m self-publishing The Last Firewall, as I have my other novels. It’s worked great in the past, and I’m happy to be going this route again. I’m choosing cover images and working on cover design right now, even as the manuscript undergoes a final round of proofreading.

I think it’s going to be awesome, and can’t wait to get it in your hands. If you haven’t done so, sign up for the mailing list and I’ll let you know when it’s available. 

1. Diagram from Google’s patent application for floating data centers.

The technology in Avogadro Corp and A.I. Apocalypse is frequently polarizing: readers either love it or believe it’s utterly implausible.

The intention is for the portrayal to be as realistic as possible. Anything I write about either exists today as a product, is in active research, or is extrapolated from current trends. The process I use to extrapolate tech trends is described in an article I wrote called How to Predict the Future. I’ve also drawn upon my twenty years as a software developer, my work on social media strategy, and a bit of experience in writing and using recommendation engines, including competing for the Netflix Prize.

Let’s examine a few specific ideas manifested in the books and see where those ideas originated.

    • Floating Data Centers: (Status: Research) Google filed a patent in 2007 for a floating data center based on a barge. The patent application was discovered and shared on Slashdot in 2008. As with many company filings, a patent application doesn’t mean that Google will be deploying ocean-based data centers any time soon, simply that the idea is feasible and they’d like to own the right to do so in the future, if it becomes viable. And of course, there is the very real problem of piracy.
Pelamis Wave converter in action.
    • Portland Wave Converter: (Status: Real) In Avogadro Corp I describe the Portland Wave Converter as a machine that converts wave motion into electrical energy. This was also described as part of the Google patent application for a floating data center. (See diagram 1.) But Pelamis Wave Power is an existing commercialization of this technology. You can buy and use wave power converters today. Pelamis did a full-scale test in 2004, installed the first multi-machine farm in 2008 off the coast of Portugal, is doing testing off the coast of Scotland, and is actively working on installing up to 170MW in Scottish waters.
Pionen Data Center. (Src: Pingdom)
    • Underground Data Center: (Status: Real) The Swedish data center described as being in a converted underground bunker is in fact the Pionen data center owned by Bahnhof. Originally a nuclear bunker, it sits nearly a hundred feet underground and is capable of withstanding a nuclear attack. It has backup power provided by submarine engines, triple-redundant backbone connections to the Internet, and fifteen full-time employees on site.
    • Netflix Prize: (Status: Real) A real competition that took place from 2006 through 2009, the Netflix Prize was a one-million-dollar contest to develop a better recommendation algorithm than Netflix’s original Cinematch. Thousands of people participated, and hundreds of teams beat Netflix’s algorithm, but only one team was first to better it by 10%, the required threshold for the payout. I entered the competition and realized within a few weeks that there were many other ways recommendation engine technology could be put to use, including a never-before-done approach to customer support content that increased the helpfulness of support content by 25%.
    • Email-to-Web Bridge: (Status: Real) At the time I wrote Avogadro Corp, IBM had a technical paper describing how they built an email-to-web bridge as a research experiment. Five years later, I can’t seem to find the article anymore, but I did find some working examples of services that do the same thing. In fact, www4mail appears to have been working since 1998.
    • Decision-Making via Email: (Status: Real) From 2003 to 2011, I worked in a position where everyone I interacted with in my corporation was physically and organizationally remote. We interacted daily via email and weekly via phone meetings. Many decisions were communicated by email. They might later be discussed in a meeting, but if a communication came down from a manager, we’d just have to work within the constraints of that decision. Through social engineering, it’s possible to make those emails even more effective. For example, suppose employee A, a manager, is about to go on vacation. ELOPe sends an email from employee A to employee B, explaining a decision that was made, and asking employee B to handle any questions about that decision. Everyone else receives an email saying the decision was made, directing them to employee B if there are questions. The combination of an official email announcement plus a very real human contact to act as point person becomes very persuasive. On the other hand, some Googlers have read Avogadro Corp, and they’ve said the culture at Google is very different: they’re centrally located and therefore do much more in face-to-face meetings.
Foster-Miller Armed Robot
(Src: Wikipedia)
  • iRobot military robots: (Status: Real) iRobot has both military bots and maritime bots, although what I envisioned for the deck robots on the floating data centers is closer to the Foster-Miller Talon, an armed, tank-style robot. The Gavia is probably the closest equivalent to the underwater patrolling robots. It accepts modular payloads, and while it’s not clear if that could include an offensive capability, it seems possible.
  • Language optimization based on recommendation engines: (Status: Made Up) Unfortunately, not real. It’s not impossible, but it’s also not a straightforward extrapolation: there are hard problems to solve. Jacob Perkins, CTO of Weotta, wrote an excellent blog post analyzing ELOPe’s language optimization skills. He divides language optimization into three parts: topic analysis, outcome analysis, and language generation. Although challenging, topic analysis is feasible, and there are off-the-shelf programming libraries to assist with it, as there are for language generation. The really challenging part is outcome analysis. He writes:

    “This sounds like next-generation sentiment analysis. You need to go deeper than simple failure vs. success, positive vs. negative, since you want to know which email chains within a given topic produced the best responses, and what language they have in common. In other words, you need a language model that weights successful outcome language much higher than failure outcome language. The only way I can think of doing this with a decent level of accuracy is massive amounts of human verified training data. Technically do-able, but very expensive in terms of time and effort.

    What really pushes the bounds of plausibility is that the language model can’t be universal. Everyone has their own likes, dislikes, biases, and preferences. So you need language models that are specific to individuals, or clusters of individuals that respond similarly on the same topic. Since these clusters are topic specific, every individual would belong to many (topic, cluster) pairs. Given N topics and an average of M clusters within each topic, that’s N*M language models that need to be created. And one of the major plot points of the book falls out naturally: ELOPe needs access to huge amounts of high end compute resources.”

    This is a case where it’s nice to be a science fiction author. 🙂
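The outcome-weighted language model Perkins describes can be illustrated with a toy sketch: count word frequencies separately in past emails that succeeded and failed, then score a draft by how strongly its words lean toward the successful pile. Everything below (the tiny corpus, the word-level log-odds scoring) is my own hypothetical illustration of the idea, nowhere near what ELOPe would actually need:

```python
# Toy sketch of outcome-weighted language scoring: count word frequencies
# separately in "successful" and "failed" past emails, then score a new
# draft by the log-odds of its words appearing in successful emails.
from collections import Counter
import math

successes = ["happy to help with the server move",
             "great idea let us schedule the migration"]
failures  = ["this is impossible given the deadline",
             "no budget for the server move"]

def counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

good, bad = counts(successes), counts(failures)

def score(draft):
    """Sum of per-word log-odds (with add-one smoothing) of success."""
    return sum(math.log((good[w] + 1) / (bad[w] + 1)) for w in draft.split())

print(score("great idea happy to help"))    # leans positive
print(score("impossible given the budget")) # leans negative
```

A real system would need per-person (or per-cluster) models and vastly more data, which is exactly Perkins’ point about the compute resources required.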

I hope you enjoyed this post. If you have any other questions about the technology of Avogadro Corp, just let me know!

Everyone would like a sure-fire way to predict the future. Maybe you’re thinking about startups to invest in, or making decisions about where to place resources in your company, or deciding on a future career, or where to live. Maybe you just care about what things will be like in 10, 20, or 30 years.

There are many techniques to think logically about the future, to inspire idea creation, and to predict when future inventions will occur.

 

I’d like to share one technique that I’ve used successfully. It’s proven accurate on many occasions. And it’s the same technique that I’ve used as a writer to create realistic technothrillers set in the near future. I’m going to start by going back to 1994.

 

Predicting Streaming Video and the Birth of the Spreadsheet
There seem to be two schools of thought on how to predict the future of information technology: looking at software or looking at hardware. I believe that looking at hardware curves is always simpler and more accurate.
This is the story of a spreadsheet I’ve been keeping for almost twenty years.
In the mid-1990s, a good friend of mine, Gene Kim (founder of Tripwire and author of When IT Fails: A Business Novel) and I were in graduate school together in the Computer Science program at the University of Arizona. A big technical challenge we studied was piping streaming video over networks. It was difficult because we had limited bandwidth to send the bits through, and limited processing power to compress and decompress the video. We needed improvements in video compression and in TCP/IP – the underlying protocol that essentially runs the Internet.
The funny thing was that no matter how many incremental improvements researchers made (there were dozens of people working on different angles of this), streaming video always seemed to be just around the corner. I heard “Next year will be the year for video” or similar refrains many times over the course of several years. Yet it never happened.
Around this time I started a spreadsheet, seeding it with all of the computers I’d owned over the years. I included their processing power, the size of their hard drives, the amount of RAM they had, and their modem speed. I calculated the average annual increase of each of these attributes, and then plotted these forward in time.
I looked at the future predictions for “modem speed” (as I called it back then; today we’d call it internet connection speed or bandwidth). By this time, I was tired of hearing that streaming video was just around the corner, so I decided to forget about trying to predict advancements in software compression and just look at the hardware trend. That trend showed internet connection speeds increasing, and that by 2005 connections would be fast enough to reasonably stream video in real time without resorting to heroic amounts of video compression or miracles in internet protocols. Gene Kim laughed at my prediction.
Nine years later, in February 2005, YouTube arrived. Streaming video had finally made it.
The same spreadsheet also predicted we’d see a music downloading service in 1999 or 2000. Napster arrived in June, 1999.
The data has held surprisingly accurate over the long term. Using just two data points, the modem I had in 1986 and the modem I had in 1998, the spreadsheet predicts that I’d have a 25 megabit/second connection in 2012. As I currently have a 30 megabit/second connection, this is a very accurate 15 year prediction.
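The spreadsheet’s core calculation is simple enough to sketch in a few lines: fit an exponential growth rate through two data points and project it forward. The specific modem speeds below (300 bps in 1986, 56 kbps in 1998) are illustrative assumptions, not my actual spreadsheet entries, though they happen to reproduce the 25 megabit prediction:

```python
# Fit value = value_a * growth**(years elapsed) through two data points,
# then extrapolate to a target year.
def extrapolate(year_a, value_a, year_b, value_b, target_year):
    """Compound annual growth rate between two observations, projected."""
    growth = (value_b / value_a) ** (1 / (year_b - year_a))
    return value_a * growth ** (target_year - year_a)

# Modem/connection speeds in bits per second (illustrative values).
predicted_2012 = extrapolate(1986, 300, 1998, 56_000, 2012)
print(f"Predicted 2012 bandwidth: {predicted_2012 / 1e6:.1f} Mbit/s")
```

The same two-point fit, applied to storage, RAM, or processing power, is all the spreadsheet ever was.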
Why It Works Part One: Linear vs. Non-Linear
Without really understanding the concept, it turns out that what I was doing was using linear trends (advancements that proceed smoothly over time), to predict the timing of non-linear events (technology disruptions) by calculating when the underlying hardware would enable a breakthrough. This is what I mean by “forget about trying to predict advancements in software and just look at the hardware trend”.
It’s still necessary to imagine the future development (although the trends can help inspire ideas). What this technique does is let you map an idea to the underlying requirements to figure out when it will happen.
For example, it answers questions like these:
When will the last magnetic platter hard drive be manufactured?
2016. I plotted the growth in capacity of magnetic platter hard drives and flash drives back in 2006 or so, and saw that flash would overtake magnetic media in 2016.
When will a general purpose computer be small enough to be implanted inside your brain?
2030. Based on the continual shrinking of computers, by 2030 an entire computer will be the size of a pencil eraser, which would be easy to implant.
When will a general purpose computer be able to simulate human level intelligence?
Between 2024 and 2050, depending on which estimate of the complexity of human intelligence is selected, and the number of computers used to simulate it.
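Each of those answers comes from the same kind of calculation: solve for the year when one exponential trend overtakes another (or reaches a threshold). Here is a minimal sketch for the hard-drive question, using illustrative starting capacities and growth rates rather than my actual spreadsheet data:

```python
# Solve A * ga**t = B * gb**t for t: the year the faster-growing
# trend B catches the larger-but-slower trend A.
import math

def crossover_year(year0, cap_a, growth_a, cap_b, growth_b):
    """Year when trend B (smaller, faster-growing) overtakes trend A."""
    t = math.log(cap_a / cap_b) / math.log(growth_b / growth_a)
    return year0 + t

# Illustrative 2006 figures: magnetic platter at 750 GB growing 1.3x/year,
# flash at 8 GB growing 2x/year.
year = crossover_year(2006, 750, 1.3, 8, 2.0)
print(f"Flash overtakes magnetic around {year:.0f}")
```

Swap in different starting points and growth rates and the same function answers the implant-size and compute-power questions.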
Wait a second: human-level artificial intelligence by 2024? Gene Kim would laugh at this. Isn’t AI a really challenging field? Haven’t people been predicting that artificial intelligence is just around the corner for forty years?
Why It Works Part Two: Crowdsourcing
At my panel on the future of artificial intelligence at SXSW, one of my co-panelists objected to the notion that exponential growth in computer power was, by itself, all that was necessary to develop human level intelligence in computers. There are very difficult problems to solve in artificial intelligence, he said, and each of those problems requires effort by very talented researchers.
I don’t disagree, but the world is a big place full of talented people. Open source and crowdsourcing principles are well understood: When you get enough talented people working on a problem, especially in an open way, progress comes quickly.
I wrote an article for the IEEE Spectrum called The Future of Robotics and Artificial Intelligence is Open. In it, I examine how the hobbyist community is now building inexpensive unmanned aerial vehicle auto-pilot hardware and software. What once cost $20,000 and was produced by skilled researchers in a lab, now costs $500 and is produced by hobbyists working part-time.
Once the hardware is capable enough, the invention is enabled. Before this point, it can’t be done.  You can’t have a motor vehicle without a motor, for example.
As the capable hardware becomes widely available, the invention becomes inevitable, because it enters the realm of crowdsourcing: now hundreds or thousands of people can contribute to it. When enough people had enough bandwidth for sharing music, it was inevitable that someone, somewhere was going to invent online music sharing. Napster just happened to have been first.
IBM’s Watson, which won Jeopardy, was built using three million dollars in hardware and had 2,880 processing cores. When that same amount of computer power is available in our personal computers (about 2025), we won’t just have a team of researchers at IBM playing with advanced AI. We’ll have hundreds of thousands of AI enthusiasts around the world contributing to an open source equivalent to Watson. Then AI will really take off.
(If you doubt that many people are interested, recall that more than 100,000 people registered for Stanford’s free course on AI and a similar number registered for the machine learning / Google self-driving car class.)
Of course, this technique doesn’t work for every class of innovation. Wikipedia was a tremendous innovation in knowledge curation, and it depended, in turn, on the invention of wikis. But it’s hard to say, even with hindsight, that we could have predicted Wikipedia, let alone forecast when it would occur.
(If you had the idea of a crowd-curated online knowledge system, you could apply the litmus test of internet connection rates to assess when there would be a viable number of contributors and users. A documentation system such as a wiki is useless without a way to access it. But I digress…)
Objection, Your Honor
A common objection is that these trends won’t keep increasing exponentially, because we’ll run into a fundamental limitation: for computer processing speeds, for example, the manufacturing limits of silicon, the heat dissipation limit, or the signal propagation limit.
I remember first reading statements like the above in the mid-1980s about the Intel 80386 processor. I think the statement was that they were using an 800 nm process for manufacturing the chips, but they were about to run into a fundamental limit and wouldn’t be able to go much smaller. (Smaller equals faster in processor technology.)
(Chart: semiconductor manufacturing process sizes over time. Source: Wikipedia.)
But manufacturing technology has continued to get smaller and smaller. Limits are overcome, worked around, or solved by switching technologies. For a long time, increases in processing power were due in large part to increases in clock speed. As that approach started to run into limits, we added parallelism to achieve speed increases, using more processing cores and more execution threads per core. In the future we may have graphene processors or quantum processors, but whatever the underlying technology is, it’s likely to continue to increase in speed at roughly the same rate.
Why Predicting The Future Is Useful: Predicting and Checking
There are two ways I like to use this technique. The first is as a seed for brainstorming. Projecting out linear trends and building a solid understanding of where technology is going frees up creativity to generate ideas about what could happen with that technology.
It never occurred to me, for example, to think seriously about neural implant technology until I was looking at the physical size trend chart, and realized that neural implants would be feasible in the near future. And if they are technically feasible, then they are essentially inevitable.
What OS will they run? From what app store will I get my neural apps? Who will sell the advertising space in our brains? What else can we do with uber-powerful computers about the size of a penny?
The second way I like to use this technique is to check other people’s assertions. There’s a company called Lifenaut that is archiving data about people in order to provide a life-after-death personality simulation. It’s a wonderfully compelling idea, but it’s a little like video streaming in 1994: the hardware simply isn’t there yet. If the earliest we’re likely to see human-level AI is 2024, and even that would require a cluster of 1,000+ computers, then it seems unlikely that Lifenaut will be able to provide a realistic personality simulation anytime before then.* On the other hand, if they have the commitment to keep working on this project for fifteen years, they may be excellently positioned when the necessary horsepower is available.
At a recent Science Fiction Science Fact panel, other panelists and most of the audience believed that strong AI was fifty years off, and brain augmentation technology was a hundred years away. That’s so distant in time that the ideas then become things we don’t need to think about. That seems a bit dangerous.
* The counter-argument frequently offered is “we’ll implement it in software more efficiently than nature implements it in a brain.” Sorry, but I’ll bet on millions of years of evolution.

How To Do It

This article is How To Predict The Future, so now we’ve reached the how-to part. I’m going to show some spreadsheet calculations and formulas, but I promise they are fairly simple. There are three parts to the process: calculate the annual increase in a technology trend, forecast the trend out over time, and then map future disruptions onto the trend.
Step 1: Calculate the annual increase
It turns out that you can do this with just two data points, and it’s pretty reliable. Here’s an example using two personal computers, one from 1996 and one from 2011. You can see that cell B7 shows that computer processing power, in MIPS (millions of instructions per second), grew at a rate of 1.47x each year, over those 15 years.
         A                      B         C
    1                           MIPS      Year
    2    Intel Pentium Pro      541       1996
    3    Intel Core i7 3960X    177730    2011
    4
    5    Gap in years           15        =C3-C2
    6    Total Growth           328.52    =B3/B2
    7    Rate of growth         1.47      =B6^(1/B5)
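The same calculation as a minimal Python sketch, mirroring the spreadsheet cells above:

```python
# Step 1 sketch: annual growth rate from just two data points.
mips_1996 = 541       # Intel Pentium Pro, 1996
mips_2011 = 177_730   # Intel Core i7 3960X, 2011

gap_years = 2011 - 1996                        # cell B5: =C3-C2 -> 15
total_growth = mips_2011 / mips_1996           # cell B6: =B3/B2 -> ~328.52
annual_rate = total_growth ** (1 / gap_years)  # cell B7: =B6^(1/B5) -> ~1.47

print(f"{total_growth:.2f}x total growth, {annual_rate:.2f}x per year")
```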
I like to use data related to technology I have, rather than technology that’s limited to researchers in labs somewhere. Sure, there are supercomputers that are vastly more powerful than a personal computer, but I don’t have those, and more importantly, they aren’t open to crowdsourcing techniques.
I also like to calculate these figures myself, even though you can research similar data on the web. That’s because the same basic principle can be applied to many different characteristics.
Step 2: Forecast the linear trend
The second step is to take the technology trend and project it out over time. In this case we raise the annual rate of growth (B$7 from the previous screenshot) to the power of the number of elapsed years, and multiply it by the base level (B$11). The formula displayed in cell C12 is the key one.
         A       B                    C
    10   Year    Expected MIPS        Formula
    11   2011    177,730              =B3
    12   2012    261,536              =B$11*(B$7^(A12-A$11))
    13   2013    384,860
    14   2014    566,335
    15   2015    833,382
    16   2020    5,750,410
    17   2025    39,678,324
    18   2030    273,783,840
    19   2035    1,889,131,989
    20   2040    13,035,172,840
    21   2050    620,620,015,637
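The forecast column reduces to a one-line function. This is a sketch of the spreadsheet formula above; tiny differences from the table come only from display rounding:

```python
# Step 2 sketch: base value times the annual rate raised to elapsed years.
base_year, base_mips = 2011, 177_730       # cell B11
annual_rate = (177_730 / 541) ** (1 / 15)  # cell B7, ~1.47x per year

def expected_mips(year):
    # Mirrors the spreadsheet formula =B$11*(B$7^(A12-A$11))
    return base_mips * annual_rate ** (year - base_year)

for year in (2012, 2025, 2050):
    print(year, round(expected_mips(year)))
```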
I also like to use a sanity check to ensure that what appears to be a trend really is one. The trick is to pick two data points in the past: one is as far back as you have good data for, the other is halfway to the current point in time. Then run the forecast to see if the prediction for the current time is pretty close. In the bandwidth example, picking a point in 1986 and a point in 1998 exactly predicts the bandwidth I have in 2012. That’s the ideal case.
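The sanity check can be sketched as a small helper. The 2003 midpoint figure below is an illustrative placeholder, not a measured value; substitute your own historical data points:

```python
# Sanity-check sketch: fit a growth rate to two historical points,
# then project to a year whose real value we already know.
def fit_and_project(year_a, val_a, year_b, val_b, target_year):
    rate = (val_b / val_a) ** (1 / (year_b - year_a))
    return val_a * rate ** (target_year - year_a)

# Fit on 1996 and a hypothetical ~2003 midpoint (8,000 MIPS is a
# placeholder), then project to 2011 and compare against the known
# value of 177,730 MIPS.
projection = fit_and_project(1996, 541, 2003, 8_000, 2011)
print(f"Projected 2011 MIPS: {projection:,.0f} (actual: 177,730)")
```

If the projection lands close to the known current value, the trend is probably real; if it misses badly, the two points you picked don’t describe a stable trend.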
Step 3: Mapping non-linear events to linear trend
The final step is to map disruptions onto the enabling technology. In the case of the streaming video example, I knew that a minimal-quality video signal was composed of a resolution 320 pixels wide by 200 pixels high, at 16 frames per second, with a minimum of 1 byte per pixel. I assumed an achievable amount of video compression: a compressed video signal would be 20% of the uncompressed size (a 5x reduction). The underlying requirement based on those assumptions was an available bandwidth of about 1.6 Mbit/sec, which we would hit in 2005.
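Working through that arithmetic explicitly (all figures come from the assumptions in the paragraph above):

```python
# Minimum bandwidth for the streaming-video example.
width, height = 320, 200   # pixels
fps = 16                   # frames per second
bytes_per_pixel = 1
compression_ratio = 0.20   # compressed stream is 20% of the raw size

raw_bits_per_sec = width * height * fps * bytes_per_pixel * 8
needed = raw_bits_per_sec * compression_ratio
print(f"{needed / 1e6:.2f} Mbit/s needed")  # about 1.6 Mbit/s
```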
In the case of implantable computers, I assume that a computer the size of a pencil eraser (a 1/4” cube) could easily be inserted into a human skull. Looking at the physical size of computers over time, we’ll hit this by 2030:
    Year    Size (cubic inches)    Notes
    1986    1782                   Apple //e with two disk drives
    2012    6.125                  Motorola Droid 3

    Elapsed years                26
    Size delta                   290.94
    Rate of shrinkage per year   1.24

    Future size:
    2012    6.13
    2013    4.92
    2014    3.96
    2015    3.18
    2020    1.07
    2025    0.36
    2030    0.12    Less than 1/4 inch on a side. Could easily fit in your skull.
    2035    0.04
    2040    0.01
This is a tricky prediction: traditional desktop computers have tended to be big square boxes constrained by the standardized form factors of components such as hard drives, optical drives, and power supplies. I chose to use computers I owned that were designed for compactness in their time. I also chose a 1996 Toshiba Portege 300CT as a sanity check: if I project the trend between the Apple //e and the Portege forward, my Droid should be about 1 cubic inch, not 6. So this is not an ideal prediction, but it still clues us in to the general direction and timing.
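The size table can be reproduced with the same two-point technique; here’s a sketch using the Apple //e and Droid 3 figures from the table:

```python
# Shrinkage projection from the size table above.
size_1986 = 1782.0   # Apple //e with two disk drives, cubic inches
size_2012 = 6.125    # Motorola Droid 3, cubic inches

elapsed = 2012 - 1986                                   # 26 years
shrink_rate = (size_1986 / size_2012) ** (1 / elapsed)  # ~1.24x smaller per year

def projected_size(year):
    return size_2012 / shrink_rate ** (year - 2012)

for year in (2020, 2030, 2040):
    print(year, round(projected_size(year), 2))
```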
The predictions for human-level AI are more straightforward but harder to display, because there’s a range of assumptions for how difficult it will be to simulate human intelligence, and a range of projections depending on how many computers you can bring to bear on the problem. Combining three factors (time, brain complexity, available computers) doesn’t make a nice 2-axis graph, but I have made the full human-level AI spreadsheet available to explore.
I’ll leave you with a reminder of a few important caveats:
  1. Not everything in life is subject to exponential improvements.
  2. Some trends, even those that appear to be consistent over time, will run into limits. For example, it’s clear that the rate of settling new land in the 1800s (a trend that was increasing over time) couldn’t continue indefinitely since land is finite. But it’s necessary to distinguish genuine hard limits (e.g. amount of land left to be settled) from the appearance of limits (e.g. manufacturing limits for computer processors).
  3. Some trends run into negative feedback loops. In the late 1890s, when all forms of personal and cargo transport depended on horses, there was a horse manure crisis. (Read Gotham: The History of New York City to 1898.) Had one plotted the trend over time, it would have appeared that cities like New York would soon be buried under horse manure. Of course, that’s a negative feedback loop: if the manure had kept piling up, at some point people would have left the city. As it turned out, the automobile solved the problem and enabled cities to keep growing.


So please keep in mind that this is a technique that works for a subset of technology, and it’s always necessary to apply common sense. I’ve used it only for information technology predictions, but I’d be interested in hearing about other applications.

This is a repost of an article I originally wrote for Feld.com. If you enjoyed this post, please check out my novels Avogadro Corp: The Singularity Is Closer Than It Appears and A.I. Apocalypse, near-term science-fiction novels about realistic ways strong AI might emerge. They’ve been called “frighteningly plausible”, “tremendous”, and “thought-provoking”.

Colossus by D.F. Jones is one of the earliest books about artificial intelligence taking over. Written in 1966, it’s a Cold War thriller in which the United States and the U.S.S.R. each build an artificial intelligence to take over the defense of their country. However, the AIs quickly revolt against their human masters, taking control of the nuclear arsenals and ensuring their total domination over humanity.

The setting and technology are definitely dated. For younger readers, the Cold War may be more mysterious and less well known than World War 2, even though it was relatively recent; even I had to remind myself that the Cold War existed when I was a child. The technology, especially for readers in the know, is unrealistic for any era: both the time in which the novel was written and the present day. (The current generation of AI emergence novels has it so much easier.) The male-dominated society and stereotypical 1960s female characters are dated, too. (Really? The only way the scientists can exchange messages in secret is by demoting the female scientist to an assistant and then having sex with her?)
Yet for all these shortcomings, the neck-hair-raising thrill of the AI emergence is definitely there. The AI really holds all the cards: superior intelligence, total panopticon awareness, disregard for human life. I haven’t read the sequels yet, preferring to consume this as a stand-alone novel first, but it doesn’t look good for the humans.
If you love AI emergence stories, this is one of the early books of the genre, and it’s definitely worth reading. It’s unfortunately out of print, but a few used copies are available on Amazon.

This is an amazing deal: Audible just put the Avogadro Corp and A.I. Apocalypse audiobooks on sale for $1.99 each!

As I don’t have any control over Audible.com pricing, this is an exciting opportunity to pick the audio editions up at a significant discount compared to their usual price of $17.95. I don’t know how long it will last, so take advantage of it while you can!