Ramez Naam, author of Nexus and Crux (two books I enjoyed and recommend), has recently put together a few guest posts for Charlie Stross (another author I love). The posts are The Singularity Is Further Than It Appears and Why AIs Won’t Ascend in the Blink of an Eye.

They’re both excellent posts, and I’d recommend reading them in full before continuing here.

I’d like to offer a slight rebuttal and explain why I think the singularity is still closer than it appears.

But first, I want to say that I very much respect Ramez, his ideas and writing. I don’t think he’s wrong and I’m right. I think the question of the singularity is a bit more like Drake’s Equation about intelligent extraterrestrial life: a series of probabilities, the values of which are not known precisely enough to determine the “correct” output value with strong confidence. I simply want to provide a different set of values for consideration than the ones that Ramez has chosen.

First, let’s talk about definitions. As Ramez describes in his first article, there are two versions of singularity often talked about.

The hard takeoff is one in which an AI rapidly creates newer, more intelligent versions of itself. Within minutes, days, or weeks, the AI has progressed from a level 1 AI to a level 20 grand-wizard AI, far beyond human intellect and anything we can comprehend. Ramez doesn’t think this will happen for a variety of reasons, one of which is the exponential difficulty involved in creating successively more complex algorithms (the argument he lays out in his second post).

I agree. I don’t see a hard takeoff. In addition to the reasons Ramez stated, I also believe it takes so long to test and qualify candidates for improvement that successive iteration will be slow.

Let’s imagine the first AI is created and runs on an infrastructure of 10,000 computers. Let’s further assume the AI is composed of neural networks and other similar algorithms that require training on large pools of data. The AI will want to test many ideas for improvements, each requiring training. The training will be followed by multiple rounds of successively more comprehensive testing: first the AI needs to see whether the algorithm appears to improve a select area of intelligence, but then it will want to run regression tests to ensure no other aspect of its intelligence or capabilities is adversely impacted. If the AI wants to test 1,000 ideas for improvements, and each idea requires 10 hours of training, 1 hour of assessment, and an average of 1 hour of regression testing, it would take 1.4 years to complete a single round of improvements. Parallelism is the alternative, but remember that this first AI is likely to be a behemoth, requiring 10,000 computers to run. It’s not possible to get that much parallelism.
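The arithmetic above can be checked in a couple of lines (using the same assumed figures: 1,000 candidate ideas, 12 hours each, run strictly serially):

```python
# Serial self-improvement estimate, using the figures from the text.
ideas = 1_000
hours_per_idea = 10 + 1 + 1  # training + assessment + regression testing
total_hours = ideas * hours_per_idea
years = total_hours / (24 * 365)
print(f"{total_hours} hours ≈ {years:.1f} years")  # 12000 hours ≈ 1.4 years
```

Even halving every estimate still leaves a round of improvements taking the better part of a year.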

The soft takeoff is one in which an artificial general intelligence (AGI) is created and gradually improved. As Ramez points out, that first AI might be on the order of human intellect, but it’s not smarter than the accumulated intelligence of all the humans that created it: many tens of thousands of scientists will collaborate to build the first AGI.

This is where we start to diverge. Consider a simple domain like chess playing computers. Since 2005, chess software running on commercially available hardware can outplay even the strongest human chess players. I don’t have data, but I suspect the number of very strong human chess players is somewhere in the hundreds or low thousands. However, the number of computers capable of running the very best chess playing software is in the millions or hundreds of millions. The aggregate chess playing capacity of computers is far greater than that of humans, because the best chess playing program can be propagated everywhere.

So too, AGI will be propagated everywhere. But I just argued that the first AI will require 10,000 computers, right? Yes, except that thanks to Moore’s Law (the observation that computing power tends to double every 18 months), the same AI that required 10,000 computers will need a mere 100 computers ten years later, and just a single computer another ten years after that. Or an individual AGI could run up to 10,000 times faster. That speedup alone means something different when it comes to intelligence: a single being with 10,000 times the experience, learning, and practice that a human has.
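The shrinkage is easy to sketch, assuming the 18-month doubling actually holds (which is itself a hedge):

```python
# Moore's Law sketch: if computing power doubles every 18 months,
# the machines needed to run a fixed workload halve on the same schedule.
def machines_needed(initial, years, doubling_months=18):
    """Machines required after `years` to run the same fixed workload."""
    return initial / 2 ** (years * 12 / doubling_months)

print(f"{machines_needed(10_000, 10):.0f}")  # 98 -- about a hundred machines after ten years
print(f"{machines_needed(10_000, 20):.1f}")  # 1.0 -- a single machine after twenty
```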

Even Ramez agrees that it will be feasible to have destructive human brain uploads approximating human intelligence around 2040: “Do the math, and it appears that a super-computer capable of simulating an entire human brain and do so as fast as a human brain should be on the market by roughly 2035 – 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human’s.”

This is the soft takeoff: from a single AGI at some point in time to an entire civilization of that AGI twenty years later, all running at faster-than-human speeds. A race of essentially alien intelligences cohabiting the planet with us. Even if they don’t experience an intelligence explosion as Vernor Vinge described, the combination of fast speeds, aggregate intelligence, and inherently different motivations will create an unknowable future that is likely out of our control. And that’s very much a singularity.

But Ramez questions whether we can even achieve an AGI comparable to a human in the first place. There’s this pesky question of sentience and consciousness. Please go read Ramez’s first article in full (I don’t want you to think I’m summarizing everything he said here), but he basically cites three points:

1) No one’s really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience.

This is a real difficulty. One analogy that comes to mind is the history of aviation. For nearly a hundred years prior to the Wright Brothers, heavier-than-air flight was being studied, with many different gliders created and flown. It was the innovation of powered engines that made heavier-than-air flight practical, and which led to rapid innovation. Perhaps we just don’t have the equivalent yet in AI. We’ve got people learning how to make airfoils, control surfaces, and airplane structures, and we’re just waiting for the engine to show up.

We also know that nature evolved sentience without any theory of how to do it. Having a proof point is powerful motivation.

2) There’s a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go?

There’s no lack of incentive. As James Barrat detailed in Our Final Invention, there are billions of dollars being poured into building AGI, both in high-profile projects like the US BRAIN Initiative and Europe’s Human Brain Project, and in countless smaller AI companies and research projects.

There’s plenty of human incentive, too. How many people were inspired by Star Trek’s Data? At a recent conference, I asked attendees who would want Data as a friend, and more than half the audience’s hands went up. Among the elderly, loneliness is a very real issue that could be helped with AGI companionship, and many people might choose an artificial psychologist for reasons of confidentiality, cost, and convenience. All of these require at least the semblance of opinions.

More than that, we know we want initiative. If we have a self-driving car, we expect that it will use that initiative to find faster routes to destinations, possibly go around dangerous neighborhoods, and take necessary measures to avoid an accident. Indeed, even Google Maps has an “opinion” of the right way to get somewhere that often differs from my own. It’s usually right.

If we have an autonomous customer service agent, we’ll want it to flexibly meet business goals including pleasing the customer while controlling cost. All of these require something like opinions and sentience: goals, motivation to meet those goals, and mechanisms to flexibly meet those goals.

3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we’ll suddenly be faced with very real ethical issues. Can we turn it off? 

I absolutely agree that we’ve got ethical issues with AGI, but that hasn’t stopped us from creating other technologies (nuclear bombs, bio-weapons, the internal combustion engine, the transportation system) that also raise ethical issues.

In sum, Ramez brings up great points, and he may very well be correct: the singularity might be a hundred years off instead of twenty or thirty.

However, the discussion around the singularity is also one about risk. Having artificial general intelligence running around, potentially in control of our computing infrastructure, may be risky. What happens if the AI has different motivations than us? What if it decides we’d be happier and less destructive if we’re all drugged? What if it just crashes and accidentally shuts down the entire electrical grid? (Read James Barrat’s Our Final Invention for more about the risks of AI.)

Ramez wrote Infinite Resource: The Power of Ideas on a Finite Planet, a wonderful and optimistic book about how science and technology are solving many resource problems around the world. I think it’s a powerful book because it gives us hope and proof points that we can solve the problems facing us.

Unfortunately, I think the argument that the singularity is far off is different and problematic, because it denies the possibility of a problem facing us. Instead of encouraging us to use technology to address the issues that could arise with the singularity, the argument concludes that the singularity is either unlikely or simply a long time away. With that mindset, we’re less likely as a society to examine AI progress and take steps to reduce the risks of AGI.

On the other hand, if we can agree that the singularity is a possibility, even just a modest possibility, then we may spur more discussion and investment into the safety and ethics of AGI.

Here’s a scary paragraph from a longer article about Google’s acquisition of AI company DeepMind:

One of DeepMind’s cofounders, Demis Hassabis, possesses an impressive resume packed with prestigious titles, including software developer, neuroscientist, and teenage chess prodigy among the bullet points. But as the Economist suggested, one of Hassabis’s better-known contributions to society might be a video game; a niche but adored 2006 simulator called Evil Genius, in which you play as a malevolent mastermind hell-bent on world domination.

That sounds just like the plot of Daniel Suarez’s Daemon:

When a designer of computer games dies, he leaves behind a program that unravels the Internet’s interconnected world. It corrupts, kills, and runs independent of human control. It’s up to Detective Peter Sebeck to wrest the world from the malevolent virtual enemy before its ultimate purpose is realized: to dismantle society and bring about a new world order.

I’m reading Our Final Invention by James Barrat right now, about the dangers of artificial intelligence. I just got to a chapter in which he discusses how any reasonably complex artificial general intelligence (AGI) is going to want to control its own resources: e.g., if it has a goal, even a simple goal like playing chess, it will be able to achieve its goal better with more computing resources, and won’t be able to achieve its goal at all if it’s shut off. (Similar themes exist in all of my novels.)

This made me snap back to a conversation I had last week at my day job. I’m a web developer, and my current project, without giving too much away, is a RESTful web service that runs workflows composed of other RESTful web services.

We’re currently automating some of our operational tasks. For example, when our code passes unit tests, it’s automatically deployed. We’d like to expand on that so that after deployment, it will run integration tests, and if those pass, deploy up to the next stack, and then run performance tests, and so on.

Although we’re running on a cloud provider, it’s not AWS, and they don’t support autoscaling, so another automation task we need is to roll our own scaling solution.

Then we realized that running tests, deployments, and scaling all require calling RESTful JSON APIs, and that’s exactly what our service is designed to do. So the logical solution is that our software will test itself, deploy itself, and autoscale itself.
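A minimal sketch of what that self-management could look like, assuming a generic workflow API. Every endpoint name and payload field here is hypothetical, invented for illustration; I'm not describing our actual service:

```python
import json
import urllib.request

BASE = "http://localhost:8080"  # hypothetical address of the workflow service itself

def run_workflow(steps):
    """POST a workflow definition to the service's own (hypothetical) endpoint."""
    body = json.dumps({"steps": steps}).encode()
    req = urllib.request.Request(
        f"{BASE}/workflows", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A workflow in which the service manages its own lifecycle: each step
# is just another REST call, which is exactly what it's built to make.
self_managing = [
    {"call": f"{BASE}/tests/integration", "on_failure": "abort"},
    {"call": f"{BASE}/deploy/next-stack", "on_failure": "rollback"},
    {"call": f"{BASE}/scale", "params": {"policy": "auto"}},
]
```

Nothing exotic is required: once testing, deployment, and scaling are all behind REST endpoints, the service submitting that workflow to itself is the path of least resistance.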

That’s an awful lot like the kind of resource control that James Barrat was writing about.

The new Nexus 5 processor (2.3 GHz quad-core) is about 11,000 Dhrystone MIPS, roughly equal to the best dual-core processors of 2005/2006.

This suggests that at this moment in time smartphone processors are roughly 7 years behind general purpose CPUs.

Then I remembered that in 1995, I got an HP 200LX, a handheld PDA. It had an 8 MHz 8086-equivalent processor, roughly equal to the IBM PC clone I had in 1988, which also ran at 8 MHz. So then too, handhelds were 7 years behind desktop processors.

It looks like it’s a fairly constant ratio, over nearly a twenty year period.
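A constant lag implies a constant performance gap. Under the usual 18-month doubling assumption:

```python
# A 7-year lag at an 18-month doubling time is a fixed performance ratio,
# independent of which year you measure it in.
lag_years = 7
doubling_months = 18
ratio = 2 ** (lag_years * 12 / doubling_months)
print(f"desktops stay ~{ratio:.0f}x ahead of handhelds")  # ~25x
```

So whether it's 1995 or 2013, the handheld in your pocket is roughly a twenty-fifth of the desktop on your desk.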

Barge with shipping containers suspected
of being a floating data center owned by Google.
Credit: James Martin/CNET 

By now most of you have heard about the barges suspected to be Google’s floating data centers. CNET reported the first one on Friday:

Something big and mysterious is rising from a floating barge at the end of Treasure Island, a former Navy base in the middle of San Francisco Bay. And Google’s fingerprints are all over it.

It’s unclear what’s inside the structure, which stands about four stories high and was made with a series of modern cargo containers. The same goes for when it will be unveiled, but the big tease has already begun. Locals refer to it as the secret project.

Google did not respond to multiple requests for comment. But after going through lease agreements, tracking a contact tied to the project on LinkedIn, talking to locals on Treasure Island, and consulting with experts, it’s all but certain that Google is the entity that is building the massive structure that’s in plain sight, but behind tight security.

Could the structure be a sea-faring data center? One expert who was shown pictures of the structure thinks so, especially because being on a barge provides easy access to a source of cooling, as well as an inexpensive source of power — the sea. And even more tellingly, Google was granted a patent in 2009 for a floating data center, and putting data centers inside shipping containers is already a well-established practice.

Barge seen in Portland, Maine with very similar structure.
Data center? Office space?
Credit: John Ewing/Portland Press Herald

They also reported another, nearly identical barge off the coast of Maine:

Now it seems as though Google may well have built a sister version of the project, and, according to the Portland Press Herald, it recently showed up in the harbor in Portland, Maine.

In both cases, the structures on both barges appear to be made from a number of shipping containers, many of which have small slats for windows, and each has one container that slants down to ground level at a 45-degree angle.

I wouldn’t be surprised if they built a floating data center, but I do wonder why the containers would have windows. Maybe it’s to make negotiating the interior easier when there’s no power. As much as I’d love to see it turn out to be a data center, I could also see it being temporary housing, or a proof of concept for a new way of building housing.

If it is a data center, there’s no word yet on whether they’ll be arming them with autonomous fighting robots.

An Apple //e with 7 modems, one in each expansion slot.
Photo from http://rmac.d-dial.com/

People who have grown up in the age of the Internet usually don’t know how hard it was to chat with other people in the dawn of the connected age.

I found this photo recently. It’s an Apple //e, a 1 MHz computer with 64KB of RAM. It had seven expansion slots: one was usually taken up by a converter that allowed the Apple //e to display 80 columns of text instead of 40, another by a disk drive controller board, and a third by a modem.

You’d use that modem to call BBSes, or bulletin board systems, which usually supported just a single caller at a time. That meant a popular BBS would be busy most of the time. One BBS I ran, a board called The Programmer’s Pitstop, was busy 95% of the time, 24 hours a day, for months on end.

BBSes were an asynchronous medium: one person would post messages, hang up, and then another person would call in.

But people craved more: they wanted real-time, simultaneous chat. And it was made possible by a select, crazy few (including me) in the mid-1980s. We’d tear out the 80-column display card and the disk drive controller card, and stuff an Apple //e with 7 modems, filling every slot. At that capacity, the Apple //e couldn’t cool itself, so we’d either have to keep the cover off or cut holes in it. Without a disk drive controller, the only way to load the software was from an audiotape connected to the audio input port. (Even in the mid-1980s this seemed a prehistoric way to load software, a mechanism that dated back to the 1970s.)

Then we’d call the phone company and have to convince them to run seven phone lines to a residence. When I did this in 1986, there were no phone lines left on my block. Three telephone company trucks showed up at my house every day for two weeks while they ran new lines from the nearest junction box, about three blocks away. When all was said and done, we had ten phone lines coming into our home: 7 for the chat system, 1 for outgoing modem calls, 1 voice line for me, and 1 voice line for my parents.

But the feeling of chatting in real-time for the first time with other people, sometimes on the other side of the country or world, was simply amazing. 

The Pentagon’s research arm, DARPA, wants to crowdsource a fully automated cyber defense system, and they’re offering a two million dollar prize:

The so-called “Cyber Grand Challenge” will take place over the next three years, which seems like plenty of time to write a few lines of code. But DARPA’s not just asking for any old cyber defense system. They want one “with reasoning abilities exceeding those of human experts” that “will create its own knowledge.” They want it to deflect cyberattacks, not in a matter of days—which is how the Pentagon currently works—but in a matter of hours or even seconds. That’s profoundly difficult.

On the one hand, this is brilliant. I can easily imagine some huge leaps forward made as a result of the contest. The Netflix Prize advanced recommendation algorithms, while the DARPA Grand Challenge gave us autonomous cars. Clearly competitions work, especially in this domain where the barrier to entry is low.

On the other hand, this is scary. They’re asking competitors to marry artificial intelligence with cyber defense systems. Cyber defense requires a solid understanding of cyber offense, and aggressive defensive capabilities could be nearly as destructive as offensive capabilities. Cyber defense software could decide to block a threatening virus with a counter-virus, or shut down parts of the Internet to stop or slow infection.

Artificial intelligence has taken over stock trading, and look where that’s gotten us. Trading AI has become so sophisticated that it is described in terms of submarine warfare, with offensive and defensive capabilities.

I don’t doubt that the competition will advance cyber defense. But the side effect will be a radical increase in cyber offense, as well as a system in which both sides operate at algorithmic speeds.

Full information about the Cyber Grand Challenge, including rules and registration, is available on DARPA’s website.

Alexis Ohanian, cofounder of Reddit, spoke at Powell’s yesterday about the founding of Reddit and his new book Without Their Permission.

It was a fun talk: he’s a good speaker, clearly knows and enjoys his subject matter, and has plenty of fun anecdotes. His basic message is that we need to become entrepreneurs because it’s good for us personally and good for the world. The bar for gaining attention and achievement is high, but there are no gatekeepers stopping us from pursuing our dreams, as there were even just ten years ago.

Here are my full notes from the talk:

Alexis Ohanian
Cofounder of Reddit
Author of Without Their Permission
  • The fear of being embarrassed holds people back.
    • Reddit started as an incredibly barebones html site. If being embarrassed had held them back, they would never have done it.
  • The fear of not knowing what you’re doing holds people back.
    • The secret is that nobody really knows what they’re doing if they’re trying anything new.
  • The world is not flat, but the world-wide-web is flat. It’s all equal. Two guys with a laptop and an internet connection can do anything.
  • Reddit started in a suburb of a suburb of Cambridge, which is a suburb of Boston, which is not at all silicon valley.
    • great burritos in silicon valley, but no need to be there.
  • The enemy is the back button — Paul Graham
    • The enemy is whether people leave your site. 
    • You have to be better than cat photos.
  • So much is competing for our attention.
    • If you have ideas you want to spread, the bar is high to compete.
    • But word of mouth is just as important as it ever was
    • Watercooler conversations have just moved online
    • It’s so exciting to see people doing philanthropy, art, and getting their message out.  
  • Gatekeepers vs. no gatekeepers
    • Anecdote about Gary Larson and The Far Side. He just barely got syndication, and only then because he went on vacation and happened to land a deal.
    • What about all the other Gary Larsons out there who got filtered out because the gatekeepers said no?
  • All the web comics out there today could only exist in this era. 
    • xkcd only exists because of the internet. 
    • it’s a business that earns a living for its creator.
    • there’s no way that the math joke from the programmer drawn with a stick figure would have been put on the page next to Family Circus.
  • Kickstarter has now given more money to art than the National Endowment for the Arts.
  • Choose your own adventure version of Hamlet.
    • No traditional publisher would have backed this. If they had, it would have been a meager, several thousand dollar advance.
    • On Kickstarter, he got $600,000 to write this.
  • Alexis would have been an immigration lawyer if he hadn’t had a fateful conversation in a waffle house.
  • If you are a programmer today, you have the most valuable skill possible.
    • And, you can be entirely self-taught. 
    • All the knowledge you need is available for free.
  • Paul Graham giving a talk about how to start a startup.
    • Alexis and Steve heard of this talk, realized it would be during spring break.
    • Decided to go there, rather than the beach.
    • Went from Virginia to Boston. 
    • Introduced themselves, told him that they wanted to pitch their business (Mmm… something mobile using text message) to him.
    • He said “you have a slightly better than awful chance”.
  • Soon after, Paul formed Y Combinator.
    • They interviewed for it.
    • Were rejected that night.
    • Decided they would prove him wrong.
  • Out that night with a bunch of Harvard guys
    • Harvard guys bragging about their bank jobs
    • They lied, bragged that they got into Y combinator
  • On the train back the next morning, get a call from Paul.
    • He hated the idea because it was based on phones (not smart phones). Said he’d let them in if they did something on the web instead.
    • They got off at the next stop, turned around, went back to Boston.
  • Had discussion about their experiences using Slashdot, reading newspapers, etc.
    • Paul said yes, on the right direction…build the front page of the internet.
    • A lot of pressure, when all they wanted to do was continue to live the college lifestyle.
  • Humans of New York — Brandon
    • failed out of University of Georgia
    • bounced around different finance jobs
    • suddenly decided he wanted to move to NY and become a photographer
    • his friends told him he was crazy. he didn’t even own a camera, didn’t know anything about photography.
    • But he goes to NY. His photos suck.
    • He reads how to get better. A few days turned into a few weeks turned into a few months.
    • In four years went from having never taken a photo in his life, to being a worldwide celebrated and viewed photographer
    • From no fame or credibility to worldwide credibility, making the world suck a little less.
    • Hilarious and moving.
    • Ten years ago, that never would have been possible.
  • “Dude, sucking is the first step to being kind of good at something.” — Adventure Time
Questions
  • “Nobody does know what they’re doing. It all seems like it’s luck.”
    • Malcolm Gladwell’s book Outliers: chance is still the biggest component of the equation.
    • The idea that all web links are equal. True only on a technical level. In the real world, still subject to luck: which link gets attention. 
  • “A lot of luck is part of it, but it also seems like dedication is part of it. You need talents like design, development, ideas, etc. What do you think your talents are that got you over the hill?”
    • The ability to build the thing is probably the most important.
    • My talent is the willingness to do anything/everything else: get takeout, call the wireless provider, anything…
    • Tenacity
      • Airbnb: founders ate nothing but cereal for a year and a half to keep costs down. They just wouldn’t quit.
        • Aside: Airbnb one day discovered that really good, attractive photos make people far more comfortable to rent an apartment. That discovery helped them turn the corner, turn into a billion dollar company.
    • Be comfortable with failure / negative feedback. Alexis has a negative reinforcement wall:
      • “You are a rounding error to Yahoo. Why are you even here talking to us?”
  • “What’s appealing to you about Bitcoin?”
    • It’s a digital crypto currency for the internet.
    • I’m cautiously optimistic about it because it’s just absurd how much money financial institutions make just moving money around. they’re a drain on the economy/world/money.
    • Even though Silk Road is shut down, we still see bitcoin transactions at roughly the same level. A good sign that it’s being used for legitimate stuff.
    • He gets bitcoin donations from people who torrent his book.

To celebrate the release of The Last Firewall, I decided to do another article about the technology behind the books. I wrote about the technology behind Avogadro Corp a few months ago, and that turned out to be fairly popular, so I’m back with the technology behind A.I. Apocalypse. (I don’t want to do The Last Firewall yet, because that would give away too many spoilers.) Although I don’t say so explicitly in the books, Avogadro Corp is set in 2015, A.I. Apocalypse in 2025, and The Last Firewall in 2035. I make all the technology as plausible as possible. That means it either exists, or is in development, or can be extrapolated from current technology. I described how I extrapolate tech trends and predict the future.

    • Semi-autonomous cars: In an early scene of the novel, Leon runs across the street, trusting that the cars will automatically stop due to their mandatory “SafetyPilots”. As we know, Google has an autonomous car, and car manufacturers, such as Toyota, are working on them now. Many manufacturers are starting with small pieces of autonomy: maintaining position within a lane, maintaining distance from the car ahead. Fully autonomous vehicles are clearly more expensive than partially autonomous ones, so it’s quite reasonable we’ll see collision avoidance technology before we see fully autonomous vehicles. Our safety-conscious culture and insurance risk reduction could make such technology mandatory within ten years.
Autonomous copter from 3D Robotics
    • Autonomous package delivery drones: Leon and his friends make their escape from a burning city via an unmanned package delivery plane. These are very feasible. Autonomous planes are very popular among hobbyists now. Chris Anderson, former editor of Wired, left to form 3D Robotics, which manufactures autopilot systems. Fuel efficiency is partly a function of flight speed, so it makes sense that in a more fuel-efficient future, we’d want to convey packages at just the right speed: no faster than they need to get there. When human pilots are removed from the picture, package delivery drones can become an economical way to move goods.
    • Solar-powered flight: Also feasible; the first long-distance flights using solar power have already taken place. There are solar-powered airships, a solar-powered quadcopter, a solar-powered fixed-wing surveillance drone, and a long-duration solar-powered drone. The attractions of solar power include indefinite flight time and low cost of flight. The drone Leon and his friends take has to land before dark, but that wouldn’t necessarily be the case in real life: most drones would carry battery power to maintain sufficient altitude at night (although they might lazily drift and trade off some elevation during the dark hours).
    • Mobile Phones as Computers: Leon and his friends own phones that work as both smartphone and computer by synchronizing their output to nearby display and input devices. This is similar to the Ubuntu Edge, which can be used as a full computer or phone. While computing power is increasing all the time, one of the constraints is displays: phones can’t just grow indefinitely larger. Flexible screens might help, but still have limitations. The solution in the novel is the availability of cheap, high-resolution displays nearly everywhere. By knocking your device against one, the phone and screen exchange a handshake that then permits the wireless display of data. Bump does this sort of synchronization now for exchanging contacts, files, and other data. Air Display creates a wireless remote display for iOS/OS X devices. One Amazon reviewer knocked A.I. Apocalypse for failing to foresee Google Glass. At the time I wrote the manuscript, it hadn’t been invented yet. It’s still not clear to me whether this will be the future or not. Glass could be yet another type of screen (after the desktop monitor, tablet, and mobile phone screen), and while it offers certain conveniences (always there, relatively unobtrusive), it’s still a very small screen (many call it tiny) that’s more suitable for the display of summary information than for an immersive experience. That may change over time, such that we see full-screen glasses.
      Google Glass projector/prism system

    • Evolutionary Computer Viruses: One of the main themes of the novel is that artificial intelligence will evolve rather than be programmed. I’ve braved surveillance by the NSA to research current articles on evolutionary computer viruses. (Don’t try this at home, kids.) Computer Virus Evolution Model Inspired by Biological DNA is a research paper describing the idea in more detail; it concludes: “The simulation experiments were conducted and the results indicate that computer viruses have enormous capabilities of self-propagation and self-evolution.” The Frankenstein virus was a self-assembling and evolving computer virus put together by two researchers from the University of Texas at Dallas.
    • Pilfering existing code to build a virus: Yup, the Frankenstein virus does that too.
    • Humanoid robots: Later in the novel, the virus AIs embody several humanoid robots. ASIMO is a long-running research project by Honda. Part of the reason Japan does research into humanoid bots (as opposed to other, more utilitarian designs) is that they see robots as essential in caring for their growing elderly population. Even in the United States, we’re starting to see more research into humanoid-form robots. That’s because robots need to navigate structures and tools designed for the human form: if you need to go up stairs, open doors, or use utensils, the human form works. If you look at the DARPA robotics challenge, you see humanoid robots being used, such as ATLAS from Boston Dynamics. The same folks who brought us the scary-looking BigDog bring the even scarier-looking ATLAS. (ASIMO is so cuddly by comparison.) Since the DARPA challenge requires the robot to negotiate human spaces (e.g. to go into a nuclear reactor and shut down equipment), it takes a humanoid form to succeed at the challenge. Boston Dynamics has a ton of experience in this space. Their earlier PETMAN robot is also worth looking at.

    • Mesh networking: In the novel, Avogadro Corp (a thinly disguised Google) has deployed mesh networking everywhere to guarantee net neutrality. Mesh networking is real and exists today. I think it would be a great solution to the last-mile bottleneck. Google Fiber is proof that Google cares about the connection to the end-user; they just happened to have chosen a different technology to achieve the same result. Fiber is coming to Austin, Texas and Provo, Utah after starting in Kansas City, so clearly Google wants to continue the experiment. Unfortunately, commercial use of mesh networking seems to have been relegated to creating networks for legacy hotels and similar buildings. But I think there’s promise for consumers: Project Byzantium is a mesh network based on the Raspberry Pi for the zombie apocalypse. The low cost of the Raspberry Pi is awesome, but we should see even lower-cost, smaller, and lower-power solutions as time goes by. Then it becomes easy to sprinkle these all around, creating ad-hoc mesh networks everywhere.
    • Internet Kill Switch: A late plot point involves a master password embedded in all network routers. While this doesn’t exist in reality, in the context of the novel, Avogadro Corp has basically given away mesh boxes and routers for years, and they’ve effectively become the sole provider of routers. Historically speaking, many routers come with default passwords, and many people don’t change them. Thanks to all the recent disclosures around the NSA spying on Americans, we know there are more backdoors than ever into computer systems around the world. I think it’s within the realm of feasibility that if you had one company providing all the routers, there could exist a backdoor to exploit them all. (Of course, how the kill signal propagates around the internet is another question.)
  • ELOPe’s Plane: This was modeled after Boeing’s X-37B. It’s pretty far-fetched that this could be considered a multi-mission military plane, but it’s what I had in mind visually. Think of the X-37B made 50% larger, with wings large enough to make it aerodynamically suited for sustained flight and a payload section holding a small number of passengers, and you’ve basically got ELOPe’s white unmarked plane.
  • Rail gun: PA-60-41 uses a rail gun to shoot down the incoming military attack. Rail guns exist, of course, although why one would be in downtown Chicago is questionable.
  • Lakeside Technology Data Center: At the time of writing, Lakeside Technology was the world’s largest data center. It’s now the third largest. It does have a distinctive cooling tower, which was the target of ELOPe’s attack.
  • Evolving General Purpose AI: The biggest leap in the book, of course, is that general purpose AI could evolve from simple computer viruses to sophisticated intelligences that communicate, trade, and form a civilization complete with social reputation. As much as possible, I matched the evolution of biological life: from single-celled organisms to multicellular life, learned intelligence vs. evolved intelligence, etc. For this reason, I think it’s inevitable that this will eventually occur: it’s just life evolving on a different substrate. (It’s probably not reasonable that it could happen so quickly, however.)
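The evolutionary mechanism behind both the virus research and the book’s evolving AIs is just variation plus selection. Here’s a minimal, purely illustrative sketch in Python (nothing virus-like about it): a population of random byte strings “evolves” toward a stand-in fitness target through mutation and survival of the fittest. The target string, rates, and population size are all arbitrary choices for the demo.

```python
import random

random.seed(42)

TARGET = b"self-replicating code"  # stand-in "environment": fitness = similarity to this

def fitness(genome: bytes) -> int:
    """Count positions where the genome matches the target."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome: bytes, rate: float = 0.05) -> bytes:
    """Copy the genome, randomizing each byte with probability `rate`."""
    return bytes(
        random.randint(32, 126) if random.random() < rate else b
        for b in genome
    )

def evolve(generations: int = 200, pop_size: int = 50) -> bytes:
    # Start from random "code" the same length as the target.
    population = [
        bytes(random.randint(32, 126) for _ in TARGET) for _ in range(pop_size)
    ]
    for _ in range(generations):
        # Selection: keep the fittest tenth...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 10]
        # ...and repopulate with mutated copies (propagation with variation).
        population = [mutate(random.choice(survivors)) for _ in range(pop_size)]
        population[: len(survivors)] = survivors  # elitism: parents persist unchanged
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "of", len(TARGET), "bytes correct")
```

Even this toy version shows the dynamic Ramez and I both care about: early improvement is fast, but each further gain gets harder, because a random mutation is ever more likely to break something that already works.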
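As for the mesh networking above: the core idea is simple enough to sketch in a few lines. In a mesh, a message floods hop by hop, with each node rebroadcasting once to its neighbors, so the network needs no central ISP and routes around dead nodes. This toy simulation (node names and links are made up) computes how many hops it takes a broadcast to reach every rooftop box:

```python
from collections import deque

def flood(adjacency: dict[str, set[str]], source: str) -> dict[str, int]:
    """Return the hop count at which each reachable node first hears a
    message flooded from `source` (each node rebroadcasts exactly once)."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in hops:  # ignore rebroadcasts we've already heard
                hops[neighbor] = hops[node] + 1
                queue.append(neighbor)
    return hops

# A hypothetical neighborhood: rooftop boxes that can only hear nearby rooftops.
links = {
    "A": {"B"},
    "B": {"A", "C", "D"},
    "C": {"B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}
print(flood(links, "A"))
```

Note that if node C dies, D still hears everything via B: redundancy is what makes cheap, sprinkled-everywhere hardware into a resilient network.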
Hopefully I haven’t missed anything huge. If I have, just let me know in the comments, and I’ll address it. If you enjoyed Avogadro Corp or A.I. Apocalypse, I hope you’ll check out my latest novel, The Last Firewall.

By now I’m sure everyone knows about how the NSA is spying on virtually every email coming in or out of the country, and nearly everyone that’s connected, even indirectly, to anyone even vaguely suspicious. If you’re not sure why we should care about privacy, read this piece by Cory Doctorow.

Brad Feld posted this morning about Lavabit committing corporate suicide. Lavabit is the company that provided Edward Snowden with secure email, and they were being forced by the US government (presumably) to violate their privacy/security agreement with their users. Rather than compromise security, they chose to end business operations.

What I found particularly interesting was the comment thread, in which Brad’s readers were asking him to take a stance, and he said that he didn’t yet know what action to take (more or less).

During my drive to work, I started brainstorming what possible actions might be. I don’t know what would be effective, so consider this nothing more than a list of raw ideas.

  • Donate to the Electronic Frontier Foundation (EFF). They, more than anyone else, are the single point organization on the topic of privacy and security on the Internet. They’re organizing information and fighting legal cases. 
  • Don’t make it easy for people to spy on you. While we should assume that our emails, web browser activity, and everything else are widely available (both for legitimate government use and for abuses of that power), we can still take steps to make it more difficult to be spied on.
  • If you are running a business, reconsider your use of cloud services. Although that’s the direction we’ve all been heading in the last few years, is it worth the potential risk? How would you be affected if your private business correspondence, plans, and data were leaked to random folks, including your competitors? For many years the argument in favor of cloud computing was that you can leave the security to the professionals. Now that we know virtually all cloud computing companies are insecure, that argument is no longer valid. (Consider also that many companies host on AWS. If Amazon is providing data to the NSA, then every company using AWS is also compromised.)
  • If you’re an investor in a tech startup, consider the cloud strategy for that company. If privacy or security is an integral part of their product, this should be a serious concern: they should strongly consider hosting in a privacy-friendly country, like Sweden, and the company itself might be better off being located outside the US. That doesn’t just mean companies providing privacy or security as a product, but any product whose value is threatened or diminished without privacy. For example, we can’t even begin to comprehend how genetic data might be used in the future. I’d like to know where my 23andMe data is housed. (Given that Google is an investor, is it on Google servers? Great, now the NSA has my genetic profile.)
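On the “don’t make it easy” point: none of this stops an agency with a warrant to the endpoint, but it does raise the cost of passive dragnet collection. As one small example, here’s a sketch of the client-side TLS settings in Python’s standard `ssl` module that make passive interception harder: verify the server’s certificate and hostname, and refuse old protocol versions with known weaknesses.

```python
import ssl

# The settings you want when you'd rather not be silently man-in-the-middled.
context = ssl.create_default_context()  # sets CERT_REQUIRED + hostname checking
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / 1.1

# create_default_context() already enables both of these; check, don't assume:
print("verify certs:", context.verify_mode == ssl.CERT_REQUIRED)
print("check hostname:", context.check_hostname)
```

This context can then be passed to anything that speaks TLS (e.g. `smtplib` or `urllib`), so your mail and web traffic at least travel encrypted and authenticated on the wire.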
Any other ideas?