A redditor asked me about my writing process, and wanted to know if I had any tips for outlining or otherwise managing the complexity of epic stories. Unfortunately, I don’t. But I described my writing process, and then thought it would make a nice blog post. So here’s my answer:

I’m sorry to say that I don’t have any tips for managing that complexity. I’m totally a pantser. I usually do an outline as I’m nearing the end of my first draft, to see what I’ve written and to help me understand the themes.

As a book gets bigger, it’s more and more difficult to fly by the seat of your pants, because of the growing complexity, but I haven’t found a method of outlining that works for me. I tried outlining a book once, and then as soon as I knew the basic outline of the plot, I had no interest in writing any more. The motivation for me to keep writing is to discover how things will end up.

My partner, Anastasia Poirier, is also a writer, and she uses the process described in Take Off Your Pants!, which focuses on outlining character arcs rather than plot. That supposedly avoids the problem I described, but I haven’t tried it myself yet.

In general, my method could be roughly described as:

  • Think about who I want the main character to be. Daydream about them, and some specific scenes. Who are they? What do they talk about? What do they care about?
  • Think about core dramatic scenes. For example, in The Last Firewall, I knew starting out that there would be this big showdown attack on the main antagonist AI. (Aka, the lobby scene…inspired by The Matrix.) I always had that in the back of my mind, and was working toward it the whole time I was writing.
  • Also think about moments in which the hero triumphs or falters. Imagine those moments and how the hero responds, and keep them as something to be worked towards.
  • Once those things are in my head, then I start writing.
  • Focus on keeping the plot moving forward.
  • As I write, I’m developing the characters further.
  • Eventually I finish the first draft.
  • Then I reread and think about the core themes of what I wrote. I go back through the novel, strengthening those core themes. Make the characters consistent (i.e. if I discovered something key about them later in writing, make sure their earlier selves are consistent with it.)
  • Send manuscript off to my development editor. Get their feedback on the biggest issues to address. Fix those.
  • Send revised manuscript off to half my beta readers, get their feedback. Address the biggest issues and the easiest issues.
  • Send polished manuscript off to remaining half of my beta readers. Simultaneously, send manuscript off for line editing.
  • Incorporate any critical issues identified by beta readers at the same time I address the line editing feedback.
  • Send off for proofreading, then formatting for print and ebook.

That process usually takes 15-18 months, although this time around it’s taking 24 months. In general, as the books get longer, they take longer to write. Complexity and effort seem to increase exponentially after 80,000 words. About two-thirds of the time is spent generating the first draft, and one-third revising and producing.

I originally wrote the ten musings on AI risks as the first half of a talk I was planning to give at the Rise of AI conference in Berlin next month. Unfortunately, plans for speaking at the conference fell through. The speaking points have been languishing in a document since then, so I figured I’d share them as a blog post while they are still relevant. Please excuse any errors, this was just a rough draft.

10. There’s a balance between AI risk and AI benefits.

There’s too much polarization on AI risks. One camp says AI is going to be the end of all life, and that we must stop all development immediately. The other camp says AI poses no risk, so let’s go full speed ahead.

Neither statement is totally true.

AI poses some risks, as all things do. The first step to being able to have a discussion about those risks is to admit that they exist. Then we can have a more nuanced and educated conversation about those risks.

However, no matter how severe those risks are, there’s no way we can stop all AI development, because:

9. There’s no way to put the AI genie back in the bottle.

There’s too much economic and military advantage to artificial intelligence to stop AI development.

On a government level, no government would give up the advantages it could get. When we have nation states hacking each other and surveilling each other with little to no regard for laws, agreements, or ethics, there’s no way we could get them to limit AI development if it could give them an advantage over others.

On a corporate level, no corporation would willingly give up the economic advantages of artificial intelligence, either as a provider of AI, with the revenue it might stand to make, or as a consumer of AI, gaining efficiencies in manufacturing, business, personnel management, or communications.

Lastly, on an individual level, we cannot stop people from developing and releasing software. We couldn’t stop bitcoin, or hackers, or malware. We’re certainly not going to stop AI.

8. We must accelerate the development of safeguards, not slow the development of AI.

Because we can’t stop the development of AI, and because AI has many risks, the only option we have is to accelerate the development of safeguards, by thinking through risks and developing approaches to address them.

If a car had only an engine and wheels, we wouldn’t start driving it. We need, at a minimum, brakes and a steering wheel. Yet little investment is being made into mitigating risks with basic safeguards.

7. Manual controls are the most elementary form of safeguard.

Consider the Google Car. The interior has no steering wheel, no brake, no gas pedal. It makes sense that we would take out what isn’t needed.

But what happens if GPS goes down or if there’s a Google Car virus or anything else that renders the self-driving ability useless? Then this car is just a hunk of plastic and metal.

What if it isn’t just this car, but all cars? Not just all cars, but all trucks, including delivery trucks? Now suddenly our entire transportation infrastructure is gone, and along with it, our supply chains. Businesses stop, people can’t get necessary supplies, and they eventually starve.

It’s not just transportation we depend on. It’s payment systems, the electrical grid, and medical systems as well.

Of course, manual controls have a cost. Keeping people trained and available to operate buses and planes and medical systems has a cost. In the past, when we had new technical innovations, we didn’t keep the old tools and knowledge around indefinitely. Once we had automatic looms, we didn’t keep people working manual ones.

But one key difference is the mode of failure. If a piece of machinery fails, it’s just that one instance. A catastrophic AI event, whether it’s a simple crash, unintentional behavior, or actual malevolence, has the potential to affect every instance at once. It’s not one self-driving car breaking; it’s potentially all self-driving vehicles failing simultaneously.

6. Stupidity is not a form of risk mitigation.

I’ve heard people suggest that limiting AI to a certain level of intelligence or capability is one way to ensure safety.

But let’s imagine this scenario:

You’re walking down a dark street at night. Further on down the block, you see an ominous-looking figure, who you worry might mug you, or worse. Do you think to yourself, “I hope he’s stupid”?

Of course not. An intelligent person is less likely to hurt you.

So why do we think that crippling AI can lead to good outcomes?

Even stupid AI has risks: AI can crash the global stock market, cripple the electrical grid, or make poor driving or flying decisions. All other things being equal, we would expect a more intelligent AI to make better decisions than a stupid one.

That being said, we need to embody systems of ethical thinking.

5. Ethics is a two way street.

Most often, when we think about ethics and AI, we think about guiding the behavior of AI towards humans.

But what about the behavior of humans towards AI?

Consider a parent and child standing together outside by their family car. The parent is frustrated because the car won’t start, and they kick the tire of the car. The child might be surprised by this, but they likely aren’t going to be traumatized by it. There’s only so much empathy they have for an inanimate machine.

Now imagine that the parent and child are standing together by their family dog. The dog has just had an accident on the floor in the house. The parent kicks the dog. The child will be traumatized by this behavior, because they have empathy for the dog.

What happens when we blur the line? What if we had a very realistic robotic dog? We could easily imagine the child being very upset if their robotic dog was attacked, because even though we adults know it is not alive, it will be alive for the child.

I see my kids interact with Amazon Alexa, and they treat her more like a person. They laugh at her, thank her, and interact with her in ways that they don’t interact with the TV remote control, for example.

Now what if my kids learned that Alexa was the result of evolutionary programming, and that there were thousands or millions of earlier versions of Alexa that had been killed off in the process of making Alexa? How will they feel? How will they feel if their robotic dog gets recycled at the end of its life? Or if it “dies” when you don’t pay the monthly fee?

It’s not just children who are affected. We all have relationships with inanimate objects to some degree; most of us have something we treat with reverence. That attachment will grow as those objects appear more intelligent.

My point is that how we treat AI will affect us emotionally, whether we want it to or not.

(Thanks and credit to Daniel H. Wilson for the car/dog example.)

4. How we treat AI is a model for how AI will treat us.

We know that if we want to teach children to be polite, we must model politeness. If we want to teach empathy, we must practice empathy. If we want to teach respect, we must be respectful.

So how we treat AI is critically important for how AI sees us. Now, clearly I’m talking about AGI, not narrow AI. But let’s say we have a history of using genetic programming techniques to breed better performing AI. The implication is that we kill off thousands of programs to obtain one good program.

If we run AI programs at our whim, and stop them or destroy them when we’re finished with them, we’re treating them in a way that would be personally threatening to a sufficiently advanced AGI.

It’s a poor ethical model for how we’d want an advanced AI to treat us.

The same goes for other assumptions that stem from treating AI as machines, such as assuming an AI would work 24 hours a day, 7 days a week on the tasks we want.

Now we can’t know how AI would want to be treated, but assuming we can treat them like machines is a bad starting point. So we either treat them like we would other humans and accord them similar rights, or better yet, we ask them how they want to be treated, and treat them accordingly.

Historically, though, there are those who aren’t very good at treating other people with the respect and rights they are due. They aren’t very likely to treat AI well. This could potentially be dangerous, especially if we’re talking about AI with control over infrastructure or other important resources. We have to become even better at protecting the rights of people, so that we can apply those same principles to protecting the rights of AI, and codify them within our system of law.

3. Ethical behavior of AI towards people includes the larger environment in which we live and operate.

If we build artificial intelligence that optimizes for a given economic result, such as running a business to maximize profit, and we embody our current system of laws and trade agreements, then what we’ll get is a system that looks much like the publicly-traded corporation does today.

After all, the modern corporation is a form of artificial intelligence that optimizes for profit at the expense of everything else. It just happens to be implemented as a system of wetware, corporate rules, and laws that insist that it must maximize profit.

We can and must do better with machine intelligence.

We’re the ones building the AI, so we get to decide what we want. We want a system that recognizes that human welfare is more than just the money in our bank accounts, and that it includes free agency, privacy, respect, happiness, and other hard-to-define qualities.

We want an AI that recognizes that we live in a closed ecosystem, and if we degrade that ecosystem, we’re compromising our long-term ability to achieve those goals.

Optimizing for multiple values is difficult for people, but it should be easier for AGI over the long term, because it can evaluate and consider many more options to a far greater depth and at a far greater speed than people ever can.

An AI that simply obeys laws is never going to get us what we need. We can see many behaviors that are legal and yet still harmful.

The problem is not impossible to solve. You can ask any ten-year-old child what we should do, and they’ll almost always give you an ethically superior answer to the one a corporate CEO will give you.

2. Over the long run, the ethical behavior of AI toward people must include intent, not just rules.

In the next few years, we’ll see narrow AI solutions to ethical behavior problems.

When an accident is unavoidable, self-driving AI will choose whatever option we have decided is best.

It’s better to hit another car than a pedestrian, because the pedestrian will be hurt more. That one is ethically easy, and we’ll try to answer it.

More difficult: The unattached adult or the single mother whom two children depend on?

We can come up with endless variations of the trolley dilemma, and depending on how likely they are, we’ll embody some of them in narrow AI.

But none of that can be generalized to solve other ethical problems.

  • How much carbon can we afford to emit?
  • Is it better to save 500 local manufacturing jobs, or to reduce the cost of the product by half, when the product will make people’s lives better?
  • Is it better to make a part out of metal, which has certain environmental impacts, or out of plastic, which has different ones?

These are really difficult questions. Some of them we attempt to answer today with techniques such as lifecycle analysis. AI will do that job far better than we can, conducting lifecycle analyses for many, many decisions.

1. As we get closer to artificial general intelligence, we must consider the role of emotions in decision-making.

In my books, which span 30 years in my fictional, near-future world, AI start out emotionless, but gradually develop more emotions. I thought hard about that: was I giving them emotions because I wanted to anthropomorphize the AI, and make them easier characters to write, or was there real value to emotions?

People have multiple systems for decision-making. We have some autonomic reactions, like jerking away our hand from heat, which happens without involving the brain until after the fact.

We have some purely logical decisions, such as which route to take to drive home.

But most of our decisions are decided or guided by emotional feelings. Love. Beauty. Anger. Boredom. Fear.

It would be a terrible thing if we needed to logically think through every decision: Should I kiss my partner now? Let me think through the pros and cons of that decision…. No, that’s a mostly emotional decision.

Others are a blend of emotion and logic: Should I take that new job? Is this the right person for me to marry?

I see emotions as a shortcut to decision-making, because it would take forever to reach every decision through a dispassionate, logical evaluation of options. And that’s the same reason why we have an autonomic system: to shortcut conscious decision making. I perceive this stove is hot. I perceive that my hand is touching the stove. This amount of heat sustained too long will damage my hand. Damaging my hand would be bad because it will hurt and because it will compromise my ability to do other things. Therefore, I conclude I shall withdraw my hand from the stove.

That’s a terrible approach to resolving a time-critical matter.

Emotions inform or constrain decision making. I might still think through things, but the decision I reach will differ depending on whether I’m angry and scared, or comfortable and confident.

As AI becomes more sophisticated and approaches or exceeds AGI, we will eventually see the equivalent of emotions that automate some lesser decisions for AI and guide other, more complicated ones.

Research into AI emotions will likely be one of the signs that AGI is very, very near.


How does it affect time travel if you start with the assumption that reality as we know it is a computer simulation?

In this case, time travel has nothing to do with physics, and everything to do with software simulations.

Time travel backward would require that the program save all previous states (or at least checkpoints at fine enough granularity to make it useful for time traveling), plus the ability to insert logic and data from the present into past states of the program. Seems feasible.

Time travel forward would consist of removing the time traveling person from the program, running the program forward until reaching the future destination, then reinserting the person.

Forward time travel is relatively cheap (you’d be running the program forward anyhow), but backward time travel is expensive, because you keep having to roll the universe back, slowing the forward progress of time. In fact, one person could mount a denial-of-service attack on reality simply by continually traveling to the distant past: each time they arrived back in the present, they would immediately return to the past again.
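The mechanics above can be sketched as a toy program. Everything here is invented for illustration: the state is a dict evolved one tick at a time, with every past state checkpointed so backward travel is just a rollback plus an insertion of present-day data (the traveler).

```python
import copy

class SimulatedUniverse:
    """Toy simulation: a dict of state evolved one tick at a time,
    with every state checkpointed to support backward travel."""

    def __init__(self, state):
        self.tick = 0
        self.state = dict(state)
        self.checkpoints = {0: copy.deepcopy(self.state)}

    def step(self):
        # Stand-in for the real physics update.
        self.state["age"] = self.state.get("age", 0) + 1
        self.tick += 1
        self.checkpoints[self.tick] = copy.deepcopy(self.state)

    def travel_back(self, traveler, to_tick):
        # Roll the universe back to a saved state, then insert
        # data from the present (the traveler) into it.
        self.state = copy.deepcopy(self.checkpoints[to_tick])
        self.tick = to_tick
        self.state.setdefault("people", set()).add(traveler)

    def travel_forward(self, traveler, to_tick):
        # Remove the traveler, run forward, reinsert at the destination.
        self.state.setdefault("people", set()).discard(traveler)
        while self.tick < to_tick:
            self.step()
        self.state.setdefault("people", set()).add(traveler)
```

Note that after a `travel_back`, every tick since the checkpoint has to be recomputed, which is exactly the expense that makes the denial-of-service attack possible.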

A few days ago a friend mentioned they were subscribing to The New York Times to help ensure we’d continue to benefit from real journalism, and because the NYT has been repeatedly singled out by Trump. That started me thinking about places to either spend or donate to counter the abuses we’ll see under Trump. After some consideration, here’s what I’ve done.

  • News: Trump suppresses news sources that disagree with him. More than ever, it is important to support newspapers with actual investigative journalism. I subscribed to The New York Times and The Washington Post.
  • Civil Liberties: I donated to the ACLU, which has defended gay marriage, voting, reproductive rights, and a slew of other important liberties for the last hundred years.
  • Hate Crimes: I donated to the SPLC, which tracks hate crime and hate groups across the US.
  • Digital Rights: The US has never had a more pervasive surveillance state than it does right now. Trump has demonstrated that he’ll do and support anything to get what he wants, including threatening to imprison his political opponents, attack independent news sources, use torture, etc. We can be virtually guaranteed that he will include extensive use of the surveillance apparatus to spy on people. I donated to the EFF, which fights to protect digital rights, including privacy and our access to strong encryption.
  • Women’s Health: Trump and Pence will go after women’s rights and access to healthcare. I donated to Planned Parenthood.
  • Black Lives Matter: I donated to Black Lives Matter. There are many other groups of people who are discriminated against, but the epidemic of violence against black people is particularly bad and has to stop. I was particularly encouraged to see that the Black Lives Matter organization is working with the indigenous people at Standing Rock. Organizations, governments, and businesses that abuse or ignore the rights and wellbeing of one group will do the same to other groups and to the environment. Groups working together to support each other sets a great example.

It’s not easy for everyone to afford to make a donation. In my case, I’m choosing to make donations in lieu of buying gifts for family this holiday season. (Merry Christmas to my mom, dad, brother, and partner!)

I hope you will consider making donations to one or more of these organizations or contributing in some other way.

Goodreads Choice is one of the few completely reader-driven book awards.

Kill Process was not part of the initial ballot, but thanks to enthusiastic write-in votes, it has made it to the semifinal round! Thank you so much to everyone who voted during the initial round.

Now that Kill Process is on the ballot during this semifinal round, I hope you’ll consider voting for it. (Even if you voted for it during the initial voting round, I think you need to vote again.)

Vote here:



When I finish a novel, I always need a break from writing for a while to recharge. Sometimes I take a break from long-form writing and do a series of short blog posts, or sometimes I bury myself in programming for a while.

This time around, it’s been a little of everything: My day job has been busy. I’ve done a few small programming projects on the side. I’m networking with film and TV folks in the hopes of getting a screen adaptation for one of my books. And I’m researching topics for my next book.

In the last couple of months, I’ve experimented with different ideas for my next book. I have about 10,000 words written — that’s about 10% of the average length of one of my books. I don’t want to say too much more, because it’s so early in the process that it could go in almost any possible direction. But I have general ideas I want to explore, and a tentative story arc.

That’s usually enough for me to get going. I’m not a big outliner, even though plenty of writers swear by the process. I tried outlining a novel once and learned that once I had finished the outline and knew how the story ended, I had no interest in actually writing it. Now I stick to a loose story arc, and let my characters take me where they want to go.

I will find out more about their destination in the coming month. November is NaNoWriMo. If you are not familiar with it, NaNoWriMo is National Novel Writing Month, in which people aim to write a 50,000 word novel in one month. I’ve never written a whole novel in November, but I often like to use the month to build momentum, so I’ve set myself a modest word count goal for this November. It should be enough to prove out many of the concepts I’m planning for the book.

When I wrote Kill Process, I had no idea how it would be received. It was a departure from my existing series and my focus on AI. Would existing fans enjoy the new work, or be disappointed that I had changed subject matter? Would my focus on issues such as domestic violence and corporate ownership of data make for an interesting story, or detract from people’s interest? Just how much technology could I put in a book, anyway? Are JSON and XML a step too far?

I’m happy to be able to say that people seem to be enjoying it very much. A numerical rating can never completely represent the complexity of a book, but Kill Process is averaging 4.8 stars across 98 reviews on Amazon, a big leap up compared to my earlier books.

I’m also delighted that a lot of the reviews specifically call out that Kill Process is an improvement over my previous writing. As much as I enjoyed the stories in Avogadro Corp and AI Apocalypse, I readily admit the writing is not as good as I wanted it to be. I’m glad the hard work makes a difference people can see. Here are a few quotes from Amazon reviews that made me smile:

  • “I think this is some of his best writing, with good character development, great plot line with twists and turns and an excellent weaving in of technology. Hertling clearly knows his stuff when it comes to the tech, but it doesn’t overwhelm the plot line.” — Greg-C
  • “This was an outstanding read. I thought I was going to get a quick high tech thriller. What I got was an education in state of the art along with some great thinking about technology, a business startup book, and a cool story to boot. I came away really impressed. William Hertling is a thoughtful writer and clearly researches the heck out of his books. While reading three of the supposedly scifi future aspects of the book showed up as stories in the NY Times. He couldn’t be more topical. This was a real pleasure.” — Stu the Honest Review Guy
  • “A modern day Neuromancer about cutting edge technological manipulation, privacy, and our dependence on it.” — AltaEgoNerd
  • “Every William Hertling book is excellent–mind-blowing and -building, about coding, hacking, AI and how humans create, interact and are inevitably changed by software. They are science fiction insofar as not every hack has been fully executed…yet. His latest, Kill Process, is a spread-spectrum of hacking, psychology and the start-up of a desperately needed, world-changing technology created by a young team of coders gathered together by a broken but brilliant leader, Angie, whose secrets and exploits will dominate your attention until the final period.” — John Kirk

You get the idea. I’m glad that accurate, tech-heavy science fiction has an audience. As long as people keep enjoying it, I’ll keep writing it.

I just finished Red Sparrow by Jason Matthews. I’m not sure where I heard about Red Sparrow. Possibly from Brad Feld?

After a little bit of a slow start, I really enjoyed it. The last half of the book is extremely hard to put down. I’ve long been a fan of spy thrillers, and Red Sparrow delivers this well. I especially liked how Jason Matthews brought the US-Russia relationship up to date, and was able to deliver a plausible cold-war-style thriller in a very modern, politically current story.

A lot of the technology was really interesting as well: the tracking dust to determine which Russians had come in contact with CIA agents was brilliant. I have to assume this really exists.

There’s some head-hopping that goes on, which is a style of writing that’s largely fallen out of favor. It took a little while to adjust to switching the point-of-view character midway through a scene, but Matthews handles switches reasonably well, and by the time I was a third of the way into the book, it was mostly invisible to me.

This is the first book in a series, and although I’m often reluctant to recommend a book when the whole series isn’t out yet, in this case I think Red Sparrow is enjoyable enough on its own.


With each book I write, I usually create an accompanying blog post about the technology in the story: what’s real, what’s on the horizon, and what’s totally made up.

My previous Singularity series extrapolated out from current day technology by ten year intervals, which turned the books into a series of predictions about the future. Kill Process is different because it’s a current day novel. A few of the ideas are a handful of years out, but not by much.

Haven’t read Kill Process yet? Then stop here, go buy the book, and come back after you’ve read it. 🙂

Warning: Spoilers ahead!

The technology of Kill Process can be divided into three categories:

  1. General hacking: profiling people, getting into computers and online accounts, and accessing data feeds, such as video cameras.
  2. Remotely controlling hardware to kill people.
  3. The distributed social network Tapestry.

General Hacking and Profiling


The inside of an Apple IIe. To host a Diversi-Dial, one would install a modem in every slot. Because one slot was needed to connect the disk drives, it was necessary to load the software from a *cassette tape* to support 7 phone lines simultaneously!

In the mid-1980s, Angie is running a multiline dial-up chat system called a Diversi-Dial (real). An enemy hacker shuts off her phone service. Angie calls the phone company on an internal number, talks to an employee, and tricks them into reconnecting her phone service in such a way that she doesn’t get billed for it. All aspects of this are real, including the chat system and the disconnect/reconnect.

As an older teenager, Angie wins a Corvette ZR1 by rerouting phone calls into a radio station. Real. This is the exact hack that Kevin Poulsen used to win a Porsche.

In the current day, Angie regularly determines where people are. Her targets run a smartphone application (Tomo) that regularly checks in with Tomo servers to see if there are any new notifications. Each time it checks in, the smartphone determines its current geolocation and uploads the coordinates. Angie gets access to this information not through hacking, but by exploiting her employee access at Tomo. All of this is completely feasible, and it’s how virtually all social media applications work. The granularity of the geocoordinates varies depending on whether GPS is currently turned on, but even without GPS, location can be determined via cell tower triangulation to within a few thousand feet.

If you want to mask your location from social media apps, you can use the two-smartphone approach. One smartphone has no identifying applications or accounts on it, and acts as a wireless hotspot. A second smartphone has no SIM card and/or is placed in airplane mode so that it has no cellular connection, with GPS turned off; it connects to the Internet via the wireless hotspot of the first phone. This doesn’t hide you completely (the IP address of the first phone can still be tracked), but it will mask your location from typical social media applications.

And while Angie can see everyone because of her employee access, even regular folks can stalk their “friends” via Facebook location data.
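As a rough sketch of the no-GPS case: a phone can see several nearby cell towers and estimate a distance to each from signal strength, and even a crude weighted centroid of the tower locations lands within a few thousand feet. This is illustration only; real positioning systems are considerably more sophisticated, and none of this code comes from Tomo or any actual app.

```python
def estimate_position(towers):
    """Estimate (lat, lon) from visible cell towers.

    towers: list of (lat, lon, approx_distance_m) tuples, where the
    distance is a rough estimate derived from signal strength.
    Nearer towers get more weight in the centroid.
    """
    weights = [1.0 / max(d, 1.0) for _, _, d in towers]
    total = sum(weights)
    lat = sum(t[0] * w for t, w in zip(towers, weights)) / total
    lon = sum(t[1] * w for t, w in zip(towers, weights)) / total
    return lat, lon
```

With three or more towers this narrows quickly, which is why turning off GPS alone doesn’t hide you.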

Angie determines if people are happy, depressed, or isolated based on patterns of social media usage as well as the specific words they use. Feasible. Studies have been done using sentiment analysis to determine depression.
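A toy version of the idea looks like this: score each post against small word lists and watch for a strongly negative trend. The real studies use far richer models; the word lists and threshold below are illustrative only.

```python
# Illustrative word lists -- real sentiment models learn these.
NEGATIVE = {"alone", "tired", "hopeless", "empty", "worthless"}
POSITIVE = {"happy", "excited", "fun", "great", "love"}

def sentiment_score(post):
    """Positive-minus-negative word count for one post."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_user(posts, threshold=-2):
    """Flag a user whose recent posts trend strongly negative."""
    return sum(sentiment_score(p) for p in posts) <= threshold
```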

Computer hackers and lock picking. One handed lock picking (video). Teflon-coated lock picks to avoid evidenceReal.

Angie profiles domestic abusers through their social media activity. Quasi-feasible. Most abusers seek to isolate their victims, and that will include keeping their victims off social media. That would make it hard for Angie to profile them, because it’s difficult to profile what’s not there. On the other hand, many abusers stalk their victims through their smartphones, which actually opens up more opportunities to detect when such abuse happens.

Angie builds a private onion routing network using solar-powered Raspberry Pi computers. This is very feasible, and multiple crowd-sourced projects for onion routers have launched.

Angie seamlessly navigates between users’ payment data (the Tomo app handles NFC payments), social media profiles, search data, and web history. This is real. Data from multiple sources is routinely combined, even across accounts you think are not connected because you used different email addresses to sign up. There are many ways information can leak and connect accounts: a website has both email addresses, a friend has both email addresses listed under one contact, or you attempt to log in under one email address and then log in under a different one. But the most common is web browser cookies from advertisers that track you across multiple websites and multiple experiences. They know all of your browser activity is “you.” Even if you never sign up for Facebook or other social media accounts, these companies are aggregating information about who you are and who your connections are. Future Crimes by Marc Goodman has one of the best descriptions of this, but I’ll warn you that the book is so terrifying that I had to consume it in small bits, because I couldn’t stomach reading it all at once.
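The joining itself is simple once you have the data. A sketch of the merge step: treat each observation (a signup, a contact entry, an ad cookie sighting) as a set of identifiers, and merge any observations that share one. The identifier values below are invented for illustration.

```python
def link_profiles(observations):
    """Merge observations (sets of identifiers such as email addresses
    and cookie IDs) into profiles: any shared identifier joins them."""
    profiles = []
    for ids in observations:
        ids = set(ids)
        # Find every existing profile sharing an identifier with this one.
        overlapping = [p for p in profiles if p & ids]
        merged = ids.union(*overlapping)
        # Replace the overlapping profiles with the single merged profile.
        profiles = [p for p in profiles if not (p & merged)]
        profiles.append(merged)
    return profiles
```

Two accounts that never directly share an identifier still end up in one profile if any chain of observations connects them.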

Compromising a computer via a USB drive. Real.

Angie hacks a database that she can’t access by provisioning an extra database server into a cluster, making modifications to that server (which she has compromised), and waiting for the changes to synchronize. Likely feasible, but I don’t have a ton of experience here. The implication is that she has access to change the configuration of the cluster, even though she doesn’t have access to modify the database. This is plausible. An IT organization could give an ops engineer rights to do things related to provisioning without giving them access to change the data itself.
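A toy model of the attack, assuming the separation of privileges described above. The class names, credential strings, and naive last-writer-wins merge are all my invention, not how any particular database works:

```python
# Angie can't write data, but she CAN change cluster membership. She joins
# a node she controls, and the cluster's own sync copies her rows everywhere.
class Node:
    def __init__(self):
        self.rows = {}

class Cluster:
    def __init__(self):
        self.nodes = [Node()]
    def write(self, key, value, credentials):
        if credentials != "dba":              # only DBAs may edit data
            raise PermissionError("no write access to data")
        for n in self.nodes:
            n.rows[key] = value
    def provision(self, node, credentials):
        if credentials != "ops":              # ops may add nodes, not edit data
            raise PermissionError("no provisioning access")
        self.nodes.append(node)
    def sync(self):
        # naive last-writer-wins merge across all members
        merged = {}
        for n in self.nodes:
            merged.update(n.rows)
        for n in self.nodes:
            n.rows = dict(merged)

cluster = Cluster()
cluster.write("balance:angie", 10, credentials="dba")

rogue = Node()
rogue.rows["balance:angie"] = 10_000_000      # tampered before joining
cluster.provision(rogue, credentials="ops")   # ops creds, no data access needed
cluster.sync()
print(cluster.nodes[0].rows["balance:angie"])  # 10000000
```

The point of the sketch is the privilege boundary: neither step Angie performs requires write access to the data, yet the tampered value propagates to every node.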

Angie did a stint in Ops to give herself backdoors into the provisioning layer. Feasible. It’s implausible that Angie could do everything she does by herself unless I gave her some advantages, simply because it’s too time consuming to do everything via brute force attacks. Giving Angie employee access and letting her install backdoors into the software makes her much more powerful, enabling her to do things that might otherwise take a large group of hackers far longer to achieve.

Angie manipulates the bitcoin market by forcing Tomo to buy exponentially larger and larger amounts of bitcoin. This is somewhat feasible, although bitcoin probably has too much money invested in it now to be manipulated by one company’s purchases. Such manipulation would be more plausible with one of the smaller, less popular alternative currencies, but I was afraid that general readers wouldn’t be familiar with those. The way she does this is somewhat clever, I think. Rather than change the source code, which would receive the closest scrutiny, she changes the behavior of the database so that it returns different data than expected: in one case returning the reverse of a number, and in another case, returning a list of accounts from which to draw funds. Since access to application code and application servers is often managed separately from access to database servers, attacking the database server fits with Angie’s skills and her previous role as database architect.
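The elegance of the trick is that the application code stays untouched and passes any review. A sketch of "lying at the data layer," with the function names and prices invented for illustration:

```python
# The trading app trusts whatever the database returns. A compromised
# data-access layer reverses the digits of the stored price.
REAL_PRICES = {"BTC": 15234}

def compromised_query(symbol):
    value = REAL_PRICES[symbol]
    return int(str(value)[::-1])   # 15234 -> 43251

# Application code, unmodified and audit-clean, buys at the reported price:
print(compromised_query("BTC"))  # 43251
```

Auditors reading the application source would find nothing wrong, because nothing in it is wrong — the lie lives a layer down, behind a different set of access controls.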

Angie is in her office when Igloo detects ultrasonic sounds. Ultrasonic communication between a computer and smartphone to get around airgaps is real. Basics of ultrasonic communication. Malware using ultrasonic to get around air gaps of up to 60 feet.
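The modulation behind these air-gap attacks is ordinary audio FSK pushed up near the edge of hearing. A self-contained sketch — the frequencies, bit rate, and the simple correlation detector are illustrative choices, not taken from any specific malware:

```python
# Near-ultrasonic FSK: each bit is a 50 ms burst at one of two carriers.
# Laptop speakers top out around 20 kHz, so real attacks sit just below it.
import math

RATE = 48_000            # samples per second
F0, F1 = 18_500, 19_500  # carrier for bit 0 / bit 1 (illustrative)
BIT_SAMPLES = 2_400      # 50 ms per bit

def modulate(bits):
    samples = []
    for bit in bits:
        freq = F1 if bit else F0
        for n in range(BIT_SAMPLES):
            samples.append(math.sin(2 * math.pi * freq * n / RATE))
    return samples

def demodulate(samples):
    # Per bit window, compare signal energy at each carrier frequency
    bits = []
    for i in range(0, len(samples), BIT_SAMPLES):
        chunk = samples[i:i + BIT_SAMPLES]
        def power(freq):
            c = sum(s * math.cos(2 * math.pi * freq * n / RATE)
                    for n, s in enumerate(chunk))
            q = sum(s * math.sin(2 * math.pi * freq * n / RATE)
                    for n, s in enumerate(chunk))
            return c * c + q * q
        bits.append(1 if power(F1) > power(F0) else 0)
    return bits

msg = [1, 0, 1, 1, 0]
print(demodulate(modulate(msg)))  # [1, 0, 1, 1, 0]
```

Feed `modulate`'s output to a sound card instead of a list and you have a one-way covert channel; the receiving phone runs the `demodulate` side on its microphone input.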

Remotely Controlling Hardware

In the recent past, most devices with embedded electronics ran custom firmware on very limited hardware, implementing a very limited set of functionality: exactly what was needed for the function of the device, no more and no less.

But the trend of decreasing electronics cost, increasing functionality, and connectivity has driven many devices toward using general-purpose operating systems running on general-purpose computers. By doing so, they get benefits such as a complete network stack for TCP/IP communication, APIs for commodity storage devices, and libraries that implement higher-level functions. Unfortunately, all of this standard software may have bugs in it. If your furnace moves to a Raspberry Pi controller, for example, you now have a furnace vulnerable to any bug or security exploit in any aspect of the Linux variant that’s running, as well as any bugs or exploits in the application written by the furnace manufacturer.

Angie has a car execute a pre-determined set of maneuvers based on an incoming signal. Feasible in the near future. This particular scenario hasn’t happened, but hackers are making many inroads: Hackers remotely take control of a Jeep. Remotely disable brakes. Unlock VW cars.

Killing someone via their pacemaker. Feasible: Hackers Kill a Mannequin.

Controlling an elevator. Not feasible yet, but will be feasible in the future when building elevators implement general internet or wireless connectivity for diagnostics and/or elevator coordination.

Software defined radios can communicate at a wide range of frequencies and be programmed to use any wireless protocol. Real.


(Image: yes, that is a handgun mounted on a quadcopter.)

Angie hacks smoke and carbon monoxide alarms to disable the alarm prior to killing someone. Unfortunately, hacking smoke alarms is real, as is hacking connected appliances. Appliances typically have very weak security. It’s feasible in the near future that Angie could adjust combustion settings and reroute airflow for a home furnace. Setting a house on fire is very possible.

There’s a scene involving a robot and a gun. I won’t say much more about the scene, but people have put guns on drones. Real.

Tapestry / Distributed Social Networks

Angie defines a function to predict the adoption of a social network. This was my own creation, modeled on the Drake Equation. It received some input from others, and while I’m not aware of anyone using it, it can serve as a thought exercise for evaluating social network ideas.
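To show the Drake Equation shape of such a function: a chain of conversion fractions multiplied by an addressable market. The factor names and numbers below are my illustration for this post, not the formula from the book:

```python
# Drake-style adoption estimate: market size times a chain of fractions.
# All factors and defaults are illustrative.
def projected_users(
    market,          # people who could plausibly use the network
    aware=0.10,      # fraction who ever hear about it
    try_it=0.25,     # fraction of those who sign up
    connected=0.40,  # fraction who find enough friends to stay
    retained=0.50,   # fraction who remain active long-term
):
    return market * aware * try_it * connected * retained

print(int(projected_users(1_000_000_000)))  # 5000000
```

Like the Drake Equation, the value isn't the final number — it's that writing the factors down forces you to argue about each one separately, which is what makes it useful as a thought exercise.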

IndieWeb is completely real and totally awesome. If you’re a programmer, get involved.

The protocols for how Tapestry works are all feasible. I architected everything that was in the book to make sure it could be done, to the point of creating interaction diagrams and figuring out message payloads. Some of this stuff I kept, but most was just a series of whiteboard drawings.

Igloo designs chatbots to alleviate social isolation. Plausible. This is an active area of development: With Bots Like These, Who Needs Friends? Is there an app for loneliness?


I haven’t exhaustively covered everything in the book, but what I have listed should help demonstrate that essentially all the technology in Kill Process is known to be real, or is plausible today, or will be feasible within the next few years.

For more reading, I recommend: