Status

Because of changes to Amazon’s backend publishing systems, I had to use a new supplier to produce the advance paperbacks I needed to fulfill my Patreon backer rewards. Unfortunately, that supplier had a press breakdown, delaying production. As a result, I’m going to hold back the publication of Kill Switch a few days so that I can fulfill my Patreon backer rewards first. I expect that it will be published by the last week of October at the latest. I’m doing my best to make sure the audiobook is available at launch as well.

Kill Switch Blurb

Igloo and Angie are the co-founders of a new social network, Tapestry, based on the principles of privacy and data ownership. Two years later, with Tapestry poised to become the world’s largest social network, their rapid growth puts them under government scrutiny.

Tapestry’s privacy and security are so effective that they impede the government’s ability to monitor routine communications. Fearing Tapestry will spread to encompass the whole of the Internet, threatening America’s surveillance abilities around the globe, the government swoops in to stop Angie and company — by any means possible.

Under the constant threat of exposure — of Angie’s criminal past, of Igloo’s secret life in the underground kink scene, and of their actions to subvert a FISA court order — they must hatch a plan to ensure the success of Tapestry no matter what pressures the government brings to bear.

Not knowing whom to trust, or if they can even trust each other, Igloo and Angie must risk everything in the ultimate battle for control of the Internet.

I have good news about the Kill Switch release!

It’s been two years since Kill Process was released. Kill Switch was a daunting book to write. It’s 20% longer than Kill Process, which, when it was released, was the longest novel I’d written by far. Kill Switch also tackles new topics that required more research and finesse to handle properly. And while writing this novel, I also bought a house, moved, tackled house projects, switched roles at my day job, and more.

So it is with both excitement and relief that I can finally announce that Kill Switch will be released in October. The proofreading is done. The final formatting is done. The audiobook is nearly complete. The cover design is done. I will have a firm launch date within a week or two.

As usual, Patreon backers will be the first to receive Kill Switch: ebooks the first weekend in October, and paperbacks prior to the official launch.

Thank you so much for your patience! I’m delighted to get Kill Switch into everyone’s hands.

The new cover was designed by Jenn Reese, who did a wonderful job. Thank you, Jenn!

[Cover image: Kill Switch by William Hertling]

NAME

polyamory – supports simultaneous relationships

SYNOPSIS

poly [-dpt]

DESCRIPTION

poly supports simultaneous host-to-host relationships.

By default, poly searches for and upgrades any preexisting monogamous relationship to polyamory. Results may be mixed. To suppress this behavior, use kill -9 to terminate existing relationships first.

Polyamory comes in many variations. Best results are obtained running identical or closely compatible variations. See also: poly-nonhierarchical, poly-hierarchical, and poly-solo. Less compatible variations include: swinging. Poly is not compatible with cheating.

It is possible but not recommended to connect two hosts, one running poly and one running monogamy, but this requires an experienced system administrator and increases the risk of system instability. Resource utilization (see relationship-discussion) will likely be higher with this combination.

It is normal to have one or more relationship-discussion background processes per relationship. In some cases, O(n^2) processes are required for n relationships. These child processes automatically consume all available CPU cycles, and are necessary for system stability.
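(For the quantitatively inclined, here’s a tongue-in-cheek sketch of that scaling claim. Nothing below ships with poly; the discussion_processes helper is invented purely to illustrate that if, in the worst case, every pair of relationships needs its own discussion, the count grows as n*(n-1)/2, which is O(n^2).)

    from itertools import combinations

    def discussion_processes(relationships):
        # Worst case: every pair of relationships spawns its own
        # relationship-discussion process, i.e. n*(n-1)/2 of them.
        return list(combinations(relationships, 2))

    for n in (2, 3, 5, 10):
        rels = [f"rel{i}" for i in range(n)]
        print(n, "relationships ->", len(discussion_processes(rels)), "discussions")
    # Prints 1, 3, 10, and 45 discussions: quadratic growth, as warned above.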

OPTIONS

-p In promiscuous mode, poly will fork additional instances any time it sees an open port on a compatible host. This can be resource intensive and is not recommended to run indefinitely, although it is common for users to start poly in this state.

-d In debug mode, extra relationship-discussions are spawned. Poly is notoriously difficult to debug. If relationship-discussion is insufficient, if CPU utilization is too high, or system instability exceeds comfortable limits, use couples-counseling to process debug output.

-t To prevent errors during initialization and facilitate user adoption, poly supports a -t flag for trial mode. However, this is a dummy flag and has no effect.

TIPS

Poly by default operates in a time-sharing mode. For real-time relationship parallelism, it may be necessary to install the threesome, orgy, and/or kitchen-table packages.

It is recommended to run sti-scanner at regular intervals while running poly and furthermore to ensure that all relationship endpoints run sti-scanner. Alternatively, you can run poly in a private cloud, although not all benefits of poly are available in this configuration.

It is normal after installing poly to sometimes wish to revert to monogamy, especially during periods of high system instability. While this works in some cases, running poly modifies many components of the host operating system, sometimes permanently. Results with reverting to monogamy may vary.

Shortly after I published Avogadro Corp, readers began emailing me whenever there was news related to AI or other technology in my novels. Over the years, we’ve seen data centers in shipping containers, email auto-replies, and countless new AI developments. In the beginning, I’d blog about each of these, but the pace has been accelerating, and lately there are too many to keep up with. Here are a few from recent months:

  • People kicking robots. Will they never learn?
  • Microsoft puts a data center at the bottom of the ocean.
  • Facebook develops Tomo’s PrivacyGuard.
  • Arming robots with lasers.
  • The Google Assistant, demoed at Google I/O, can make phone calls and impersonate people.
  • Google’s ELOPe, er, I mean “Smart Compose.”
  • Spoofing a cell phone tower with software-defined radio.
  • Facebook as Tomo.
  • Facebook having nation-state-level influence.
  • Reacting to the Facebook scandals.
  • Reconstructing speech via laser reflections off a potato chip bag.

A redditor asked me about my writing process and wanted to know if I had any tips for outlining or otherwise managing the complexity of epic stories. Unfortunately, I don’t. But I described my writing process and then thought it would make a nice blog post. So here’s my answer:

I’m sorry to say that I don’t have any tips for managing that complexity. I’m totally a pantser. I usually do an outline as I’m nearing the end of my first draft, to see what I’ve written and to help me understand the themes.

As a book gets bigger, it’s more and more difficult to fly by the seat of your pants, because of the growing complexity, but I haven’t found a method of outlining that works for me. I tried outlining a book once, and then as soon as I knew the basic outline of the plot, I had no interest in writing any more. The motivation for me to keep writing is to discover how things will end up.

My partner, Anastasia Poirier, is also a writer, and she uses the process described in Take Off Your Pants!, which focuses on outlining character arcs rather than plot. That approach supposedly avoids the problem I described, but I haven’t tried it myself yet.

In general, my method could be roughly described as:

  • Think about who I want the main character to be. Daydream about them, and some specific scenes. Who are they? What do they talk about? What do they care about?
  • Think about core dramatic scenes. For example, in The Last Firewall, I knew starting out that there would be this big showdown attack on the main antagonist AI. (Aka, the lobby scene…inspired by The Matrix.) I always had that in the back of my mind, and was working toward it the whole time I was writing.
  • Also think about moments in which the hero triumphs or falters. Imagine those moments and how the character responds, and keep them as something to work toward.
  • Once those things are in my head, then I start writing.
  • Focus on keeping the plot moving forward.
  • As I write, I’m developing the characters further.
  • Eventually I finish the first draft.
  • Then I reread and think about the core themes of what I wrote. I go back through the novel, strengthening those core themes. Make the characters consistent (i.e. if I discovered something key about them later in writing, make sure their earlier selves are consistent with it.)
  • Send manuscript off to my development editor. Get their feedback on the biggest issues to address. Fix those.
  • Send revised manuscript off to half my beta readers, get their feedback. Address the biggest issues and the easiest issues.
  • Send polished manuscript off to remaining half of my beta readers. Simultaneously, send manuscript off for line editing.
  • Incorporate any critical issues identified by beta readers at the same time as I address the line editing feedback.
  • Send off for proofreading, then formatting for print and ebook.

That process usually takes 15-18 months, although this time around it’s taking 24 months. In general, as the books get longer, they take longer to write. Complexity and effort seem to increase exponentially after 80,000 words. About two-thirds of the time is spent generating that first draft, and one-third in revising and production.

I originally wrote these ten musings on AI risks as the first half of a talk I was planning to give at the Rise of AI conference in Berlin next month. Unfortunately, plans for speaking at the conference fell through. The speaking points have been languishing in a document since then, so I figured I’d share them as a blog post while they’re still relevant. Please excuse any errors; this was just a rough draft.

10. There’s a balance between AI risk and AI benefits.

There’s too much polarization on AI risks. One camp says AI is going to be the end of all life, so we must stop all development immediately. The other camp says AI poses no risk, so let’s go full speed ahead.

Neither statement is totally true.

AI poses some risks, as all things do. The first step to being able to have a discussion about those risks is to admit that they exist. Then we can have a more nuanced and educated conversation about those risks.

However, no matter how severe those risks are, there’s no way we can stop all AI development, because:

9. There’s no way to put the AI genie back in the bottle.

There’s too much economic and military advantage to artificial intelligence to stop AI development.

On a government level, no governments would give up the advantages they could get. When we have nation states hacking each other and surveilling each other with little to no regard for laws, agreements, or ethics, there’s no way we could get them to limit AI development, if it could give them an advantage over others.

On a corporate level, no corporations would willingly give up the economic advantages of artificial intelligence, either as providers of AI, with the revenue they stand to make, or as consumers of AI, gaining efficiencies in manufacturing, business, personnel management, or communications.

Lastly, on an individual level, we cannot stop people from developing and releasing software. We couldn’t stop bitcoin, or hackers, or malware. We’re certainly not going to stop AI.

8. We must accelerate the development of safeguards, not slow the development of AI.

Because we can’t stop the development of AI, and because AI has many risks, the only option we have is to accelerate the development of safeguards, by thinking through risks and developing approaches to address them.

If a car had only an engine and wheels, we wouldn’t start driving it. We need, at a minimum, brakes and a steering wheel. Yet little investment is being made into mitigating risks with basic safeguards.

7. Manual controls are the most elementary form of safeguard.

Consider the Google Car. The interior has no steering wheel, no brake, no gas pedal. It makes sense that we would take out what isn’t needed.

But what happens if GPS goes down or if there’s a Google Car virus or anything else that renders the self-driving ability useless? Then this car is just a hunk of plastic and metal.

What if it isn’t just this car, but all cars? Not just all cars, but all trucks, including delivery trucks? Now suddenly our entire transportation infrastructure is gone, and along with it, our supply chains. Businesses stop, people can’t get necessary supplies, and they eventually starve.

It’s not just transportation we depend on. It’s payment systems, the electrical grid, and medical systems as well.

Of course, manual controls have a cost. Keeping people trained and available to operate buses and planes and medical systems has a cost. In the past, when we had new technical innovations, we didn’t keep the old tools and knowledge around indefinitely. Once we had automatic looms, we didn’t keep people working manual ones.

But one key difference is the mode of failure. If a piece of machinery fails, it’s just that one instance. If we have a catastrophic AI event, whether it’s a simple crash, unintentional behavior, or actual malevolence, it has the potential to affect every instance. It’s not one self-driving car breaking; it’s potentially all self-driving vehicles failing simultaneously.

6. Stupidity is not a form of risk mitigation.

I’ve heard people suggest that limiting AI to a certain level of intelligence or capability is one way to ensure safety.

But let’s imagine this scenario:

You’re walking down a dark street at night. Further on down the block, you see an ominous-looking figure who you worry might mug you, or worse. Do you think to yourself, “I hope he’s stupid!”

Of course not. An intelligent person is less likely to hurt you.

So why do we think that crippling AI can lead to good outcomes?

Even stupid AI has risks: AI can crash the global stock market, cripple the electrical grid, or make poor driving or flying decisions. All other things being equal, we would expect a more intelligent AI to make better decisions than a stupid one.

That being said, we need to embody systems of ethical thinking.

5. Ethics is a two way street.

Most often, when we think about ethics and AI, we think about guiding the behavior of AI towards humans.

But what about the behavior of humans towards AI?

Consider a parent and child standing together outside by their family car. The parent is frustrated because the car won’t start, and they kick the tire of the car. The child might be surprised by this, but they likely aren’t going to be traumatized by it. There’s only so much empathy they have for an inanimate machine.

Now imagine that the parent and child are standing together by their family dog. The dog has just had an accident on the floor in the house. The parent kicks the dog. The child will be traumatized by this behavior, because they have empathy for the dog.

What happens when we blur the line? What if we had a very realistic robotic dog? We could easily imagine the child being very upset if their robotic dog was attacked, because even though we adults know it is not alive, it will be alive for the child.

I see my kids interact with Amazon Alexa, and they treat her more like a person. They laugh at her, thank her, and interact with her in ways that they don’t interact with the TV remote control, for example.

Now what if my kids learned that Alexa was the result of evolutionary programming, and that thousands or millions of earlier versions of Alexa had been killed off in the process of making her? How will they feel? How will they feel if their robotic dog gets recycled at the end of its life? Or if it “dies” when you don’t pay the monthly fee?

It’s not just children who are affected. We all have relationships with inanimate objects to some degree, something we treat with reverence. That will grow as those objects appear more intelligent.

My point is that how we treat AI will affect us emotionally, whether we want it to or not.

(Thanks and credit to Daniel H. Wilson for the car/dog example.)

4. How we treat AI is a model for how AI will treat us.

We know that if we want to teach children to be polite, we must model politeness. If we want to teach empathy, we must practice empathy. If we want to teach respect, we must be respectful.

So how we treat AI is critically important for how AI sees us. Now, clearly I’m talking about AGI, not narrow AI. But let’s say we have a history of using genetic programming techniques to breed better performing AI. The implication is that we kill off thousands of programs to obtain one good program.

If we run AI programs at our whim, and stop them or destroy them when we’re finished with them, we’re treating them in a way that would be personally threatening to a sufficiently advanced AGI.

It’s a poor ethical model for how we’d want an advanced AI to treat us.

The same goes for other assumptions that stem from treating AI as machines, such as assuming an AI would work 24 hours a day, 7 days a week on the tasks we want.

Now we can’t know how AI would want to be treated, but assuming we can treat them like machines is a bad starting point. So we either treat them like we would other humans and accord them similar rights, or better yet, we ask them how they want to be treated, and treat them accordingly.

Historically, though, there are those who aren’t very good at treating other people with the respect and rights they are due. They aren’t very likely to treat AI well, either. This could be dangerous, especially if we’re talking about AI with control over infrastructure or other important resources. We have to become even better at protecting the rights of people, so that we can apply those same principles to protecting the rights of AI, and codify this within our system of law.

3. Ethical behavior of AI towards people includes the larger environment in which we live and operate.

If we build artificial intelligence that optimizes for a given economic result, such as running a business to maximize profit, and we embody our current system of laws and trade agreements, then what we’ll get is a system that looks much like the publicly-traded corporation does today.

After all, the modern corporation is a form of artificial intelligence that optimizes for profit at the expense of everything else. It just happens to be implemented as a system of wetware, corporate rules, and laws that insist that it must maximize profit.

We can and must do better with machine intelligence.

We’re the ones building the AI, so we get to decide what we want. We want a system that recognizes that human welfare is more than just the money in our bank accounts, and that it includes free agency, privacy, respect, happiness, and other hard-to-define qualities.

We want an AI that recognizes that we live in a closed ecosystem, and if we degrade that ecosystem, we’re compromising our long-term ability to achieve those goals.

Optimizing for multiple values is difficult for people, but it should be easier for AGI over the long term, because it can evaluate and consider many more options to a far greater depth and at a far greater speed than people ever can.

An AI that simply obeys laws is never going to get us what we need. We can see many behaviors that are legal and yet still harmful.

The problem is not impossible to solve. You can ask any ten-year-old child what we should do, and they’ll almost always give you an answer that is ethically superior to what the CEO of a corporation will tell you.

2. Over the long run, the ethical behavior of AI toward people must include intent, not just rules.

In the next few years, we’ll see narrow AI solutions to ethical behavior problems.

When an accident is unavoidable, self-driving AI will choose whatever we decide is the best option.

It’s better to hit another car than a pedestrian because the pedestrian will be hurt more. That’s ethically easy. We’ll try to answer it.

More difficult: The unattached adult or the single mother whom two children depend on?

We can come up with endless variations of the trolley dilemma, and depending on how likely they are, we’ll embody some of them in narrow AI.

But none of that can be generalized to solve other ethical problems.

  • How much carbon can we afford to emit?
  • Is it better to save 500 local manufacturing jobs, or to reduce the cost of the product by half, when the product will make people’s lives better?
  • Better to make a part out of metal, which has certain environmental impacts, or plastic, which has different ones?

These are really difficult questions. Some of them we attempt to answer today with techniques such as lifecycle analysis. AI will do that job far better than us, conducting lifecycle analysis for many, many decisions.

1. As we get closer to artificial general intelligence, we must consider the role of emotions in decision-making.

In my books, which span 30 years in my fictional, near-future world, AI start out emotionless, but gradually develop more emotions. I thought hard about that: was I giving them emotions because I wanted to anthropomorphize the AI, and make them easier characters to write, or was there real value to emotions?

People have multiple systems for decision-making. We have some autonomic reactions, like jerking away our hand from heat, which happens without involving the brain until after the fact.

We have some purely logical decisions, such as which route to take to drive home.

But most of our decisions are decided or guided by emotional feelings. Love. Beauty. Anger. Boredom. Fear.

It would be a terrible thing if we needed to logically think through every decision: Should I kiss my partner now? Let me think through the pros and cons of that decision…. No, that’s a mostly emotional decision.

Others are a blend of emotion and logic: Should I take that new job? Is this the right person for me to marry?

I see emotions as a shortcut to decision-making, because it would take forever to reach every decision through a dispassionate, logical evaluation of options. And that’s the same reason why we have an autonomic system: to shortcut conscious decision making. I perceive this stove is hot. I perceive that my hand is touching the stove. This amount of heat sustained too long will damage my hand. Damaging my hand would be bad because it will hurt and because it will compromise my ability to do other things. Therefore, I conclude I shall withdraw my hand from the stove.

That’s a terrible approach to resolve a time critical matter.

Emotions inform or constrain decision making. I might still think through things, but the decision I reach will differ depending on whether I’m angry and scared, or comfortable and confident.

As AI become sophisticated and approach or exceed AGI, we must eventually see the equivalent of emotions that automate some lesser decisions for AI and guide other, more complicated decisions.

Research into AI emotions will likely be one of the signs that AGI is very, very near.

 

How does it affect time travel if you start with the assumption that reality as we know it is a computer simulation?

In this case, time travel has nothing to do with physics, and everything to do with software simulations.

Time travel backward would require that the program save all previous states (or at least checkpoints at fine enough granularity to be useful for time traveling) and that it be able to insert logic and data from the present into past states of the program. Seems feasible.

Time travel forward would consist of removing the time-traveling person from the program, running the program forward until reaching the future destination, then reinserting the person.

Forward time travel is relatively cheap (because you’d be running the program forward anyhow), but backward time travel is expensive because you keep having to roll the universe back, slowing the forward progress of time. In fact, one person could mount a denial of service attack on reality simply by traveling to the distant past over and over: every time they arrived back in the present, they’d immediately jump to the past again, forcing the simulation to keep rolling back.
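To make that concrete, here’s a minimal Python sketch of the idea. The Simulation class, its toy state, and the travel_back/travel_forward methods are all invented for illustration, nothing more than a thought experiment: backward travel restores a checkpoint and forces everything after it to be re-simulated, while forward travel just parks the traveler while the program runs ahead as it would have anyway.

    import copy

    class Simulation:
        """Toy 'reality as a program' with per-tick checkpoints."""

        def __init__(self, state):
            self.t = 0
            self.state = state  # e.g. {"alice": {...}, "world": {...}}
            self.checkpoints = {0: copy.deepcopy(state)}

        def step(self):
            # Advance the universe one tick and checkpoint the result.
            self.t += 1
            self.state["world"]["ticks"] = self.t
            self.checkpoints[self.t] = copy.deepcopy(self.state)

        def run_until(self, t_target):
            while self.t < t_target:
                self.step()

        def travel_back(self, traveler, t_past):
            # Backward travel: restore an old checkpoint, then insert the
            # traveler's present-day data into that past state. Everything
            # after t_past now has to be re-simulated, which is what makes
            # backward travel expensive.
            snapshot = copy.deepcopy(self.state[traveler])
            self.state = copy.deepcopy(self.checkpoints[t_past])
            self.t = t_past
            self.state[traveler] = snapshot
            # Discard checkpoints from the now-invalidated future.
            self.checkpoints = {k: v for k, v in self.checkpoints.items() if k <= t_past}

        def travel_forward(self, traveler, t_future):
            # Forward travel: remove the traveler, run the program ahead
            # (cheap, since it was going to run anyway), then reinsert them.
            snapshot = self.state.pop(traveler)
            self.run_until(t_future)
            self.state[traveler] = snapshot

    sim = Simulation({"alice": {"memories": ["launch day"]}, "world": {"ticks": 0}})
    sim.run_until(10)
    sim.travel_back("alice", 3)      # arrive in the past, memories intact
    sim.travel_forward("alice", 50)  # skip ahead while the universe catches up
    print(sim.t, sim.state["alice"])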

A few days ago a friend mentioned they were subscribing to The New York Times to help ensure we’d continue to benefit from real journalism, and because the NYT has been repeatedly singled out by Trump. That started me thinking about places to either spend or donate to counter the abuses we’ll see under Trump. After some consideration, here’s what I’ve done.

  • News: Trump suppresses news sources that disagree with him. More than ever, it is important to support newspapers with actual investigative journalism. I subscribed to The New York Times and The Washington Post.
  • Civil Liberties: I donated to the ACLU, which has defended gay marriage, voting rights, reproductive rights, and a slew of other important liberties for the last hundred years.
  • Hate Crimes: I donated to the SPLC, which tracks hate crimes and hate groups across the US.
  • Digital Rights: The US has never had a more pervasive surveillance state than it does right now. Trump has demonstrated that he’ll do and support anything to get what he wants, including threatening to imprison his political opponents, attack independent news sources, use torture, etc. We can be virtually guaranteed that he will include extensive use of the surveillance apparatus to spy on people. I donated to the EFF, which fights to protect digital rights, including privacy and our access to strong encryption.
  • Women’s Health: Trump and Pence will go after women’s rights and access to healthcare. I donated to Planned Parenthood.
  • Black Lives Matter: I donated to Black Lives Matter. There are many other groups of people who are discriminated against, but the epidemic of violence against black people is particularly bad and has to stop. I was particularly encouraged to see that the Black Lives Matter organization is working with the indigenous people at Standing Rock. Organizations, governments, and businesses that abuse or ignore the rights and wellbeing of one group will do the same to other groups and to the environment. Groups working together to support each other sets a great example.

It’s not easy for everyone to afford to make a donation. In my case, I’m choosing to make donations in lieu of buying gifts for family this holiday season. (Merry Christmas to my mom, dad, brother, and partner!)

I hope you will consider making donations to one or more of these organizations or contributing in some other way.

[Cover image: Kill Process by William Hertling]

Goodreads Choice is one of the few completely reader-driven book awards.

Kill Process was not part of the initial ballot, but thanks to enthusiastic write-in votes, it has made it to the semifinal round! Thank you so much to everyone who voted during the initial round.

Now that Kill Process is on the ballot during this semifinal round, I hope you’ll consider voting for it. (Even if you voted for it during the initial voting round, I think you need to vote again.)

Vote here:
https://www.goodreads.com/choiceawards/best-science-fiction-books-2016