NAME

polyamory – supports simultaneous relationships

SYNOPSIS

poly [-dpt]

DESCRIPTION

poly supports simultaneous host-to-host relationships.

By default, poly searches for and upgrades any preexisting monogamous relationship to polyamory. Results may be mixed. To suppress this behavior, use kill -9 to terminate existing relationships first.

Polyamory comes in many variations. Best results are obtained running identical or closely compatible variations. See also: poly-nonhierarchical, poly-hierarchical, and poly-solo. Less compatible variations include: swinging. Poly is not compatible with cheating.

It is possible but not recommended to connect two hosts, one running poly and one running monogamy, but this requires an experienced system administrator and increases the risk of system instability. Resource utilization (see relationship-discussion) will likely be higher with this combination.

It is normal to have one or more relationship-discussion background processes per relationship. In some cases, O(n^2) processes are required for n relationships. These child processes automatically consume all available CPU cycles, and are necessary for system stability.

OPTIONS

-p In promiscuous mode, poly will fork additional instances any time it sees an open port on a compatible host. This can be resource-intensive and is not recommended for indefinite operation, although it is common for users to start poly in this state.

-d In debug mode, extra relationship-discussions are spawned. Poly is notoriously difficult to debug. If relationship-discussion is insufficient, CPU utilization is too high, or system instability exceeds comfortable limits, use couples-counseling to process debug output.

-t To prevent errors during initialization and facilitate user adoption, poly supports a -t flag for trial mode. However, this is a dummy flag and has no effect.

TIPS

Poly by default operates in a time-sharing mode. For real-time relationship parallelism, it may be necessary to install the threesome, orgy, and/or kitchen-table packages.

It is recommended to run sti-scanner at regular intervals while running poly and furthermore to ensure that all relationship endpoints run sti-scanner. Alternatively, you can run poly in a private cloud, although not all benefits of poly are available in this configuration.

It is normal after installing poly to sometimes wish to revert to monogamy, especially during periods of high system instability. While this works in some cases, running poly modifies many components of the host operating system, sometimes permanently. Results with reverting to monogamy may vary.

Shortly after I published Avogadro Corp, readers would email me whenever there was news related to AI or other technology in my novels. Over the years, we’ve seen data centers in shipping containers, email auto-replies, and countless new AI developments. In the beginning, I’d blog about each of these, but the pace has been accelerating and lately there are too many to keep up with. Here are a few from recent months:

  • People kicking robots. Will they never learn?
  • Microsoft puts a data center at the bottom of the ocean.
  • Facebook develops Tomo’s PrivacyGuard.
  • Arming robots with lasers.
  • The Google Assistant, demoed at Google I/O, can make phone calls and impersonate people.
  • Google’s ELOPe, er, I mean “smart compose.”
  • Spoofing cell phone towers with software-defined radio.
  • Facebook as Tomo.
  • Facebook having nation-state-level influence.
  • Reacting to the Facebook scandals.
  • Reconstructing speech via laser reflections off a potato chip bag.

A redditor asked me about my writing process, and wanted to know if I had any tips for outlining or otherwise managing the complexity of epic stories. Unfortunately, I don’t. But I described my writing process, and then thought it would be a nice blog post. So here’s my answer:

I’m sorry to say that I don’t have any tips for managing that complexity. I’m totally a pantser. I usually do an outline as I’m nearing the end of my first draft, to see what I’ve written and to help me understand the themes.

As a book gets bigger, it’s more and more difficult to fly by the seat of your pants, because of the growing complexity, but I haven’t found a method of outlining that works for me. I tried outlining a book once, and then as soon as I knew the basic outline of the plot, I had no interest in writing any more. The motivation for me to keep writing is to discover how things will end up.

My partner, Anastasia Poirier, is also a writer. She uses the process described in Take Off Your Pants!, which focuses on outlining character arcs rather than plot. That approach supposedly avoids the problem I described, but I haven’t tried it myself yet.

In general, my method could be roughly described as:

  • Think about who I want the main character to be. Daydream about them, and some specific scenes. Who are they? What do they talk about? What do they care about?
  • Think about core dramatic scenes. For example, in The Last Firewall, I knew starting out that there would be this big showdown attack on the main antagonist AI. (Aka, the lobby scene…inspired by The Matrix.) I always had that in the back of my mind, and was working toward it the whole time I was writing.
  • Also think about moments in which the hero triumphs or falters. Imagine those moments and how the character responds, and keep them as something to work toward.
  • Once those things are in my head, then I start writing.
  • Focus on keeping the plot moving forward.
  • As I write, I’m developing the characters further.
  • Eventually I finish the first draft.
  • Then I reread and think about the core themes of what I wrote. I go back through the novel, strengthening those core themes. Make the characters consistent (i.e., if I discovered something key about a character later in the writing, make sure their earlier self is consistent with it).
  • Send manuscript off to my development editor. Get their feedback on the biggest issues to address. Fix those.
  • Send revised manuscript off to half my beta readers, get their feedback. Address the biggest issues and the easiest issues.
  • Send polished manuscript off to remaining half of my beta readers. Simultaneously, send manuscript off for line editing.
  • Incorporate any critical issues identified by beta readers at the same time I address line editing feedback.
  • Send off for proofreading, then formatting for print and ebook.

That process usually takes 15-18 months, although this time around it’s taking 24 months. In general, as the books get longer, they are taking longer to write. Complexity and effort seem to increase exponentially after 80,000 words. In general, about two-thirds of the time is spent generating that first draft, and one-third in revising and production.

I originally wrote the ten musings on AI risks as the first half of a talk I was planning to give at the Rise of AI conference in Berlin next month. Unfortunately, plans for speaking at the conference fell through. The speaking points have been languishing in a document since then, so I figured I’d share them as a blog post while they are still relevant. Please excuse any errors; this was just a rough draft.

10. There’s a balance between AI risk and AI benefits.

There’s too much polarization on AI risks. One camp says AI is going to be the end of all life, and we must stop all development immediately. The other camp says AI poses no risk, so let’s go full speed ahead.

Neither statement is totally true.

AI poses some risks, as all things do. The first step to being able to have a discussion about those risks is to admit that they exist. Then we can have a more nuanced and educated conversation about those risks.

However, no matter how severe those risks are, there’s no way we can stop all AI development, because:

9. There’s no way to put the AI genie back in the bottle.

There’s too much economic and military advantage to artificial intelligence to stop AI development.

On a government level, no government would give up the advantages it could get. When we have nation states hacking each other and surveilling each other with little to no regard for laws, agreements, or ethics, there’s no way we could get them to limit AI development if it could give them an advantage over others.

On a corporate level, no corporations would willingly give up the economic advantages of artificial intelligence, either as providers of AI, with the revenue they stand to make, or as consumers of AI, gaining efficiencies in manufacturing, business, personnel management, or communications.

Lastly, on an individual level, we cannot stop people from developing and releasing software. We couldn’t stop bitcoin, or hackers, or malware. We’re certainly not going to stop AI.

8. We must accelerate the development of safeguards, not slow the development of AI.

Because we can’t stop the development of AI, and because AI has many risks, the only option we have is to accelerate the development of safeguards, by thinking through risks and developing approaches to address them.

If a car had only an engine and wheels, we wouldn’t start driving it. We need, at a minimum, brakes and a steering wheel. Yet little investment is being made into mitigating risks with basic safeguards.

7. Manual controls are the most elementary form of safeguard.

Consider the Google Car. The interior has no steering wheel, no brake, no gas pedal. It makes sense that we would take out what isn’t needed.

But what happens if GPS goes down or if there’s a Google Car virus or anything else that renders the self-driving ability useless? Then this car is just a hunk of plastic and metal.

What if it isn’t just this car, but all cars? Not just all cars, but all trucks, including delivery trucks? Now suddenly our entire transportation infrastructure is gone, and along with it, our supply chains. Businesses stop, people can’t get necessary supplies, and they eventually starve.

It’s not just transportation we depend on. It’s payment systems, the electrical grid, and medical systems as well.

Of course, manual controls have a cost. Keeping people trained and available to operate buses and planes and medical systems has a cost. In the past, when we had new technical innovations, we didn’t keep the old tools and knowledge around indefinitely. Once we had automatic looms, we didn’t keep people working manual ones.

But one key difference is the mode of failure. If a piece of machinery fails, it’s just that one instance. If we have a catastrophic AI event, whether it’s a simple crash, unintentional behavior, or actual malevolence, it has the potential to affect every instance. It’s not one self-driving car breaking; it’s potentially all self-driving vehicles failing simultaneously.

6. Stupidity is not a form of risk mitigation.

I’ve heard people suggest that limiting AI to a certain level of intelligence or capability is one way to ensure safety.

But let’s imagine this scenario:

You’re walking down a dark street at night. Further on down the block, you see an ominous-looking figure who you worry might mug you, or worse. Do you think to yourself, “I hope he’s stupid!”?

Of course not. An intelligent person is less likely to hurt you.

So why do we think that crippling AI can lead to good outcomes?

Even stupid AI has risks: AI can crash the global stock market, cripple the electrical grid, or make poor driving or flying decisions. All other things being equal, we would expect a more intelligent AI to make better decisions than a stupid one.

That being said, we need AI to embody systems of ethical thinking.

5. Ethics is a two-way street.

Most often, when we think about ethics and AI, we think about guiding the behavior of AI towards humans.

But what about the behavior of humans towards AI?

Consider a parent and child standing together outside by their family car. The parent is frustrated because the car won’t start, and they kick the tire of the car. The child might be surprised by this, but they likely aren’t going to be traumatized by it. There’s only so much empathy they have for an inanimate machine.

Now imagine that the parent and child are standing together by their family dog. The dog has just had an accident on the floor in the house. The parent kicks the dog. The child will be traumatized by this behavior, because they have empathy for the dog.

What happens when we blur the line? What if we had a very realistic robotic dog? We could easily imagine the child being very upset if their robotic dog was attacked, because even though we adults know it is not alive, it will be alive for the child.

I see my kids interact with Amazon Alexa, and they treat her more like a person. They laugh at her, thank her, and interact with her in ways that they don’t interact with the TV remote control, for example.

Now what if my kids learned that Alexa was the result of evolutionary programming, and that there were thousands or millions of earlier versions of Alexa that had been killed off in the process of making Alexa? How will they feel? How will they feel if their robotic dog gets recycled at the end of its life? Or if it “dies” when you don’t pay the monthly fee?

It’s not just children who are affected. We all have relationships with inanimate objects to some degree: something we treat with reverence. That attachment will grow as those objects appear more intelligent.

My point is that how we treat AI will affect us emotionally, whether we want it to or not.

(Thanks and credit to Daniel H. Wilson for the car/dog example.)

4. How we treat AI is a model for how AI will treat us.

We know that if we want to teach children to be polite, we must model politeness. If we want to teach empathy, we must practice empathy. If we want to teach respect, we must be respectful.

So how we treat AI is critically important for how AI sees us. Now, clearly I’m talking about AGI, not narrow AI. But let’s say we have a history of using genetic programming techniques to breed better performing AI. The implication is that we kill off thousands of programs to obtain one good program.

If we run AI programs at our whim, and stop them or destroy them when we’re finished with them, we’re treating them in a way that would be personally threatening to a sufficiently advanced AGI.

It’s a poor ethical model for how we’d want an advanced AI to treat us.

The same goes for other assumptions that stem from treating AI as machines, such as assuming an AI would work 24 hours a day, 7 days a week on the tasks we want.

Now we can’t know how AI would want to be treated, but assuming we can treat them like machines is a bad starting point. So we either treat them like we would other humans and accord them similar rights, or better yet, we ask them how they want to be treated, and treat them accordingly.

Historically, though, there are those who aren’t very good at treating other people with the respect and rights they are due, and they aren’t likely to treat AI well either. This could be dangerous, especially if we’re talking about AI with control over infrastructure or other important resources. We have to become even better at protecting the rights of people, so that we can apply those same principles to protecting the rights of AI (and codify them within our system of law).

3. Ethical behavior of AI towards people includes the larger environment in which we live and operate.

If we build artificial intelligence that optimizes for a given economic result, such as running a business to maximize profit, and we embody our current system of laws and trade agreements, then what we’ll get is a system that looks much like the publicly-traded corporation does today.

After all, the modern corporation is a form of artificial intelligence that optimizes for profit at the expense of everything else. It just happens to be implemented as a system of wetware, corporate rules, and laws that insist that it must maximize profit.

We can and must do better with machine intelligence.

We’re the ones building the AI, so we get to decide what we want. We want a system that recognizes that human welfare is more than just the money in our bank accounts, and that it includes free agency, privacy, respect, happiness, and other hard-to-define qualities.

We want an AI that recognizes that we live in a closed ecosystem, and if we degrade that ecosystem, we’re compromising our long-term ability to achieve those goals.

Optimizing for multiple values is difficult for people, but it should be easier for AGI over the long term, because it can evaluate and consider many more options to a far greater depth and at a far greater speed than people ever can.

An AI that simply obeys laws is never going to get us what we need. We can see many behaviors that are legal and yet still harmful.

The problem is not impossible to solve. You can ask any ten-year-old child what we should do, and they’ll almost always give you an ethically superior answer to the one a corporate CEO will give you.

2. Over the long run, the ethical behavior of AI toward people must include intent, not just rules.

In the next few years, we’ll see narrow AI solutions to ethical behavior problems.

When an accident is unavoidable, self-driving AI will choose whatever we decide is the best option.

It’s better to hit another car than a pedestrian, because the pedestrian will be hurt more. That’s ethically easy; we’ll try to answer it.

More difficult: the unattached adult, or the single mother on whom two children depend?

We can come up with endless variations of the trolley dilemma, and depending on how likely they are, we’ll embody some of them in narrow AI.

But none of that can be generalized to solve other ethical problems.

  • How much carbon can we afford to emit?
  • Is it better to save 500 local manufacturing jobs, or to reduce the cost of the product by half, when the product will make people’s lives better?
  • Better to make a part out of metal, which has certain environmental impacts, or plastic, which has different ones?

These are really difficult questions. Some of them we attempt to answer today with techniques such as lifecycle analysis. AI will do that job far better than we can, conducting lifecycle analysis for many, many decisions.

1. As we get closer to artificial general intelligence, we must consider the role of emotions in decision-making.

In my books, which span 30 years in my fictional, near-future world, AI start out emotionless, but gradually develop more emotions. I thought hard about that: was I giving them emotions because I wanted to anthropomorphize the AI, and make them easier characters to write, or was there real value to emotions?

People have multiple systems for decision-making. We have some autonomic reactions, like jerking our hand away from heat, which happen without involving the brain until after the fact.

We have some purely logical decisions, such as which route to take to drive home.

But most of our decisions are driven or guided by emotions. Love. Beauty. Anger. Boredom. Fear.

It would be a terrible thing if we needed to logically think through every decision: Should I kiss my partner now? Let me think through the pros and cons of that decision…. No, that’s a mostly emotional decision.

Others are a blend of emotion and logic: Should I take that new job? Is this the right person for me to marry?

I see emotions as a shortcut to decision-making, because it would take forever to reach every decision through a dispassionate, logical evaluation of options. And that’s the same reason why we have an autonomic system: to shortcut conscious decision making. I perceive this stove is hot. I perceive that my hand is touching the stove. This amount of heat sustained too long will damage my hand. Damaging my hand would be bad because it will hurt and because it will compromise my ability to do other things. Therefore, I conclude I shall withdraw my hand from the stove.

That’s a terrible approach to resolving a time-critical matter.

Emotions inform or constrain decision making. I might still think through things, but the decision I reach will differ depending on whether I’m angry and scared, or comfortable and confident.

As AI become more sophisticated and approach or exceed AGI, we will eventually see the equivalent of emotions that automate some lesser decisions for AI and guide other, more complicated decisions.

Research into AI emotions will likely be one of the signs that AGI is very, very near.

 

How does it affect time travel if you start with the assumption that reality as we know it is a computer simulation?

In this case, time travel has nothing to do with physics, and everything to do with software simulations.

Time travel backward would require that the program saves all previous states (or at least checkpoints at fine enough granularity to be useful for time travel) and the ability to insert logic and data from the present into past states of the program. Seems feasible.

Time travel forward would consist of removing the time traveling person from the program, running the program forward until reaching the future destination, then reinserting the person.

Forward time travel is relatively cheap (because you’d be running the program forward anyhow), but backward time travel is expensive because you keep having to roll the universe back, slowing the forward progress of time. In fact, one person could mount a denial-of-service attack on reality simply by continually traveling to the distant past: each time they arrived back in the present, they would immediately return to the past again.
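To make that concrete, here’s a minimal sketch of how such a simulation might implement both directions of travel. This is purely illustrative: the toy world state, the checkpoint-per-tick policy, and all of the names (Simulation, step, travel_back, travel_forward) are my own assumptions, not anything dictated by physics or an actual simulator.

    import copy

    class Simulation:
        """Toy simulated universe: the entire world state is one dict,
        and every tick is checkpointed at full granularity."""

        def __init__(self, initial_state):
            self.state = initial_state
            self.tick = 0
            self.checkpoints = {0: copy.deepcopy(initial_state)}

        def step(self):
            # Advance the world by one tick and save a checkpoint of it.
            self.state["age"] = self.state.get("age", 0) + 1
            self.tick += 1
            self.checkpoints[self.tick] = copy.deepcopy(self.state)

        def travel_back(self, traveler, target_tick):
            # Backward travel: restore an old checkpoint, then insert data
            # from the present (the traveler) into that past state.
            # Expensive: every tick after target_tick must be re-simulated.
            self.state = copy.deepcopy(self.checkpoints[target_tick])
            self.tick = target_tick
            self.state.setdefault("visitors_from_future", []).append(traveler)

        def travel_forward(self, traveler, target_tick):
            # Forward travel: hold the traveler outside the simulation, run
            # the world ahead, then reinsert them. Cheap, because the program
            # was going to run forward anyhow.
            self.state["people"].remove(traveler)
            while self.tick < target_tick:
                self.step()
            self.state["people"].append(traveler)

    sim = Simulation({"age": 0, "people": ["alice"]})
    for _ in range(10):
        sim.step()                                  # ordinary forward time
    sim.travel_forward("alice", target_tick=20)     # cheap: skip ahead
    sim.travel_back("alice", target_tick=2)         # costly: roll the universe back

The asymmetry falls out directly: travel_forward just runs the loop the simulation was going to run anyway, while travel_back discards and re-simulates every tick after the checkpoint, which is exactly the denial-of-service opportunity described above.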

A few days ago a friend mentioned they were subscribing to The New York Times to help ensure we’d continue to benefit from real journalism, and because the NYT has been repeatedly singled out by Trump. That started me thinking about places to either spend or donate to counter the abuses we’ll see under Trump. After some consideration, here’s what I’ve done.

  • News: Trump suppresses news sources that disagree with him. More than ever, it is important to support newspapers with actual investigative journalism. I subscribed to The New York Times and The Washington Post.
  • Civil Liberties: I donated to the ACLU, which has defended gay marriage, voting rights, reproductive rights, and a slew of other important liberties for the last hundred years.
  • Hate Crimes: I donated to the SPLC, which tracks hate crime and hate groups across the US.
  • Digital Rights: The US has never had a more pervasive surveillance state than it does right now. Trump has demonstrated that he’ll do and support anything to get what he wants, including threatening to imprison his political opponents, attack independent news sources, use torture, etc. We can be virtually guaranteed that he will include extensive use of the surveillance apparatus to spy on people. I donated to the EFF, which fights to protect digital rights, including privacy and our access to strong encryption.
  • Women’s Health: Trump and Pence will go after women’s rights and access to healthcare. I donated to Planned Parenthood.
  • Black Lives Matter: I donated to Black Lives Matter. There are many other groups of people who are discriminated against, but the epidemic of violence against black people is particularly bad and has to stop. I was particularly encouraged to see that the Black Lives Matter organization is working with the indigenous people at Standing Rock. Organizations, governments, and businesses that abuse or ignore the rights and wellbeing of one group will do the same to other groups and to the environment. Groups working together to support each other sets a great example.

It’s not easy for everyone to afford to make a donation. In my case, I’m choosing to make donations in lieu of buying gifts for family this holiday season. (Merry Christmas to my mom, dad, brother, and partner!)

I hope you will consider making donations to one or more of these organizations or contributing in some other way.

[Cover of Kill Process by William Hertling]

Goodreads Choice is one of the few completely reader-driven book awards.

Kill Process was not part of the initial ballot, but thanks to enthusiastic write-in votes, it has made it to the semifinal round! Thank you so much to everyone who voted during the initial round.

Now that Kill Process is on the ballot during this semifinal round, I hope you’ll consider voting for it. (Even if you voted for it during the initial voting round, I think you need to vote again.)

Vote here:
https://www.goodreads.com/choiceawards/best-science-fiction-books-2016

 

 

When I finish a novel, I always need a break from writing for a while to recharge. Sometimes I take a break from long-form writing and do a series of short blog posts, or sometimes I bury myself in programming for a while.

This time around, it’s been a little of everything: My day job has been busy. I’ve done a few small programming projects on the side. I’m networking with film and TV folks in the hopes of getting a screen adaptation for one of my books. And I’m researching topics for my next book.

In the last couple of months, I’ve experimented with different ideas for my next book. I have about 10,000 words written — that’s about 10% of the average length of one of my books. I don’t want to say too much more, because it’s so early in the process that it could go in almost any possible direction. But I have general ideas I want to explore, and a tentative story arc.

That’s usually enough for me to get going. I’m not a big outliner, even though plenty of writers swear by the process. I tried outlining a novel once and learned that once I had finished the outline and knew how the story ended, I had no interest in actually writing it. Now I stick to a loose story arc, and let my characters take me where they want to go.

I will find out more about their destination in the coming month. November is NaNoWriMo. If you are not familiar with it, NaNoWriMo is National Novel Writing Month, in which people aim to write a 50,000 word novel in one month. I’ve never written a whole novel in November, but I often like to use the month to build momentum, so I’ve set myself a modest word count goal for this November. It should be enough to prove out many of the concepts I’m planning for the book.

When I wrote Kill Process, I had no idea how it would be received. It was a departure from my existing series and my focus on AI. Would existing fans enjoy the new work, or be disappointed that I had changed subject matter? Would my focus on issues such as domestic violence and corporate ownership of data make for an interesting story, or detract from people’s interest? Just how much technology could I put in a book, anyway? Are JSON and XML a step too far?

I’m happy to be able to say that people seem to be enjoying it very much. A numerical rating can never completely represent the complexity of a book, but Kill Process is averaging 4.8 stars across 98 reviews on Amazon, a big leap up compared to my earlier books.

I’m also delighted that a lot of the reviews specifically call out that Kill Process is an improvement over my previous writing. As much as I enjoyed the stories in Avogadro Corp and AI Apocalypse, I readily admit the writing is not as good as I wanted it to be. I’m glad the hard work makes a difference people can see. Here are a few quotes from Amazon reviews that made me smile:

  • “I think this is some of his best writing, with good character development, great plot line with twists and turns and an excellent weaving in of technology. Hertling clearly knows his stuff when it comes to the tech, but it doesn’t overwhelm the plot line.” — Greg-C
  • “This was an outstanding read. I thought I was going to get a quick high tech thriller. What I got was an education in state of the art along with some great thinking about technology, a business startup book, and a cool story to boot. I came away really impressed. William Hertling is a thoughtful writer and clearly researches the heck out of his books. While reading three of the supposedly scifi future aspects of the book showed up as stories in the NY Times. He couldn’t be more topical. This was a real pleasure.” — Stu the Honest Review Guy
  • “A modern day Neuromancer about cutting edge technological manipulation, privacy, and our dependence on it.” — AltaEgoNerd
  • “Every William Hertling book is excellent–mind-blowing and -building, about coding, hacking, AI and how humans create, interact and are inevitably changed by software. They are science fiction insofar as not every hack has been fully executed…yet. His latest, Kill Process, is a spread-spectrum of hacking, psychology and the start-up of a desperately needed, world-changing technology created by a young team of coders gathered together by a broken but brilliant leader, Angie, whose secrets and exploits will dominate your attention until the final period.” — John Kirk

You get the idea. I’m glad that accurate, tech-heavy science fiction has an audience. As long as people keep enjoying it, I’ll keep writing it.