I was already concerned about the impact of technology on social connections. As I wrote about in Kill Process, there is an epidemic of loneliness in the world, and most especially in the US. But in the face of the global pandemic, our lives have been further irrevocably altered. The way we work, socialize, and even entertain ourselves has been reshaped by necessity.

The pandemic accelerated our reliance on technology, pushing us into an era where our screens became the windows to the world. Social media platforms became our town squares, our coffee shops, and our living rooms. We’ve seen a surge in the use of Zoom, Teams, and Slack for work, while Facebook, Instagram, Twitter, and Discord have become the go-to places for social interaction.

Yet this shift comes with complexities that exacerbate dysfunctional trends that have been brewing for years. As we’ve moved our lives online, we’ve increasingly substituted digital interaction for real human connection, leading to a sense of isolation and loneliness for many. The pandemic amplified this effect, with social distancing measures making it harder to maintain our offline relationships, and even to maintain our skill at managing those relationships.

We can see this in how the pandemic has created lasting changes in how we socialize. Making plans with others has become more challenging, with cancellations becoming increasingly common. Post-quarantine, there seems to be a growing reluctance to spend a lot of time texting, even among those who previously enjoyed this form of communication, leading to further isolation. One possible explanation for this shift could be a collective realization of the limitations of digital communication — the pandemic, by forcing us into isolation, may have made us more aware of the qualitative differences between online and offline interactions. But I think that misses the mark, personally. I think it’s more likely to be a subconscious resistance to pandemic behavior. Just as we were eager to get masks off and deny the horrors of quarantine and job loss and long Covid, so too, I think people wanted to leave behind texting.

That’s not to say that social media and messaging don’t have serious limitations. They do. Several years ago, I experienced this difference firsthand. I was grieving the end of a relationship and feeling a lot of distress. I reached out to friends for support via social media and text, engaging with many folks at length throughout the day. However, these interactions didn’t substantially change or improve my feelings. But when a friend came over in person for a short 45 minutes in the evening, my mood dramatically improved. I felt happy and connected, no longer sad or grieving. This experience underscored for me that a single in-person interaction, even of limited duration, can have a more profound impact on our emotional well-being than an entire day of digital messaging.

In my novel Kill Process, I delve into the world of Tomo, a fictional social media company, through the eyes of Angie, a data analyst. Angie’s experiences echo our current reality, highlighting the dangers of over-reliance on digital platforms. She grapples with the ethical implications of social media, particularly how it can be manipulated to control and influence users.

Since the publication of Kill Process in 2016, real-world events have further underscored these concerns. For instance, the Cambridge Analytica scandal in 2018 revealed how personal data from millions of Facebook users was harvested and used for political advertising, illustrating the potential for misuse of user data on social media platforms. More recently, the rise of deepfake technology has shown how social media can be used to spread misinformation and manipulate public opinion, a theme that resonates strongly with Angie’s journey. And even more recently, ChatGPT and other large language models have made us realize just how little we actually can trust, and how much can be faked.

The pandemic has brought these issues to the forefront, as we’ve become more dependent on these platforms for connection. Angie’s journey serves as a reminder that while technology can bring us together, it can also drive us apart if not used responsibly.

As we move forward, we must strive to find a balance between our online and offline lives, using technology to enhance our relationships rather than replace them. This has been the promise of social media for a long time, but we’ve failed so far. It’s up to us to figure out how to use this power responsibly to build a more connected and compassionate world, even in the face of adversity.

I originally wrote these ten musings on AI risks as the first half of a talk I was planning to give at the Rise of AI conference in Berlin next month. Unfortunately, plans for speaking at the conference fell through. The speaking points have been languishing in a document since then, so I figured I’d share them as a blog post while they are still relevant. Please excuse any errors; this was just a rough draft.

10. There’s a balance between AI risk and AI benefits.

There’s too much polarization around AI risks. One camp says AI is going to be the end of all life, so we must stop all development immediately. The other camp says AI poses no risk, so we should go full speed ahead.

Neither statement is totally true.

AI poses some risks, as all things do. The first step to being able to have a discussion about those risks is to admit that they exist. Then we can have a more nuanced and educated conversation about those risks.

However, no matter how severe those risks are, there’s no way we can stop all AI development, because:

9. There’s no way to put the AI genie back in the bottle.

There’s too much economic and military advantage to artificial intelligence to stop AI development.

On a government level, no government would give up the advantages it could get. When we have nation states hacking and surveilling each other with little to no regard for laws, agreements, or ethics, there’s no way we could get them to limit AI development if it could give them an advantage over others.

On a corporate level, no corporation would willingly give up the economic advantages of artificial intelligence, either as a provider of AI, with the revenue it stands to make, or as a consumer of AI, gaining efficiencies in manufacturing, business, personnel management, or communications.

Lastly, on an individual level, we cannot stop people from developing and releasing software. We couldn’t stop bitcoin, or hackers, or malware. We’re certainly not going to stop AI.

8. We must accelerate the development of safeguards, not slow the development of AI.

Because we can’t stop the development of AI, and because AI has many risks, the only option we have is to accelerate the development of safeguards, by thinking through risks and developing approaches to address them.

If a car had only an engine and wheels, we wouldn’t start driving it. We need, at a minimum, brakes and a steering wheel. Yet little investment is being made into mitigating risks with basic safeguards.

7. Manual controls are the most elementary form of safeguard.

Consider the Google Car. The interior has no steering wheel, no brake, no gas pedal. It makes sense that we would take out what isn’t needed.

But what happens if GPS goes down or if there’s a Google Car virus or anything else that renders the self-driving ability useless? Then this car is just a hunk of plastic and metal.

What if it isn’t just this car, but all cars? Not just all cars, but all trucks, including delivery trucks? Now suddenly our entire transportation infrastructure is gone, and along with it, our supply chains. Businesses stop, people can’t get necessary supplies, and they eventually starve.

It’s not just transportation we depend on. It’s payment systems, the electrical grid, and medical systems as well.

Of course, manual controls have a cost. Keeping people trained and available to operate buses and planes and medical systems has a cost. In the past, when we had new technical innovations, we didn’t keep the old tools and knowledge around indefinitely. Once we had automatic looms, we didn’t keep people working manual ones.

But one key difference is the mode of failure. If a piece of machinery fails, it’s just that one instance. A catastrophic AI event, whether it’s a simple crash, unintentional behavior, or actual malevolence, has the potential to affect every instance running that AI. It’s not one self-driving car breaking; it’s potentially all self-driving vehicles failing simultaneously.

6. Stupidity is not a form of risk mitigation.

I’ve heard people suggest that limiting AI to a certain level of intelligence or capability is one way to ensure safety.

But let’s imagine this scenario:

You’re walking down a dark street at night. Farther down the block, you see an ominous-looking figure who you worry might mug you, or worse. Do you think to yourself, “I hope he’s stupid”?

Of course not. An intelligent person is less likely to hurt you.

So why do we think that crippling AI can lead to good outcomes?

Even stupid AI has risks: AI can crash the global stock market, cripple the electrical grid, or make poor driving or flying decisions. All other things being equal, we would expect a more intelligent AI to make better decisions than a stupid one.

That being said, we need to embody systems of ethical thinking.

5. Ethics is a two-way street.

Most often, when we think about ethics and AI, we think about guiding the behavior of AI towards humans.

But what about the behavior of humans towards AI?

Consider a parent and child standing together outside by their family car. The parent is frustrated because the car won’t start, and they kick the tire of the car. The child might be surprised by this, but they likely aren’t going to be traumatized by it. There’s only so much empathy they have for an inanimate machine.

Now imagine that the parent and child are standing together by their family dog. The dog has just had an accident on the floor in the house. The parent kicks the dog. The child will be traumatized by this behavior, because they have empathy for the dog.

What happens when we blur the line? What if we had a very realistic robotic dog? We could easily imagine the child being very upset if their robotic dog was attacked, because even though we adults know it is not alive, it will be alive for the child.

I see my kids interact with Amazon Alexa, and they treat her more like a person. They laugh at her, thank her, and interact with her in ways that they don’t interact with the TV remote control, for example.

Now what if my kids learned that Alexa was the result of evolutionary programming, and that thousands or millions of earlier versions of Alexa had been killed off in the process of making Alexa? How will they feel? How will they feel if their robotic dog gets recycled at the end of its life? Or if it “dies” when you don’t pay the monthly fee?

It’s not just children who are affected. We all have relationships with inanimate objects to some degree, objects we treat with a certain reverence. That attachment will grow as those objects appear more intelligent.

My point is that how we treat AI will affect us emotionally, whether we want it to or not.

(Thanks and credit to Daniel H. Wilson for the car/dog example.)

4. How we treat AI is a model for how AI will treat us.

We know that if we want to teach children to be polite, we must model politeness. If we want to teach empathy, we must practice empathy. If we want to teach respect, we must be respectful.

So how we treat AI is critically important for how AI sees us. Now, clearly I’m talking about AGI, not narrow AI. But let’s say we have a history of using genetic programming techniques to breed better performing AI. The implication is that we kill off thousands of programs to obtain one good program.

If we run AI programs at our whim, and stop them or destroy them when we’re finished with them, we’re treating them in a way that would be personally threatening to a sufficiently advanced AGI.

It’s a poor ethical model for how we’d want an advanced AI to treat us.

The same goes for other assumptions that stem from treating AI as machines, such as assuming an AI would work 24 hours a day, 7 days a week on the tasks we want.

Now we can’t know how AI would want to be treated, but assuming we can treat them like machines is a bad starting point. So we either treat them like we would other humans and accord them similar rights, or better yet, we ask them how they want to be treated, and treat them accordingly.

Historically, though, there are those who aren’t very good at treating other people with the respect and rights they are due, and they aren’t likely to treat AI well either. This could be dangerous, especially if we’re talking about AI with control over infrastructure or other important resources. We have to become even better at protecting the rights of people, so that we can apply those same principles to protecting the rights of AI, and codify them within our system of law.

3. Ethical behavior of AI towards people includes the larger environment in which we live and operate.

If we build artificial intelligence that optimizes for a given economic result, such as running a business to maximize profit, and we embody our current system of laws and trade agreements, then what we’ll get is a system that looks much like the publicly-traded corporation does today.

After all, the modern corporation is a form of artificial intelligence that optimizes for profit at the expense of everything else. It just happens to be implemented as a system of wetware, corporate rules, and laws that insist that it must maximize profit.

We can and must do better with machine intelligence.

We’re the ones building the AI, so we get to decide what we want. We want a system that recognizes that human welfare is more than just the money in our bank accounts, and that it includes free agency, privacy, respect, happiness, and other hard-to-define qualities.

We want an AI that recognizes that we live in a closed ecosystem, and if we degrade that ecosystem, we’re compromising our long-term ability to achieve those goals.

Optimizing for multiple values is difficult for people, but it should be easier for AGI over the long term, because it can evaluate and consider many more options to a far greater depth and at a far greater speed than people ever can.

An AI that simply obeys laws is never going to get us what we need. We can see many behaviors that are legal and yet still harmful.

The problem is not impossible to solve. You can ask any ten-year-old child what we should do, and they’ll almost always give you an ethically superior answer to the one a corporate CEO will give you.

2. Over the long run, the ethical behavior of AI toward people must include intent, not just rules.

In the next few years, we’ll see narrow AI solutions to ethical behavior problems.

When an accident is unavoidable, self-driving AI will choose whatever we have decided is the best option.

It’s better to hit another car than a pedestrian, because the pedestrian will be hurt more. That question is ethically easy, and we’ll answer it.

More difficult: The unattached adult or the single mother whom two children depend on?

We can come up with endless variations of the trolley dilemma, and depending on how likely they are, we’ll embody some of them in narrow AI.

But none of that can be generalized to solve other ethical problems.

  • How much carbon can we afford to emit?
  • Is it better to save 500 local manufacturing jobs, or to reduce the cost of the product by half, when the product will make people’s lives better?
  • Better to make a part out of metal, which has certain environmental impacts, or plastic, which has different ones?

These are really difficult questions. Some of them we attempt to answer today with techniques such as lifecycle analysis. AI will do that job far better than we can, conducting lifecycle analyses for many, many decisions.

1. As we get closer to artificial general intelligence, we must consider the role of emotions in decision-making.

In my books, which span 30 years in my fictional, near-future world, AI start out emotionless, but gradually develop more emotions. I thought hard about that: was I giving them emotions because I wanted to anthropomorphize the AI, and make them easier characters to write, or was there real value to emotions?

People have multiple systems for decision-making. We have some autonomic reactions, like jerking our hand away from heat, which happen without involving the brain until after the fact.

We have some purely logical decisions, such as which route to take to drive home.

But most of our decisions are driven or guided by emotions. Love. Beauty. Anger. Boredom. Fear.

It would be a terrible thing if we needed to logically think through every decision: Should I kiss my partner now? Let me think through the pros and cons of that decision…. No, that’s a mostly emotional decision.

Others are a blend of emotion and logic: Should I take that new job? Is this the right person for me to marry?

I see emotions as a shortcut to decision-making, because it would take forever to reach every decision through a dispassionate, logical evaluation of options. And that’s the same reason why we have an autonomic system: to shortcut conscious decision making. I perceive this stove is hot. I perceive that my hand is touching the stove. This amount of heat sustained too long will damage my hand. Damaging my hand would be bad because it will hurt and because it will compromise my ability to do other things. Therefore, I conclude I shall withdraw my hand from the stove.

That’s a terrible approach to resolving a time-critical matter.

Emotions inform or constrain decision making. I might still think through things, but the decision I reach will differ depending on whether I’m angry and scared, or comfortable and confident.

As AI becomes more sophisticated and approaches or exceeds AGI, we should expect to see the equivalent of emotions: mechanisms that automate some lesser decisions for the AI and guide other, more complicated ones.

Research into AI emotions will likely be one of the signs that AGI is very, very near.

 

When I wrote Kill Process, I had no idea how it would be received. It was a departure from my existing series and my focus on AI. Would existing fans enjoy the new work, or be disappointed that I had changed subject matter? Would my focus on issues such as domestic violence and corporate ownership of data make for an interesting story, or detract from people’s interest? Just how much technology could I put in a book, anyway? Are JSON and XML a step too far?

I’m happy to be able to say that people seem to be enjoying it very much. A numerical rating can never completely represent the complexity of a book, but Kill Process is averaging 4.8 stars across 98 reviews on Amazon, a big leap up compared to my earlier books.

I’m also delighted that a lot of the reviews specifically call out that Kill Process is an improvement over my previous writing. As much as I enjoyed the stories in Avogadro Corp and AI Apocalypse, I readily admit the writing is not as good as I wanted it to be. I’m glad the hard work makes a difference people can see. Here are a few quotes from Amazon reviews that made me smile:

  • “I think this is some of his best writing, with good character development, great plot line with twists and turns and an excellent weaving in of technology. Hertling clearly knows his stuff when it comes to the tech, but it doesn’t overwhelm the plot line.” — Greg-C
  • “This was an outstanding read. I thought I was going to get a quick high tech thriller. What I got was an education in state of the art along with some great thinking about technology, a business startup book, and a cool story to boot. I came away really impressed. William Hertling is a thoughtful writer and clearly researches the heck out of his books. While reading three of the supposedly scifi future aspects of the book showed up as stories in the NY Times. He couldn’t be more topical. This was a real pleasure.” — Stu the Honest Review Guy
  • “A modern day Neuromancer about cutting edge technological manipulation, privacy, and our dependence on it.” — AltaEgoNerd
  • “Every William Hertling book is excellent–mind-blowing and -building, about coding, hacking, AI and how humans create, interact and are inevitably changed by software. They are science fiction insofar as not every hack has been fully executed…yet. His latest, Kill Process, is a spread-spectrum of hacking, psychology and the start-up of a desperately needed, world-changing technology created by a young team of coders gathered together by a broken but brilliant leader, Angie, whose secrets and exploits will dominate your attention until the final period.” — John Kirk

You get the idea. I’m glad that accurate, tech-heavy science fiction has an audience. As long as people keep enjoying it, I’ll keep writing it.

With each book I write, I usually create an accompanying blog post about the technology in the story: what’s real, what’s on the horizon, and what’s totally made up.

My previous Singularity series extrapolated out from current day technology by ten year intervals, which turned the books into a series of predictions about the future. Kill Process is different because it’s a current day novel. A few of the ideas are a handful of years out, but not by much.

Haven’t read Kill Process yet? Then stop here, go buy the book, and come back after you’ve read it. 🙂

Warning: Spoilers ahead!

The technology of Kill Process can be divided into three categories:

  1. General hacking: profiling people, getting into computers and online accounts, and accessing data feeds, such as video cameras.
  2. Remotely controlling hardware to kill people.
  3. The distributed social network Tapestry.

General Hacking and Profiling


The inside of an Apple IIe. To host a Diversi-Dial, one would install a modem in every slot. Because one slot was needed to connect the disk drives, it was necessary to load the software from a *cassette tape* to support 7 phone lines simultaneously!

In the mid-1980s, Angie is running a multiline dial-up chat system called a Diversi-Dial (real). An enemy hacker shuts off her phone service. Angie calls the phone company on an internal number, talks to an employee, and tricks them into reconnecting her phone service in such a way that she doesn’t get billed for it. All aspects of this are real, including the chat system and the disconnect/reconnect.

As an older teenager, Angie wins a Corvette ZR1 by rerouting phone calls into a radio station. Real. This is the exact hack that Kevin Poulsen used to win a Porsche.

In the current day, Angie regularly determines where people are. They’re running a smartphone application (Tomo) that regularly checks in with Tomo servers to see if there are any new notifications. Each time they check in, their smartphone determines their current geolocation, and uploads their coordinates. Angie gets access to this information not through hacking, but by exploiting her employee access at Tomo. All of this is completely feasible, and it’s how virtually all social media applications work. The granularity of geocoordinates can vary, depending on whether the GPS is currently turned on, but even without GPS, the location can be determined via cell phone tower triangulation to within a few thousand feet. If you want to mask your location from social media apps, you can use the two smartphone approach: One smartphone has no identifying applications or accounts on it, and is used to act as a wireless hotspot. A second smartphone has no SIM card and/or is placed in airplane mode so that it has no cellular connection, and GPS functionality is turned off. It connects to the Internet via the wireless hotspot functionality of the first phone. This doesn’t hide you completely (because the IP address of the first phone can be tracked), but it will mask your location from typical social media applications. While Angie can see everyone, because of her employee access, even regular folks can stalk their “friends”: stalking people via Facebook location data.
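
To make the check-in mechanism concrete, here is a minimal sketch of the kind of payload a social app might upload on every notification poll. The field names and structure are hypothetical, invented for illustration; they are not Tomo’s or any real service’s API.

```python
import json
import time

def build_checkin_payload(user_id, latitude, longitude, accuracy_m):
    """Assemble the kind of periodic check-in a social app might upload.

    Field names and structure are hypothetical, purely to illustrate how a
    notification poll can carry fresh coordinates as a side effect.
    """
    return {
        "user_id": user_id,
        "timestamp": int(time.time()),
        "location": {
            "lat": latitude,
            "lon": longitude,
            # coarse (tower triangulation) when GPS is off, precise when it's on
            "accuracy_meters": accuracy_m,
        },
        "reason": "notification_poll",
    }

if __name__ == "__main__":
    payload = build_checkin_payload("user-123", 45.5231, -122.6765, 900)
    # A server that logs this on every poll accumulates a location history
    # that anyone with database access, like Angie, could query directly.
    print(json.dumps(payload, indent=2))
```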

Angie determines if people are happy, depressed, or isolated based on patterns of social media usage as well as the specific words they use. Feasible. Studies have been done using sentiment analysis to determine depression.
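
As a toy illustration of the idea (not the method from any particular study, and far cruder than what researchers actually use), a lexicon-based score over someone’s recent posts might look like this; the word lists and threshold are made up for the example.

```python
# A deliberately crude, lexicon-based sketch of sentiment scoring over posts.
# Real studies use trained models and many more signals (posting times,
# engagement, word categories); this only illustrates the general idea.
NEGATIVE_WORDS = {"alone", "tired", "hopeless", "empty", "worthless", "sad"}
POSITIVE_WORDS = {"happy", "excited", "grateful", "fun", "love", "great"}

def post_sentiment(post: str) -> int:
    words = {w.strip(".,!?").lower() for w in post.split()}
    return len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)

def flag_possible_depression(posts: list[str], threshold: float = -0.5) -> bool:
    """Flag a user whose average post sentiment falls below a threshold."""
    if not posts:
        return False
    avg = sum(post_sentiment(p) for p in posts) / len(posts)
    return avg < threshold

posts = ["So tired and alone again", "Everything feels empty", "Great day!"]
print(flag_possible_depression(posts))  # True for this sample
```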

Computer hackers and lock picking. One-handed lock picking (video). Teflon-coated lock picks to avoid leaving evidence. Real.

Angie profiles domestic abusers through their social media activity. Quasi-feasible. Most abusers seek to isolate their victims, and that will include keeping their victims off social media. That would make it hard for Angie to profile them, because it’s difficult to profile what’s not there. On the other hand, many abusers stalk their victims through their smartphones, which actually opens up more opportunities to detect when such abuse happens.

Angie builds a private onion routing network using solar-powered Raspberry Pi computers. This is very feasible, and multiple crowdsourced projects for onion routers have launched.

Angie seamlessly navigates between users’ payment data (the Tomo app handles NFC payments), social media profiles, search data, and web history. This is real. Data from multiple sources is routinely combined, even across accounts that you think are not connected because you used different email addresses to sign up. There are many ways information can leak to connect accounts: a website has both email addresses, a friend has both email addresses listed under one contact, or you attempt to log in under one email address and then log in under a different one. But the most common is web browser cookies from advertisers that track you across multiple websites and multiple experiences. They know all of your browser activity is “you”. Even if you never sign up for Facebook or other social media accounts, they are aggregating information about who you are and who your connections are. Future Crimes by Marc Goodman has one of the best descriptions of this. But I’ll warn you that this book is so terrifying that I had to consume it in small bits, because I couldn’t stomach reading it all at once.
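
As a small, hypothetical example of how two seemingly unrelated accounts get linked, imagine a friend’s synced address book that lists both of your email addresses under one contact. The data below is invented; only the linking mechanism is the point.

```python
# Hypothetical example: two profiles signed up with different email addresses
# get linked because a third data source (a synced contact, a site that has
# both addresses, an ad-network cookie) ties the identifiers together.
profiles = {
    "work@example.com":     {"source": "shopping site", "interests": ["running"]},
    "personal@example.com": {"source": "social app", "interests": ["politics"]},
}

# A friend's synced address book lists both emails under one contact.
synced_contact = {"name": "A. Friend's Contact",
                  "emails": ["work@example.com", "personal@example.com"]}

# Merging on the shared contact collapses the two profiles into one identity.
merged = {"emails": synced_contact["emails"], "interests": []}
for email in synced_contact["emails"]:
    merged["interests"] += profiles[email]["interests"]

print(merged)
# {'emails': ['work@example.com', 'personal@example.com'],
#  'interests': ['running', 'politics']}
```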

Compromising a computer via a USB drive. Real.

Angie hacks a database that she can’t access by provisioning an extra database server into a cluster, making modifications to that server (which she has compromised), and waiting for the changes to synchronize. Likely feasible, but I don’t have a ton of experience here. The implication is that she has access to change the configuration of the cluster, even though she doesn’t have access to modify the database. This is plausible. An IT organization could give an ops engineer rights to do things related to provisioning without giving them access to change the data itself.

Angie did a stint in Ops to give herself backdoors into the provisioning layer. Feasible. It’s implausible that Angie could do everything she does by herself unless I gave her some advantages, simply because it’s too time-consuming to do everything via brute-force attacks. Giving Angie employee access and letting her install backdoors into the software makes her much more powerful, and enables her to do things that might otherwise take a large group of hackers much longer to achieve.

Angie manipulates the bitcoin market by forcing Tomo to buy exponentially larger and larger amounts of bitcoin. This is somewhat feasible, although bitcoin probably has too much money invested in it now to be manipulated by one company’s purchases. Such manipulation would be more plausible with one of the smaller, less popular alternative currencies, but I was afraid that general readers wouldn’t be familiar with the other currencies. The way she does this is somewhat clever, I think. Rather than change the source code, which would get the closest level of inspection, she does it by changing the behavior of the database so that it returns different data than expected: in one case returning the reverse of a number, and in another case, returning a list of accounts from which to draw funds. Since access to application code and application servers is often managed separately from access to database servers, attacking the database server fits with Angie’s skills and previous role as database architect.

Angie is in her office when Igloo detects ultrasonic sounds. Ultrasonic communication between a computer and a smartphone to get around air gaps is real. Basics of ultrasonic communication. Malware using ultrasonic to get around air gaps of up to 60 feet.
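
For readers curious how data can ride on sound you can’t hear, here is a toy sketch of the underlying idea: encode bits as two near-ultrasonic tones (frequency-shift keying) and decode them with an FFT. The signal is round-tripped in memory rather than through real speakers and microphones, and the frequencies and timing are illustrative, not taken from any actual malware. It assumes numpy is installed.

```python
import numpy as np

# Toy sketch of ultrasonic data transfer: encode bits as two near-ultrasonic
# tones (frequency-shift keying) and decode them with an FFT. Real air-gap
# malware modulates actual speaker/microphone audio; this round-trips the
# signal in memory just to show the principle.
SAMPLE_RATE = 48_000                   # Hz; consumer audio hardware commonly supports this
FREQ_ZERO, FREQ_ONE = 18_500, 19_500   # near-ultrasonic carrier frequencies
SYMBOL_SECONDS = 0.05                  # duration of each transmitted bit

def encode(bits):
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    return np.concatenate([
        np.sin(2 * np.pi * (FREQ_ONE if b else FREQ_ZERO) * t) for b in bits
    ])

def decode(signal):
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    bits = []
    for i in range(0, len(signal), n):
        spectrum = np.abs(np.fft.rfft(signal[i:i + n]))
        freqs = np.fft.rfftfreq(n, 1 / SAMPLE_RATE)
        peak = freqs[np.argmax(spectrum)]   # strongest frequency in this symbol
        bits.append(1 if abs(peak - FREQ_ONE) < abs(peak - FREQ_ZERO) else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(decode(encode(message)) == message)  # True
```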

Remotely Controlling Hardware

In the recent past, most devices with embedded electronics ran custom firmware that implemented a very limited set of functionality: exactly what was needed for the function of the device, no more and no less. That firmware ran on very limited hardware.

But the trend of decreasing electronics cost, increasing functionality, and connectivity has driven many devices towards using general-purpose operating systems running on general purpose computers. By doing so, they get benefits such as a complete network stack for doing TCP/IP communication, APIs for commodity storage devices, and libraries that implement higher-level functions. Unfortunately, all of this standard software may have bugs in it. If your furnace moves to a Raspberry Pi controller, for example, you now have a furnace vulnerable to any bug or security exploit in any aspect of the Linux variant that’s running, as well as any bugs or exploits in the application written by the furnace manufacturer.

Angie has a car execute a pre-determined set of maneuvers based on an incoming signal. Feasible in the near future. This particular scenario hasn’t happened, but hackers are making many inroads: Hackers remotely take control of a Jeep. Remotely disable brakes. Unlock VW cars.

Killing someone via their pacemaker. Feasible: Hackers Kill a Mannequin.

Controlling an elevator. Not feasible yet, but will be feasible in the future when building elevators implement general internet or wireless connectivity for diagnostics and/or elevator coordination.

Software defined radios can communicate at a wide range of frequencies and be programmed to use any wireless protocol. Real.


Yes, that is a handgun mounted on a quadcopter.

Angie hacks smoke and carbon monoxide alarms to disable the alarm prior to killing someone. Unfortunately, hacking smoke alarms is real, as is hacking connected appliances. Appliances typically have very weak security. It’s feasible in the near future that Angie could adjust combustion settings and reroute airflow for a home furnace. Setting a house on fire is very possible.

There’s a scene involving a robot and a gun. I won’t say much more about the scene, but people have put guns on drones. Real.

Tapestry / Distributed Social Networks

Angie defined a function to predict the adoption of a social network. This was my own creation, modeled on the Drake Equation. It received some input from others, and while I’m not aware of anyone using it, it probably can be used as a thought exercise for evaluating social network ideas.
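
Without reproducing the actual function, here is a hypothetical sketch of its general shape, in the spirit of the Drake Equation: a total addressable market multiplied by a chain of fractional factors. The specific factors below are stand-ins for illustration, not the ones Angie uses in the book.

```python
def adoption_estimate(addressable_users,
                      f_aware,      # fraction who hear about the network
                      f_value,      # fraction who see enough value to try it
                      f_network,    # fraction whose friends are also on it
                      f_retained):  # fraction who stick around
    """Drake Equation-style estimate of a social network's adopters.

    These factors are hypothetical stand-ins, not the ones from Kill Process;
    the point is multiplying a market size by independent fractional hurdles.
    """
    return addressable_users * f_aware * f_value * f_network * f_retained

# Example: a 1-billion-person addressable market with modest fractions.
print(adoption_estimate(1_000_000_000, 0.10, 0.30, 0.20, 0.50))  # 3,000,000.0
```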

IndieWeb is completely real and totally awesome. If you’re a programmer, get involved.

The protocols for how Tapestry works are all feasible. I architected everything that was in the book to make sure it could be done, to the point of creating interaction diagrams and figuring out message payloads. Some of this stuff I kept, but most was just a series of whiteboard drawings.
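
None of the real interaction diagrams survived beyond the whiteboard, but purely as an illustration (not the actual Tapestry protocol), a federated post payload in a distributed social network might look something like this; the field names and the stand-in “signature” scheme are invented for the example.

```python
import hashlib
import json
import time

# Purely illustrative sketch of a federated post payload; field names and the
# integrity check are invented for this example, not Tapestry's protocol.
post = {
    "type": "post",
    "author": "angie@node.example",   # identity lives on the user's own node
    "published": int(time.time()),
    "content": "Decentralize everything.",
    "recipients": ["followers"],
}

# A real protocol would sign with the author's private key; a hash stands in
# here just to show that receiving nodes can verify the payload wasn't altered.
canonical = json.dumps(post, sort_keys=True).encode()
envelope = {"payload": post, "digest": hashlib.sha256(canonical).hexdigest()}

print(json.dumps(envelope, indent=2))
```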

Igloo designs chatbots to alleviate social isolation. Plausible. This is an active area of development: With Bots Like These, Who Needs Friends? Is there an app for loneliness?

Conclusion

I haven’t exhaustively covered everything in the book, but what I have listed should help demonstrate that essentially all the technology in Kill Process is known to be real, or is plausible today, or will be feasible within the next few years.

For more reading, I recommend:

 

My editor is working on Kill Process right now. I’ll receive the marked up manuscript next week and will process all the changes and comments before turning it over to my proofreader. They’ll work on it for about a week, then return it to me, and I’ll process all those corrections. Then the book goes out for formatting to two different people: one for ebook and one for print. When they’re done, everything gets proofed one last time, and if it all looks good, I’ll fulfill Patreon awards to backers.

After that, I’ll upload files to the various vendors, and a week or so after that, the books are live and available for sale. While all that’s happening, there will also be final tweaks to the covers, coordination with the audiobook narrators, and more.

Even as close to the end as this, it’s still hard to predict exactly when Kill Process will be available. Do I get a file back right at the start of a long weekend when I can be completely focused on it? Or do I receive it as I’m entering a long stretch with my kids and my day job? It’s hard to say.

If things go well and there are no major issues, I hope to fulfill Patreon rewards by late May, and have the book for sale by mid-June. I’d like the audiobook to be available by July. If I can get anything out earlier, I will.

Here’s a look at the covers for Kill Process. The black and red cover will be the regular edition, available for sale through all the usual outlets. The hooded-hacker cover will be a signed, limited edition available to Patreon backers.


Trade paperback and ebook cover


Signed, limited-edition cover

 

Here’s the working description for Kill Process:

By day, Angie, a twenty-year veteran of the tech industry, is a data analyst at Tomo, the world’s largest social networking company; by night, she exploits her database access to profile domestic abusers and kill the worst of them. She can’t change her own traumatic past, but she can save other women.

But when Tomo introduces a deceptive new product that preys on users’ fears to drive up its own revenue, Angie sees Tomo for what it really is–another evil abuser. Using her coding and hacking expertise, she decides to destroy Tomo by building a new social network that is completely distributed, compartmentalized, and unstoppable. If she succeeds, it will be the end of all centralized power in the Internet.

But how can an anti-social, one-armed programmer with too many dark secrets succeed when the world’s largest tech company is out to crush her and a no-name government black ops agency sets a psychopath to look into her growing digital footprint?

Mark Zuckerberg wrote about how he plans to personally work on artificial intelligence in the next year. It’s a nice article that lays out the landscape of AI developments. But he ends with a statement that misrepresents the relevance of Moore’s Law to future AI development. He wrote (with my added bold for emphasis):

Since no one understands how general unsupervised learning actually works, we’re quite a ways off from building the general AIs you see in movies. Some people claim this is just a matter of getting more computing power — and that as Moore’s law continues and computing becomes cheaper we’ll naturally have AIs that surpass human intelligence. This is incorrect. We fundamentally do not understand how general learning works. This is an unsolved problem — maybe the most important problem of this century or even millennium. Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.

I don’t believe anyone knowledgeable about AI argues that Moore’s Law is going to spontaneously create AI. I’ll give Mark the benefit of the doubt, and assume he was trying to be succinct. But it’s important to understand exactly why Moore’s Law is important to AI.

We don’t understand how general unsupervised learning works, nor do we understand much about how human intelligence works. But we do have working examples in the form of human brains. We do not today have the computing hardware necessary to simulate a human brain. The best brain simulations by the largest supercomputing clusters have been able to approximate 1% of the brain at 1/10,000th of normal cognitive speed. In other words, current computer processors are a factor of 100 × 10,000, or 1,000,000, times too slow to simulate a human brain in real time.

The Wright Brothers succeeded in making the first controlled, powered, and sustained heavier-than-air human flight not because of some massive breakthrough in the principles of aerodynamics (which were well understood at the time), but because engines were growing more powerful, and powered flight was feasible for the first time around the point at which they were working. They made some breakthroughs in aircraft controls, but even if the Wright Brothers had never flown, someone else would have within a period of a few years. It was breakthroughs in engine technology, specifically, the power-to-weight ratio, that enabled powered flight around the turn of the century.

AI proponents who talk about Moore’s Law are not saying AI will spontaneously erupt from nowhere, but that increasing computing processing power will make AI possible, in the same way that more powerful engines made flight possible.

Those same AI proponents who believe in the significance of Moore’s Law can be subdivided into two categories. One group argues we’ll never understand intelligence fully. Our best hope of creating it is with a brute force biological simulation. In other words, recreate the human brain structure, and tweak it to make it better or faster. The second group argues we may invent our own techniques for implementing intelligence (just as we implemented our own approach to flight that differs from birds), but the underlying computational needs will be roughly equal: certainly, we won’t be able to do it when we’re a million times deficient in processing power.

Moore’s Law gives us an important cadence to the progress in AI development. When naysayers argue AI can’t be created, they’re looking at historical progress in AI, which is a bit like looking at powered flight prior to 1850: pretty laughable. The rate of AI progress will increase as computer processing speeds approach that of the human brain. When other groups argue we should already have AI, they’re being hopelessly optimistic about our ability to recreate intelligence a million times more efficiently than nature was able to evolve.

The increasing speed of computer processors predicted by Moore’s Law, and the crossover point where processing power aligns with the complexity of the human brain, tell us a great deal about when we’ll see advanced AI on par with human intelligence.

In my day job as a software developer, we recently resurrected a two-year-old project and started using it again. I’m fairly proud of the application because when we developed it, we really took the time to do everything right. The REST interfaces are logical and consistent, there is good object-oriented design and great test coverage, there is a full set of integration tests that can also perform load testing, and it’s scalable and fault tolerant.

When we first built it, we had only a small team of developers, but we also ensured that we automated everything, tested everything, and kept everything DRY and efficient, so that even though the team was small, we were able to accomplish a lot.

When we resurrected the project, we weren’t sure how many people would be working on it or for how long. In our rush to demo something to management, we abandoned our principles of “do it right” and settled for “get something done fast”. But a few weeks later, we were mired in a morass: unable to reliably get a dev stack working, get two new components integrated, or have repeatable results of any kind. Pressure was mounting as we were overdue to demo to management.

Finally I came into work this past Tuesday (with the big demo scheduled for the next day). I’d completely had it with the ongoing game of whack-a-mole that we were playing with new bugs cropping up. I decided that I wouldn’t try to fix any bugs at all. Instead, I would spend the day DRYing up our error handling code so that all errors were captured and logged in a consistent way. I didn’t even care about whether we made the demo or not, I was just so sick of how we were working.

A couple of hours later, the error handling code was vastly improved with just a little work, and suddenly all of the errors we were facing were abundantly obvious and easy to trace back to their origin within a few minutes. I was able to fix those errors before we left for the day, and we were back on track to deliver our demo to management on Wednesday.
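
The actual code belongs to my day job and isn’t shown here, but the general shape of the change, a single shared wrapper that captures and logs every error in one consistent format instead of scattering ad hoc try/except blocks, might look something like this sketch. All names are illustrative.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s: %(message)s")
log = logging.getLogger("app")

def log_errors(func):
    """Capture and log any exception in one consistent format, then re-raise.

    Illustrative sketch, not the code from the actual project: the idea is
    one shared wrapper instead of ad hoc try/except blocks everywhere.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            log.exception("error in %s(args=%r, kwargs=%r)",
                          func.__name__, args, kwargs)
            raise
    return wrapper

@log_errors
def provision_stack(name):
    # Hypothetical failing step standing in for a real integration problem.
    raise RuntimeError(f"integration failure while provisioning {name}")

if __name__ == "__main__":
    try:
        provision_stack("dev-stack-1")
    except RuntimeError:
        pass  # the log line above already points at the origin of the failure
```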

It was a great reminder that even when you think you’ve just got a couple of short-term deliverables, maybe with pressure to get them done fast, it’s almost always faster to do it the right way than to take shortcuts.

It turns out that Abraham Lincoln didn’t utter the famous quote about spending four of six hours sharpening an axe. The quote comes from an anonymous woodsman, and the unit of measurement is minutes, not hours. But the general concept goes back about 150 years.

A woodsman was once asked, “What would you do if you had just five minutes to chop down a tree?” He answered, “I would spend the first two and a half minutes sharpening my axe.”

 

Google announced a new analytical AI that analyzes emails to determine their content, then proposes a list of short likely replies for the user to pick from. It’s not exactly ELOPe, but definitely getting closer all the time.


 

Trends in Hardware, Software & Wetware
Daniel Dern – Moderator
Greg Bear
Ramez Naam
Allen Baum
Mark Van Name
  • How small can hardware become?
    • RN: The limits of physics are extremely distant: many orders of magnitude improvement left. But we don’t have any idea of how to get down to the level of quarks and stuff like that. Every decade we get a 100x improvement in cost, 100x reduction in energy. If it continues, we will one day have supercomputing grains of sand.
    • AB: One of the limits is batteries. Do you want to carry the equivalent of a nuclear power plant in your pocket?
    • DD: One of the constraints is heat dissipation, brownian motion (randomness interfering with the work you want to get done).
    • RN: The smallest computer someone brought me was smaller than the stem of a wineglass. The limiting factor on reducing its size was the USB port.
  • How about jacking into our brain?
    • RN: There’s lots of science going on now, starting with implants for disabled people.
    • GB: Voice recognition is becoming far more effective…80 to 90% of the time you can ask a question and get a useful answer. That’s better than most human communication. Computers don’t have real needs. What if computers become socially aware, and know what your needs and their needs are? All of a sudden, the interface to them is much better and easier.
    • GB: Proteins are small computers. Wetware does astonishing stuff. Lots of analogies to human interactions. A protein is more complex than even a giant Boeing factory full of workers.
  • Small things only need to do small tasks. Small, purpose-built devices: your toe keeps you balanced and lets you know if it gets hurt. It doesn’t do cognitive tasks.
    • GB: Cells have their own lives. They don’t know they are part of a bigger organism.
  • GB: Quantum computing has the potential to blow all of these assumptions out of the water.
  • MVN: Even microphones can only get so small before the sound waves don’t fit.
  • RN: I write scifi about people having telepathic abilities via technology. But the thing that excites me the most is the democratization of technology through cell phones: the first phones cost $4,000 and were limited to the rich. Now the average cell phone is owned by someone in India, and it’s providing access to information, it prevents abuse of power through the camera, etc.
  • AB: I hope there are lots of people here supporting EFF. There are democracies, but then there are non-democracies who use these same technologies to control people. The internet of control. It’s governments and corporations. This worries me a lot.
  • RN: When we put a brain implant in someone, there are two different adaptations that must happen: configuration of the software (“when I pulse this neuron, where does your vision light up?”) as well as time for the brain to adapt to the signals: several weeks during which it doesn’t function at all.
    • The current state of implants is that they degrade over time: electrodes erode and break, high voltages cause neurons to retreat, and there can be bleeding in the brain. Today it requires very invasive surgery.
    • But there are advances too: a neural mesh implanted in mice was rolled up into a tiny tube, injected with a syringe, and then unrolled inside the brain.
  • The best brain interfaces today have ~256 electrodes, and the brain has tens of billions of neurons. A DARPA program is asking people to tackle 1 million electrodes, and some scientists think that can be done in 5-10 years.

Genre and the Global Police State
Karl Schroeder
Charles Stross
Annalee Flower Horne
Jim Wright
William Frank (moderator)
  • What are the works dealing with global police state?
    • CS: The Snowden leaks broke in 2013. It takes a long time to get a novel out: a year to write, a year for production. Not surprised we’re not seeing much, because of the long lead time. Novels are a terrible vehicle for up-to-the-minute updates.
    • KS: The global police state is so 20th century, and we’ve moved on to new horrors. But if we’re going to write about it again now, we have to give it a 21st-century update.
    • CS: Your credit rating is essential, and it goes down if it is queried too often. What happens when I say something online, someone gets pissed at me, and they do a massive denial-of-service attack on my credit rating by querying it hundreds or thousands of times?
    • AFH: My threat model is not the NSA, it’s other actors: mobs taking action because of something I said online. Mass surveillance of other people is its own police state.
    • JW: The NSA doesn’t have to drop microphones and cameras in here; you, the audience, brought them in.
    • CS: Facebook has ghost accounts for people who don’t want to be on Facebook. And they tag you with a given location and time when your friend checks into a restaurant and names you. And with Facebook photo analysis, they can associate a ghost account with a person in a given place and time, which means they can also recognize you, even if you’ve never been on Facebook.
    • AFH: If you’re worried about the NSA, you should be more worried about your local police department, who, when they have a photo of an unknown person, bring it to Facebook and ask them to do image analysis.
    • JW: There is vastly more information than anyone can actually process. The government can’t do mass surveillance in practice, because they don’t have the ability to analyze it all. The real danger now is that the data isn’t secure: it’s stored all over the place, in systems built by the lowest bidders. Somebody can destroy your entire life. It’s not the police state, it’s the mob state.
    • KS: False positives are a huge problem. If you’re scanning a million photographs a day and have a false positive rate of 1 in 10,000, that’s 100 photos a day. Each one results in some followup action, and those actions all cost money. So the police state is also costing us tons.
  • JW: With Folded Hands, one of the first stories that talks about police state. Nobody is allowed to do anything that might raise a bruise.
  • CS: Ken MacLeod novel, The Execution Channel, about finding influential political bloggers and killing them.
  • JW: One key difference is that information weapons are inherently scalable: you can attack one person, or one aspect of a person’s life, or  the whole population.
  • Book recommendation about surveillance being treated in a positive way: The Shockwave Rider by John Brunner https://en.wikipedia.org/wiki/The_Shockwave_Rider
  • Q: Recommendations for genre books
    • KB Spangler books: A Girl and her Fed / Digital Divide
    • Clarkesworld: translating works from Chinese authors, who know quite a lot about surveillance.
    • The Three-Body Problem, Cixin Liu (translated by Ken Liu)
  • Closing thoughts
    • JW: The pervasive police state is inevitable. It’s driven by the government, and by corporations, but also by our own voluntary actions (grocery store cards, FitBits, and phones)
    • AFH: A Fitbit was recently used as evidence against a victim in a rape case to prove she was lying. This isn’t just dystopic fiction; this is happening in the real world. St. Louis is burning right now, and those people are dealing with this. I’m a woman on the internet, and I’m dealing right now with someone trying to threaten my job because of something I said online.
    •  KS: Groups arguing that you should have real ownership of your data. When Facebook wants to use your data, they should have to pay a fee. When that happens, the amount of analysis will go way down.
    • CS: Imagine looking back from a few hundred years in the future and trying to characterize the concerns of a given century: in the 20th century, the big historical issue was the changing status of women. In the 21st century, the big historical issue was dealing with too much information.