When David Walton asked me to read a draft of his novel THREE LAWS LETHAL, I said yes without hesitation. The mixture of self-driving cars, artificial intelligence, and tech start-ups was obviously right up my alley.
I was even more delighted when I read the draft and found a compelling, thoughtful, and philosophical science fiction thriller about what it means for AI to be alive. While reading, I frequently stopped to screenshot passages I loved and send them to David.
I’ve been waiting excitedly for this book to become available, and now it is. Buy a copy today — you’ll love it.
I asked David to write a guest post for my blog, which you’ll find below.
How might an AI develop consciousness?
It might be the most important question on the modern philosopher’s unanswered list, and it’s certainly the most fascinating. Will Hertling proposed one possible avenue in AVOGADRO CORP: through algorithms developed to improve human communication. In my new novel THREE LAWS LETHAL, I do it through self-driving cars.
We all know self-driving cars are coming; it’s just a matter of how many problems we manage to trip over on the way there. THREE LAWS LETHAL embraces this future in all of its glory: the life-and-death choices of the Trolley Problem, lawsuits and human fault, open source vs. copyright, the threat of hacking, and government regulation. But all that is just a warm-up for the main event: the development of a conscious artificial mind.
How does a mind develop? The same way it always has: through evolution.
Naomi Sumner, programmer extraordinaire, creates a virtual world to train AIs. Those who perform well in the game world survive, allowing them to reproduce — spawn new AIs similar to themselves. As thousands of generations pass, the AIs not only become incredibly good at the self-driving game, they also develop some surprising emergent behavior, like circumventing the limits on their memory footprint.
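The loop David describes, where the best performers survive and spawn mutated copies of themselves across thousands of generations, is essentially a genetic algorithm. Here's a toy sketch of that idea (everything in it, including the stand-in fitness function, is my own illustration, not code from the book): instead of scoring laps in a simulated driving world, it simply evolves a parameter vector toward a fixed target.

```python
import random

random.seed(42)

# Stand-in for "performing well in the game world": fitness is how
# close an agent's genome gets to an arbitrary ideal behavior vector.
# A real system would score driving performance in simulation instead.
IDEAL = [0.9, -0.2, 0.5]

def fitness(genome):
    # Negative squared distance: higher (closer to 0) is better.
    return -sum((g - t) ** 2 for g, t in zip(genome, IDEAL))

def mutate(genome, rate=0.1):
    # Offspring are similar to their parent, with small random changes.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, generations=200, survivors=10):
    # Start from a random population of genomes.
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]          # top performers survive
        offspring = [mutate(random.choice(parents))
                     for _ in range(pop_size - survivors)]
        population = parents + offspring          # survivors reproduce
    return max(population, key=fitness)

best = evolve()
```

After a couple hundred generations, `best` lands very close to the target vector; the interesting (and, in the novel, dangerous) part is that nobody ever tells the agents *how* to get there.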
They’re very smart, but still not conscious. A few more steps are required to reach that point, steps none of the characters anticipate or plan for. Ultimately, it is the training world itself that becomes self-aware, and all the AI actors inside it are merely elements of its psyche.
But every invention in history, sooner or later, is turned into a weapon. Drones, missiles, and other unmanned vehicles can benefit from self-driving technology as well, especially when trained through war-simulation game play. So what happens when part of this infant conscious mind is partitioned off and trained to kill?
You’ll have to read THREE LAWS LETHAL to find out…
David Walton is a software engineer with Lockheed Martin by day and the father of eight children by night. Since he doesn’t have time to write novels, he trained a world full of AIs to do it for him.