I glanced at my blog today and realized I’ve written very few posts lately. I’ve been working pretty hard on The Turing Exception. Between that work, my day job, and kids, I haven’t had a lot of time for blogging.

Most of January was spent working with my copyeditor. This is a bigger, more complex task than it might sound. You might imagine that I turn my manuscript over to the copyeditor, get it back with a bunch of corrections, and it’s done.

In fact, what happens is closer to this:

  • I send the manuscript.
  • I get a bunch of questions in the beginning as my copyeditor goes to work.
  • Then he goes radio-silent for two weeks as he gets deep into it.
  • Then I get the manuscript back. This one contained about 4,000 changes.
  • Some changes are easy to process: commas moved, spelling corrected, words replaced. I use Word’s change review, and it’s lots of clicking on “accept”. Still, it’s 4,000 changes, and it takes me several days of full-time work to review each change and accept it.
  • Some changes are more difficult to handle. They might be a comment, like “you need more interior character dialogue here.” Then I need to go think about what the character is thinking about in that scene, and write a few paragraphs, keeping it consistent with everything going on around it.
  • Some changes are widespread, like when I’ve described a single event several different ways over the course of a novel. Or used several different names to refer to one organization. I have to pick something, and then make sure it is consistent throughout.
  • Some changes and comments I don’t understand, so I have to email back and forth with my copyeditor until I do, and then make the changes.
  • When I’m done, I send the file back to the copyeditor, and now he can review my changes. There were about 300 on this last exchange.
  • He accepts the ones that look good, but might have to make more corrections, which I then accept, and so on.

Eventually it’s done. The copyeditor and I are in agreement.

Then I get the manuscript to the proofreader. This is a second person who is focused on line-level items, like punctuation and spelling, although he’ll also catch some bigger issues. The manuscript came back from the proofreader with 800 changes. I basically go through all the same stuff as with the copyeditor. Some changes are straightforward, some are not.

If I make big changes, then it has to go back to the proofreader again for a second pass.

Along the way, I usually get feedback from beta readers who are getting back to me late. I hate to ignore feedback, so I do the best I can to address any issues they spotted, without breaking the copyediting / proofreading process.

Sometimes I’m trying to address beta reader feedback by changing only one or two words, to avoid having to do another round of proofreading. I remember this happening with The Last Firewall, where I think Brad Feld or Harper Reed said “I’m confused about what kind of vehicles exist in this world.” There’s a scene in the beginning of the book where Cat is crossing the street, and I had to get her to establish all the types of vehicles (ground cars, hover cars, and flying cars) in a single sentence, so that I didn’t have to make changes in multiple places.

I’m now one to three days away from finishing the proofreading cycle. When this is done, it will go to two different people for formatting: one person will generate the ebooks, and another will generate the PDF interior for the print book. Then I’ll need to carefully proofread both of those, to make sure nothing gets dropped, and no formatting errors or other mistakes are introduced.

It’s fairly intense work when the ball is in my court. But when it’s handed off to someone else, that’s my chance to do a little creative work. I’ve written about 15,000 words in Tomo, a new novel about privacy, social networks, and data profiling. No AI or robots…yet.

Last year I read the manuscript for Eliot Peper’s then-unpublished Uncommon Stock: Version 1.0 and loved it. FG Press, the publishing company launched by Brad Feld, went on to publish it, and it got a lot of great press because it blended the world of tech startups with a conspiracy thriller. It’s the first book in a three-book series.

As soon as I read it, I asked Eliot what he was working on, and he mentioned a different writing project. I can’t remember my exact words, but I told him to drop everything and get to work on book two. He did, and FG Press published Uncommon Stock: Power Play right around Christmas, and it’s been sitting on my bedside table since.

Well, I read it this weekend and loved it, even more than book one. I tore through the last three-quarters of the book. The stakes have really been raised for Mara Winkel and her financial fraud detection startup as they identify one of the largest money laundering rings in the world. It is again an awesome blend of thriller and tech startup novel.

If you haven’t done so, go buy a copy. But start with Uncommon Stock: Version 1.0 if you’re new to the series.

Eliot, I hope you’re writing book 3 right now. 🙂

A few weeks ago my friend and fellow science fiction writer, Ramez Naam, posted a link to an article debunking some myths about bulletproof coffee. Then today I noticed a link on Reddit about a professor at Kansas State University who went on a convenience store diet of Twinkies to prove that counting calories is what matters most in weight loss, not the nutritional value of food.

I am all for science, and I love to understand exactly how things work and why things have the effect they do. But often I think that in our zeal to get to the truth we overlook the practical question of what actually works.

Let me give you an example. If you ask a dentist whether it’s better to floss before you brush your teeth or after, they’ll tell you it doesn’t matter: both are equally effective at preventing dental problems. However, if you look at how many people continue flossing, that is, how many stay in compliance with the regimen, then you find there is a difference. People who floss after they brush their teeth are more likely to continue flossing. (Sorry, I read this a year or two ago, and can’t find a link now.)

Why is this important? Well, if you look at diets, the most important factor in weight loss is not how effective the diet is, but how compliant people are. It’s easy to start a diet and hard to stay on it. Staying on it is the challenge for most people.

Perhaps we could, in theory, eat exactly 1200 calories of Twinkies every day and lose weight, but in practice how likely are we to continue counting calories meal after meal, week after week?

I think the value that people get out of approaches like bulletproof coffee, low-carb diets, or other structural approaches to dieting (in which the emphasis is on eliminating certain foods rather than counting calories) is that for some people those diets are easier to stick with. This moves us out of the realm of basic chemical/biological science (which is how you might measure the effectiveness of a diet), and into the realm of psychology (which is probably where the majority of compliance comes from).

But even if we evaluate diets for compliance, it doesn’t mean there’s one best solution for all people. Some people might do really well with one diet, and other people do better with a different one. We all have different favorite foods, eating habits, and tolerance for eating the same foods over and over. A low-carb diet might work really well for some, others might like to replace breakfast with bulletproof coffee, still others use exercise, some count calories, and some blend multiple approaches.

So when we see a piece of research that says counting calories is what matters most in weight loss, we know that it’s wrong. What matters is the combination of compliance (whether people can stick to the diet for whatever period of time is necessary) and effectiveness (how much weight is lost while in compliance).
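To make that concrete, here’s a toy calculation in Python. The numbers are invented purely for illustration, but they show how a “weaker” diet can beat a “stronger” one once you account for compliance:

```python
# Toy model of the point above: actual results = effectiveness x compliance.
# All numbers below are made up for illustration only.
diets = {
    # name: (lbs lost per week while on the diet, weeks people typically stay on it)
    "strict calorie counting": (2.0, 4),   # very effective, hard to sustain
    "low-carb":                (1.0, 20),  # less effective, easier to sustain
}

for name, (lbs_per_week, weeks_adhered) in diets.items():
    print(f"{name}: {lbs_per_week * weeks_adhered:.0f} lbs lost")

# strict calorie counting: 8 lbs lost
# low-carb: 20 lbs lost -- the less "effective" diet wins on compliance
```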

Yes, we need science and the understanding of fundamental principles and theory. But we also need to know how things work in practice, and not just in a general population, but specifically for us.

The way we get there is through personal experimentation. Be willing to try something (within reason, of course) for a period of time and see how it works. If it doesn’t work for you, it doesn’t matter that science says that it works for 80% of people. It only matters that it doesn’t work for you. Learn that it doesn’t work, and then move on to a different trial.

Conversely if something is working for you, then it doesn’t matter if science can’t explain it. It’s working. Don’t mess with it.

When I was a kid, my first computer was a TRS-80 Micro Color Computer 2. It wasn’t the big Trash-80 that most people had. It was a tiny thing, with a chiclet keyboard and an expansion port on the back that let you upgrade from 4 KB of memory to 20 KB. I think it cost $99, plus another $20 or $30 for the memory expansion.

When I was 16, I got an Apple IIe. It had seven expansion slots, which could be used to upgrade memory, add storage and video capabilities, or add modems. (I had seven modems and was running a chat system, but that’s another story.) My next computer was an Amiga 1000. Although it wasn’t designed for upgradability, I bought an expansion kit, a daughterboard that plugged into the CPU socket and allowed me to upgrade to 1.5 MB of RAM. Later I bought another expansion kit, another daughterboard, that allowed me to replace the 8 MHz CPU with a 16 MHz CPU. I was able to attach three disk drives, and I had an expansion port that would have allowed me to connect a SCSI hard drive if I could’ve afforded one.

After the Amiga 1000, I had a series of IBM PCs and compatibles from 1989 to 2008. What defined the PCs was the complete ability to build them from scratch and upgrade components as needed. The metal chassis, the box that housed the computer, might need to be upgraded every 10 years or so. The motherboard might be upgraded every four years. The RAM, hard drives, and CPU might be upgraded every two years. This was far more environmentally friendly and cost-effective than buying a new computer every three years.

In 2009 or so, I started using Macs. I love OS X, the Mac operating system. And I love most of the applications that run on the Mac. It’s far more stable than Windows, lower maintenance, and often easier to use. Because it’s built on UNIX, I can use all the best programming tools.

However, the Macs I’m buying are laptops, and laptops are inherently less upgradable. That isn’t to say they’re not upgradable at all. Over the Christmas break I upgraded the older MacBook Pro laptops in our house. In both cases I replaced the magnetic platter hard drive with a much faster SSD and upgraded the memory: in one case doubling it, and in the other quadrupling it.

It was an easy upgrade to do. It took about five minutes to open the case and replace the memory, and maybe another five minutes to replace the hard drive. I could have restored everything from Time Machine, which would’ve been very quick, but in this case I chose to rebuild the operating system and applications from scratch to get a clean install.

By doing this upgrade on these three- and four-year-old computers, I just gave them at least another three or four years of life. Again, this is environmentally friendly and economically the best approach. It cost about $200 to upgrade one Mac and about $300 to upgrade the other. Buying a comparable machine would have cost between $1,000 and $1,500.

Now for the bad news. The two most recent laptop purchases in our house were Retina MacBook Pros. These are the extra-thin models that don’t have a CD drive. They also don’t have upgradable hard drives or memory, which means you’re stuck with whatever configuration you buy. There’s no way to upgrade them, no way to extend their life. Yes, they are beautiful, sleek, lightweight machines. But from an environmental-lifecycle and cost perspective, they are inferior to their predecessors.

I can somewhat understand cheap electronics, things that cost under $100 or $200, being non-upgradable and simply replaced at the end of their life. But for computers that cost $1,000 or more, and embody substantial environmental impact, it is irresponsible and shortsighted not to make them upgradable. I hope that we’ll see a return to upgradable computers in the future.

I saw The Imitation Game with Erin last night. This is the movie based upon the life of Alan Turing, the British mathematician who helped break Enigma, conceived of general-purpose computing (à la Turing machines), and is famous for the concept of the Turing test.

The Turing test, of course, was part of the inspiration for the title The Turing Exception for my new novel.

Although I knew a bit about Alan Turing from past reading and studies, I was lucky enough to see George Dyson, author of Turing’s Cathedral, speak at the Defrag conference in November. George Dyson is a science historian and the brother of technology analyst Esther Dyson. He gave a great keynote presentation at Defrag, and I got to spend an evening chatting with him about Alan Turing, early physicists and mathematicians, the war effort, technology, artificial intelligence, and the singularity. In all, it was a fabulous discussion spanning many topics.

So I was quite excited to see The Imitation Game. From some reviews I glimpsed, it appears the movie isn’t 100% true to the historical record. But having not yet read Turing’s Cathedral, and it having been a while since I studied the details of that time, I was able to enjoy the movie without worrying about technical inaccuracies. I’d call this a must-see for anyone who has an interest in the origin of computers or cryptography.

I can be pretty sensitive to movies, so I ended up pretty emotional and crying at the end of the film. Alan Turing was a brilliant mathematician whom we lost at the age of 41 because of the persecution he suffered for being homosexual.

Having seen the film, I’m now excited to go read Turing’s Cathedral.

Having related the general outlines of the story (minus the homosexual persecution) to my kids, they were pretty interested and wanted to know if we could create an Enigma machine. There’s a great one-page PDF paper Enigma machine that allows you to perform the basics of rotor encryption.
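If you’d rather see the idea in code than on paper, here’s a minimal single-rotor sketch in Python. It uses the publicly documented wirings of the historical Rotor I and Reflector B, but it’s a simplification for illustration (the paper model and the real machine run the signal through three rotors), not a replica:

```python
import string

ALPHABET = string.ascii_uppercase

# Historical Enigma Rotor I and Reflector B wirings (publicly documented).
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

def crypt(text: str, start: int = 0) -> str:
    """Encrypt or decrypt: like the real Enigma, the cipher is its own inverse."""
    offset, out = start, []
    for ch in text.upper():
        if ch not in ALPHABET:
            continue  # Enigma messages had no spaces or punctuation
        offset = (offset + 1) % 26                     # the rotor steps before each letter
        p = (ALPHABET.index(ch) + offset) % 26         # signal enters the rotor...
        p = (ALPHABET.index(ROTOR[p]) - offset) % 26   # ...and exits through its wiring
        p = ALPHABET.index(REFLECTOR[p])               # bounces off the reflector
        p = (p + offset) % 26                          # then back through the rotor,
        p = (ROTOR.index(ALPHABET[p]) - offset) % 26   # this time via the inverse wiring
        out.append(ALPHABET[p])
    return "".join(out)

secret = crypt("ATTACK AT DAWN")
print(secret)         # scrambled ciphertext
print(crypt(secret))  # ATTACKATDAWN -- running it again decrypts
```

Because the signal bounces off a reflector and comes back through the rotor, the same settings both encrypt and decrypt, which is exactly the property that made Enigma operationally convenient (and exploitable).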

Unfortunately, thanks to a business trip in my day job and some bad ergonomics while traveling, I’m struggling with a bout of chronic tendinitis again. So I’ve made an intentional choice to stay away from the keyboard as much as possible.

As part of that practice I bought the new version of Dragon Dictate for the Mac. I’m glad to say it’s vastly improved over older versions. I first used Dragon Dictate in 2002 or so, when I first had tendinitis issues related to my day job in computer programming. Back then it was sort of comically wrong: you’d dictate a paragraph of text and maybe 50% would be right.

But the new version is quite astounding. I’ve dictated several blog posts and it’s made zero gross errors. There have been a few small errors, where I have either failed to say what I meant, slurred some words together, or used words or phrases that are very uncommon (it renders “The Turing Exception” as “the touring exception” and “Patreon” as “patriot”).

If you used speech recognition about 10 years ago, then you know that the combination of lower accuracy and the difficulty of correcting text often turned getting what you wanted onto the page into a comedy of errors. But today, it’s easy enough to just dictate and then make a few simple corrections at the end.

Just a few years ago I investigated Dragon Dictate for the Mac, but the version out at the time was, according to reviews, very buggy. The current version seems pretty darn solid and fun to use.

If you’re struggling with any kind of repetitive stress injury, give speech recognition a try again even if you had bad results in the past.

It’s been a while since my last post. I spent most of December working toward the final edits on The Turing Exception.

After two rounds of beta reader feedback and edits, I’m feeling pretty good about the way book four ended up. The manuscript is currently with my copyeditor, and I should get it back in a few weeks. Then I’ll make a few more changes and send it for a round of proofreading. Finally, there will be interior layout for the print edition and formatting for the e-book. Hopefully all that will happen by sometime in February, leading to a release by late February if possible.

Also, if you’ve been paying close attention, you’ll notice the title changed slightly. My friend Mike suggested Turing’s Exception as an idea, and that was better than any of the dozens of ideas I’d considered. But then I tested three different variations (Turing’s Exception, The Turing Exception, and Turing Exception), and The Turing Exception was vastly preferred, by about 38 out of 40 people in a poll.

As I’ve mentioned before, Patreon supporters will receive their e-books before the public release, just as soon as I can make them available. Patreon supporters at the five dollar level and above will receive their signed paperback around the time of the public release. This is because the paperback books are just not available any earlier.

You might be wondering why I have a Patreon campaign. The economics of writing are such that I still have to hold a day job in addition to selling books. Except for a few bestsellers, most writers are unable to support themselves solely by writing books.

Have you heard of the Kevin Kelly essay 1,000 True Fans? The core idea is that it’s possible for an artist, writer, or creator to support themselves if they can create $100 worth of product per year and have 1,000 fans who will buy that product. 1,000 fans times $100 equals $100,000, and therefore approximately a full-time living.

The challenge is that it’s hard for a writer to create $100 worth of product per year. I net about $2.50 per book sold, and I can publish about one book per year. Even with 10,000 or 20,000 fans, that’s not a full-time income. So the idea with Patreon is to have a closer relationship with a few people, share more of what I’m creating, create some special rewards just for supporters, and hopefully get to the point where writing can support me full-time, enabling me to write more than I do today.
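Here’s that back-of-the-envelope math spelled out. The per-book royalty and books-per-year figures are mine from above; the rest are Kevin Kelly’s round numbers:

```python
# Kevin Kelly's 1,000 true fans arithmetic
fans = 1000
product_per_fan = 100            # $100 of product per fan per year
print(fans * product_per_fan)    # 100000 -- roughly a full-time living

# The same math with book royalties instead of $100 products
net_per_book = 2.50              # what I net per copy sold
books_per_year = 1               # what I can realistically publish
for fan_count in (1_000, 10_000, 20_000):
    income = fan_count * net_per_book * books_per_year
    print(f"{fan_count:>6} fans -> ${income:,.0f} per year")
# 1,000 -> $2,500; 10,000 -> $25,000; 20,000 -> $50,000 -- not a full-time income
```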

I hope you had a wonderful holiday and a happy new year. I wish you the best in 2015.

The German edition of Avogadro Corp is available for preorder from Amazon:

http://www.amazon.de/Avogadro-Corp-Gewalt-k%C3%BCnstlichen-Intelligenz-ebook/dp/B00PN7Z36Q/

It releases in paperback and Kindle on December 9th. If you or a friend read German, I hope you’ll check it out.

The success of this translation will be helpful in getting the rest of the series translated into German, and all of my books translated into other languages.

With so many discussions happening about the risks and benefits of artificial intelligence (AI), I really want to collect data from a broader population to understand what others think about the possible risks, benefits, and development path.

Take the Survey

I’ve created a nine-question survey that takes less than six minutes to complete. Please help contribute to our understanding of the perception of artificial intelligence by completing the survey.

Take the survey now

Share the Survey

I hope you’ll also share the survey with others. The more responses we get, the more useful the data becomes.

Share AI survey on Twitter

Share AI survey on Facebook

Share the link: https://www.surveymonkey.com/s/ai-risks

Thanks to Elon Musk’s fame and his concerns about the risks of AI, it seems like everyone’s talking about it.

One difficulty that I’ve noticed is agreement on exactly what risk we’re talking about. I’ve had several discussions in just the last few days, both at the Defrag conference in Colorado and online.

One thing I’ve noticed is that the risk naysayers tend to say “I don’t believe there is risk due to AI.” But when you probe them further, what they are often saying is “I don’t believe there is existential risk from a Skynet scenario due to a super-intelligence created from existing technology.” The second statement is far narrower, so let’s dig into its components.

Existential risk is defined by Nick Bostrom as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Essentially, we’re talking about either the extinction of humankind or something close to it. However, most of us would agree that there are very bad outcomes that are nowhere near an existential risk. For example, about 4% of the global population died in WWII. That’s not an existential risk, but it’s still horrific by anybody’s standards.

Runaway AI, accelerating super-intelligence, and hard takeoff are all terms that refer to the idea that once an artificial intelligence is created, it will recursively improve its own intelligence, becoming vastly smarter and more powerful in a matter of hours, days, or months. We have no idea if this will happen (I don’t think it’s likely), but the absence of a hard takeoff doesn’t mean an AI would be stagnant or lack power compared to people. There are many ways that even a modest AI, with creativity, motivation, and drive equivalent to a human’s, could have far wider effect than any human:

  • Humans can type 50 words a minute. AI could communicate with tens of thousands of computers simultaneously.
  • Humans can drive one car at a time. AI could fly all the world’s airplanes simultaneously.
  • Humans can trip one circuit breaker. AI could trip all the world’s circuit breakers.
  • Humans can reproduce a handful of times over the course of a lifetime. AI could reproduce millions of times over the course of a day.
  • Humans evolve over the course of tens of thousands of years or more. Computers become 50% more powerful each year.

So for many reasons, even if we don’t have a hard takeoff, we can still have AI actions and improvement that occur far faster, and with far wider effect, than we humans are adapted to handling.

Skynet scenario, Terminator scenario, and killer robots are terms that refer to the idea that AI could choose to wage open warfare on humans using robots. This is just one type of risk among many possibilities. Other ways AI could harm us include deliberate mechanisms, like manipulating us by controlling the information we see, killing off particular people who pose threats, or extorting us to deliver services it wants. This idea of manipulation is important, because while death is terrible, the loss of free will is pretty bad too.

Frankly, most of those seem silly or unlikely compared to unintentional harm that AI could cause: the electrical grid could go down, transportation could stop working, our home climate control could stop functioning, or a virus could crash all computers. If these don’t seem very threatening, consider…

  • What if one winter, for whatever reason, homes wouldn’t heat? How many people would freeze to death?
  • Consider that Google’s self-driving car doesn’t have any manual controls. It’s the AI or it’s no-go. More vehicles will move in this direction, especially all forms of bulk delivery. If all transportation stopped, how would people in cities get food when their 3-day supply runs out?
  • How long can those city dwellers last without fresh water if pumping stations are under computer control and they stop?

Existing technology: Some will argue that because we don’t have strong AI (i.e., human-level intelligence or better) now, there’s no point in even talking about risk. However, this sounds like “Let’s not build any asteroid defenses until we clearly see an asteroid headed for Earth.” By then it’s far too late. Similarly, once the AI is here, it’s too late to talk about precautions.

In conclusion, if you have a conversation about AI risks, be clear about what you’re talking about. Frankly, all of humanity being killed by robots under the control of a super-intelligent AI doesn’t even seem worth talking about compared to all of the more likely risks. A better conversation might start with a question like this:

Are we at risk of death, manipulation, or other harm from future AI, whether deliberate or accidental, and if so, what can we do to decrease those risks?