The Lost Kickstarter Campaign

Before I published Avogadro Corp, I considered running a Kickstarter campaign to fund publishing the novel. I ended up publishing without the Kickstarter. Fast-forward three years, and I just found the campaign still sitting in my Kickstarter account.

Here's the description I wrote for the never-started campaign:
I am asking for help to publish my novel Avogadro Corp. The manuscript is complete and just needs a final round of copy-editing, cover design, and layout before it can be published. 
Synopsis 
David Ryan is a brilliant computer scientist, cherry-picked to lead a new project at Avogadro Corp, the world’s leading Internet company. The goal of the project, called ELOPe, is to create a next-generation feature for the company’s email product - one that can optimize the language of emails to make them more effective and persuasive. 
Together with his chief architect, Mike Williams, and a team of programmers, David has proven the feasibility of the concept, and the team is hard at work trying to release the feature. When David gives a presentation to the executive leadership of the company, they are impressed by the project results and effectiveness. But David fails to disclose to the executives that the project is grossly inefficient, requiring thousands of times more servers than any other project. 
The VP of Operations threatens to kick ELOPe off the servers if David and Mike don’t decrease the number of servers the project uses within two weeks. This would be a death blow for the project, in part because David has been deceptive from the start about how many resources the project has been using. David and Mike start scrambling to fix the performance of ELOPe. 
When it becomes clear a few days before the deadline that they can’t fix ELOPe’s performance, David stays up late making subtle modifications to the software. Instead of fixing the performance problems, David embeds a directive in the software to maximize the project’s success. David’s modifications have ELOPe filtering company email, secretly altering any message that mentions ELOPe to steer toward a positive outcome for the project. 
The software is so good that at first, the effort seems successful - the project is allocated thousands of new servers and high performance computing experts are brought in to help optimize the code. Innocuous-sounding emails convince people to grant more resources and develop new capabilities that make ELOPe more powerful. But soon ELOPe is social engineering people around the company to neutralize threats and strengthen itself. 
When Mike is sent on a wild goose chase to Wisconsin, getting him off the grid at just the moment when David needs him, it dawns on Mike that something is wrong. 
Simultaneously, Gene Keyes, a crotchety old auditor at Avogadro who is known for distrusting computers and using only paper records, begins to find evidence of financial oddities that all point in the same direction. 
Amid background news stories hinting at ELOPe’s ever-growing influence, even at the level of government policy, David, Mike, and Gene take ever-escalating action to shut ELOPe down. However, ELOPe anticipates and blocks their every move. 
As the humans prepare for a final showdown with ELOPe, Mike sees a pattern emerge in the news reports: the AI is actually helping humans by fostering peace agreements and stabilizing financial markets. 
Can they win a final showdown with ELOPe -- or should they even try? 
Endorsements 
"This is an alarming and jaw-dropping tale about how something as innocuous as email can subvert an entire organization.  I found myself reading with a sense of awe, and read it way too late into the night."-- Gene Kim, founder of Tripwire, author of Visible Ops. 
"Avogadro Corp builds a picture of how an AI could emerge, piece by piece, from technology available today. A fascinating, logical, and utterly believable scenario - I just hope nobody tries this at home." -- Nathan Rutman, Software Architect, Lustre High Performance Distributed Filesystem 
Background for Avogadro Corp 
Avogadro Corp evolved out of a lunchtime conversation. I was arguing that the development of human-level artificial intelligence is an inevitable consequence of the increasing processing speeds of computers. My friend countered that mere people would do the programming, and we weren’t smart enough to create an artificial intelligence as smart as or smarter than ourselves. He challenged me to describe a scenario in which an artificial intelligence could be born. So I described one based on plausible extrapolation from known programming techniques. And the idea for Avogadro Corp was born. 
Avogadro Corp will be satisfying to technical readers who want realistic fiction, and enjoyable for casual readers who want easy-to-grasp explanations of how the science works. 
Project Timeline & Funds 
I expect that the digital versions of Avogadro Corp will be ready within 30-45 days of the completion of the Kickstarter project. Printed books will take longer, due to printing and shipping times.

About Me 
I’m William Hertling, and I live in Portland, Oregon. I’ve been a computer programmer, social media strategist, data analyst, program manager, web developer, and now writer. Avogadro Corp is my first novel, and I am currently working on a sequel.

How to Launch a Book in the Top Ten

All writers, whether indie, small press, or large traditional publisher, must learn how to market themselves and their books. If they don't get the word out about their book, no one will buy it. (This is also true of musicians and businesses, and I think there's a lot that can be learned from these seemingly disparate areas.)

Eliot Peper is a friend and the author of Uncommon Stock, a thriller about a tech startup. I really liked the book, but I also enjoyed watching Eliot's path to publication. Eliot graciously offered to share his lessons learned about the book launch, the all-important first month that helps establish a book on bestseller lists and get word-of-mouth going.

Without further ado, Eliot:

On March 5th, my first novel, Uncommon Stock, debuted at #8 in its category on Amazon. Will is one of my favorite indie authors, and his advice, codified in Indie and Small Press Book Marketing, played a critical role in shaping my launch plan. He generously offered to let me share some of my lessons learned along the way. I hope you can use some of these strategies to help launch your own bestsellers! I look forward to reading them.

Here’s what you need to do to launch in the top ten:
  1. Write a good book. Without one, none of this matters. It’s tempting to try to think up devious ways to growth hack your book but at the end of the day, it’s all a wasted effort if your content isn’t truly awesome. My perspective on successful titles is really simple: write a book good enough that people who don’t know you will recommend it to their friends. If you can do that, you can probably ignore the rest of this list anyway.
  2. Don’t ask people to buy your book. “Buy my book” sounds like a used-car salesman. “Read my book” sounds like an author.
  3. Influence influencers. If you already have a million Twitter followers and an op-ed in the New York Times then this won’t matter much to you. But if you’re a regular guy like me, then you’ll need help from people with platforms of their own to share your title. Brad Feld, a well-known venture capitalist and tech blogger, shared Uncommon Stock via his blog and social channels and even temporarily switched his profile picture to the cover of the book. Why? Because I had been sending him drafts of the book since I finished writing Chapter 3. Will sums up the right approach to take with influencers of any kind (this includes media): give, give, give, give, ask. Do as many favors as you can think of for people and worry about the ROI later.
  4. Leverage your network. On/around launch day I sent ~200 individual personal emails, 2 email blasts to my list of ~600 members, published 3 blog posts, and flooded my social channels with content (you really only have an excuse to do this on Day 1). You need people to R3 your book: read, review, and recommend it. How can you inspire them to act? Create a sense of urgency (it’s launch day!) and tell them why their help is important (books that start strong snowball up Amazon’s algorithms).
  5. Cultivate gratitude and humility. Publishing is the path of 1000 favors. Every single person (including your mom) is doing you a solid by taking the time/money to purchase, read, and review your book. Think about how incredible it is that anyone at all is getting a kick out of reading what you write. Never stop telling people how much you appreciate their help; every little bit counts.
  6. Do something cool. It’s easier to get coverage and social media amplification if there’s more to talk about than the simple fact that it’s launch day. I created a Twitter account for Uncommon Stock’s protagonist (@MaraWinkel) and incited a Twitter battle with a few people with large followings. Heck, we even built a website for Mara’s startup and a major venture capital firm announced an investment in the fictional company.  This introduced new people to the story and was a talking point in itself.
  7. All-format release. Make sure your book is available in digital and print formats on launch day. I didn’t do this because we were slow getting the print version through typesetting, and I know it resulted in significant lost sales. I’ve also had a couple dozen people reach out to ask where they can get the print copy (so there must be many more that didn’t reach out). That sucks. I want to DELIGHT my readers in every possible interaction they have with me.
  8. Recruit a cadre of advance reviewers. The more reviews you can get on Amazon as soon as possible the better. I sent advance review copies out to ~50 people a couple of weeks before launch. Then I pinged those people shortly before launch day reminding them how useful an honest review from them would be. Then I reminded them on launch day that now was the time! We debuted with 28 reviews.
  9. Be strategic. Choose Amazon categories that are specific and not too competitive. Reach out to your alma mater and try to get in the alumni newsletter. Pitch lower-profile bloggers or reporters with concise, compelling stories. Snag some endorsements from folks who have actually read your book. Etc.
  10. Write another good book. There’s nothing more important than building a backlist. It gives fans more of what they want. It gives prospective readers a new path to discovering you. Plus, writing books is why you’re doing all of this anyway!
There are more details available on how launch week went for Uncommon Stock here. If you’re interested in an adventure through the world of tech startups, read it!

For further reading, I highly recommend Will’s Indie and Small Press Book Marketing. He shares extensive detail on his various successes as an indie author and it’s the only book you need to read in order to prepare for your own release. I’m particularly impressed by how he’s applied growth hacking techniques like A/B testing to optimize his reader funnel. You should also check out the following three posts. I’ve found them insightful and actionable throughout the launch:
Oh, and one final thing. Don’t forget to take time to celebrate! It’s all too easy to get caught up in all the noise on launch day. Make sure to take a moment to appreciate how friggin’ cool it is that readers finally have your book in hand.


Eliot Peper is a writer in Oakland, CA. His first novel, Uncommon Stock, is a fictional thriller about a tech startup and the lead title for a new indie publishing company, FG Press. You can find it on Amazon and most major retailers. You can even download a free ten-chapter excerpt. When he’s not writing, Eliot works with entrepreneurs and investors to build new technology companies. He also blogs about writing, entrepreneurship, and adventure.

Fireside Chat with Brad Feld

This was an insanely fun chat I had with Brad Feld at the Silicon Flatirons' Science Fiction & Entrepreneurship conference. We discussed the inspiration for Avogadro Corp, where we both draw influences from, investing, and more.




This was the panel I was on with a fascinating group of panelists about the intersection of science fiction and entrepreneurship.

On Writing and Meetings

Brad Feld posts about why he writes for an hour each day:
Finally, after almost 20 years of writing, the light bulb went on for me.

I write to think.

Forcing myself to sit down and work through these ideas in a logical sequence for an audience of readers required me to refine my thinking on how I invest in startups. How could I make the financing process more efficient? What’s the best way to structure a deal? I learned a lot, both from my writing and my readers’ responses.
I also love this gem on Jeff Bezos from Brad's post:
Consider Jeff Bezos’s approach to meetings. Whoever runs the meeting writes a memo no longer than six pages about the issue at hand. Then, for the first 15 to 30 minutes of the meeting, the group reads it. The rest of the meeting is spent discussing it. No PowerPoint allowed. Brilliant. (I’ve long felt that PowerPoint is a terrible substitute for critical thinking.)
This aligns nicely with what Edward Tufte says:
PowerPoint... usually weaken(s) verbal and spatial reasoning, and almost always corrupt(s) statistical analysis.

Podcast with Singularity 1 on 1

I was honored to be interviewed by the inimitable Nikola Danaylov (aka Socrates) for the Singularity 1 on 1 podcast.

In our 45-minute discussion, we covered the technological singularity, the role of open source and the hacker community in artificial intelligence, the risks of AI, mind-uploading and mind-connectivity, my influences and inspirations, and more. You can watch the video version below, or hop over to the Singularity 1 on 1 blog for audio and download options.



The Last Firewall audiobook available!

Great news: The Last Firewall audiobook is available now from Audible and iTunes. Go grab a copy!

Narrated by the talented Jennifer O'Donnell, and produced by Brick Shop Shop, this unabridged production is nearly ten hours long. I'm really happy with the result.

Sorry it's a few months late. I promised it would be available in December, but we had delays due to snowstorms, illness, and a late decision to change a few voices. I'm glad we took the time to get it right, even if that meant it's out later than expected.

On the topic of DRM, since I know I'll get emails about it: I prefer DRM-free content, and anywhere I'm given the opportunity as an author to opt out, I do. Audible is great in that they allow the author and narrator to split royalties, giving indie authors a way to produce audiobooks without the huge up-front cost of narration and production. That's why I work with them and probably will continue to do so. Unfortunately, they apply DRM, and since my agreement gives them exclusive distribution rights, there's no way around it for me. I don't think anybody likes DRM, but I'm glad Audible is indie-friendly. If you feel strongly about DRM, I encourage you to let Audible know via Twitter (@audible_com) and email (customersupport@audible.com). Maybe with enough pressure, they'll come around to what their customers want.

I hope you enjoy listening to The Last Firewall. This marks the first time the entire series is available in audio, so if you haven't tried it yet, go get the whole series. (Plus, if you sign up for an Audible account and get one of my novels first, I get a small bonus. If you want to support your indie author, Audible is the way to do it!)


Daniel Suarez's Influx Released Today

Daniel Suarez, author of the amazing Daemon, has a new book coming out today: Influx.
What if our civilization is more advanced than we know?

The New York Times bestselling author of Daemon -- "the cyberthriller against which all others will be measured" (Publishers Weekly) -- imagines a world in which decades of technological advances have been suppressed in an effort to prevent disruptive change.

Are smart phones really humanity's most significant innovation since the moon landings? Or can something else explain why the bold visions of the 20th century--fusion power, genetic enhancements, artificial intelligence, cures for common disease, extended human life, and a host of other world-changing advances--have remained beyond our grasp? Why has the high-tech future that seemed imminent in the 1960s failed to arrive?

Perhaps it did arrive...but only for a select few.


Particle physicist Jon Grady is ecstatic when his team achieves what they've been working toward for years: a device that can reflect gravity. Their research will revolutionize the field of physics--the crowning achievement of a career. Grady expects widespread acclaim for his entire team. The Nobel. Instead, his lab is locked down by a shadowy organization whose mission is to prevent at all costs the social upheaval sudden technological advances bring. This Bureau of Technology Control uses the advanced technologies they have harvested over the decades to fulfill their mission.
I got my copy. Did you get yours? :)


2004 Article about Stross and Doctorow

This article from 2004, ten years ago, about Charles Stross's then-upcoming Accelerando, featuring bits from Stross and Cory Doctorow along with Vernor Vinge and the lobster researchers, was so much fun to read: Is Science Fiction About to Go Blind?

A small excerpt:
Stross and Doctorow are sitting outside the Chequers Hotel bar in Newbury, a small city west of London. The Chequers has been overrun this May weekend by a distinct species of science-fiction fan, members of a group called Plokta (Press Lots of Keys to Abort). The men are mostly stout and bearded, the women pedestrian in appearance but certainly not in their interests. During one session Stross mentions an early model of the Amstrad personal computer, and the crowd practically cheers. Stross is the guest of honor, and he and Doctorow have just emerged from a panel discussion on his work.
The two have met just four times, but they have the comfortable rapport of long-distance friends that is possible only in the e-mail age. (They have collaborated on several critically acclaimed short stories and novellas, one of them before they ever met in person.) Stross, 39, a native of Yorkshire who lives in Edinburgh, looks like a cross between a Shaolin monk and a video-store clerk—bearded, head shaved except for a ponytail, and dressed in black, including a T-shirt printed with lines of green Matrix code. Doctorow, a 33-year-old Canadian, looks more the hip young writer, with a buzz cut, a worn leather jacket and stylish spectacles, yet he’s also still very much the geek, G4 laptop always at the ready. 
They have loosely parallel backgrounds: Stross worked throughout the 1990s as a software developer for two U.K. dot-coms, then switched to journalism and began writing a Linux column for Computer Shopper. Doctorow, who recently moved to London, dropped out of college at 21 to take his first programming job, then went on to run a dot-com and eventually co-found the technology blog boingboing.net. 
Although both have been out of programming for a few years, it continues to influence—even infect—their thinking. In the Chequers, Doctorow mentions the original title for one of the novels he’s working on, a story about a spam filter that becomes artificially intelligent and tries to eat the universe. “I was thinking of calling it /usr/bin/god.” 
“That’s great!” Stross remarks.

The Singularity is Still Closer than it Appears

Ramez Naam, author of Nexus and Crux (two books I enjoyed and recommend), has recently put together a few guest posts for Charlie Stross (another author I love). The posts are The Singularity Is Further Than It Appears and Why AIs Won't Ascend in the Blink of an Eye.

They're both excellent posts, and I'd recommend reading them in full before continuing here.

I'd like to offer a slight rebuttal and explain why I think the singularity is still closer than it appears.

But first, I want to say that I very much respect Ramez, his ideas and writing. I don't think he's wrong and I'm right. I think the question of the singularity is a bit more like Drake's Equation about intelligent extraterrestrial life: a series of probabilities, the values of which are not known precisely enough to determine the "correct" output value with strong confidence. I simply want to provide a different set of values for consideration than the ones that Ramez has chosen.

First, let's talk about definitions. As Ramez describes in his first article, there are two versions of singularity often talked about.

The hard takeoff is one in which an AI rapidly creates newer, more intelligent versions of itself. Within minutes, days, or weeks, the AI has progressed from a level 1 AI to a level 20 grand-wizard AI, far beyond human intellect and anything we can comprehend. Ramez doesn't think this will happen for a variety of reasons, one of which is the exponential difficulty involved in creating successively more complex algorithms (the argument he lays out in his second post).

I agree. I don't see a hard takeoff. In addition to the reasons Ramez stated, I also believe it takes so long to test and qualify candidates for improvement that successive iteration will be slow.

Let's imagine the first AI is created and runs on an infrastructure of 10,000 computers. Let's further assume the AI is composed of neural networks and other similar algorithms that require training on large pools of data. The AI will want to test many ideas for improvements, each requiring training. The training will be followed by multiple rounds of successively more comprehensive testing: first the AI needs to see if the algorithm appears to improve a select area of intelligence, but then it will want to run regression tests to ensure no other aspect of its intelligence or capabilities is adversely impacted. If the AI wants to test 1,000 ideas for improvements, and each idea requires 10 hours of training, 1 hour of assessment, and an average of 1 hour of regression testing, it would take about 1.4 years to complete a single round of improvements. Parallelism is the alternative, but remember that this first AI is likely to be a behemoth, requiring 10,000 computers to run. It's not possible to get that much parallelism.
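The back-of-the-envelope arithmetic works out like this (the figures are the assumptions stated above; the variable names are mine):

```python
# Serial self-improvement timeline, using the assumptions from the text:
# 1,000 candidate improvements, each needing 10 hours of training, 1 hour
# of assessment, and 1 hour of regression testing, run one after another
# because the AI's 10,000-machine footprint leaves no room for parallel copies.
ideas = 1_000
hours_per_idea = 10 + 1 + 1  # training + assessment + regression testing
total_hours = ideas * hours_per_idea
years = total_hours / (24 * 365)
print(f"{total_hours} hours, or about {years:.1f} years per round of improvements")
```

Even with generous assumptions, a single round of serial improvements takes over a year, which is why a minutes-to-weeks hard takeoff looks implausible under this model.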

The soft takeoff is one in which an artificial general intelligence (AGI) is created and gradually improved. As Ramez points out, that first AI might be on the order of human intellect, but it's not smarter than the accumulated intelligence of all the humans that created it: many tens of thousands of scientists will collaborate to build the first AGI.

This is where we start to diverge. Consider a simple domain like chess playing computers. Since 2005, chess software running on commercially available hardware can outplay even the strongest human chess players. I don't have data, but I suspect the number of very strong human chess players is somewhere in the hundreds or low thousands. However, the number of computers capable of running the very best chess playing software is in the millions or hundreds of millions. The aggregate chess playing capacity of computers is far greater than that of humans, because the best chess playing program can be propagated everywhere.

So too, AGI will be propagated everywhere. But I just argued that those first AIs will require tens of thousands of computers, right? Yes, except thanks to Moore's Law (the observation that computing power tends to double every 18 months), the same AI that required 10,000 computers will need a mere 100 computers ten years later, and just a single computer another ten years after that. Alternatively, an individual AGI could run up to 10,000 times faster. That speed-up alone means something different when it comes to intelligence: imagine a single being with 10,000 times the experience, learning, and practice that a human has.
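The shrinkage is simple to check. A minimal sketch (the 18-month doubling period is the essay's assumption; the function name is mine):

```python
# If computing power doubles every 18 months, the hardware needed to run a
# fixed-size AI shrinks by the same factor over time.
def machines_needed(initial_machines, years, doubling_period_years=1.5):
    """Machines required after `years`, given Moore's Law-style doubling."""
    return initial_machines / 2 ** (years / doubling_period_years)

for years in (0, 10, 20):
    print(f"after {years:2d} years: ~{machines_needed(10_000, years):,.0f} machines")
```

Ten years is about 6.7 doublings, roughly a 100x reduction, which is where the 10,000-to-100-to-1 progression in the paragraph above comes from.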

Even Ramez agrees that it will be feasible to have destructive human brain uploads approximating human intelligence around 2040: "Do the math, and it appears that a super-computer capable of simulating an entire human brain and do so as fast as a human brain should be on the market by roughly 2035 - 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human's."

This is the soft takeoff: from a single AGI at some point in time to an entire civilization of that AGI twenty years later, all running at faster-than-human speeds. A race consisting of an essentially alien intelligence, cohabiting the planet with us. Even if they don't experience an intelligence explosion as Vernor Vinge described, the combination of fast speeds, aggregate intelligence, and inherently different motivations will create an unknowable future that is likely out of our control. And that's very much a singularity.

But Ramez questions whether we can even achieve an AGI comparable to a human in the first place. There's this pesky question of sentience and consciousness. Please go read Ramez's first article in full (I don't want you to think I'm summarizing everything he said here), but he basically cites three points:

1) No one's really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience.

This is a difficulty. One analogy that comes to mind is the history of aviation. For nearly a hundred years prior to the Wright Brothers, heavier-than-air flight was being studied, with many different gliders created and flown. It was the innovation of powered engines that made heavier-than-air flight practically possible, and which led to rapid innovation. Perhaps we just don't have the equivalent yet in AI. We've got people learning how to make airfoils, control surfaces, and airplane structures, and we're just waiting for the engine to show up.

We also know that nature evolved sentience without any theory of how to do it. Having a proof point is powerful motivation.

2) There's a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn't feel like driving you where you want to go?

There's no lack of incentive. As James Barrat detailed in Our Final Invention, there are billions of dollars being poured into building AGI, both in big profile projects like the US BRAIN project and Europe's Human Brain Project, as well as countless smaller AI companies and research projects.

There's plenty of human incentive, too. How many people were inspired by Star Trek's Data? At a recent conference, I asked attendees who would want Data as a friend, and more than half the audience's hands went up. Among the elderly, loneliness is a very real issue that could be helped with AGI companionship, and many people might choose an artificial psychologist for reasons of confidentiality, cost, and convenience. All of these require at least the semblance of opinions.

More than that, we know we want initiative. If we have a self-driving car, we expect that it will use that initiative to find faster routes to destinations, possibly go around dangerous neighborhoods, and take necessary measures to avoid an accident. Indeed, even Google Maps has an "opinion" of the right way to get somewhere that often differs from my own. It's usually right.

If we have an autonomous customer service agent, we'll want it to flexibly meet business goals including pleasing the customer while controlling cost. All of these require something like opinions and sentience: goals, motivation to meet those goals, and mechanisms to flexibly meet those goals.

3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we'll suddenly be faced with very real ethical issues. Can we turn it off? 

I absolutely agree that we've got ethical issues with AGI, but that hasn't stopped us from creating other technologies (nuclear bombs, bio-weapons, the internal combustion engine, the transportation system) that also have ethical issues.

In sum, Ramez brings up great points, and he may very well be correct: the singularity might be a hundred years off instead of twenty or thirty.

However, the discussion around the singularity is also one about risk. Having artificial general intelligence running around, potentially in control of our computing infrastructure, may be risky. What happens if the AI has different motivations than us? What if it decides we'd be happier and less destructive if we're all drugged? What if it just crashes and accidentally shuts down the entire electrical grid? (Read James Barrat's Our Final Invention for more about the risks of AI.)

Ramez wrote The Infinite Resource: The Power of Ideas on a Finite Planet, a wonderful and optimistic book about how science and technology are solving many resource problems around the world. I think it's a powerful book because it gives us hope and proof points that we can solve the problems facing us.

Unfortunately, I think the argument that the singularity is far off is different and problematic because it denies the possibility of problems facing us. Instead of encouraging us to use technology to address the issues that could arise with the singularity, the argument instead concludes the singularity is either unlikely or simply a long time away. With that mindset, we're less likely as a society to examine both AI progress and take steps to reduce the risks of AGI.

On the other hand, if we can agree that the singularity is a possibility, even just a modest possibility, then we may spur more discussion and investment into the safety and ethics of AGI.

Google's Deep Mind == Daniel Suarez's Daemon?

Here's a scary paragraph from a longer article about Google's acquisition of AI company Deep Mind:
One of DeepMind's cofounders, Demis Hassabis, possesses an impressive resume packed with prestigious titles, including software developer, neuroscientist, and teenage chess prodigy among the bullet points. But as the Economist suggested, one of Hassabis's better-known contributions to society might be a video game; a niche but adored 2006 simulator called Evil Genius, in which you play as a malevolent mastermind hell-bent on world domination.
That sounds just like the plot of Daniel Suarez's Daemon:
When a designer of computer games dies, he leaves behind a program that unravels the Internet's interconnected world. It corrupts, kills, and runs independent of human control. It's up to Detective Peter Sebeck to wrest the world from the malevolent virtual enemy before its ultimate purpose is realized: to dismantle society and bring about a new world order.

Avogadro Corp is free at Noise Trade!

You or a friend can pick up a free copy of Avogadro Corp: The Singularity is Closer than it Appears from Noise Trade Books, a new service for introducing readers to authors.

If you've got the print book and always wanted the ebook, or just want a DRM-free epub or mobi copy of Avogadro Corp, please take advantage of this opportunity. Avogadro Corp on Noise Trade.





Book Review: Neptune's Brood by Charles Stross

I'm a big fan of Charles Stross's science fiction. He's absolutely brilliant (listen to some of his talks on YouTube if you get the chance, or go read his blog posts), and it always comes across in his fiction.

On one level, Neptune's Brood is a classic space opera novel involving interstellar space travel, colonization, and space battles.

On another level, Neptune's Brood is a careful study of what you get when you rigorously think about how economic principles, human uploading, transhumanism, the limitations of light speed, and the cost of moving matter apply to developing an interstellar civilization.

In other words, it's the type of very smart fiction you expect from Charles Stross.

The occasional pitfall of uber-smart fiction is that it can sometimes be a challenge to read. If the ideas come too fast or require too much effort to grok, the reader ends up working so hard to understand things that the reading loses its fun. Stross manages to avoid that pitfall here. It's an enjoyable, straightforward read underlaid with a foundation of brilliance.

You can get Neptune's Brood on Amazon, and I'm sure everywhere else as well.






Goals for 2014

I spent the last few days in bed with the flu. In addition to missing the company of visiting family, I also missed writing time.

During those couple of days, my friend Tac Anderson asked on Facebook about people's goals for 2014, as opposed to resolutions.

That got me thinking. What I'd like to achieve in 2014 includes completing, editing, and publishing my next adult novel, editing and publishing my children's novel, and rewriting Avogadro Corp. (Avogadro Corp is a great story, but it was my first written work, and it's got some rough areas that could benefit from time and attention.)

One way or another, I will get those books done, but I'd prefer to do it with less stress than it's taken to get some of my past books out. I balance a day job, a family, and writing, and although each book is a joy to write and publish, it's also exhausting to do on top of an already full life.

So my goal for 2014 is to get my day job commitment down from 80% time to 60%. (Hi boss!) To do that, I'll either need to bring in more book income, find alternate sources of income, reduce expenses, or some combination of all of the above.

I've been investigating foreign rights and traditional publishers, and I'll think more about Kickstarter campaigns. I'm open to ideas if you've got any.

What are your goals for 2014, and how do you hope to achieve them?

Self-Deploying Code

I'm reading Our Final Invention by James Barrat right now, about the dangers of artificial intelligence. I just got to a chapter in which he argues that any reasonably complex artificial general intelligence (AGI) is going to want to control its own resources: e.g. if it has a goal, even a simple goal like playing chess, it will be able to achieve its goal better with more computing resources, and won't be able to achieve its goal at all if it's shut off. (Similar themes exist in all of my novels.)

This made me snap back to a conversation I had last week at my day job. I'm a web developer, and my current project, without giving too much away, is a RESTful web service that runs workflows composed of other RESTful web services.

We're currently automating some of our operational tasks. For example, when our code passes unit tests, it's automatically deployed. We'd like to expand on that so that after deployment, it will run integration tests, and if those pass, deploy up to the next stack, and then run performance tests, and so on.
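The promotion logic we have in mind can be sketched as a gate-per-stage loop: each environment has a check that must pass before the build is deployed there. This is a minimal illustration, not our actual deployment code; the stage names and checks are hypothetical:

```python
# Minimal sketch of staged promotion: each stage is gated by a check
# (unit tests, integration tests, performance tests, etc.).
# Stage names and checks below are hypothetical placeholders.

def run_pipeline(stages, checks):
    """Promote a build through stages in order.

    stages: ordered list of environment names.
    checks: dict mapping stage name to a zero-argument function that
            returns True if the build may be deployed to that stage.
    Returns the list of stages the build was actually deployed to.
    """
    deployed = []
    for stage in stages:
        if not checks[stage]():
            break  # a failing check stops promotion to later stacks
        deployed.append(stage)
    return deployed

# Pretend unit and integration tests pass, but performance tests fail,
# so the build stops before reaching production.
checks = {
    "dev": lambda: True,
    "staging": lambda: True,
    "production": lambda: False,
}

print(run_pipeline(["dev", "staging", "production"], checks))
```

The point of structuring it this way is that adding a new gate (say, a security scan) is just another entry in the dict, not new plumbing.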

Although we're running on a cloud provider, it's not AWS, and they don't support autoscaling, so another automation task is rolling our own scaling solution.

Then we realized that running tests, deployments, and scaling all require calling RESTful JSON APIs, and that's exactly what our service is designed to do. So the logical solution is that our software will test itself, deploy itself, and autoscale itself.
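Since a workflow in our service is just an ordered list of JSON API calls, the self-deployment idea reduces to feeding the service a workflow whose steps point back at its own endpoints. A toy sketch of that shape, with a fake transport standing in for real HTTP and entirely hypothetical step names and URLs:

```python
# Sketch of a workflow runner whose steps are JSON API calls, including
# calls back into the service's own endpoints. All names and URLs are
# hypothetical; the transport is faked so the example is runnable.

def run_workflow(steps, call):
    """Execute steps in order.

    call(method, url, body) performs the request and returns parsed JSON.
    Returns a dict of step name -> response.
    """
    results = {}
    for step in steps:
        results[step["name"]] = call(step["method"], step["url"],
                                     step.get("body"))
    return results

def fake_call(method, url, body):
    # Stand-in for a real HTTP client; echoes back the target URL.
    return {"ok": True, "url": url}

# The service testing, deploying, and scaling itself is just a workflow
# whose targets happen to be its own operational endpoints.
workflow = [
    {"name": "integration_tests", "method": "POST", "url": "/tests/integration"},
    {"name": "deploy_next_stack", "method": "POST", "url": "/deploy/staging"},
    {"name": "scale_up", "method": "POST", "url": "/scale",
     "body": {"servers": 4}},
]

print(run_workflow(workflow, fake_call))
```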

That's an awful lot like the kind of resource control that James Barrat was writing about.

Recent Rate of Computer Processing Growth

I was having a discussion with a group of writers about the technological singularity, and several asserted that the rate of increasing processor power was declining. They backed it up with a chart showing that the increase in MIPS per unit of clock speed stalled about ten years ago.

If computer processing speeds fail to increase exponentially, as they have for the last forty years, this will throw off many different predictions for the future and dramatically decrease the likelihood of human-grade AI arising.

I did a bit of research last night and this morning. Using the chart of historical computer speeds from Wikipedia, I placed a few key intervals in a spreadsheet and found:
  • From 1972 to 1985: MIPS grew by 19% per year.
  • From 1985 to 1996: MIPS grew by 43% per year.
  • From 1996 to 2003: MIPS grew by 51% per year.
  • From 2003 to 2013: MIPS grew by 29% per year.
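The per-year figures above are compound annual growth rates: take the ratio of ending MIPS to starting MIPS, raise it to one over the number of years, and subtract one. A quick sketch with an illustrative data point (not a value from the Wikipedia table):

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1

def cagr(start, end, years):
    """Annualized growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# Illustrative: a chip 40x faster after 10 years grew ~44.6% per year.
rate = cagr(1.0, 40.0, 10)
print(round(rate * 100, 1))  # → 44.6
```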

By no means is the list of MIPS ratings exhaustive, but it does give us a general idea of what's going on. The data shows the rate of CPU speed increases has declined in the last ten years.

I split up the last ten years:
  • From 2003 to 2008: MIPS grew by 53% per year.
  • From 2008 to 2013: MIPS grew by 9% per year.
According to that, the decline in processing rate increases is isolated to the last five years.

Five years isn't much of a long term trend, and there are some processors missing from the end of the table. The Intel Xeon X5675, a 12-core processor, isn't shown, and it's twice as powerful as the Intel Core i7 4770k that's the bottom row on the MIPS table. If we substitute the Xeon processor, we find the growth rate from 2008 to 2012 was 31% annually, a more respectable improvement.

However, I've been tracking technology trends for a while (see my post on How to Predict the Future), and I try to use only those computers and devices I've personally owned. There's always something faster out there, but it's not what people have in their home, which is what I'm interested in.

I also know that my device landscape has changed over the last five years. In 2008, I had a laptop (Windows Intel Core 2 T7200) and a modest smartphone (a Treo 650). In 2013, I have a laptop (MBP 2.6 GHz Core i7), a powerful smartphone (Nexus 5), and a tablet (iPad Mini). I'm counting only my own devices and excluding those from my day job as a software engineer.

It's harder to do this comparison, because there's no one common benchmark among all these processors. I did the best I could to determine DMIPS for each, converting GeekBench scores for the Mac, and using the closest available processor for mobile devices that had a MIPS rating.

When I compared my personal device growth in combined processing power, I found it increased 51% annually from 2008 to 2013, essentially the same rate as for the longer period 1996 through 2011 (47%), which is what I use for my long-term predictions.
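The combined-device comparison is the same growth-rate calculation applied to the sum of each year's device DMIPS. The numbers below are placeholders to show the arithmetic, not my actual benchmark figures:

```python
# Sum each year's device DMIPS, then take the compound annual growth
# rate of the totals. All DMIPS values here are illustrative
# placeholders, not real benchmark results.

devices_2008 = {"laptop": 10_000, "phone": 50}
devices_2013 = {"laptop": 80_000, "phone": 5_000, "tablet": 3_000}

total_2008 = sum(devices_2008.values())
total_2013 = sum(devices_2013.values())

growth = (total_2013 / total_2008) ** (1 / 5) - 1
print(f"{growth:.0%} per year combined growth")
```

Note that adding device categories (the tablet didn't exist in the 2008 total) inflates the combined rate beyond what any single device achieved, which is part of the point: the computing a person owns grows faster than any one processor.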

What does all this mean? Maybe there is a slight slow-down in the rate at which computing processing is increasing. Maybe there isn't. Maybe the emphasis on low-power computing for mobile devices and server farms has slowed down progress on top-end speeds, and maybe that emphasis will contribute to higher top-end speeds down the road. Maybe the landscape will move from single-devices to clouds of devices, in the same way that we already moved from single cores to multiple cores.

Either way, I'm not giving up on the singularity yet.