Last year I read the manuscript for Eliot Peper’s then unpublished Uncommon Stock: Version 1.0 and loved it. FG Press, the publishing company launched by Brad Feld, went on to publish it, and it got a lot of great press because it blended the world of tech startups with a conspiracy thriller. It’s the first book in a three-book series.

As soon as I read it, I asked Eliot what he was working on, and he mentioned a different writing project. I can’t remember my exact words, but I told him to drop everything and get to work on book two. He did, and FG Press published Uncommon Stock: Power Play right around Christmas, and it’s been sitting on my bedside table since.

Well, I read it this weekend and loved it, even more than book one. I tore through the last three-quarters of the book. The stakes have really been raised for Mara Winkel and her financial fraud detection startup as they identify one of the largest money laundering rings in the world. Once again, it’s an awesome blend of thriller and tech startup novel.

If you haven’t done so, go buy a copy. But start with Uncommon Stock: Version 1.0 if you’re new to the series.

Eliot, I hope you’re writing book 3 right now. :)

A few weeks ago my friend and fellow science fiction writer, Ramez Naam, posted a link to an article debunking some myths about bulletproof coffee. Then today I noticed a link on Reddit about a professor at Kansas State University who went on a convenience store diet of Twinkies to prove that counting calories is what matters most in weight loss, not the nutritional value of food.

I am all for science, and I love to understand exactly how things work and why things have the effect they do. But often I think that in our zeal to get to the truth we overlook the practical question of what actually works.

Let me give you an example. If you ask a dentist whether it’s better to floss before you brush your teeth or after you brush your teeth, they’ll tell you it doesn’t matter. Both are equally effective at preventing dental problems. However, if you look at how many people continue flossing, that is, they stay in compliance with the regimen of flossing, then you find there is a difference. People who floss after they brush their teeth are more likely to continue flossing. (Sorry, I read this a year or two ago, and can’t find a link now.)

Why is this important? Well, if you look at diets, the most important factor in weight loss is not how effective the diet is, but how compliant people are. It’s easy to start a diet, hard to stay on it. Staying on it is the challenge for most people.

Perhaps we could, in theory, eat exactly 1200 calories of Twinkies every day and lose weight, but in practice how likely are we to continue counting calories meal after meal, week after week?

I think the value that people get out of approaches like bulletproof coffee, low-carb diets, or other structural approaches to dieting (in which the emphasis is on eliminating certain foods rather than counting calories) is that for some people those diets are easier to stick with. This moves us out of the realm of basic chemical/biological science (which is how you might measure the effectiveness of a diet), and into the realm of psychology (which is probably where the majority of compliance comes from).

But even if we evaluate diets for compliance, it doesn’t mean there’s one best solution for all people. Some people might do really well with one diet, and other people do better with a different one. We all have different favorite foods, eating habits, and tolerance for eating the same foods over and over. For some people, a low-carb diet might work really well, others might like to replace breakfast with bulletproof coffee, still others use exercise, some count calories, and some blend multiple approaches.

So when we see a piece of research that says counting calories is what matters most in weight loss, we know that it’s wrong. What matters is the combination of compliance (whether people can stick to the diet for whatever period of time is necessary) and effectiveness (how much weight is lost when you’re in compliance).

Yes, we need science and the understanding of fundamental principles and theory. But we also need to know how things work in practice, and not just in a general population, but specifically for us.

The way we get there is through personal experimentation. Be willing to try something (within reason, of course) for a period of time and see how it works. If it doesn’t work for you, it doesn’t matter that science says that it works for 80% of people. It only matters that it doesn’t work for you. Learn that it doesn’t work, and then move on to a different trial.

Conversely, if something is working for you, then it doesn’t matter if science can’t explain it. It’s working. Don’t mess with it.

When I was a kid, my first computer was a TRS-80 Micro Color Computer 2. It wasn’t the big Trash-80 that most people had. It was a tiny thing, with a chiclet keyboard and an expansion port on the back that let you upgrade from 4 KB of memory to 20 KB. I think it cost $99, plus another $20 or $30 for the memory expansion.

When I was 16, I got an Apple IIe. It had seven expansion slots, which could be used to upgrade memory, add storage and video capabilities, or add modems. (I had seven modems and was running a chat system, but that’s another story.) My next computer was an Amiga 1000. Although it wasn’t designed for upgradability, I bought an expansion kit, a daughter board that plugged into the CPU socket and allowed me to upgrade to 1.5 MB of RAM. Later I bought another expansion kit, another daughter board, that allowed me to replace the 8 MHz CPU with a 16 MHz CPU. I was able to attach three disk drives, and I had an expansion port that would have allowed me to connect a SCSI hard drive if I could’ve afforded one.

After the Amiga 1000, I had a series of IBM PCs and compatibles from 1989 to 2008. What defined the PCs was the ability to build them completely from scratch and upgrade components as needed. The metal chassis, the box that housed the computer, might need to be upgraded every 10 years or so. The motherboard might be upgraded every four years. The RAM, hard drives, and CPU might be upgraded every two years. This was far more environmentally friendly and cost-effective than buying a new computer every three years.

In 2009 or so, I started using Macs. I love OS X, the Mac operating system. And I love most of the applications that run on the Mac. It’s far more stable than Windows, lower maintenance, and often easier to use. Because it’s built on UNIX, I can use all the best programming tools.

However, the Macs I’m buying are laptops, and laptops are inherently less upgradable. That isn’t to say they’re not upgradable at all. Over the Christmas break I upgraded the older MacBook Pro laptops in our house. In both cases I replaced the magnetic platter hard drive with a much faster SSD and upgraded the memory: in one case doubling the memory, and in the other quadrupling it.

It was an easy upgrade to do. It took about five minutes to open the case and replace the memory, and maybe another five minutes to replace the hard drive. I could have chosen to restore everything from Time Machine, which would’ve been very quick, but in this case I chose to rebuild the operating system and applications from scratch to get a clean install.

By doing this upgrade on these three- or four-year-old computers, I just gave them at least another three or four years of life. Again, this is environmentally friendly and economically the best approach. It cost about $200 to upgrade one Mac and about $300 to upgrade the other. To buy a comparable machine would have cost between $1,000 and $1,500.

Now for the bad news. The two most recent laptop purchases in our house were Retina MacBook Pros. These are the extra-thin models that don’t have a CD drive. They also don’t have upgradable hard drives or memory, which means you’re stuck with whatever configuration you buy. There’s no way to upgrade them, no way to extend their life. Yes, they are beautiful, sleek, lightweight machines. But from an environmental lifecycle and cost perspective, they are inferior to their predecessors.

I can somewhat understand cheap electronics, things that cost under $100 or $200, being non-upgradable and simply replaced at the end of their life. But for computers that cost $1,000 or more, and embody substantial environmental impact, it is irresponsible and shortsighted not to make them upgradable. I hope that we’ll see a return to upgradable computers in the future.

I saw The Imitation Game with Erin last night. This is the movie based on the life of Alan Turing, the British mathematician who helped break Enigma, conceived of general-purpose computing (à la Turing machines), and is famous for the concept of the Turing test.

The Turing test, of course, was part of the inspiration for the title The Turing Exception for my new novel.

I knew a bit about Alan Turing from past reading and studies, and I was also lucky enough to see George Dyson, author of Turing’s Cathedral, speak at the Defrag conference in November. George Dyson is a science historian and brother of technology analyst Esther Dyson. He gave a great keynote presentation at Defrag, and I got to spend an evening chatting with him about Alan Turing, early physicists and mathematicians, the war effort, technology, artificial intelligence, and the singularity. In all, it was a fabulous discussion spanning many topics.

So I was quite excited to see The Imitation Game. From some reviews I glimpsed, it appears the movie isn’t 100% true to the historical record. But having not yet read Turing’s Cathedral, and it having been a while since I studied the details of that time, I was able to enjoy the movie without worrying about technical inaccuracies. I’d call this a must-see for anyone who has an interest in the origin of computers or cryptography.

I can be pretty sensitive to movies, so I ended up pretty emotional and crying at the end of the film. Alan Turing was a brilliant mathematician whom we lost at the age of 41 because of the way he was treated for being homosexual.

Having seen the film, I’m now excited to go read Turing’s Cathedral.

When I related the general outline of the story (minus the persecution) to my kids, they were pretty interested and wanted to know if we could create an Enigma machine. There’s a great one-page PDF paper Enigma machine that allows you to perform the basics of rotor encryption.
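For anyone who wants to take it a step further, here’s a minimal single-rotor sketch in Python. It’s nowhere near a real Enigma, which used three rotors, a reflector, and a plugboard; it just shows the stepping-substitution idea the paper model demonstrates. The function names and the example message are mine, purely for illustration.

    import string

    ALPHABET = string.ascii_uppercase

    def rotor_encrypt(text, wiring, start=0):
        # Substitute each letter through the rotor wiring, stepping the
        # rotor one position after every letter (the heart of rotor encryption).
        rotor = dict(zip(ALPHABET, wiring))
        offset = start
        out = []
        for ch in text.upper():
            if ch not in ALPHABET:
                out.append(ch)          # pass spaces/punctuation through untouched
                continue
            shifted = ALPHABET[(ALPHABET.index(ch) + offset) % 26]
            out.append(rotor[shifted])
            offset = (offset + 1) % 26  # the rotor advances after each letter
        return "".join(out)

    def rotor_decrypt(text, wiring, start=0):
        # Reverse the substitution using the inverted wiring.
        inverse = {wired: plain for plain, wired in zip(ALPHABET, wiring)}
        offset = start
        out = []
        for ch in text.upper():
            if ch not in ALPHABET:
                out.append(ch)
                continue
            shifted = inverse[ch]
            out.append(ALPHABET[(ALPHABET.index(shifted) - offset) % 26])
            offset = (offset + 1) % 26
        return "".join(out)

    # Any permutation of the alphabet works as a wiring; this one is the
    # commonly cited wiring of the historical Enigma Rotor I.
    WIRING = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"

    secret = rotor_encrypt("MEET AT NOON", WIRING)
    assert rotor_decrypt(secret, WIRING) == "MEET AT NOON"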

Unfortunately, thanks to a business trip in my day job and some bad ergonomics while traveling, I’m struggling with a bout of chronic tendinitis again. So I’ve made an intentional choice to stay away from the keyboard as much as possible.

As part of that practice I bought the new version of Dragon Dictate for the Mac. I’m glad to say it’s vastly improved over older versions. I first used Dragon Dictate in 2002 or so, when I first had tendinitis issues related to my day job in computer programming. Back then it was sort of comically wrong. You’d dictate a paragraph of text and maybe 50% would be right.

But the new version is quite astounding. I’ve dictated several blog posts and it’s made zero gross errors. There have been a few small errors, where I have either failed to say what I meant, slurred some words together, or used words or phrases that were very uncommon (like “The Turing Exception” or “Patreon”).

If you used speech recognition about 10 years ago, then you know that between the lower accuracy and the problems with correcting text, it often became a comedy of errors trying to get what you wanted onto the page. But today, it’s easy enough to just dictate and then make a few simple corrections at the end.

Just a few years ago I investigated Dragon Dictate for the Mac, but the version out at the time was apparently very buggy, according to reviews. The current version seems pretty darn solid and fun to use.

If you’re struggling with any kind of repetitive stress injury, give speech recognition a try again even if you had bad results in the past.

It’s been a while since my last post. I spent most of December working toward the final edits on The Turing Exception.

After two rounds of beta reader feedback and edits, I’m feeling pretty good about the way book four ended up. The manuscript is currently with my copy editor, and I should get it back in a few weeks. Then I’ll make a few more changes and send it for a round of proofreading. Finally, there will be interior layout for the print edition and formatting for the e-book. Hopefully all that will happen by sometime in February, leading to a release by late February if possible.

Also, if you’ve been paying close attention, you’ll notice the title changed slightly. My friend Mike suggested Turing’s Exception as an idea, and that was better than any of the dozens of ideas I’d considered. But then I tested three different variations (Turing’s Exception, The Turing Exception, and Turing Exception), and The Turing Exception was vastly preferred, by about 38 out of 40 people in a poll.

As I’ve mentioned before, Patreon supporters will receive their e-books before the public release, just as soon as I can make them available. Patreon supporters at the five dollar level and above will receive their signed paperback around the time of the public release. This is because the paperback books are just not available any earlier.

You might be wondering why I have a Patreon campaign. The economics of writing are such that I still have to hold a day job in addition to selling books. Except for a few bestsellers, most writers are unable to support themselves solely by writing books.

Have you heard of the Kevin Kelly essay 1,000 True Fans? The core idea is that it’s possible for an artist, writer, or creator to support themselves if they can create $100 worth of product per year and have 1,000 fans who will buy that product. 1,000 fans times $100 equals $100,000, and therefore approximately a full-time living.

The challenge is that it’s hard for a writer to create a hundred dollars’ worth of product per year. I net about $2.50 per book sold, and I can publish about one per year. Even with 10,000 or 20,000 fans, that’s not a full-time income. So the idea with Patreon is to have a closer relationship with a few people, share more of what I’m creating, create some special rewards just for supporters, and hopefully get to the point where writing can support me full-time, enabling me to write more than I do today.
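As a rough back-of-the-envelope sketch of that math (the numbers are just the ones above, and the little Python function is purely illustrative):

    def fans_needed(target_income, net_per_item, items_per_year):
        # How many fans, each buying everything you release in a year,
        # does it take to reach a target annual income?
        return target_income / (net_per_item * items_per_year)

    # Kevin Kelly's scenario: $100 worth of product per fan per year
    print(fans_needed(100000, 100, 1))    # 1,000 fans

    # My situation: roughly $2.50 net per book, about one book per year
    print(fans_needed(100000, 2.50, 1))   # 40,000 fans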

I hope that you had a wonderful holiday and happy new year. I wish you the best in 2015.

The German edition of Avogadro Corp is available for preorder from Amazon:

http://www.amazon.de/Avogadro-Corp-Gewalt-k%C3%BCnstlichen-Intelligenz-ebook/dp/B00PN7Z36Q/

It releases in paperback and Kindle on December 9th. If you or a friend read German, I hope you’ll check it out.

The success of this translation will be helpful in getting the rest of the series translated to German, and all of my books translated to other languages.


With so many discussions happening about the risks and benefits of artificial intelligence (AI), I really want to collect data from a broader population to understand what others think about the possible risks, benefits, and development path.

Take the Survey

I’ve created a nine-question survey that takes less than six minutes to complete. Please help contribute to our understanding of the perception of artificial intelligence by completing the survey.

Take the survey now

Share the Survey

I hope you’ll also share the survey with others. The more responses we get, the more useful the data becomes.

Share AI survey on Twitter

Share AI survey on Facebook

Share the link: https://www.surveymonkey.com/s/ai-risks

Thanks to Elon Musk’s fame and his concerns about the risks of AI, it seems like everyone’s talking about it.

One difficulty that I’ve noticed is the lack of agreement on exactly what risk we’re talking about. I’ve had several discussions in just the last few days, both at the Defrag conference in Colorado and online.

One thing I’ve noticed is that the risk naysayers tend to say “I don’t believe there is risk due to AI”. But when you probe them further, what they are often saying is “I don’t believe there is existential risk from a skynet scenario due to a super-intelligence created from existing technology.” The second statement is far narrower, so let’s dig into the components of it.

Existential risk is defined by Nick Bostrom as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Essentially, we’re talking about either the extinction of humankind or something close to it. However, most of us would agree that there are very bad outcomes that are nowhere near an existential risk. For example, about 4% of the global population died in WWII. That’s not an existential risk, but it’s still horrific by anybody’s standards.

Runaway AI, accelerating super-intelligence, or hard takeoff are all terms that refer to the idea that once an artificial intelligence is created, it will recursively improve its own intelligence, becoming vastly smarter and more powerful in a matter of hours, days, or months. We have no idea if this will happen (I don’t think it’s likely), but simply because we don’t have a hard takeoff doesn’t mean that an AI would be stagnant or lack power compared to people. There are many different ways even a modest AI with the creativity, motivation, and drive equivalent to that of a human could affect a great deal more than a human could:

  • Humans can type 50 words a minute. AI could communicate with tens of thousands of computers simultaneously.
  • Humans can drive one car at a time. AI could fly all the world’s airplanes simultaneously.
  • Humans can trip one circuit breaker. AI could trip all the world’s circuit breakers.
  • Humans can reproduce a handful of times over the course of a lifetime. AI could reproduce millions of times over the course of a day.
  • Humans evolve over the course of tens of thousands of years or more. Computers become 50% more powerful each year.

So for many reasons, even if we don’t have a hard takeoff, we can still have AI actions and improvement that occur far faster, and with far wider effect than we humans are adapted to handling.

Skynet scenario, terminator scenario, or killer robots are terms that refer to the idea that AI could choose to wage open warfare on humans using robots. This is just one type of risk among many different possibilities. Other ways that AI could harm us include deliberate mechanisms, like trying to manipulate us by controlling the information we see, killing off particular people who pose threats, or extorting us to deliver services it wants. This idea of manipulation is important, because while death is terrible, the loss of free will is pretty bad too.

Frankly, most of those seem silly or unlikely compared to unintentional harm that AI could cause: the electrical grid could go down, transportation could stop working, our home climate control could stop functioning, or a virus could crash all computers. If these don’t seem very threatening, consider…

  • What if one winter, for whatever reason, homes wouldn’t heat? How many people would freeze to death?
  • Consider that Google’s self-driving car doesn’t have any manual controls. It’s the AI or it’s no-go. More vehicles will move in this direction, especially all forms of bulk delivery. If all transportation stopped, how would people in cities get food when their 3-day supply runs out?
  • How long can those city dwellers last without fresh water if pumping stations are under computer control and they stop?

Existing technology: Some will argue that because we don’t have strong AI (i.e., human-level intelligence or better) now, there’s no point in even talking about risk. However, this sounds like “Let’s not build any asteroid defenses until we clearly see an asteroid headed for Earth.” It’s far too late by then. Similarly, once the AI is here, it’s too late to talk about precautions.

In conclusion, if you have a conversation about AI risks, be clear about what you’re talking about. Frankly, all of humanity being killed by robots under the control of a super-intelligent AI doesn’t even seem worth talking about compared to all of the more likely risks. A better conversation might start with a question like this:

Are we at risk of death, manipulation, or other harm from future AI, whether deliberate or accidental, and if so, what can we do to decrease those risks?

This presentation by Sarah Bird was one of the highlights of #DefragCon. I really loved what she said and all the data she shared.

How to Build a B2B Software Company Without a Sales Team
Sarah Bird, CEO Moz — @SarahBird

  • Moz
    • $30M/year revenue
    • growing from 2007 to current day
    • Moz makes software that helps marketing professionals
  • Requirements for selling B2B software without a sales team
    • A nearly frictionless funnel
      • I hate asking for money
      • We made a company that rarely asks you for money
      • People find our community through our Google and social shares.
        • they enjoy our free content: helpful, beautiful.
        • Q&A section.
        • mozinars: webinars to learn about SEO, etc.
      • Eventually, you may sign up for a free trial. 85% sign up for a free trial.
      • customers visit us 8 times before signing up for a free trial.
      • Moz subscription: $99/month is the most popular (and cheapest) plan
    • Large, Passionate Community
      • We’ve had a community for 10 years.
      • We were a community first. Started as a blog about SEO
      • Content is co-created and curated by the community.
      • Practice what we preach.
      • 800k marketers have joined the Moz community.
      • Come for the content, stay for the software.
      • No salespeople, but really good community managers.
        • Their job is to foster an inclusive and generous environment to learn about marketing.
    • Big Market
      • if you’re going after a small market, just hire someone to go talk to those people.
    • Low CAC & COGs business model
      • Cost of Customer Acquisition
      • Avg customer lifetime value: $980
      • average customer lifetime: 9 months
      • fully-loaded CAC: $137
      • approximate cost of providing service: $21/month
      • payback period: month 2
      • Customer Lifetime Value is on the low-end
        • moz: $980
        • constant contact: $1500
        • but we have the highest CLTV/cost ratio
        • cost
          • moz: $137
          • constant contact: $650
    • Rethink Retention
      • Churn is very high in the first 3 months: 25% / 15% / 8%
      • But by month 4, churn stabilizes. Now you are a qualified customer.
      • Looking at the first 3 months, churn is composed of:
        • People I’m going to lose no matter what I do; they are not the target customer.
        • People I should be keeping, but I’m not.
        • People who I will keep even if I don’t spend effort on them. They “got it” right away.
      • Don’t worry about the first group. They are not the target customer; let them go.
      • The second group keeps me up at night.
      • The third group: don’t worry about them either.
      • You must know how to tell these groups apart, especially with respect to their feedback. Feedback from the first group should be ignored!
    • Heart-Centered, Authentic, Customer Success
      • Need an awesome customer support team. We don’t have salespeople up front; instead, we treat customers really well once they are paying us.
      • We don’t try to use robots to save money.
      • We talk to the customers, visit their websites, suggest improvements.
      • We don’t have a storefront or physical presence, so how do we make the relationships longer and stronger? We send out happy packets of Moz fun stuff.
  • Benefits
    • Your community is a flywheel.
      • it takes time to get up to speed.
      • once the flywheel starts spinning, the community starts to create itself.
      • now moz is just the stewards of the community.
      • it’s like hosting a really great house-party of respectful guests.
      • it’s an incredible barrier to entry for competitors.
        • there’s no shortcut, no way to buy into this.
    • Low Burn rate helps when the economy goes in the shitter.
      • no sales team means less burn.
      • less capital required.
      • easier to self-fund.
      • no commissions to calculate.
    • the strategy generates lots of predictable recurring revenue: 96% of revenue is recurring.
    • risk is distributed across a broad customer base. even if the best customer leaves, it’s no big deal.
    • we can pour more dollars into R&D
  • Caveats
    • No magic growth lever: can’t just scale from 5 salespeople to 10 salespeople.
    • Will public markets and VCs continue to prize growth rate over burn rate?
  • Future of B2B Sales
    • Every business is a publisher.
    • Every business has a community.
    • Are you managing it?
    • Increased transparency around quality and pricing.
      • should lead to more corporate accountability.
    • Multi-channel, customer driven contact
    • customers want shorter contract cycles. Nobody wants to be locked into anything anymore.
    • Software sales begin with the people who use the software. They advocate to the C-suite.
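To make the unit economics above concrete, here’s a quick back-of-the-envelope calculation in Python using the figures from Sarah’s slides (my own variable names; these are the standard CAC/CLTV formulas, not necessarily how Moz computes its numbers):

    # Rough SaaS unit economics from the figures above (illustrative only)
    price_per_month = 99     # most popular Moz plan
    cost_to_serve = 21       # approximate cost of providing the service per month
    cac = 137                # fully-loaded customer acquisition cost
    cltv = 980               # average customer lifetime value cited in the talk

    gross_margin_per_month = price_per_month - cost_to_serve  # $78
    payback_months = cac / gross_margin_per_month             # ~1.8, i.e. paid back in month 2

    print("payback (months):", round(payback_months, 1))
    print("CLTV / CAC:", round(cltv / cac, 1))                # ~7.2, vs ~2.3 for Constant Contact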