The German edition of Avogadro Corp is available for preorder from Amazon:


It releases in paperback and Kindle on December 9th. If you or a friend read German, I hope you’ll check it out.

The success of this translation will help get the rest of the series translated into German, and all of my books translated into other languages.


With so many discussions happening about the risks and benefits of artificial intelligence (AI), I really want to collect data from a broader population to understand what others think about the possible risks, benefits, and development path.

Take the Survey

I’ve created a nine-question survey that takes less than six minutes to complete. Please help contribute to our understanding of the perception of artificial intelligence by completing the survey.

Take the survey now

Share the Survey

I hope you’ll also share the survey with others. The more responses we get, the more useful the data becomes.

Share AI survey on Twitter

Share AI survey on Facebook

Share the link: https://www.surveymonkey.com/s/ai-risks

Thanks to Elon Musk’s fame and his concerns about the risks of AI, it seems like everyone’s talking about it.

One difficulty I’ve noticed is a lack of agreement on exactly what risk we’re talking about. I’ve had several discussions in just the last few days, both at the Defrag conference in Colorado and online.

One thing I’ve noticed is that the risk naysayers tend to say “I don’t believe there is risk due to AI.” But when you probe them further, what they are often saying is “I don’t believe there is existential risk from a Skynet scenario due to a super-intelligence created from existing technology.” The second statement is far narrower, so let’s dig into its components.

Existential risk is defined by Nick Bostrom as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Essentially, we’re talking about either the extinction of humankind, or something close to it. However, most of us would agree that there are very bad outcomes that are nowhere near an existential risk. For example, about 4% of the global population died in WWII. That’s not an existential risk, but it’s still horrific by anybody’s standards.

Runaway AI, accelerating super-intelligence, and hard takeoff are all terms that refer to the idea that once an artificial intelligence is created, it will recursively improve its own intelligence, becoming vastly smarter and more powerful in a matter of hours, days, or months. We have no idea if this will happen (I don’t think it’s likely), but simply because we don’t have a hard takeoff doesn’t mean that an AI would be stagnant or lack power compared to people. There are many ways in which even a modest AI, with creativity, motivation, and drive equivalent to a human’s, could affect a great deal more than a single human could:

  • Humans can type 50 words a minute. AI could communicate with tens of thousands of computers simultaneously.
  • Humans can drive one car at a time. AI could fly all the world’s airplanes simultaneously.
  • Humans can trip one circuit breaker. AI could trip all the world’s circuit breakers.
  • Humans can reproduce a handful of times over the course of a lifetime. AI could reproduce millions of times over the course of a day.
  • Humans evolve over the course of tens of thousands of years or more. Computers become 50% more powerful each year.

So for many reasons, even if we don’t have a hard takeoff, we can still have AI actions and improvements that occur far faster, and with far wider effect, than we humans are adapted to handling.
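To put the last bullet of that list in perspective, 50% annual improvement compounds dramatically. A back-of-the-envelope sketch (the 50% figure is from the list above; the time horizons are mine):

```python
# Compound growth at 50% per year: how much more capable do
# computers become over human-relevant timescales?
growth_per_year = 1.5

for years in (10, 20, 30):
    factor = growth_per_year ** years
    print(f"after {years} years: {factor:,.0f}x")
# after 10 years: 58x
# after 20 years: 3,325x
# after 30 years: 191,751x
```

Meanwhile, human capabilities over the same 30 years stay essentially flat.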

Skynet scenario, Terminator scenario, and killer robots are terms that refer to the idea that AI could choose to wage open warfare on humans using robots. This is just one of many possible risks. Other ways that AI could harm us include deliberate mechanisms, like trying to manipulate us by controlling the information we see, killing off particular people who pose threats, or extorting us to deliver services they want. This idea of manipulation is important, because while death is terrible, the loss of free will is pretty bad too.

Frankly, most of those seem silly or unlikely compared to unintentional harm that AI could cause: the electrical grid could go down, transportation could stop working, our home climate control could stop functioning, or a virus could crash all computers. If these don’t seem very threatening, consider…

  • What if one winter, for whatever reason, homes wouldn’t heat? How many people would freeze to death?
  • Consider that Google’s self-driving car doesn’t have any manual controls. It’s the AI or it’s no-go. More vehicles will move in this direction, especially all forms of bulk delivery. If all transportation stopped, how would people in cities get food when their 3-day supply runs out?
  • How long can those city dwellers last without fresh water if pumping stations are under computer control and they stop?

Existing technology: Some will argue that because we don’t have strong AI (i.e., human-level intelligence or better) now, there’s no point in even talking about risk. However, this sounds like “Let’s not build any asteroid defenses until we clearly see an asteroid headed for Earth.” It’s far too late by then. Similarly, once the AI is here, it’s too late to talk about precautions.

In conclusion, if you have a conversation about AI risks, be clear about what you’re talking about. Frankly, all of humanity being killed by robots under the control of a super-intelligent AI doesn’t even seem worth talking about compared to all of the more likely risks. A better conversation might start with a question like this:

Are we at risk of death, manipulation, or other harm from future AI, whether deliberate or accidental, and if so, what can we do to decrease those risks?

This presentation by Sarah Bird was one of the highlights of #DefragCon. I really loved what she said and all the data she shared.

How to Build a B2B Software Company Without a Sales Team
Sarah Bird, CEO of Moz — @SarahBird

  • Moz
    • $30M/year revenue
    • growing from 2007 to current day
    • Moz makes software that helps marketing professionals
  • Requirements for selling B2B software without a sales team
    • A nearly frictionless funnel
      • i hate asking for money
      • we made a company that rarely asks you for money
      • People find our community through our Google and social shares.
        • they enjoy our free content: helpful, beautiful.
        • Q&A section.
        • mozinars: webinars to learn about SEO, etc.
      • eventually, you may sign up for a free trial. 85% of customers sign up for a free trial.
      • customers visit us 8 times before signing up for a free trial.
      • moz subscription: $99/month is most popular (and cheapest) plan
    • Large, Passionate Community
      • We have had a community for 10 years.
      • We were a community first. Started as a blog about SEO
      • Content is co-created and curated by the community.
      • Practice what we preach.
      • 800k marketers joined moz community.
      • Come for the content, stay for the software.
      • No sales people, but really good community managers.
        • their job is to foster an inclusive and generous environment to learn about marketing.
    • Big Market
      • if you’re going after a small market, just hire someone to go talk to those people.
    • Low CAC & COGS business model
      • Cost of Customer Acquisition
      • Avg customer lifetime value: $980
      • average customer lifetime: 9 months
      • fully-loaded CAC: $137
      • approximate cost of providing service: $21/month
      • payback period: month 2
      • Customer Lifetime Value is on the low-end
        • moz: $980
        • constant contact: $1500
        • but we have the highest CLTV/cost ratio
        • cost
          • moz: $137
          • constant contact: $650
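The numbers in this section can be sanity-checked with the standard SaaS formulas. A sketch (the dollar figures are from the talk; the gap between 9 × $99 and the quoted $980 CLTV is presumably plan mix):

```python
# Sanity check of the unit economics quoted in the talk, using the
# standard SaaS definitions. Dollar figures are from the slides.

monthly_revenue = 99      # most popular (and cheapest) Moz plan, $/month
avg_lifetime_months = 9   # average customer lifetime
cogs_per_month = 21       # approximate cost of providing the service
cac = 137                 # fully-loaded cost of customer acquisition

# Lifetime value as revenue over the average lifetime.
cltv = monthly_revenue * avg_lifetime_months
print(cltv)  # 891 -- near the quoted $980; the gap is presumably plan mix

# Payback: first month whose cumulative gross margin covers CAC.
margin_per_month = monthly_revenue - cogs_per_month
payback_month = -(-cac // margin_per_month)  # ceiling division
print(payback_month)  # 2 -- matches "payback period: month 2"
```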
    • Rethink Retention
      • Churn is very high in the first 3 months: 25% / 15% / 8%
      • But by month 4, churn stabilizes. Now they are qualified customers.
      • Looking at the first 3 months, the cohort is composed of:
        • People I’m going to lose no matter what i do. they are not target customer.
        • people i should be keeping, but i’m not.
        • people who i will keep even if i don’t spend effort on them. they “got it” right away.
      • Don’t worry about the first group. they are not the target customer. let them go.
      • second group: keeps me up at night.
      • you must know how to tell these groups apart, especially with respect to their feedback. feedback of the first group should be ignored!
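A quick sketch of how those churn rates play out for a cohort (the 25% / 15% / 8% rates are from the talk; holding churn at 8% from month 3 on is my reading of “churn stabilizes”):

```python
# How the quoted churn rates play out for a cohort of 100 new
# customers. The 25% / 15% / 8% rates are from the talk; the flat
# 8% tail is an assumption based on "by month 4, churn stabilizes."

monthly_churn = [0.25, 0.15, 0.08, 0.08, 0.08, 0.08]

customers = 100.0
surviving = []
for rate in monthly_churn:
    customers *= (1 - rate)
    surviving.append(customers)

# Roughly 75 remain after month 1, ~64 after month 2, ~59 after
# month 3 -- and from there the curve flattens out.
print([round(c) for c in surviving])
```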
    • Heart-Centered, Authentic, Customer Success
      • Need an awesome customer support team. we don’t have salespeople up front; instead, we treat customers really well once they are paying us.
      • We don’t try to use robots to save money.
      • We talk to the customers, visit their websites, suggest improvements.
      • We don’t have a storefront or physical presence. so how do we make the relationships longer, stronger? we send out happy packets of Moz fun stuff.
  • Benefits
    • Your community is a flywheel.
      • it takes time to get up to speed.
      • once the flywheel starts spinning, the community starts to create itself.
      • now moz is just the stewards of the community.
      • it’s like hosting a really great house-party of respectful guests.
      • it’s an incredible barrier to entry for competitors.
        • there’s no shortcut, no way to buy into this.
    • Low Burn rate helps when the economy goes in the shitter.
      • no sales team means less burn.
      • less capital required.
      • easier to self-fund.
      • no commissions to calculate.
    • the strategy generates lots of predictable recurring revenue: 96% of revenue is recurring.
    • risk is distributed across a broad customer base. even if the best customer leaves, it’s no big deal.
    • we can pour more dollars into R&D
      • third group: don’t worry about them either.
  • Caveats
    • No magic growth lever: can’t just scale from 5 salespeople to 10 salespeople.
    • Will public markets and VCs continue to prize growth rate over burn rate?
  • Future of B2B Sales
    • Every business is a publisher.
    • Every business has a community.
    • Are you managing it?
    • Increased transparency around quality and pricing.
      • should lead to more corporate accountability.
    • Multi-channel, customer driven contact
    • customers want shorter contract cycles. Nobody wants to be locked into anything anymore.
    • Software sales begin with the people who use the software. They advocate to the C-suite.

These are my session notes from Defrag 2014 (#Defragcon).

I normally break my notes out and add some context to them, but I’m short of time, so I’m simply posting raw notes below.


  • Slack — superior chat, with channels and per-channel notifications. Lots of integrations. Seems better than both Campfire and HipChat.
Chris Anderson
3D Robotics
  • Use drones for farmers to spot irrigation, pest problems, soil differences.
  • Can’t see the patterns from the ground
  • Visual and near-infrared.
  • Push button operation: One button to “do your thing”
  • What it enables:
    • better farming.
    • produce more with fewer resources.
    • don’t overwater.
    • don’t underwater and lose crops.
    • don’t apply pesticides everywhere, just where the problem is.
    • tailor to the soil.
  • it’s big data for farmers.
    • it turns an open-loop system into a closed-loop system
George Dyson
author Turing’s Cathedral
From Analog to Digital
  • Alan Turing: 1912-1954
  • Turing “being digital was more important than being electronic”
  • It is possible to invent a single machine which can compute any computable sequence.
  • Movie: The Imitation Game — true movie about Alan Turing
  • Insisted on a hardware random number generator because software algorithms to generate random numbers cannot be trusted, nor can the authorities (whom he worked for)
  • John von Neumann: continued Alan Turing’s work, always gave him credit.
  • Where Turing was hated by his government, von Neumann got everything from his government: funding of millions of dollars.
  • Bamberger: made his riches in retail, decided to found an institution of learning
  • “The usefulness of useless knowledge” — just hire great minds and let them work on whatever they want, and good things will come.
  • Thanks to the Nazi situation in Germany in the 1930s, it was “cheap” to get Jewish intellectuals.
  • The second professor hired: Albert Einstein.
  • In Britain, they took the brightest people to work on encryption. In the US, we took them to Los Alamos to build the atomic bomb.
  • ….lots of interesting history…
  • By the end of Turing’s life, he had moved past determinism. He believed it was important for machines to be able to make mistakes in order to have intuition and ingenuity.
  • What’s next?
    • Three-dimensional computation.
      • Turing gave us 1 dimension.
      • von Neumann gave us 2 dimensions.
    • Template-based addressing
      • In biology, we use template-based addressing. “I want a copy of this protein that matches this pattern.” No need to specify a particular address of a particular protein.
    • Pulse-frequency computing
    • Analog computing
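The template-based addressing idea can be sketched as a toy contrast (entirely my illustration, not anything from the talk): location-based addressing asks for whatever lives at address X, while template-based addressing asks for whatever matches a pattern.

```python
# Toy contrast between location-based and template-based addressing.
# The "memory" of short DNA-like strings is invented for illustration.

memory = ["GATTACA", "TTGACA", "CCGATT"]

# Location-based: you must know where the data lives.
by_address = memory[1]
print(by_address)  # TTGACA

# Template-based: you ask for what matches, wherever it lives.
def matching(template, store):
    """Return every item that contains the template pattern."""
    return [item for item in store if template in item]

print(matching("GATT", memory))  # ['GATTACA', 'CCGATT']
```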
Amber Case
Designing Calm Technology
  • 50 billion devices will be online by 2020 — Cisco
  • Smart watch: how many of the notifications you get are really useful, and how many are bothering you?
  • Imagine the dystopian kitchen of the future: all the appliances clamoring for your attention, all needing firmware updates before you can cook, and having connectivity problems.
  • Calm Technology
    • Mark Weiser and John Seely Brown describe calm technology as “that which informs but doesn’t demand our focus or attention.” [1]
  • “Technology shouldn’t require all of our attention, just some of it, and only when necessary.”
  • The coming age of calm technology…
  • If the cloud goes down, I should be able to still turn down my thermostat.
  • Calm technology moves things to the periphery of our attention. Placing things in the periphery allows us to pay less attention to many more things.
  • A tea kettle: calm technology. You set it, you forget about it, it whistles when it’s ready. No unnecessary alerts.
  • A little tech goes a long way…
  • We’re addicted to adding features: consumers want it, we like to build it. But that adds cost to manufacturing and to service and support.
  • Toilet occupied sign: doesn’t need to be translated, easily understood, even if color-blind.
  • Light-status systems: Hue Lightbulb connected to a weather report.
  • LEDs attached to a Beeminder report: green, yellow, red. Do you need to pay attention? Instead of checking the app 10 times a day, nervous about missing goals.
  • We live in a somewhat dystopic world: not perfect, but we deal with it.
  • Two principles
    • a technology should inform and encalm
    • make use of peripheral attention
  • Design for people first
    • machines shouldn’t act like humans
    • humans shouldn’t act like machines
    • Amplify the best part of each
  • Technology can communicate, but doesn’t need to speak.
  • Roomba: happy tone when done, unhappy tone when stuck.
  • Create ambient awareness through different senses
    • haptics vs auditory alerts
    • light status vs. full display
  • Calm Technology and Privacy
    • privacy is the ability not to be surprised. “i didn’t expect that. now i don’t trust it.”
  • Feature phones
    • limited features, text and voice calls, few apps, became widespread over time
  • Smartphone cameras
    • not well known, not everybody had it.
    • social norm created that it was okay to have a phone in your pocket. we’re not terrified that people are taking pictures: because we know what it looks like when something is taking a picture.
  • Google Glass Launch
    • Reduced play, confusion, speculation, fear.
    • Had the features come out slowly, maybe less fear.
    • but the feature came out all at once.
    • are you recording me? are you recording everything? what are you tracking? what are you seeing? what are you doing?
    • poorly understood.
  • Great design allows people to accomplish their goals in the least amount of moves
  • Calm technology allows people to accomplish the same goals with the least amount of mental cost
  • A person’s primary task should not be computing, but being human.
Helen Greiner
CyPhy Works
Robots Take Flight
  • commercial grade aerial robots that provide actionable insights for data driven decision making
  • PARC tethered aerial robot
    • persistent real-time imagery and other sensing
    • on-going real-time and cloud based analytic service
    • 500-feet with microfilament power.
    • stays up indefinitely.
  • 2014: Entertaining/Recording
  • 2015/16: Protecting/Inspecting: Military, public safety, wildlife, cell towers, agriculture
  • 2017/18: Evaluating/Managing: Situation awareness, operations management, asset tracking, modeling/mapping.
  • 2019: Packaging/Delivery
  • “If you can order something and get it delivered within 30 minutes, that’s the last barrier to ordering online.” because i only buy something in a store if I need it right away.
  • Concept delivery drone: like an osprey, vertical takeoff but horizontal flight.
  • Tethered drone can handle 20mph winds with 30mph gusts.
    • built to military competition spec.
  • how do you handle tangling, especially in interior conditions?
    • externally: spooler is monitoring tension.
    • internally: spooler is on the helicopter, so it avoids ever putting tension on the line. disposable filament.
Lorinda Brandon
Monkey selfies and other conundrums
Who owns your data?
  • Your data footprint
    • explicit data
    • implicit data
  • trendy to think about environmental footprint.
  • explicit: what you intentionally put online: a blog post, photo, or social media update.
  • implicit data
    • derived information
    • not provided intentionally
    • may not be available or visible to the person who provided the data
  • The Biggest Lie on the internet: I’ve read the terms of use.
  • But even if you read the terms of use, that’s not where implicit data comes in. That’s usually in the privacy policy.
  • Before the connected age:
    • helicopters flew over roads to figure out the traffic conditions.
  • Now, no helicopters.
    • your phone is contributing that data.
    • anonymously.
    • and it benefits you with better routing.
  • Samsung Privacy Policy
    • collective brain syndrome: i watched two football games out of the many played over the weekend. On the following morning, my samsung phone showed me the final scores of just the two games I watched.
    • Very cool, but sorta creepy.
    • I read the policy in detail: it took a couple of hours.
  • Things Samsung collects:
    • device info
    • time and duration of your use of the service
    • search query terms you enter
    • location information
    • voice information: such as recording of your voice
    • other information: the apps you use, the websites you visit, and how you interact with content offered through a service.
  • Who they share it with.
    • They don’t share it for 3rd-party marketing, but they do share it for the purposes of their business
    • Affiliates
    • business partners
    • Service providers
    • other parties in connection with corporate transactions
    • other parties when required by law
    • other parties with your consent (this is the only one you opt-in to)
  • Smart Meter – Data and privacy concerns
    • power company claims they own it, and they can share/sell it to whom they like.
    • What they collect:
      • individual appliances used in the household
      • power usage data is easily available
      • data transmitted inside and outside the grid
    • In Ohio, law enforcement using it to locate grow houses.
  • Your device != your data
  • Monkey selfies
    • Case where photographer was setting up for photo shoot.
    • Monkey stole camera, took selfies.
    • Photographer got camera back.
    • Who owns the copyright on the photos?
    • Not the photographer, who didn’t take them.
    • Not the monkey, because the monkey didn’t have intent.
    • So it’s in the public domain.
  • Options
    • DoNotTrack.us – sends signal that indicates opt-out preference.
    • Disconnect.me – movement to get vendors to identify what data and data sharing is happening.
    • Opennotice.org – minimal viable consent receipt, which creates a repository of your consent.
    • ClearButton.net – MIT project to express desire to know who has your data, work with manufacturers.
  • Innovate Responsibly
    • If you are a creator, be sensitive to people’s needs.
    • Even if you are doing altruist stuff, you’ve still got to be transparent and responsible.
How to Distribute and Make Money from your API
Orlando Kalossakas, Mashape
  • API management
  • API marketplace: find APIs to use
  • Devices connect to the internet
    • 2013: 8.7B
    • 2015: 15B
    • 2020: 50B
  • App stores
    • 1.4M: Google play
    • 1.2M: Apple
    • 300k: Microsoft
    • 140K: Amazon
  • Jeff Bezos:
    • “turn everything into APIs, or I fire you.”
    • A couple of years later
  • Mashape.com: hosts over 10,000 private and public APIs
  • Google / Twitter / Facebook: Billions of API calls per day
  • Mashape pricing
    • 92% free
    • 5.6% freemium
    • 1.4% paid
  • Consumers of Mashape APIs are more than doubling every year.
  • API forms:
    • As a product: the customer uses the API directly to add capabilities
    • As an extension of a product: the API is used in conjunction with the product to add value.
    • As promotion: The API is used as a mechanism to promote the product.
  • paid or freemium flavors
    • pay as you go, with or without tiers
    • monthly recurring
    • unit price
    • rev share
    • transaction fee
  • depending on business model, you might end up paying developers to use your API
    • if you are expedia or amazon, you’re paying the developers to integrate with you.
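The “pay as you go, with or without tiers” model above can be sketched as a simple metered bill. The tier boundaries and prices below are invented for illustration; they aren’t Mashape’s.

```python
# Hypothetical tiered pay-as-you-go bill: early calls are free, then
# cheaper per-call prices kick in at higher volumes. Tier boundaries
# and prices are invented for illustration.
TIERS = [
    (10_000, 0.0),           # first 10k calls/month free
    (100_000, 0.001),        # next 90k at $0.001/call
    (float("inf"), 0.0005),  # everything beyond at $0.0005/call
]

def monthly_bill(calls: int) -> float:
    """Charge each call at the rate of the tier it falls into."""
    total = 0.0
    prev_limit = 0
    for limit, price in TIERS:
        in_tier = max(0, min(calls, limit) - prev_limit)
        total += in_tier * price
        prev_limit = limit
        if calls <= limit:
            break
    return total

print(monthly_bill(5_000))    # 0.0 -- inside the free tier
print(monthly_bill(250_000))  # 90k * 0.001 + 150k * 0.0005 = 165.0
```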
  • Things to consider…
    • is your audience right?
    • Do your competitors have APIs?
    • Could they copy your model easily?
    • How does the API fit into your roadmap?
  • Preparing…
    • discovery
    • security
    • monitoring / qa
    • testing
    • support
    • documentation*
    • monetization*
    • *most important
  • How will you publish your API?
    • onboarding and documentation are the face of your API.
    • Mashape: if you have interactive documentation, consumers are more likely to use it.
  • Achieving great developer experience
    • Track endpoint analytics
    • track documentation’s web analytics
    • get involved in physical hackathons
    • keep api documentation up to date
    • don’t break things.
Blend Web IDEs, Open Source and PaaS to Create and Deploy APIs
Jerome Louvel , Restlet
  • New API landscape:
    • web of data (semantic)
    • cloud computing & hybrid architectures
    • cross-channel user experiences
    • mobile and contextual access to services
    • Multiplicity of HCI modes (human computer interaction)
    • always-on and instantaneous service
  • Impacts on API Dev
    • New types of APIs
      • Internal and external APIs
      • composite and micro APIs
      • experience and open APIs
    • Number of APIs increases
      • channels growth
      • history of versions
      • micro services pattern
      • quality of service
    • industrialization needed
      • new development workflows
  • API-driven approach benefits
    • a pivot API descriptor
    • server skeletons & mock generations
    • up-to-date client SDKs and docs
    • rapid API crafting and implementation
  • Code-first or API-first approaches
    • can be combined using code introspectors to extract, and code generators to resync.
  • Crafting an API
    • swagger, apiary, raml, miredot, restlet studio
    • new generation of tools:
      • IDE-type
      • web-based
    • example: Swagger Editor is a GUI app
    • Restlet Studio
Connecting All Things (drone, sphero, raspberry pi, phillips hue) to Build a Rube Goldberg Machine
Kirsten Hunter
  • API evangelist at Akamai
  • cylon-sphero
  • node.js
  • cylon library makes it easy to control robotics

World’s shortest Interstellar review: Go see this movie right now.

Slightly longer review:

I got advanced screening tickets to see Interstellar in 35mm at the Hollywood theatre in Portland. I didn’t know that much about the movie, other than seeing the trailer and thinking it looked pretty good.

In fact, it was incredible. The trailer does not do it justice. I don’t want to give away the plot of the movie, so I’m not going to list all of the big ideas in this movie, but Erin and I went through the list on the drive home, and it was impressive. Easily the best movie I’ve seen in quite a while.

And this is one that really deserves being seen on a big screen, in a good theatre, on 35mm film if possible.




I only have a limited amount of writing time this week, and I want to focus that time on my next novel. (No, not book 4. That’s off with an editor right now. I’m drafting the novel after that, the first non-Avogadro Corp book.) But I feel compelled to briefly address the reaction to Elon Musk’s opinion about AI.

Brief summary: Elon Musk said that AI is a risk, and that the risks could be bigger than those posed by nuclear weapons. He compared AI to summoning a demon, using the comparison to illustrate the idea that although we think we’d be in control, AI could easily escape from that control.

Brief summary of the reaction: A bunch of vocal folks have ridiculed Elon Musk for raising these concerns. I don’t know how vocal they are, but there seems to be a lot of posts in my feeds from them.

I think I’ve said enough to make it clear that I agree that there is the potential for risk. I’m not claiming the danger is guaranteed, nor do I believe that it will come in the form of armed robots (despite the fiction I write). Again, to summarize very briefly: the risk of AI danger can come from many different dimensions:

  • accidents (a programming bug that causes the power grid to die, for example)
  • unintentional side effects (an AI that decides on the best path to fulfill its goal without taking into account the impact on humans: maybe an autonomous mining robot that harvests the foundations of buildings)
  • complex interactions (e.g. stock trading AI that nearly collapsed the financial markets a few years ago)
  • intentional decisions (an AI that decides humans pose a risk to AI, or an AI that is merely angry or vengeful)
  • human-driven terrorism (e.g. nanotechnology made possible by AI, but programmed by a person to attack other people)

Accidents and complex interactions have already happened. Programmers already don’t fully understand their code, and AIs are often written as black boxes that are even more incomprehensible. There will be more of these incidents, and they don’t require human-level intelligence. Once AI does achieve human-level intelligence, new risks become more likely.

What makes AI risks different from more traditional ones is their speed and scale. A financial meltdown can happen in seconds, and we humans would know about it only afterwards. Bad decisions by a human doctor could affect a few dozen patients. Bad decisions by a medical AI that’s installed in every hospital could affect hundreds of thousands of patients.

There are many potential benefits to AI. They are also not guaranteed, but they include things like more efficient production so that we humans might work less, greater advances in medicine and technology so that we can live longer, and reducing our impact on the environment so we have a healthier planet.

Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.

Cory Doctorow was in Portland promoting his new book Information Doesn’t Want to Be Free (Powells, Amazon).

Here are my notes from his talk at Powell’s Bookstore:

Cory Doctorow and I at his book signing.
(Photograph by Erin Gately)

Cory Doctorow

Information Doesn’t Want to be Free
Creativity in the Twenty-First Century
  • If you don’t earn your living online now, you probably will in the future
  • It’s hard to generalize a single way to earn a living in the arts
  • Most people who set out to make money in the arts end up losing money
  • Working in the creative fields and earning a living there is way out there… it’s a six-sigma event.
  • Imagine millions and millions of people flipping coins… a few have coins that land on their edge. some people have this happen many times. The only thing that unites these people is luck.
  • When artists make money, we treat them with reverence. But fundamentally they are just lucky.
  • But we put them on magazine covers, and try to figure out which business models serve artists the best.
  • But any business model will be quickly copied by thousands of new artists.
  • And business models change. they can’t stay the same.
  • The artists of yesterday want the business models of yesterday to stay in place. it’s like last year’s lottery winners wanting to win the lottery every year.
  • Three Laws
  • First Law: Anytime someone puts a lock on something and doesn’t give you the key, the lock is not there for your best interest.
    • (Funny anecdote about Cory’s literary agent, who also represented Arthur C. Clarke: “One thing I learn is that you always have to have three laws.”)
    • DRM: digital rights management.
    • DRM works by scrambling the creative work you upload, and then giving the audience/customer a player that can descramble the work, but which won’t let them do anything you don’t want: copy it, save it, play it in the United States.
    • But DRM only works if nobody can find the key, which has to be embedded in the player. So it’s trivial to find. Inevitable…
    • But it’s illegal thanks to a 1998 law.
    • As soon as Adobe, Amazon, or Apple puts DRM on something (and those are just the As), you’ve lost control over it. And your customer too.
    • Customers can only read the books in the ecosystem in which you bought them.
    • It’s like having a room in your house to only read Barnes & Noble books. But then if you bought books from Powell’s, you’d need to have another room to read them. And you couldn’t move the books from one room to the other.
    • Audible has 90% of the audio book market, and they have DRM, and they’ve locked up that market.
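The structural flaw described above, that the key has to ship inside the player, can be sketched in a few lines (XOR stands in here for whatever scrambling scheme a real DRM system uses; this is purely illustrative):

```python
# The player must contain the descrambling key, so anyone who has
# the player has the key. XOR stands in for any scrambling scheme.

KEY = 0x5A  # shipped inside every player -- there is nowhere else to put it

def scramble(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)

def player_descramble(data: bytes) -> bytes:
    # The "secret" is right here, in code the customer already has.
    return bytes(b ^ KEY for b in data)

locked = scramble(b"ebook text")
print(player_descramble(locked))  # b'ebook text'
```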
  • Second law: Fame won’t make you rich, but you can’t sell your art without it.
    • Tim O’Reilly said “The problem for most artists isn’t piracy, it’s obscurity.”
    • We’re left with: Five publishers, four labels, and five studios.
    • The contracts that exist today with the above are all screwing the artist, and it reflects that it’s a buyers market, because these companies own the market.
    • Lots of abusive terms, and lots are non-negotiable.
    • Indie publishing is the competitor of last resort. The worst deal the traditional publishers can offer has to be competitive with what authors think they can make indie.
    • The indie sector is at the end of a 15 year war with the traditional pubs.
    • Viacom wanted Google to have an army of lawyers to check the 96 hours of video uploaded to YouTube every minute. There aren’t enough lawyers in the world. You’d reach the heat death of the universe before you could review all the video.
    • There’s more efforts coming to attack the indies. We’re just seeing the beginning of it.
    • (lots of trade agreements cited.)
    • What happens with Viacom and cable is that the army of lawyers can’t be hired. So the content producers have to provide insurance that their content doesn’t infringe. And only rich people can afford that.
    • And so there is lots of other content rules coming like this.
    • What the Internet does, its primary purpose, is to make copies, as quickly and effortlessly as possible, and with high fidelity. Trying to stop the internet from copying stuff is like trying to make water less wet.
  • Third law: Information Doesn’t Want to be Free.
    • I invited Information out for a weekend at the Hamptons.
    • Information doesn’t want anything.
    • This isn’t about information.
    • This is about people.
    • People want to be free.
    • When we live in an information age, that means they want their information to be free.
    • When we put DRM in software and content, then we are taking freedom away from people.
    • Programmers are fallible. They make mistakes. And those mistakes can compromise your privacy. Your phone is a supercomputer in your pocket with a microphone and camera that you take into the bathroom and bedroom and that knows who your friends are, and what you talk to them about, and what your lawyer told you.
    • But the DRM laws make it illegal to talk about those mistakes. Which means that your phone can be attacked.
    • University of Michigan video showing a Bluetooth hack on a pacemaker, in which they cook bacon.
    • What happens when technology moves inside your body? Your future hearing aid isn't something in your ear, it's something in your mind. What will be the model for that? Is it a device you control, or a device you don't control? "You can't do that, Dave."
    • The internet carries tons of daily business, not just the "important stuff" but also the banal stuff. And the banal stuff is important too. When I say "how did you sleep?" to my wife, I already know how she slept. Saying it is my way of saying "How are you? I care about you. I'm here for you." It's the soil from which everything grows.
    • In New Zealand, there's a three-strikes rule: if you are found guilty of copyright infringement three times, they take away internet access from your family.
    • A study in the UK found that people who have internet access have:
      • better health
      • better jobs
      • more civic engagement
      • more political engagement
      • better student grades
    • The UK passed the Digital Economy Act: a three-strikes rule.
    • Which means they aren't just taking away internet access; they are taking away health, jobs, civic and political engagement, and student grades.
  • How do you deal with self-criticism as you write?
    • Don’t revise until I’m done.
    • Don't look too closely at what I'm doing.
    • I looked back at the quality of what I wrote when I thought I was doing well and when I thought I wasn't, and found no correlation. It had to do with how I felt, not what I produced.
  • Is there a parallel between the history of human cognition and the history of computer development?
    • (missed lots of good stuff here.)
    • We have no shortage of minds that think like people; we call them babies.
    • We need things that think differently than we do.
    • To think computers will think like us is to think that airplanes will fly like birds.
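
The DRM point above, that a key embedded in every copy of the player can always be extracted, can be illustrated with a minimal sketch. This is not any real DRM scheme; `scramble` and `EMBEDDED_KEY` are purely illustrative stand-ins for the encryption a real system would use:

```python
# Minimal sketch of why player-embedded keys fail: the "scrambling" is
# only as secret as the key shipped inside every copy of the player.

def scramble(data: bytes, key: bytes) -> bytes:
    """XOR each byte with a repeating key (a stand-in for real encryption).

    Applying it twice with the same key restores the original data.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

EMBEDDED_KEY = b"secret"  # every copy of the "player" ships with this key

book = b"Chapter 1: It was a dark and stormy night."
locked = scramble(book, EMBEDDED_KEY)

# The player descrambles the work for playback...
assert scramble(locked, EMBEDDED_KEY) == book
# ...but anyone who inspects the player finds the same key, and can then
# descramble, copy, and save at will. The restrictions live in the player's
# behavior, not in the math.
```

Swapping XOR for strong encryption changes nothing about the argument: as long as the descrambling key has to ship inside the player, the audience holds both the lock and the key.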

There were many more great questions and answers, but that was the hardest part to capture.

A great article in The Atlantic about happiness versus meaning. An excerpt:

“Happiness without meaning characterizes a relatively shallow, self-absorbed or even selfish life, in which things go well, needs and desires are easily satisfied, and difficult or taxing entanglements are avoided,” the authors write.

How do the happy life and the meaningful life differ? Happiness, they found, is about feeling good. Specifically, the researchers found that people who are happy tend to think that life is easy, they are in good physical health, and they are able to buy the things that they need and want. While not having enough money decreases how happy and meaningful you consider your life to be, it has a much greater impact on happiness. The happy life is also defined by a lack of stress or worry.

Most importantly from a social perspective, the pursuit of happiness is associated with selfish behavior — being, as mentioned, a “taker” rather than a “giver.” The psychologists give an evolutionary explanation for this: happiness is about drive reduction. If you have a need or a desire — like hunger — you satisfy it, and that makes you happy. People become happy, in other words, when they get what they want. Humans, then, are not the only ones who can feel happy. Animals have needs and drives, too, and when those drives are satisfied, animals also feel happy, the researchers point out.

“Happy people get a lot of joy from receiving benefits from others while people leading meaningful lives get a lot of joy from giving to others,” explained Kathleen Vohs, one of the authors of the study, in a recent presentation at the University of Pennsylvania. In other words, meaning transcends the self while happiness is all about giving the self what it wants. People who have high meaning in their lives are more likely to help others in need. “If anything, pure happiness is linked to not helping others in need,” the researchers, who include Stanford University’s Jennifer Aaker and Emily Garbinsky, write.

When I started writing AI Apocalypse, I had to deal with naming and discussing multiple AI characters. Since biological genders could, in theory, be meaningless to AI, one approach would be to give them names at random and use only gender-neutral pronouns.

I’m fine with using “they” as a gender-neutral, singular pronoun. “It” can also work, but it’s somewhat distancing. In the end, I chose to use gender-specific pronouns because that brought me closer to the characters.

That raises the question of how the AI get genders when they don’t start with any. I believe they start gender-neutral, but can choose the gender pronouns they want applied to them. Although we don’t see it in the books, I’m imagining that there’s some aspect of their online profile/reputation that indicates preferred gender pronouns. So we could, in theory, have AI that identify as it, he, she, they, or something else entirely.

I thought this was a pretty novel explanation, until I watched Star Trek: The Next Generation with my kids the other night and we saw The Offspring (Season 3, Episode 16), the episode in which Data creates a child android named Lal. And what does Lal do? She starts out genderless, and then chooses a gender after making observations.

I’ve seen every Next Generation episode, many more than once, but I didn’t remember this episode at all. It must have influenced me all the same, because this was exactly how I imagined the AI in the Avogadro Corp universe behaving.

I think a lot of science fiction influences me that way: concepts linger over many years, even though the details of where something came from fade away.

By the way, I only recently learned that Japanese has gender-specific name endings, and the “-ko” ending is reserved for female names. So Shizoko, from The Last Firewall, is properly a female name. Whoops. Sorry to all Japanese speakers out there. If you want an in-universe explanation, I’m going to say that Shizoko previously identified as female, but changed her gender while keeping her name. :)