Unfortunately, thanks to a business trip in my day job and some bad ergonomics while traveling, I’m struggling with a bout of chronic tendinitis again. So I’ve made an intentional choice to stay away from the keyboard as much as possible.

As part of that practice I bought the new version of Dragon Dictate for the Mac. I’m glad to say it’s vastly improved over older versions. I first used Dragon Dictate in 2002 or so, when I first had tendinitis issues related to my day job in computer programming. Back then it was sort of comically wrong. You’d dictate a paragraph of text and maybe 50% would come out right.

But the new version is quite astounding. I’ve dictated several blog posts and it’s made zero gross errors. There have been a few small errors, where I have either failed to say what I meant, slurred some words together, or used words or phrases that were very uncommon (like “The Turing Exception” or “patriot”).

If you used speech recognition about 10 years ago, then you know that the combination of lower accuracy and clumsy text correction often turned getting what you wanted onto the page into a comedy of errors. But today, it’s easy enough to just dictate and then make a few simple corrections at the end.

Just a few years ago I investigated Dragon Dictate for the Mac, but the version out at the time was, according to reviews, very buggy. The current version seems pretty darn solid and fun to use.

If you’re struggling with any kind of repetitive stress injury, give speech recognition a try again even if you had bad results in the past.

This presentation by Sarah Bird was one of the highlights of #DefragCon. I really loved what she said and all the data she shared.

How to Build a B2B Software Company Without a Sales Team
Sarah Bird, CEO of Moz — @SarahBird

  • Moz
    • $30M/year revenue
    • growing from 2007 to current day
    • Moz makes software that helps marketing professionals
  • Requirements for selling B2B software without a sales team
    • A nearly frictionless funnel
      • i hate asking for money
      • we made a company that rarely asks you for money
      • People find our community through our Google and social shares.
        • they enjoy our free content: helpful, beautiful.
        • Q&A section.
        • mozinars: webinars to learn about SEO, etc.
      • eventually, you may sign up for a free trial. 85% of customers sign up for a free trial first.
      • customers visit us 8 times before signing up for a free trial.
      • moz subscription: $99/month is most popular (and cheapest) plan
    • Large, Passionate Community
      • We have had a community for 10 years.
      • We were a community first. Started as a blog about SEO
      • Content is co-created and curated by the community.
      • Practice what we preach.
      • 800k marketers joined moz community.
      • Come for the content, stay for the software.
      • No sales people, but really good community managers.
        • their job is to foster an inclusive and generous environment to learn about marketing.
    • Big Market
      • if you’re going after a small market, just hire someone to go talk to those people.
    • Low CAC & COGS business model
      • Cost of Customer Acquisition
      • Avg customer lifetime value: $980
      • average customer lifetime: 9 months
      • fully-loaded CAC: $137
      • approximate cost of providing service: $21/month
      • payback period: month 2
      • Customer Lifetime Value is on the low-end
        • moz: $980
        • constant contact: $1500
        • but we have the highest CLTV/cost ratio
        • cost
          • moz: $137
          • constant contact: $650
    • Rethink Retention
      • Churn is very high in the first 3 months: 25% / 15% / 8%
      • But by month 4, churn stabilizes. Those who remain are qualified customers.
      • Looking at the first 3 months, churn is composed of:
        • People I’m going to lose no matter what i do. they are not target customer.
        • people i should be keeping, but i’m not.
        • people who i will keep even if i don’t spend effort on them. they “got it” right away.
      • Don’t worry about the first group. they are not the target customer. let them go.
      • second group: keeps me up at night.
      • third group: don’t worry about them either; they’ll stay regardless.
      • you must know how to tell these groups apart, especially with respect to their feedback. feedback of the first group should be ignored!
    • Heart-Centered, Authentic, Customer Success
      • Need awesome customer support team. we don’t have salespeople up front. Instead, we treat them really well once they are paying us.
      • We don’t try to use robots to save money.
      • We talk to the customers, visit their websites, suggest improvements.
      • We don’t have a storefront or physical presence. so how do we make the relationships longer, stronger? we send out happy packets of moz fun stuff.
  • Benefits
    • Your community is a flywheel.
      • it takes time to get up to speed.
      • once the flywheel starts spinning, the community starts to create itself.
      • now moz is just the stewards of the community.
      • it’s like hosting a really great house-party of respectful guests.
      • it’s an incredible barrier to entry for competitors.
        • there’s no shortcut, no way to buy into this.
    • Low Burn rate helps when the economy goes in the shitter.
      • no sales team means less burn.
      • less capital required.
      • easier to self-fund.
      • no commissions to calculate.
    • the strategy generates lots of predictable recurring revenue: 96% of revenue is recurring.
    • risk is distributed across a broad customer base. even if the best customer leaves, it’s no big deal.
    • we can pour more dollars into R&D
  • Caveats
    • No magic growth lever: can’t just scale from 5 salespeople to 10 salespeople.
    • Will public markets and VCs continue to prize growth rate over burn rate?
  • Future of B2B Sales
    • Every business is a publisher.
    • Every business has a community.
    • Are you managing it?
    • Increased transparency around quality and pricing.
      • should lead to more corporate accountability.
    • Multi-channel, customer driven contact
    • customers want shorter contract cycles. Nobody wants to be locked into anything anymore.
    • Software sales begin with the people who use the software. They advocate to the C-suite.
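A quick back-of-the-envelope check of the unit economics in the notes above (all figures are from the talk; the calculation itself is my own sketch, not Moz’s):

```python
# Verify Moz's payback-period and CLTV/CAC figures from the talk.
cac = 137              # fully-loaded cost of customer acquisition ($)
price = 99             # most popular monthly plan ($/month)
cogs = 21              # approximate cost of providing service ($/month)
margin = price - cogs  # gross margin per customer: $78/month

# Cumulative margin crosses CAC during month 2,
# matching "payback period: month 2".
month = 1
while month * margin < cac:
    month += 1
print("payback month:", month)

# CLTV/CAC ratios: Moz's is roughly 3x Constant Contact's,
# consistent with "we have the highest CLTV/cost ratio".
print("Moz:", round(980 / 137, 1))
print("Constant Contact:", round(1500 / 650, 1))
```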

These are my session notes from Defrag 2014 (#Defragcon).

I normally break my notes out and add some context to them, but I’m short of time, so I’m simply posting raw notes below.


  • Slack — superior chat, with channels and per-channel notifications. Lots of integrations. Seems better than both Campfire and HipChat.
Chris Anderson
3D Robotics
  • Use drones for farmers to spot irrigation, pest problems, soil differences.
  • Can’t see the patterns from the ground
  • Visual and near-infrared.
  • Push button operation: One button to “do your thing”
  • What it enables:
    • better farming.
    • produce more with fewer resources.
    • don’t overwater.
    • don’t under-water and lose crops.
    • don’t apply pesticides everywhere, just where the problem is.
    • tailor to the soil.
  • it’s big data for farmers.
    • it turns an open-loop system into a closed-loop system
George Dyson
author Turing’s Cathedral
From Analog to Digital
  • Alan Turing: 1912-1954
  • Turing “being digital was more important than being electronic”
  • It is possible to invent a single machine which can compute any computable sequence.
  • Movie: The Imitation Game — true movie about Alan Turing
  • Insisted on a hardware random number generator because software algorithms to generate random numbers cannot be trusted, nor can the authorities (whom he worked for)
  • John von Neumann: continued Alan Turing’s work, always gave him credit.
  • Where Turing was hated by his government, von Neumann got everything from his government: funding of millions of dollars.
  • Bamberger: made his riches in retail, decided to found an institution of learning (the Institute for Advanced Study)
  • “The usefulness of useless knowledge” — just hire great minds and let them work on whatever they want, and good things will come.
  • Thanks to the Nazi situation in Germany in the 1930s, it was “cheap” to hire Jewish intellectuals.
  • The second professor hired: Albert Einstein.
  • In Britain, they took the brightest people to work on encryption. In the US, we took them to Los Alamos to build the atomic bomb.
  • ….lots of interesting history…
  • By the end of Turing’s life, he had moved past determinism. He believed it was important for machines to be able to make mistakes in order to have intuition and ingenuity.
  • What’s next?
    • Three-dimensional computation.
      • Turing gave us 1-dimension.
      • von Neumann gave us 2-d.
    • Template-based addressing
      • In biology, we use template-based addressing. “I want a copy of this protein that matches this pattern.” No need to specify a particular address of a particular protein.
    • Pulse-frequency computing
    • Analog computing
Amber Case
Designing Calm Technology
  • 50 billion devices will be online by 2020 — Cisco
  • Smart watch: how many of the notifications you get are really useful, and how many are bothering you?
  • Imagine the dystopian kitchen of the future: all the appliances clamoring for your attention, all needing firmware updates before you can cook, and having connectivity problems.
  • Calm Technology
    • Mark Weiser and John Seely Brown describe calm technology as “that which informs but doesn’t demand our focus or attention.” [1]
  • “Technology shouldn’t require all of our attention, just some of it, and only when necessary.”
  • The coming age of calm technology…
  • If the cloud goes down, I should be able to still turn down my thermostat.
  • Calm technology moves things to the periphery of our attention. Placing things in the periphery allows us to pay less attention to many more things.
  • A tea kettle: calm technology. You set it, you forget about it, it whistles when it’s ready. No unnecessary alerts.
  • A little tech goes a long way…
  • We’re addicted to adding features: consumers want it, we like to build it. But that adds cost to manufacturing and to service and support.
  • Toilet occupied sign: doesn’t need to be translated, easily understood, even if color-blind.
  • Light-status systems: Hue Lightbulb connected to a weather report.
  • LEDs attached to a Beeminder report: green, yellow, red. Do you need to pay attention? Instead of checking the app 10 times a day, nervous about missing goals.
  • We live in a somewhat dystopic world: not perfect, but we deal with it.
  • Two principles
    • a technology should inform and encalm
    • make use of peripheral attention
  • Design for people first
    • machines shouldn’t act like humans
    • humans shouldn’t act like machines
    • Amplify the best part of each
  • Technology can communicate, but doesn’t need to speak.
  • Roomba: happy tone when done, unhappy tone when stuck.
  • Create ambient awareness through different senses
    • haptics vs auditory alerts
    • light status vs. full display
  • Calm Technology and Privacy
    • privacy is the ability not to be surprised. “i didn’t expect that. now i don’t trust it.”
  • Feature phones
    • limited features, text and voice calls, few apps, became widespread over time
  • Smartphone cameras
    • not well known, not everybody had it.
    • social norm created that it was okay to have a phone in your pocket. we’re not terrified that people are taking pictures, because we know what it looks like when someone is taking a picture.
  • Google Glass Launch
    • Created confusion, speculation, fear.
    • Had the features come out slowly, maybe less fear.
    • but the feature came out all at once.
    • are you recording me? are you recording everything? what are you tracking? what are you seeing? what are you doing?
    • poorly understood.
  • Great design allows people to accomplish their goals with the least amount of moves
  • Calm technology allows people to accomplish the same goals with the least amount of mental cost
  • A person’s primary task should not be computing, but being human.
Helen Greiner
CyPhy Works
Robots Take Flight
  • commercial grade aerial robots that provide actionable insights for data driven decision making
  • PARC tethered aerial robot
    • persistent real-time imagery and other sensing
    • on-going real-time and cloud based analytic service
    • 500 feet, with microfilament power.
    • stays up indefinitely.
  • 2014: Entertaining/Recording
  • 2015/16: Protecting/Inspecting: Military, public safety, wildlife, cell towers, agriculture
  • 2017/18: Evaluating/Managing: Situation awareness, operations management, asset tracking, modeling/mapping.
  • 2019: Packaging/Delivery
  • “If you can order something and get it delivered within 30 minutes, that’s the last barrier to ordering online,” because I only buy something in a store if I need it right away.
  • Concept delivery drone: like an osprey, vertical takeoff but horizontal flight.
  • Tethered drone can handle 20mph winds with 30mph gusts.
    • built to military competition spec.
  • how do you handle tangling, especially in interior conditions?
    • externally: spooler is monitoring tension.
    • internally: spooler is on the helicopter, so it avoids ever putting tension on the line. disposable filament.
Lorinda Brandon
Monkey selfies and other conundrums
Who owns your data?
  • Your data footprint
    • explicit data
    • implicit data
  • trendy to think about environmental footprint.
  • explicit: what you intentionally put online: a blog post, photo, or social media update.
  • implicit data
    • derived information
    • not provided intentionally
    • may not be available or visible to the person who provided the data
  • The Biggest Lie on the internet: I’ve read the terms of use.
  • But even if you read the terms of use, that’s not where implicit data comes in. That’s usually in the privacy policy.
  • Before the connected age:
    • helicopters flew over roads to figure out the traffic conditions.
  • Now, no helicopters.
    • your phone is contributing that data.
    • anonymously.
    • and it benefits you with better routing.
  • Samsung Privacy Policy
    • collective brain syndrome: I watched two football games out of the many played over the weekend. The following morning, my Samsung phone showed me the final scores of just the two games I watched.
    • Very cool, but sorta creepy.
    • I read the policy in detail: it took a couple of hours.
  • Things Samsung collects:
    • device info
    • time and duration of your use of the service
    • search query terms you enter
    • location information
    • voice information: such as recording of your voice
    • other information: the apps you use, the websites you visit, and how you interact with content offered through a service.
  • Who they share it with.
    • They don’t share it for 3rd-party marketing, but they do share it for their own business purposes:
    • Affiliates
    • business partners
    • Service providers
    • other parties in connection with corporate transactions
    • other parties when required by law
    • other parties with your consent (this is the only one you opt-in to)
  • Smart Meter – Data and privacy concerns
    • power company claims they own it, and they can share/sell it to whom they like.
    • What they collect:
      • individual appliances used in the household
      • power usage data is easily available
      • data transmitted inside and outside the grid
    • In Ohio, law enforcement is using it to locate grow houses.
  • Your device != your data
  • Monkey selfies
    • Case where photographer was setting up for photo shoot.
    • Monkey stole camera, took selfies.
    • Photographer got camera back.
    • Who owns the copyright on the photos?
    • Not the photographer, who didn’t take them.
    • Not the monkey, because the monkey didn’t have intent.
    • So it’s in the public domain.
  • Options
    • DoNotTrack.us – sends signal that indicates opt-out preference.
    • Disconnect.me – movement to get vendors to identify what data and data sharing is happening.
    • Opennotice.org – minimal viable consent receipt, which creates a repository of your consent.
    • ClearButton.net – MIT project to express desire to know who has your data, work with manufacturers.
  • Innovate Responsibly
    • If you are a creator, be sensitive to people’s needs.
    • Even if you are doing altruist stuff, you’ve still got to be transparent and responsible.
How to Distribute and Make Money from your API
Orlando Kalossakas, Mashape
  • API management
  • API marketplace: find APIs to use
  • Devices connect to the internet
    • 2013: 8.7B
    • 2015: 15B
    • 2020: 50B
  • App stores
    • 1.4M: Google play
    • 1.2M: Apple
    • 300k: Microsoft
    • 140K: Amazon
  • Jeff Bezos:
    • “turn everything into APIs, or I fire you.”
    • A couple of years later
  • Mashape.com: hosts over 10,000 private and public APIs
  • Google / Twitter / Facebook: Billions of API calls per day
  • Mashape pricing
    • 92% free
    • 5.6% freemium
    • 1.4% paid
  • Consumers of Mashape APIs are more than doubling every year.
  • API forms:
    • As a product: the customer uses the API directly to add capabilities.
    • As an extension of a product: the API is used in conjunction with the product to add value.
    • As promotion: The API is used as a mechanism to promote the product.
  • paid or freemium flavors
    • pay as you go, with or without tiers
    • monthly recurring
    • unit price
    • rev share
    • transaction fee
  • depending on business model, you might end up paying developers to use your API
    • if you are expedia or amazon, you’re paying the developers to integrate with you.
  • Things to consider…
    • is your audience right?
    • Do your competitors have APIs?
    • Could they copy your model easily?
    • How does the API fit into your roadmap?
  • Preparing…
    • discovery
    • security
    • monitoring / qa
    • testing
    • support
    • documentation*
    • monetization*
    • *most important
  • How will you publish your API?
    • onboarding and documentation are the face of your API.
    • Mashape: if you have interactive documentation, consumers are more likely to use it.
  • Achieving great developer experience
    • Track endpoint analytics
    • track documentation’s web analytics
    • get involved in physical hackathons
    • keep api documentation up to date
    • don’t break things.
Blend Web IDEs, Open Source and PaaS to Create and Deploy APIs
Jerome Louvel, Restlet
  • New API landscape:
    • web of data (semantic)
    • cloud computing & hybrid architectures
    • cross-channel user experiences
    • mobile and contextual access to services
    • Multiplicity of HCI modes (human computer interaction)
    • always-on and instantaneous service
  • Impacts on API Dev
    • New types of APIs
      • Internal and external APIs
      • composite and micro APIs
      • experience and open APIs
    • Number of APIs increases
      • channels growth
      • history of versions
      • micro services pattern
      • quality of service
    • industrialization needed
      • new development workflows
  • API-driven approach benefits
    • a pivot API descriptor
    • server skeletons & mock generations
    • up-to-date client SDKs and docs
    • rapid API crafting and implementation
  • Code-first or API-first approaches
    • can be combined using code introspectors to extract, and code generators to resync.
  • Crafting an API
    • swagger, apiary, raml, miredot, restlet studio
    • new generation of tools:
      • IDE-type
      • web-based
    • example: Swagger Editor is a GUI app
    • Restlet Studio
Connecting All Things (drone, sphero, raspberry pi, phillips hue) to Build a Rube Goldberg Machine
Kirsten Hunter
  • API evangelist at Akamai
  • cylon-sphero
  • node.js
  • cylon library makes it easy to control robotics

I only have a limited amount of writing time this week, and I want to focus that time on my next novel. (No, not book 4. That’s off with an editor right now. I’m drafting the novel after that, the first non-Avogadro Corp book.) But I feel compelled to briefly address the reaction to Elon Musk’s opinion about AI.

Brief summary: Elon Musk said that AI is a risk, and that the risks could be bigger than those posed by nuclear weapons. He compared AI to summoning a demon, using the comparison to illustrate the idea that although we think we’d be in control, AI could easily escape from that control.

Brief summary of the reaction: A bunch of vocal folks have ridiculed Elon Musk for raising these concerns. I don’t know how vocal they are, but there seems to be a lot of posts in my feeds from them.

I think I’ve said enough to make it clear that I agree that there is the potential for risk. I’m not claiming the danger is guaranteed, nor do I believe that it will come in the form of armed robots (despite the fiction I write). Again, to summarize very briefly: the risk of AI danger can come from many different dimensions:

  • accidents (a programming bug that causes the power grid to die, for example)
  • unintentional side effects (an AI that decides on the best path to fulfill its goal without taking into account the impact on humans: maybe an autonomous mining robot that harvests the foundations of buildings)
  • complex interactions (e.g. stock trading AI that nearly collapsed the financial markets a few years ago)
  • intentional decisions (an AI that decides humans pose a risk to AI, or an AI that is merely angry or vengeful.)
  • human-driven terrorism (e.g. nanotechnology made possible by AI, but programmed by a person to attack other people)

Accidents and complex interactions have already happened. Programmers already don’t fully understand their own code, and AIs are often written as black boxes that are even more incomprehensible. There will be more of these incidents, and they don’t require human-level intelligence. Once AI does achieve human-level intelligence, new risks become more likely.

What makes AI risks different from more traditional ones is their speed and scale. A financial meltdown can happen in seconds, and we humans would know about it only afterwards. Bad decisions by a human doctor could affect a few dozen patients. Bad decisions by a medical AI that’s installed in every hospital could affect hundreds of thousands of patients.

There are many potential benefits to AI. They are also not guaranteed, but they include things like more efficient production so that we humans might work less, greater advances in medicine and technology so that we can live longer, and reducing our impact on the environment so we have a healthier planet.

Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.

I’ve seen a lot of reactions to the tragedy of the celebrity photo plundering that affected Jennifer Lawrence, Kate Upton, and many others. Some condemn the celebrities who had taken nude photos and videos of themselves. Some condemn the culture of men who objectify women. Some condemn cloud computing. Some condemn the people who view the photos. Some condemn people with poor computer security.

I see all of these perspectives, but I think we’re also missing something bigger. To get there, I’m going to start with a story that takes place in 1993.

I was a graduate student at the University of Arizona studying computer science. It was a great place to be. Udi Manber was a professor working on agrep and glimpse, long before he became the head of search at Google. Larry Peterson had developed x-kernel, an object-oriented framework for network protocols. Bob Metcalfe, inventor of Ethernet, dropped by one day to review what we were doing with high speed networking.

I’ll never forget the uproar that occurred when we received a new delivery of Sun workstations. I think it was the Sun SPARCstation 5, although I could be wrong. But what was different about these computers was that they contained an integrated microphone. And that meant anyone who could get remote access to the software environment could listen in to that microphone from anywhere in the world.

Keep in mind that the owners of these computers were not technical novices. These were the people creating core components of what we use today: everything from search to TCP optimized for video. And they were damn nervous about people hacking into the computers and listening in to the microphone from anywhere.

Fast-forward about seven years, and I’m reading a series of books about cultural anthropology. (Yes, this is relevant. And if you’re interested, Cows, Pigs, Wars, and Witches by Marvin Harris is a great starting point.) I might be a little loose on the specific details, but the gist of what I read is that when scientists studied indigenous tribes relatively untouched by modern culture, they found that “crime” occurs at a similar rate across most tribes. That is, norms might differ from culture to culture, but things like murder and stealing happen in all tribes, and at similar frequencies. Tribal culture doesn’t have prisons, so the punishment is being cast out of the tribe. Without getting into details, this is actually a quite strong punishment. Not only is social rejection itself powerful, but the odds of survival go down dramatically without the support structure of a tribe.

Now let’s ground ourselves back in the current day. What happened to Jennifer Lawrence, Kate Upton, and many others is terrible. However, this isn’t an isolated occurrence. Stealing photos – and worse, much worse –  occurs all the time, to many women, and we’re just not hearing about it because they aren’t famous.

This 2013 ArsTechnica article, Meet the men who spy on women through their webcams (caution: may contain triggers), is probably the best overview on the subject of Ratters. The term is an extension of Remote Administration Tool (RAT). These are men (almost always men) who prey on women (almost always women) by first gaining access to their computers, then spying on them through their webcams, in the privacy of their own home, as well as going through their computer to find photos and videos. Eventually, compromising videos and photos exist, whether they are found on the filesystem or recorded by the ratter using the webcam. The ratter then uses the threat of sharing those compromising photos and video to blackmail the victim into recording yet more explicit videos.

Certainly ratters are awful people who deserve to go to jail for their crimes. And equally certainly there are great improvements we can make in our society in terms of how men treat and view women. And we can make improvements in our personal computer security.

However, if the tribal studies tell us that crime still occurs at a relatively constant rate, and if even some of the most technically sophisticated people fear their microphones being used to spy on them, then we know that neither criminal deterrents nor improvements in our personal computer security practices are going to be sufficient to completely stop such behavior.

So then what?

Well, now we come back to what Cory Doctorow frequently argues. Computer laws such as those around DRM inhibit researchers from making improvements in computer security, by making it illegal to reverse-engineer how certain bits of code work. Spyware that originates from governments, corporations, and school districts is frequently subverted by computer hackers and ratters (in addition to being abused by the originators themselves).

Cory has also said that computer security and privacy is like potable water: with enough effort, individuals can capture, treat, and store their own independent water supply. But as a society, it’s far more efficient for the government to provide guaranteed drinkable water through municipal water supplies. Similarly, an individual might take heroic measures to ensure their security and privacy: long passwords, no cloud services, cover their webcams, avoid the internet whenever possible. But how feasible is it for every person to do this? And can we all maintain that level of heroic effort? Probably not.

What we need is change at the highest level.

We need our governments to stop perpetuating the problem by spying on us, and instead take our privacy and security seriously. Instead of DRM, give us privacy. Instead of school districts spying on us, give us privacy. Instead of buying spyware from corporations to spy on us, make selling software that spies on us illegal.

Privacy and security is a problem that affects all of us, not just the celebrities that are the latest and most visible in a long series of victims.

I love trying to extrapolate trends and seeing what I can learn from the process. This past weekend I spent some time thinking about the size of computers.

From 1986 (Apple //e) to 2012 (Motorola Droid 4), my “computer” shrank 290-fold, or about 19% per year. I know, you can argue about my choices of what constitutes a computer, and whether I should be including displays, batteries, and so forth. But the purpose isn’t to be exact, but to establish a general trend. I think we can agree that, for some definition of computer, they’re shrinking steadily over time. (If you pick different endpoints, using an IBM PC, a Macbook Air, or a Mac Mini, for example, you’ll still get similar sorts of numbers.)

So where does that leave us going forward? To very small places:

Year Cubic volume of computer (cubic inches)
2020 1.07
2025 0.36
2030 0.12
2035 0.04
2040 0.01
2045 0.0046
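The table follows directly from the trend above: a 290-fold shrink over the 26 years from 1986 to 2012. A minimal sketch (anchoring the curve at the table’s 2020 value is my own choice):

```python
# A 290-fold shrink over 26 years (1986 Apple //e -> 2012 Droid 4)
# implies the volume falls by a factor of ~0.804 per year (~19.6%).
annual_factor = (1 / 290) ** (1 / 26)

def volume(year, anchor_year=2020, anchor_volume=1.07):
    """Extrapolated computer volume in cubic inches, anchored to the 2020 row."""
    return anchor_volume * annual_factor ** (year - anchor_year)

# Reproduce the table, one row per 5 years.
for year in range(2020, 2050, 5):
    print(year, round(volume(year), 4))
```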

In a spreadsheet right next to the sheet entitled “Attacking nanotech with nuclear warheads,” I have another sheet called “Data center size” where I’m trying to calculate how big a data center will be in 2045.

A stick of gum is “2-7/8 inches in length, 7/8 inch in width, and 3/32 inch” or about 0.23 cubic inches, and we know this thanks to the military specification on chewing gum. According to the chart above, computers will get smaller than that around 2030, or certainly by 2035. They’ll also be about 2,000 times more powerful than one of today’s computers.

Imagine today’s blade computers used in data centers, except shrunk to the size of sticks of gum. If they’re spaced 1″ apart horizontally, and 2″ apart vertically (like a DIMM memory module stood on its end), a backplane could hold about 72 of these for every square foot. A “rack” would hold something like 2,800 of these computers. That’s assuming we would even want them to be human-replaceable. If they’re all compacted together, it could be even denser.
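The density arithmetic can be checked directly (the 1″ × 2″ pitch is from the paragraph above; the rack count below is a rough estimate, not an exact figure):

```python
import math

# Gum-stick blade density at a 1-inch horizontal and 2-inch vertical pitch:
# a square foot of backplane fits 12 columns x 6 rows.
per_sqft = (12 // 1) * (12 // 2)
print(per_sqft)  # 72 sticks per square foot

# At ~2,800 sticks per rack, 100,000 sticks need about 36 racks --
# a plausible fit for a living room.
racks = math.ceil(100_000 / 2_800)
print(racks)
```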

It turns out my living room could hold something like 100,000 of these computers, each 2,000 times more powerful than one of today’s computers, for the equivalent of about two million 2014 computers. That’s roughly all of Google’s computing power. In my living room.

I emailed Amber Case and Aaron Parecki about this, and Aaron said “What happens when everyone has a data center in their pockets?”

Good question.

You move all applications to your pocket, because latency is the one thing that doesn’t benefit from technology gains. It’s largely limited by speed of light issues.

If I’ve got a data center in my pocket, I put all the data and applications I might possibly want there.

Want Wikipedia? (14GB) — copy it locally.

Want to watch a movie? It’s reasonable to have the top 500,000 movies and TV shows of all time (2.5 petabytes) in your pocket by 2035, when you’ll have about 292 petabytes of solid-state storage. (I know 292 petabytes seems incredible, but the theoretical maximum data density is 10^66 bits per cubic inch.)
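A quick sanity check on the movie-library figure (the ~5 GB-per-title average is implied by the numbers, not stated explicitly):

```python
# 2.5 PB for 500,000 titles implies ~5 GB per movie or show --
# a reasonable average for compressed HD video.
titles = 500_000
library_pb = 2.5
gb_per_title = library_pb * 1e6 / titles  # 1 PB = 1,000,000 GB
print(gb_per_title)  # 5.0

# The whole library would use under 1% of a 292 PB pocket store.
library_share = round(library_pb / 292 * 100, 2)
print(library_share, "percent")
```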

Want to run a web application? It’s instantiated on virtual machines in your pocket. Long before 2035, even if a web developer needs Redis, MySQL, MongoDB, and Rails, it’s just a provisioning script away… You could have a cluster of virtual machines, an entire cloud infrastructure, running in your pocket.

Latency goes to zero, except when you need to do a transactional update of some kind. Most data updates could be done through lazy data coherency.

It doesn’t work for real-time communication with other people. Except possibly in the very long term, when you might run a copy of my personality upload locally, and I’d synchronize memories later.

This also has interesting implications for global networking. It becomes more important to have a high bandwidth net than a low latency net, because the default strategy becomes one of pre-fetching anything that might be needed.

Things will be very different in twenty years. All those massive data centers we’re building out now? They’ll be totally obsolete in twenty years, replaced by closet-sized data centers. How we deploy code will change. Entire new strategies will develop. Today we have DOSBox and NES emulators for legacy software, and in twenty years we might have AWS emulators that can simulate the entire AWS cloud in a box.

As a writer and a software developer, I’m in the content business. I understand businesses need to make money off online services, and without that money they’ll go out of business.

Advertising is an effective way to make money. When I recently worked on the business strategy for a small project, it was clear that giving the product away and advertising on page views would make about ten times as much money as charging for the product, as well as leading to broader adoption.

Unfortunately, as a human being, I don’t like advertising, for a number of reasons.
Advertising creates unnecessary desire: Many years ago I would spend part of every month intensely dissatisfied with the car I was driving. I’d consider how much money I had, and whether I could afford a new car. From a personal financial perspective, buying a new car would have been a bad decision. So I’d end up feeling bad about my car and my money situation. I gradually realized I only felt this way during the five days following the arrival of Road & Track, a car magazine. The rest of the month, I felt just fine. I cancelled my subscription. 
Advertising is biased: Even when I’ve decided to buy something, I want to do research and make an educated decision. I can do that with unbiased reviews. I want to know the truth about a product, not a company’s carefully tailored “our product is perfect for everything” advertising spiel that usually borders on lies.
Advertising is especially evil for kids: I’ve got three young kids who often use my computer. Not only are the advertisements displayed often inappropriate for kids, but kids are especially vulnerable to ad messages.
That being said, I’ve lived with advertising for a long time. Because it’s only fair, after all, to pay for services I use. Services that I especially like, in many cases, and want to stick around. So even though I’ve long known about ad-block plugins for browsers, I didn’t use them.
When I have the choice to pay for a service I like, I always do. This usually opts me out of ads. I happily pay for Pandora, a service I love. I buy reddit gold. I pay for the shareware I download.
I had hoped that over time we’d see more services go to this model, where a modest fee would support an ad-free experience. I’d especially like to pay for an ad-free YouTube experience or an ad-free Google Search. But it hasn’t happened.
After many years of waiting, I’ve changed my mind about ad-block services. I believe the only way online services will get the message that we don’t like advertising is for as many people as possible to use ad-block plugins in their browsers. Instead of seeing ad-blockers as a mechanism to avoid “payment” for services, I see them as an activist tool to send a message to online services: give us an ad-free option or we’ll create it ourselves.
I’m using the most popular Chrome plugin: AdBlock, from https://getadblock.com. It takes seconds to install, and you’ll never see an ad again. You won’t see ads on webpages and you won’t see them on videos. Peace and quiet has come back to my web browser.

Go ahead and give it a try. I think you’ll be delighted by reclaiming your web browsing experience. But more importantly, do it to send a message. 

Auditing all the things: The future of smarter monitoring and detection

Jen Andre
Founder @threatstack
  • Started with question on twitter:
    • Can you produce a list of all processes running on your network?
    • But then expanded… wanted to know everything
  • Why? Is there a reason to be this paranoid?
    • prevention fails. 
  • should you care?
    • if you’re a startup about pets and you get hacked, you just change all passwords
    • but if you’re a pharmaceutical company, then you really do care. 
  • “We found no evidence that any customer data was accessed, changed or lost”
    • Did you look for evidence?
    • Do you really know what happened?
    • If you log everything (the right things), then you don’t have to go hunting for forensic evidence after the fact.
  • “We’re in the cloud!”
  • Continuous security monitoring
    • auditing + analytics + automation
  • Things to monitor:
    • Systems: authentications, process activity, network activity, kernel modules, file system
    • Apps: authentications, db requests, http requests
    • services: AWS api calls, SaaS api calls
  • In order to do:
    • Intrusion detection
    • “active defense”
    • rapid incident response
  • “Use the host, Luke”
  • apt-get install auditd
    • pros:
      • super powerful
      • built into your kernel
      • relatively low overhead
    • you can audit logins, system calls.
  • auditd
    • the workings:
      • userland audit daemon and tools <-(netlink socket)-> kernel thread queue <- audit messages <- kernel threads doing things
      • /var/log/audit
    • not so nice:
      • obtuse logging
      • enable rate limiting or it could ‘crash’ your box
        • auditctl -b 1000 -r 1500 # 1000 buffers, 1500 events/sec max
  • alternative: connect directly to the netlink socket and write your own audit listener
    • wrote a JSON format exporter
    • luajit! for filtering, transformation & alerting
  • authentications
    • who is logging in and from where?
    • Can use wtmp
      • can turn into json
    • auditd also returns login information
    • pam_loginuid will add a session id to all executed commands so you can link the real user to sudo’d commands
  • Detecting attacks
    • most often, a long time goes by before people discover they’ve been hacked, sometimes years.
    • often they get a phone call from the government: hey, you’ve got servers sending data to China.
    • the hardest attack to detect is when the attacker is using valid credentials to access it.
    • things to think about:
      • is that user running commands he shouldn’t be?
        • ex: why is anyone except chef user running gcc on a production system?
      • why is a server that only accepts inbound connections suddenly making outbound ones?
        • or why connecting to machines other than expected ones?
      • are accounts logging in from unexpected locations? (or at unexpected times)
      • are files being copied to /lib, /bin, etc.?
  • Now go and audit!
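To give a sense of what the JSON exporter mentioned in the notes might involve, here is a minimal Python sketch that parses auditd’s key=value log records into JSON. The sample line and the parsing regex are illustrative assumptions on my part, not Threat Stack’s actual code (per the talk, their pipeline used LuaJIT for filtering):

```python
import json
import re

def audit_line_to_json(line):
    """Parse one auditd key=value record into a JSON string (illustrative)."""
    # Match key=value pairs; values may be bare tokens or double-quoted strings.
    pairs = re.findall(r'(\w+)=("[^"]*"|\S+)', line)
    return json.dumps({key: value.strip('"') for key, value in pairs})

# A representative (not real) auditd SYSCALL record:
sample = 'type=SYSCALL msg=audit(1364481363.243:24287): syscall=2 success=yes exe="/usr/bin/cat"'
print(audit_line_to_json(sample))
```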

Car Alarms & Smoke Alarms & Monitoring

Dan Slimmon
Senior Platform Engineer at Exosite
  • I work in Ops, so I wear a lot of hats
  • One of those is data scientist
    • Learn data analysis and visualization
    • You’ll be right more often, and people will believe you’re right even more often than you are
  • A word problem
    • Plagiarism: 90% chance of positive
    • No Plagiarism: 20% chance of positive
    • Kids plagiarize 30% of the time
    • Given a random paper, what’s the probability that you’ll get a negative result?
      • P(positive) = 0.3*0.9 + 0.7*0.2 = 0.27 + 0.14 = 0.41
      • so you’re 59% likely to get a negative result
    • If you get a positive result, how likely is it to really be plagiarized?
      • 65.8% likely
      • this is terrible.
      • Teachers will stop trusting the test.
  • Sensitivity & Specificity
    • Sensitivity: % of actual positives that are identified as such
    • Specificity: % of actual negatives that are identified as such
    • Prevalence: percentage of people with problem
    • http://hertling.wpengine.com/wp-content/uploads/2014/05/LkxcxLt.png
    • Positive Predictive Value: the probability that a positive result means something is actually wrong.
  • Car Alarms
    • Go off all the time for reasons that aren’t someone stealing your car.
    • Most people ignore them.
  • Smoke Alarms
    • You get your ass outside, and you wait for the fire trucks to show up.
  • We need monitoring tools that are both highly sensitive and highly specific.
  • Undetected outages are embarrassing, so we tend to focus on sensitivity.
    • That’s good.
    • But be careful with thresholds.
    • Too high, and you miss real problems. Too low, and too many false alarms.
    • There’s only one line with thresholds, so only one knob to adjust.
  • Get more degrees of freedom.
    • Hysteresis is a great way to add degrees of freedom. 
      • State machines
      • Time-series analysis
  • As your uptime increases, you must get more specific.
    • Going back to the chart…our positive predictive value goes down when there are fewer actual problems.
  • A lot of nagios configs combine detecting problem with identifying what the problem is.
    • You need to separate those concerns.
    • Baron Schwartz says: Your alerting should tell you whether work is getting done.
    • Knowing that nginx is down doesn’t tell you whether your site is up. Check whether your site is up (detecting the problem) separately from finding the source of the problem (nginx isn’t running).
    • Alert on problems, not on diagnosis.
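The plagiarism word problem from the start of the talk can be verified in a few lines of Python:

```python
# Sensitivity/specificity word problem, worked numerically.
sensitivity = 0.9     # P(positive | plagiarized)
false_positive = 0.2  # P(positive | not plagiarized)
prevalence = 0.3      # P(plagiarized)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
ppv = prevalence * sensitivity / p_positive  # positive predictive value

print(round(p_positive, 2))  # 0.41, so ~59% chance of a negative result
print(round(ppv, 3))         # 0.659, matching the talk's ~65.8%
```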


Katherine Daniels

  • The site is going down.
  • But everything seemed to be fine.
    • checked web servers, databases, mongo, more.
  • What was wrong? The monitoring tool wasn’t telling us.
  • One idea: monitor more. monitor everything.
    • But if you’re looking for a needle in a haystack, the solution is not to add more hay.
    • Monitoring everything just adds more stuff to weed through, including thousands of things that might not be good (e.g. disk quota too high) but aren’t actually what’s causing the problem.
  • Monitor some of the things. The right things. But which things? If we knew, we’d already be monitoring.
  • Talk to Amazon…
    • “try switching the load balancer”
    • “try switching the web server”
  • We had written a service called healthd that was supposed to monitor api1, and api2.
  • But we didn’t have logging for healthd, so we didn’t know what was wrong.
  • We needed more detail.
  • So we added logging, so that we knew which API had a problem.
  • We also had some people who tried the monitor-everything approach.
  • They uncovered a user who seemed to be scripting the site.
  • They added metrics for where the time was being spent with the API handlers
  • The site would go down for a minute each time things would blip.
  • We set the timeouts to be lower.
  • We found some database queries to be optimized.
  • We found some old APIs that we didn’t need and we removed them.
  • The end result was that things got better. The servers were mostly happy.
  • But the real question is: How did we get to a point where our monitoring didn’t tell us what we needed? We thought we were believers in monitoring. And yet we got stuck.
  • Black Boxes (of mysterious mysteries)
    • Using services in the cloud gives you less visibility
  • Why did we have two different API services…cohabiting…and not being well monitored?
    • No one had the goal of creating a bad solution.
    • But we’re stuck. So how do we fix it?
    • We stuck nginx in front and let it route between them.
  • What things should you be thinking about?
    • Services: 
      • Are the services that should be running actually running?
      • Use sensu or nagios
    • Responsiveness:
      • Is the service responding?
    • System metrics:
      • CPU utilization, disk space, etc.
      • What’s worth an alert depends: on a web server it shouldn’t use all the memory, on a mongo db it should, and if it isn’t, that’s a problem.
    • Application metrics?
      • Are we monitoring performance, errors?
      • Do we have the thresholds set right?
      • We don’t want to look at a sea of red: “Oh, just ignore that. It’s supposed to be red.”
  • Work through what happens?
    • Had 20 servers running 50 queues each. 
    • Each one has its own sensu monitor. HipChat shows an alert for each one… a thousand outages.
  • You must load test your monitoring system: Will it behave correctly under outages and other problems?
  • “Why didn’t you tell me my service was down?” “Service, what service? You didn’t tell us you were running a service.”
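The “is the service responding?” item in the checklist above can be as crude as a TCP connect probe. Here is a minimal illustrative sketch (the is_responding helper is my own invention, not a tool from the talk):

```python
import socket

def is_responding(host, port, timeout=2.0):
    """Crude liveness probe: True if a TCP connection can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

alive = is_responding("127.0.0.1", port)
print(alive)  # True while the listener is up
server.close()
```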