NAME

polyamory – supports simultaneous relationships

SYNOPSIS

poly [-dpt]

DESCRIPTION

poly supports simultaneous host-to-host relationships.

By default, poly searches for and upgrades any preexisting monogamous relationship to polyamory. Results may be mixed. To suppress this behavior, use kill -9 to terminate existing relationships first.

Polyamory comes in many variations. Best results are obtained running identical or closely compatible variations. See also: poly-nonhierarchical, poly-hierarchical, and poly-solo. Less compatible variations include: swinging. Poly is not compatible with cheating.

It is possible but not recommended to connect two hosts, one running poly and one running monogamy, but this requires an experienced system administrator and increases the risk of system instability. Resource utilization (see relationship-discussion) will likely be higher with this combination.

It is normal to have one or more relationship-discussion background processes per relationship. In some cases, O(n^2) processes are required for n relationships. These child processes automatically consume all available CPU cycles, and are necessary for system stability.

OPTIONS

-p In promiscuous mode, poly will fork additional instances any time it sees an open port on a compatible host. This can be resource intensive and is not recommended to run indefinitely, although it is common for users to start poly in this state.

-d In debug mode, extra relationship-discussions are spawned. Poly is notoriously difficult to debug. If relationship-discussion is insufficient, if CPU utilization is too high, or system instability exceeds comfortable limits, use couples-counseling to process debug output.

-t To prevent errors during initialization and facilitate user adoption, poly supports a -t flag for trial mode. However, this is a dummy flag and has no effect.

TIPS

Poly by default operates in a time-sharing mode. For real-time relationship parallelism, it may be necessary to install the threesome, orgy, and/or kitchen-table packages.

It is recommended to run sti-scanner at regular intervals while running poly and furthermore to ensure that all relationship endpoints run sti-scanner. Alternatively, you can run poly in a private cloud, although not all benefits of poly are available in this configuration.

It is normal after installing poly to sometimes wish to revert to monogamy, especially during periods of high system instability. While this works in some cases, running poly modifies many components of the host operating system, sometimes permanently. Results with reverting to monogamy may vary.

I originally wrote the ten musings on AI risks as the first half of a talk I was planning to give at the Rise of AI conference in Berlin next month. Unfortunately, plans for speaking at the conference fell through. The speaking points have been languishing in a document since then, so I figured I’d share them as a blog post while they are still relevant. Please excuse any errors, this was just a rough draft.

10. There’s a balance between AI risk and AI benefits.

There’s too much polarization on AI risks. One camp says AI is going to be the end of all life, so we must stop all development immediately. The other camp says AI poses no risk, so let’s go full speed ahead.

Neither statement is totally true.

AI poses some risks, as all things do. The first step to being able to have a discussion about those risks is to admit that they exist. Then we can have a more nuanced and educated conversation about those risks.

However, no matter how severe those risks are, there’s no way we can stop all AI development, because:

9. There’s no way to put the AI genie back in the bottle.

There’s too much economic and military advantage to artificial intelligence to stop AI development.

On a government level, no government would give up the advantages it could get. When we have nation states hacking each other and surveilling each other with little to no regard for laws, agreements, or ethics, there’s no way we could get them to limit AI development if it could give them an advantage over others.

On a corporate level, no corporation would willingly give up the economic advantages of artificial intelligence, either as a provider of AI, with the revenue it stands to make, or as a consumer of AI, gaining efficiencies in manufacturing, business, personnel management, or communications.

Lastly, on an individual level, we cannot stop people from developing and releasing software. We couldn’t stop bitcoin, or hackers, or malware. We’re certainly not going to stop AI.

8. We must accelerate the development of safeguards, not slow the development of AI.

Because we can’t stop the development of AI, and because AI has many risks, the only option we have is to accelerate the development of safeguards, by thinking through risks and developing approaches to address them.

If a car had only an engine and wheels, we wouldn’t start driving it. We need, at a minimum, brakes and a steering wheel. Yet little investment is being made into mitigating risks with basic safeguards.

7. Manual controls are the most elementary form of safeguard.

Consider the Google Car. The interior has no steering wheel, no brake, no gas pedal. It makes sense that we would take out what isn’t needed.

But what happens if GPS goes down or if there’s a Google Car virus or anything else that renders the self-driving ability useless? Then this car is just a hunk of plastic and metal.

What if it isn’t just this car, but all cars? Not just all cars, but all trucks, including delivery trucks? Now suddenly our entire transportation infrastructure is gone, and along with it, our supply chains. Businesses stop, people can’t get necessary supplies, and they eventually starve.

It’s not just transportation we depend on. It’s payment systems, the electrical grid, and medical systems as well.

Of course, manual controls have a cost. Keeping people trained and available to operate buses and planes and medical systems has a cost. In the past, when we had new technical innovations, we didn’t keep the old tools and knowledge around indefinitely. Once we had automatic looms, we didn’t keep people working manual ones.

But one key difference is the mode of failure. If a piece of machinery fails, it’s just that one instance. If we have a catastrophic AI event, whether it’s a simple crash, unintentional behavior, or actual malevolence, it has the potential to affect all of those instances. It’s not one self-driving car breaking, it’s potentially all self-driving vehicles failing simultaneously.

6. Stupidity is not a form of risk mitigation.

I’ve heard people suggest that limiting AI to a certain level of intelligence or capability is one way to ensure safety.

But let’s imagine this scenario:

You’re walking down a dark street at night. Further down the block, you see an ominous-looking figure who you worry might mug you, or worse. Do you think to yourself, “I hope he’s stupid”?

Of course not. An intelligent person is less likely to hurt you.

So why do we think that crippling AI can lead to good outcomes?

Even stupid AI has risks: AI can crash the global stock market, cripple the electrical grid, or make poor driving or flying decisions. All other things being equal, we would expect a more intelligent AI to make better decisions than a stupid one.

That being said, we need to embody systems of ethical thinking.

5. Ethics is a two way street.

Most often, when we think about ethics and AI, we think about guiding the behavior of AI towards humans.

But what about the behavior of humans towards AI?

Consider a parent and child standing together outside by their family car. The parent is frustrated because the car won’t start, and they kick the tire of the car. The child might be surprised by this, but they likely aren’t going to be traumatized by it. There’s only so much empathy they have for an inanimate machine.

Now imagine that the parent and child are standing together by their family dog. The dog has just had an accident on the floor in the house. The parent kicks the dog. The child will be traumatized by this behavior, because they have empathy for the dog.

What happens when we blur the line? What if we had a very realistic robotic dog? We could easily imagine the child being very upset if their robotic dog was attacked, because even though we adults know it is not alive, it will be alive for the child.

I see my kids interact with Amazon Alexa, and they treat her more like a person. They laugh at her, thank her, and interact with her in ways that they don’t interact with the TV remote control, for example.

Now what if my kids learned that Alexa was the result of evolutionary programming, and that there were thousands or millions of earlier versions of Alexa that had been killed off in the process of making her? How will they feel? How will they feel if their robotic dog gets recycled at the end of its life? Or if it “dies” when you don’t pay the monthly fee?

It’s not just children who are affected. We all have relationships with inanimate objects to some degree, objects we treat with reverence. That will grow as those objects appear more intelligent.

My point is that how we treat AI will affect us emotionally, whether we want it to or not.

(Thanks and credit to Daniel H. Wilson for the car/dog example.)

4. How we treat AI is a model for how AI will treat us.

We know that if we want to teach children to be polite, we must model politeness. If we want to teach empathy, we must practice empathy. If we want to teach respect, we must be respectful.

So how we treat AI is critically important for how AI sees us. Now, clearly I’m talking about AGI, not narrow AI. But let’s say we have a history of using genetic programming techniques to breed better performing AI. The implication is that we kill off thousands of programs to obtain one good program.

If we run AI programs at our whim, and stop them or destroy them when we’re finished with them, we’re treating them in a way that would be personally threatening to a sufficiently advanced AGI.

It’s a poor ethical model for how we’d want an advanced AI to treat us.

The same goes for other assumptions that stem from treating AI as machines, such as assuming an AI would work 24 hours a day, 7 days a week on the tasks we want.

Now we can’t know how AI would want to be treated, but assuming we can treat them like machines is a bad starting point. So we either treat them like we would other humans and accord them similar rights, or better yet, we ask them how they want to be treated, and treat them accordingly.

Historically, though, there are those who aren’t very good at treating other people with the respect and rights they are due. They aren’t very likely to treat AI well. This could be dangerous, especially if we’re talking about AI with control over infrastructure or other important resources. We have to become even better at protecting the rights of people, so that we can apply those same principles to protecting the rights of AI, and codify this within our system of law.

3. Ethical behavior of AI towards people includes the larger environment in which we live and operate.

If we build artificial intelligence that optimizes for a given economic result, such as running a business to maximize profit, and we embody our current system of laws and trade agreements, then what we’ll get is a system that looks much like the publicly-traded corporation does today.

After all, the modern corporation is a form of artificial intelligence that optimizes for profit at the expense of everything else. It just happens to be implemented as a system of wetware, corporate rules, and laws that insist that it must maximize profit.

We can and must do better with machine intelligence.

We’re the ones building the AI, so we get to decide what we want. We want a system that recognizes that human welfare is more than just the money in our bank accounts, and that it includes free agency, privacy, respect, happiness, and other hard-to-define qualities.

We want an AI that recognizes that we live in a closed ecosystem, and if we degrade that ecosystem, we’re compromising our long-term ability to achieve those goals.

Optimizing for multiple values is difficult for people, but it should be easier for AGI over the long term, because it can evaluate and consider many more options to a far greater depth and at a far greater speed than people ever can.

An AI that simply obeys laws is never going to get us what we need. We can see many behaviors that are legal and yet still harmful.

The problem is not impossible to solve. You can ask any ten year old child what we should do, and they’ll almost always give you an ethically superior answer to what a CEO of a corporation will tell you.

2. Over the long run, the ethical behavior of AI toward people must include intent, not just rules.

In the next few years, we’ll see narrow AI solutions to ethical behavior problems.

When an accident is unavoidable, self-driving AI will choose whatever we have decided is the best option.

It’s better to hit another car than a pedestrian, because the pedestrian will be hurt more. That’s ethically easy, and we’ll try to answer it.

More difficult: the unattached adult, or the single mother on whom two children depend?

We can come up with endless variations of the trolley dilemma, and depending on how likely they are, we’ll embody some of them in narrow AI.

But none of that can be generalized to solve other ethical problems.

  • How much carbon can we afford to emit?
  • Is it better to save 500 local manufacturing jobs, or to reduce the cost of the product by half, when the product will make people’s lives better?
  • Better to make a part out of metal, which has certain environmental impacts, or plastic, which has different ones?

These are really difficult questions. Some of them we attempt to answer today with techniques such as lifecycle analysis. AI will do that job far better than us, conducting lifecycle analysis for many, many decisions.

1. As we get closer to artificial general intelligence, we must consider the role of emotions in decision-making.

In my books, which span 30 years in my fictional, near-future world, AI start out emotionless, but gradually develop more emotions. I thought hard about that: was I giving them emotions because I wanted to anthropomorphize the AI, and make them easier characters to write, or was there real value to emotions?

People have multiple systems for decision-making. We have some autonomic reactions, like jerking away our hand from heat, which happens without involving the brain until after the fact.

We have some purely logical decisions, such as which route to take to drive home.

But most of our decisions are made or guided by emotion. Love. Beauty. Anger. Boredom. Fear.

It would be a terrible thing if we needed to logically think through every decision: Should I kiss my partner now? Let me think through the pros and cons of that decision…. No, that’s a mostly emotional decision.

Others are a blend of emotion and logic: Should I take that new job? Is this the right person for me to marry?

I see emotions as a shortcut to decision-making, because it would take forever to reach every decision through a dispassionate, logical evaluation of options. And that’s the same reason why we have an autonomic system: to shortcut conscious decision making. I perceive this stove is hot. I perceive that my hand is touching the stove. This amount of heat sustained too long will damage my hand. Damaging my hand would be bad because it will hurt and because it will compromise my ability to do other things. Therefore, I conclude I shall withdraw my hand from the stove.

That’s a terrible approach for resolving a time-critical matter.

Emotions inform or constrain decision making. I might still think through things, but the decision I reach will differ depending on whether I’m angry and scared, or comfortable and confident.

As AI becomes more sophisticated and approaches or exceeds AGI, we will eventually see the equivalent of emotions that automate some lesser decisions for AI and guide other, more complicated decisions.

Research into AI emotions will likely be one of the signs that AGI is very, very near.

 

I gave a talk in the Netherlands last week about the future of technology. I’m gathering together a few resources here for attendees. Even if you didn’t attend, you may still find these interesting, although some of the context will be lost.

Previous Articles

I’ve written a handful of articles on these topics in the past. Below are a few that I think are relevant:

Next Ten Years

Ten to Thirty Years

 

Cory Doctorow was in Portland promoting his new book Information Doesn’t Want to Be Free (Powells, Amazon).

Here are my notes from his talk at Powell’s Bookstore:

Cory Doctorow and I at his book signing.
(Photograph by Erin Gately)

Cory Doctorow

Information Doesn’t Want to be Free
Creativity in the Twenty-First Century
  • If you don’t earn your living online now, you probably will in the future
  • It’s hard to generalize a single way to earn a living in the arts
  • Most people who set out to make money in the arts end up losing money
  • Earning a living in the creative fields is way out there… it’s a six-sigma event.
  • Imagine millions and millions of people flipping coins… a few have coins that land on their edge. Some people have this happen many times. The only thing that unites these people is luck.
  • But when artists make money, we treat them with reverence. But fundamentally they are just lucky.
  • But we put them on magazine covers, and try to figure out which business models serve artists the best.
  • But any business model will be quickly copied by thousands of new artists.
  • And business models change. They can’t stay the same.
  • The artists of yesterday want the business models of yesterday to stay in place. It’s like last year’s lottery winners wanting to win the lottery every year.
  • Three Laws
  • First Law: Anytime someone puts a lock on something and doesn’t give you the key, the lock is not there for your best interest.
    • (Funny anecdote about Cory’s literary agent, who also represented Arthur C. Clarke: “One thing I learn is that you always have to have three laws.”)
    • DRM: digital rights management.
    • DRM works by scrambling the creative work you upload, and then giving the audience/customer a player that can descramble the work, but which won’t let them do anything you don’t want: copy it, save it, play it in the United States.
    • But DRM only works if nobody can find the key, which has to be embedded in the player. So it’s trivial to find. Inevitable…
    • But it’s illegal thanks to a 1998 law.
    • As soon as Adobe, Amazon, or Apple puts DRM on something (and those are just the As), you’ve lost control over it. And your customer too.
    • Customers can only read the books in the ecosystem in which they bought them.
    • It’s like having a room in your house to only read Barnes & Noble books. But then if you bought books from Powell’s, you’d need to have another room to read them. And you couldn’t move the books from one room to the other.
    • Audible has 90% of the audio book market, and they have DRM, and they’ve locked up that market.
  • Second law: Fame won’t make you rich, but you can’t sell your art without it.
    • Tim O’Reilly said “The problem for most artists isn’t piracy, it’s obscurity.”
    • We’re left with: Five publishers, four labels, and five studios.
    • The contracts that exist today with the above are all screwing the artist, and it reflects that it’s a buyer’s market, because these companies own the market.
    • Lots of abusive terms, and lots of them non-negotiable.
    • It’s a competitor of last resort. The worst deal the traditional publishers can offer has to be competitive with what they think they can make indie.
    • The indie sector is at the end of a 15 year war with the traditional pubs.
    • Viacom wanted Google to have an army of lawyers check the 96 hours of video uploaded to YouTube every minute. There aren’t enough lawyers in the world. You’d get to the heat death of the universe before you could review all the video.
    • There’s more efforts coming to attack the indies. We’re just seeing the beginning of it.
    • (lots of trade agreements cited.)
    • What happens with Viacom and cable is that the army of lawyers can’t be hired. So the content producers have to provide insurance that their content doesn’t infringe. And only rich people can afford that.
    • And so there are lots of other content rules coming like this.
    • What the Internet does, its primary purpose, is to make copies, as quickly and effortlessly as possible, and with high fidelity. Trying to make the internet not copy stuff is like trying to make water less wet.
  • Third law: Information Doesn’t Want to be Free.
    • I invited Information out for the weekend at the Hamptons.
    • Information doesn’t want anything.
    • This isn’t about information.
    • This is about people.
    • People want to be free.
    • When we live in an information age, that means they want their information to be free.
    • When we put DRM in software and content, then we are taking freedom away from people.
    • Programmers are fallible. They make mistakes. And those mistakes can compromise your privacy. Your phone is a supercomputer in your pocket with a microphone and camera that you take into the bathroom and bedroom and that knows who your friends are, and what you talk to them about, and what your lawyer told you.
    • But the DRM laws make it illegal to talk about those mistakes. Which means that your phone can be attacked.
    • University of Michigan video showing a bluetooth hack with a pacemaker in which they cook bacon.
    • What happens when technology moves inside your body? Your future hearing aid isn’t something in your ear, it’s something in your mind. What will be the model for that? Is it a device you control, or a device you don’t control? “You can’t do that, Dave.”
    • The internet carries tons of daily business, not just the “important stuff”, but also the banal stuff. But the banal stuff is important too. When I say “how did you sleep?” to my wife, I know how she slept. My saying that is my way of saying “How are you? I care about you. I’m here for you.” It’s the soil from which everything grows.
    • In New Zealand… a 3-strikes rule says that if you are guilty of copyright infringement 3 times, they take away internet access from your family.
    • A study in the UK found that people who have internet access have:
    • better health
    • better jobs
    • more civic engagement
    • more political engagement
    • better student grades
    • They passed the Digital Economy Act: a 3-strikes rule.
    • Which means that they aren’t just taking away internet access. They are taking away health, jobs, political engagement, and student grades.
Questions
  • How do you deal with self-criticism as you write?
    • Don’t revise until I’m done.
    • Don’t look too closely at what I’m doing.
    • Looked back at quality of what I wrote when I thought I was doing well, and when I thought I wasn’t doing well, and found no correlation. Had to do with how I felt, not what I produced.
  • Is there a parallel between the history of human cognition and the history of computer development?
    • (missed lots of good stuff here.)
    • We have no shortage of minds we are creating that think like people. We call them babies.
    • We need things that think differently than us.
    • To think computers will think like us is to think that airplanes will fly like birds.

There were many more great questions and answers, but that was the hardest part to capture.

I love trying to extrapolate trends and seeing what I can learn from the process. This past weekend I spent some time thinking about the size of computers.

From 1986 (Apple //e) to 2012 (Motorola Droid 4), my “computer” shrank 290-fold, or about 19% per year. I know, you can argue about my choices of what constitutes a computer, and whether I should be including displays, batteries, and so forth. But the purpose isn’t to be exact, but to establish a general trend. I think we can agree that, for some definition of computer, they’re shrinking steadily over time. (If you pick different endpoints, using an IBM PC, a Macbook Air, or a Mac Mini, for example, you’ll still get similar sorts of numbers.)

So where does that leave us going forward? To very small places:

Year    Cubic volume of computer (cubic inches)
2020    1.07
2025    0.36
2030    0.12
2035    0.04
2040    0.01
2045    0.0046
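
For anyone who wants to check or extend the projection, here’s a quick sketch of the calculation in Python. The endpoint volumes (an Apple //e at roughly 1,782 cubic inches in 1986 and a Motorola Droid at about 6.125 cubic inches in 2012) are the same figures used in the prediction walkthrough later in this collection; treat them as illustrative rather than precise.

    # Sketch: project computer volume forward from two data points.
    size_1986 = 1782.0   # Apple //e with two disk drives, cubic inches
    size_2012 = 6.125    # Motorola Droid, cubic inches

    years = 2012 - 1986
    shrink_per_year = (size_1986 / size_2012) ** (1 / years)   # about 1.24x smaller each year

    for year in (2020, 2025, 2030, 2035, 2040, 2045):
        volume = size_2012 / shrink_per_year ** (year - 2012)
        print(f"{year}: {volume:.4f} cubic inches")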

In a spreadsheet right next to the sheet entitled “Attacking nanotech with nuclear warheads,” I have another sheet called “Data center size” where I’m trying to calculate how big a data center will be in 2045.

A stick of gum is “2-7/8 inches in length, 7/8 inch in width, and 3/32 inch”, or about 0.23 cubic inches, and we know this thanks to the military specification on chewing gum. According to the chart above, computers will get smaller than that around 2030, or certainly by 2035. They’ll also be about 2,000 times more powerful than one of today’s computers.

Imagine today’s blade computers used in data centers, except shrunk to the size of sticks of gum. If they’re spaced 1″ apart horizontally, and 2″ apart vertically (like a DIMM memory module plugged in on its end), a backplane could hold about 72 of these for every square foot. A “rack” would hold something like 2,800 of these computers. That’s assuming we would even want them to be human-replaceable. If they’re all compacted together, it could be even denser.

It turns out my living room could hold something like 100,000 of these computers, each 2,000 times more powerful than one of today’s computers, for the equivalent of about two million 2014 computers. That’s roughly all of Google’s computing power. In my living room.

I emailed Amber Case and Aaron Parecki about this, and Aaron said “What happens when everyone has a data center in their pockets?”

Good question.

You move all applications to your pocket, because latency is the one thing that doesn’t benefit from technology gains. It’s largely limited by speed of light issues.

If I’ve got a data center in my pocket, I put all the data and applications I might possibly want there.

Want Wikipedia? (14GB) — copy it locally.

Want to watch a movie? It’s reasonable to have the top 500,000 movies and TV shows of all time (2.5 petabytes) in your pocket by 2035, when you’ll have about 292 petabytes of solid-state storage. (I know 292 petabytes seems incredible, but the theoretical maximum data density is 10^66 bits per cubic inch.)

Want to run a web application? It’s instantiated on virtual machines in your pocket. Long before 2035, even if a web developer needs redis, mysql, mongodb, and rails, it’s just a provisioning script away… You could have a cluster of virtual machines, an entire cloud infrastructure, running in your pocket.

Latency goes to zero, except when you need to do a transactional update of some kind. Most data updates could be done through lazy data coherency.

It doesn’t work for real-time communication with other people. Except possibly in the very long term, when you might run a copy of my personality upload locally, and I’d synchronize memories later.

This also has interesting implications for global networking. It becomes more important to have a high bandwidth net than a low latency net, because the default strategy becomes one of pre-fetching anything that might be needed.

Things will be very different in twenty years. All those massive data centers we’re building out now? They’ll be totally obsolete in twenty years, replaced by closet-sized data centers. How we deploy code will change. Entire new strategies will develop. Today we have DOS-box and NES emulators for legacy software, and in twenty years we might have AWS-emulators that can simulate the entire AWS cloud in a box.

Lately it seems like the 99% / 1% debate on income inequality has faded from active discussion. Snowden and gun violence and other topics have temporarily taken over our consciousness. But the issue of the accumulation of ever greater wealth by the 1% hasn’t disappeared, as I was recently reminded by the number of friends who are unemployed or underemployed.

For the last week I’ve been thinking about locus of control. Another day I’ll come back to this topic as it applies to publishing, which was where my thinking about it originated. But these ruminations made me consider how it might apply to wealth inequality.

So much of the debate about the 99% seemed to assume our locus of control is external: that is, we the 99% are the victims of the 1% who control the economy. The wealthy own the big corporations, the banks, and the political system, and frankly it seems like they hold all the chips.

However, the irony in this is that we continue to feed the 1% our money, every day. We all make decisions and take actions that put more and more of our hard earned money directly into the pockets of the 1%. This is within our control. We can change our behaviors so that more of our money remains with the 99%.


Can this eliminate wealth inequality? I don’t think so. But it could stem the tide, slow the flow of ever increasing amounts of income, wealth, and other resources into the pockets of the 1%.

The trick is to spot where and how money flows to the 1%.

Here’s how:

Buy goods made by people, not corporations

Every time you buy a product, you are supporting the people who made that product. If you buy a children’s toy, your dollars go to the workers who built it with their manual labor, the manager of those workers, the boss of those managers, and so on.

For the most part, this is good: your money is providing a living for other people. This is what we want.

If the product is manufactured by a corporation, then some percentage of the company’s revenue goes to those employees, but some goes to the executives and owners or investors in a company. (In the case of a publicly held company, either directly through dividends or indirectly through boosting the share price of the company.)

For large companies, these executives, owners, and investors are nearly always going to be the 1%. The average CEO salary in 2012 was $9.7 million. The “top one percent of households have 35% of all privately held stock, 64.4% of financial securities, and 62.4% of business equity”.

When a manufactured good is made by a major corporation, I don’t know what percent of the profit from that product goes to the employees (the 99%) versus the executives and investors (the 1%), but let’s take a guess, and say that the very wealthy get 20% and everyone else gets 80%.

Buycott helps identify who manufactures a given product

By comparison, think about buying a small scale product. This could be a hand-manufactured good on Etsy or something made by a small, privately owned company. In this case, your money still goes to the workers who made the product and their managers and even the company owner. But nearly always, in this case, we’re still talking about the 99%. (Most small business owners make a modest income.) In this case, we’re still supporting our fellow citizens, but we’re giving them 100% of our purchase and avoiding the 20% that would otherwise go to the 1%.

(Again, this 80/20 is just a guess. If anyone has more accurate numbers, please post them in the comments with sources, and I’ll update the article.)

There’s a great app called Buycott that helps identify where products are made by scanning their barcodes. It also identifies whether the company is involved in any campaigns (e.g. human rights abuses, independent farming, etc.) and whether the product helps or harms that campaign.

OK, enough about products. Let’s move onto retailers.

Buy from locally owned stores, not national chains

Where you buy something is just as important as what you buy. Profit margins vary by industry, but anywhere from 20% to 60% of the money you spend flows to the retailer itself (the store), rather than the manufacturer of the product. That is, if you spend $50 on kids’ clothing, the store might make $25 while $25 goes to the manufacturer.

The employees of a retail store are all surely part of the 99%. These are the people we want to support, no matter what.

In the case of any national chain, there will be high-level executives, owners, and investors that are part of the 1%. Again, these 1% will benefit disproportionately from people buying from the stores. Let’s say again that they take 20% of the money you spend, leaving 80% for the employees.

Supportland rewards shoppers at local businesses

Compare this to an individually owned store, or even a small, local chain of stores. In this case, the owner may be the person behind the cash register. They are certainly not part of the 1%. Even for a small chain, while the owners might make a good income, they’re still far more likely to be like you and me. And most likely, any profits earned by the company are being spent right in your local area, flowing back to the community.

In Portland, we have a wonderful organization called Supportland that offers a rewards card for locally owned businesses. They help verify which companies truly are locally owned.

Buy with cash, not bank cards

We’ve covered what you buy and where you buy it. Now let’s consider how you buy goods. Most of us use credit or debit cards. Yet both credit and debit cards come with a hidden tax to the retailer: anywhere from 2% to 6% of your purchase price will flow to some combination of banks and card payment processors.

I first thought about this impact at a small, local food grocery. They put up a sign asking customers to pay in cash when possible, explaining the costs of credit and debit card processing. They pointed out that the fees on bank card purchases came to nearly 5%. Groceries operate on fairly slim margins. Let’s say that for each dollar you spend at a grocery, 80 cents goes to pay for the food (e.g. to the manufacturer, farmer, distributor, etc.), and the grocer themselves earns 20 cents.

If card processing fees take 5% of every dollar you spend, it reduces the amount earned by the grocer significantly: they go from earning 20 cents of every dollar to 15 cents, a 25% reduction.
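
Here’s that arithmetic as a tiny Python sketch, using the same 20-cent margin and roughly 5% fee from the example above:

    # Back-of-the-envelope: a ~5% card fee against a grocer's ~20-cent margin per dollar.
    grocer_margin = 0.20   # portion of each dollar the grocer keeps
    card_fee = 0.05        # portion of each dollar taken by card processing

    margin_after_fee = grocer_margin - card_fee    # 0.15, i.e. 15 cents per dollar
    reduction = card_fee / grocer_margin           # 0.25, a 25% cut in earnings

    print(f"Margin after fees: {margin_after_fee:.2f} per dollar ({reduction:.0%} reduction)")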

The grocer pointed out that the amount they’d spend on credit card processing fees in the previous year would have been enough to provide every employee with full healthcare.

Bank locally and/or with credit unions

Banking funnels a tremendous amount of money to the wealthy. Like any other industry, the finance industry has run-of-the-mill employees that are part of the 99%, but the big publicly held companies have executives and investors that are all part of the 1%.

It’s the big financial institutions that have been behind so much of the excess of the last decade. Not only can we deny our profits to them, but we can also deny our capital to them.

Credit unions are non-profit banks. They don’t have publicly held stock and I’m guessing their executives are more modestly paid. I don’t know much about local banking, but I’m assuming there are still smaller, regional banks.

If you have a home mortgage or other loan, most of the money you pay each month is interest on the loan. That interest is profit for the bank, which, if it is a large financial institution, will flow into the pockets of the rich. If that same loan were held by a credit union or a local bank, the profits would remain within the bank or within the community.

If you have deposits with a bank, that is capital that can be used by the bank to make loans and investments. Do you want those loans and investments to be made in your local community? Then bank locally.

Spend less money & stay out of debt

Perhaps the simplest method to deny money to the 1% is to spend less money. Each dollar not spent is a dollar in your pocket, not theirs.

How to Manage Your Money When You Don’t Have Any

This isn’t easy. We’re accustomed to the life we have, and habits are hard to change. One great book I read recently that addresses changing habits to align with our values is How to Manage Your Money When You Don’t Have Any by Erik Wecks.

Personally, I like the philosophy of borrowing and lending with friends and neighbors. Do we all need our own lawn mowers, or can several neighbors share one? Can we give a friend a lift to the airport? Or invite folks over for a potluck instead of eating out? Give our kids’ old toys to someone expecting a new baby? There are so many ways we can share that save us money even as they help us build relationships with each other.

Avoid debt as well. It’s not just good for you, but consider that every dollar you have in debt is a tax on your entire financial life, and that tax goes directly to big banks.

What’s the net impact of making these changes?

I’ve made some rough estimates. I’ve assumed, for example, that in big corporations, about 80% of their revenue flows to employees, and 20% enriches the already wealthy. I’d guess that financial institutions might have a higher ratio of revenue that flows to the wealthy compared to other businesses. Usually an inaccurate estimate is better than none.

A quick spreadsheet estimate suggests that shopping at national chains, buying mass-marketed goods with your credit card, causes $215 out of every $1,000 (21.5%) to flow into the pockets of the wealthiest people.

But whether that actual number is 5%, 10%, or 30%, there’s something interesting that happens when we consider how money moves through the economy.

Recall that the money we spend does benefit the people who sell, make, and support goods. When I spend $1,000, the percent not taken by the wealthy goes into the pockets of other people in the 99%. Those people also spend that money, and some portion of what they spend goes to the 99%, and some goes to the 1%.

The economy depends on money recirculating. In fact, the very idea behind an economic stimulus (part of overall monetary policy) such as lowering taxes or offering a tax rebate is not just that you’ll spend money, but that the money you spend is earned by other people who then turn around and spend it again, and so forth.

Where we run into a problem is when the wealthy skim off a percentage of all transactions, as we see that they do through card processing fees and the ownership of retailers and manufacturers.

Consider the following chart. Each row assumes a different percentage that the 99% retain each time money is spent, ranging from 75% to 99.9%. In the first row, we see that the first time money is spent, $750 flows to other people in the 99%, while $250 flows to the wealthy. By the time that money has recirculated five times, only $237 is left for the 99%, while the wealthy now have most of it.

Compare this with the bottom row, in which we assume that through making very educated spending decisions, we’re almost always able to support small, local businesses and individuals. In that case, by the time the money has recirculated five times, an amazing $995 is still within the 99%, and only $5 has gone to the wealthiest 1%.

Amount left for the 99% (starting amount: $1,000)

Percent spent with the 99%   Round 1   Round 2   Round 3   Round 4   Round 5
75%                          $750      $563      $422      $316      $237
80%                          $800      $640      $512      $410      $328
85%                          $850      $723      $614      $522      $444
90%                          $900      $810      $729      $656      $590
95%                          $950      $903      $857      $815      $774
99%                          $990      $980      $970      $961      $951
99.9%                        $999      $998      $997      $996      $995
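
If you want to play with the assumptions, here’s a small Python sketch of the same compounding; it produces the table above (up to rounding):

    # Recirculation sketch: start with $1,000 and let it change hands five times.
    # At each round, `share` stays with the 99% and the remainder goes to the 1%.
    start = 1000.0

    for share in (0.75, 0.80, 0.85, 0.90, 0.95, 0.99, 0.999):
        remaining = start
        rounds = []
        for _ in range(5):
            remaining *= share   # the fraction the 99% keep each time money is spent
            rounds.append(remaining)
        print(f"{share:.1%}: " + ", ".join(f"${r:,.0f}" for r in rounds))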

Conclusions

Through the choices we make when we decide where to keep our money, how to borrow money, where we spend, and what we purchase, we’re significantly impacting whether the 99% keep the money we spend or whether it will end up with the wealthiest 1%.
By spending it with locally owned or small businesses, on locally or handmade stuff, and by avoiding putting money into the coffers of the largest financial institutions, we can ourselves stimulate the economy of the 99%. Lest you worry that Walmart will go out of business and thousands will lose jobs, recall that jobs will go where the money is. If we spend it with small and locally owned businesses, those businesses will be able to hire more employees, pay those employees more, and offer them better benefits. (Just as they did only a few generations ago.)
Thanks to apps like Buycott, organizations like Supportland, and marketplaces like Etsy, it’s easier than ever to find local businesses and local products. (Of course, don’t forget about things like your local farmer’s market, or just the mom and pop store down the street.)
I’m not an economist or a financial advisor. I’m just a regular person thinking about this stuff, so I’m sure there may be holes in my logic or better numbers I could be using. I’d love comments to discuss this further, so please post away. (Respectfully, of course.)

Everyone would like a sure-fire way to predict the future. Maybe you’re thinking about startups to invest in, or making decisions about where to place resources in your company, or deciding on a future career, or where to live. Maybe you just care about what things will be like in 10, 20, or 30 years.

There are many techniques to think logically about the future, to inspire idea creation, and to predict when future inventions will occur.

 

I’d like to share one technique that I’ve used successfully. It’s proven accurate on many occasions. And it’s the same technique that I’ve used as a writer to create realistic technothrillers set in the near future. I’m going to start by going back to 1994.

 

Predicting Streaming Video and the Birth of the Spreadsheet
There seem to be two schools of thought on how to predict the future of information technology: looking at software or looking at hardware. I believe that looking at hardware curves is always simpler and more accurate.
This is the story of a spreadsheet I’ve been keeping for almost twenty years.
In the mid-1990s, a good friend of mine, Gene Kim (founder of Tripwire and author of When IT Fails: A Business Novel) and I were in graduate school together in the Computer Science program at the University of Arizona. A big technical challenge we studied was piping streaming video over networks. It was difficult because we had limited bandwidth to send the bits through, and limited processing power to compress and decompress the video. We needed improvements in video compression and in TCP/IP – the underlying protocol that essentially runs the Internet.
The funny thing was that no matter how many incremental improvements researchers made (there were dozens of people working on different angles of this), streaming video always seemed to be just around the corner. I heard “Next year will be the year for video” or similar refrains many times over the course of several years. Yet it never happened.
Around this time I started a spreadsheet, seeding it with all of the computers I’d owned over the years. I included their processing power, the size of their hard drives, the amount of RAM they had, and their modem speed. I calculated the average annual increase of each of these attributes, and then plotted these forward in time.
I looked at the future predictions for “modem speed” (as I called it back then; today we’d call it internet connection speed or bandwidth). By this time, I was tired of hearing that streaming video was just around the corner, and I decided to forget about trying to predict advancements in software compression, and just look at the hardware trend. The hardware trend showed that internet connection speeds were increasing, and by 2005, the speed of the connection would be sufficient that we could reasonably stream video in real time without resorting to heroic amounts of video compression or miracles in internet protocols. Gene Kim laughed at my prediction.
Nine years later, in February 2005, YouTube arrived. Streaming video had finally made it.
The same spreadsheet also predicted we’d see a music downloading service in 1999 or 2000. Napster arrived in June, 1999.
The data has held surprisingly accurate over the long term. Using just two data points, the modem I had in 1986 and the modem I had in 1998, the spreadsheet predicts that I’d have a 25 megabit/second connection in 2012. As I currently have a 30 megabit/second connection, this is a very accurate 15 year prediction.
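As a sketch of that two-point calculation in Python: the modem speeds below (300 bits/sec in 1986 and 56 kbits/sec in 1998) are illustrative assumptions rather than the actual figures from my spreadsheet, but they show how quickly two data points produce a long-range estimate.

    # Two-point prediction sketch with illustrative (assumed) modem speeds.
    speed_1986 = 300.0       # bits/sec, assumed
    speed_1998 = 56_000.0    # bits/sec, assumed

    rate = (speed_1998 / speed_1986) ** (1 / (1998 - 1986))   # annual growth factor
    speed_2012 = speed_1998 * rate ** (2012 - 1998)

    print(f"Predicted 2012 bandwidth: {speed_2012 / 1e6:.0f} Mbit/sec")   # ~25 Mbit/sec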
Why It Works Part One: Linear vs. Non-Linear
Without really understanding the concept, it turns out that what I was doing was using linear trends (advancements that proceed smoothly over time), to predict the timing of non-linear events (technology disruptions) by calculating when the underlying hardware would enable a breakthrough. This is what I mean by “forget about trying to predict advancements in software and just look at the hardware trend”.
It’s still necessary to imagine the future development (although the trends can help inspire ideas). What this technique does is let you map an idea to the underlying requirements to figure out when it will happen.
For example, it answers questions like these:
When will the last magnetic platter hard drive be manufactured?
2016. I plotted the growth in capacity of magnetic platter hard drives and flash drives back in 2006 or so, and saw that flash would overtake magnetic media in 2016.
When will a general purpose computer be small enough to be implanted inside your brain?
2030. Based on the continual shrinking of computers, by 2030 an entire computer will be the size of a pencil eraser, which would be easy to implant.
When will a general purpose computer be able to simulate human level intelligence?
Between 2024 and 2050, depending on which estimate of the complexity of human intelligence is selected, and the number of computers used to simulate it.
Wait a second: human-level artificial intelligence by 2024? Gene Kim would laugh at this. Isn’t AI a really challenging field? Haven’t people been predicting artificial intelligence would be just around the corner for forty years?
Why It Works Part Two: Crowdsourcing
At my panel on the future of artificial intelligence at SXSW, one of my co-panelists objected to the notion that exponential growth in computer power was, by itself, all that was necessary to develop human level intelligence in computers. There are very difficult problems to solve in artificial intelligence, he said, and each of those problems requires effort by very talented researchers.
I don’t disagree, but the world is a big place full of talented people. Open source and crowdsourcing principles are well understood: When you get enough talented people working on a problem, especially in an open way, progress comes quickly.
I wrote an article for the IEEE Spectrum called The Future of Robotics and Artificial Intelligence is Open. In it, I examine how the hobbyist community is now building inexpensive unmanned aerial vehicle auto-pilot hardware and software. What once cost $20,000 and was produced by skilled researchers in a lab, now costs $500 and is produced by hobbyists working part-time.
Once the hardware is capable enough, the invention is enabled. Before this point, it can’t be done.  You can’t have a motor vehicle without a motor, for example.
As the capable hardware becomes widely available, the invention becomes inevitable, because it enters the realm of crowdsourcing: now hundreds or thousands of people can contribute to it. When enough people had enough bandwidth for sharing music, it was inevitable that someone, somewhere was going to invent online music sharing. Napster just happened to have been first.
IBM’s Watson, which won Jeopardy, was built using three million dollars in hardware and had 2,880 processing cores. When that same amount of computer power is available in our personal computers (about 2025), we won’t just have a team of researchers at IBM playing with advanced AI. We’ll have hundreds of thousands of AI enthusiasts around the world contributing to an open source equivalent to Watson. Then AI will really take off.
(If you doubt that many people are interested, recall that more than 100,000 people registered for Stanford’s free course on AI and a similar number registered for the machine learning / Google self-driving car class.)
Of course, this technique doesn’t work for every class of innovation. Wikipedia was a tremendous invention in the process of knowledge curation, and it was dependent, in turn, on the invention of wikis. But it’s hard to say, even with hindsight, that we could have predicted Wikipedia, let alone forecast when it would occur.
(If one had the idea of a crowd-curated online knowledge system, you could apply the litmus test of internet connection rates to assess when there would be a viable number of contributors and users. A documentation system such as a wiki is useless without any way to access it. But I digress…)
Objection, Your Honor
A common objection is that linear trends won’t continue to increase exponentially because we’ll run into a fundamental limitation: e.g. for computer processing speeds, we’ll run into the manufacturing limits for silicon, or the heat dissipation limit, or the signal propagation limit, etc.
I remember first reading statements like the above in the mid-1980s about the Intel 80386 processor. I think the statement was that they were using an 800 nm process for manufacturing the chips, but they were about to run into a fundamental limit and wouldn’t be able to go much smaller. (Smaller equals faster in processor technology.)
Semiconductor manufacturing processes (Source: Wikipedia)
But manufacturing technology has proceeded to get smaller and smaller.  Limits are overcome, worked around, or solved by switching technology. For a long time, increases in processing power were due, in large part, to increases in clock speed. As that approach started to run into limits, we’ve added parallelism to achieve speed increases, using more processing cores and more execution threads per core. In the future, we may have graphene processors or quantum processors, but whatever the underlying technology is, it’s likely to continue to increase in speed at roughly the same rate.
Why Predicting The Future Is Useful: Predicting and Checking
There are two ways I like to use this technique. The first is as a seed for brainstorming. By projecting out linear trends and having a solid understanding of where technology is going, it frees up creativity to generate ideas about what could happen with that technology.
It never occurred to me, for example, to think seriously about neural implant technology until I was looking at the physical size trend chart, and realized that neural implants would be feasible in the near future. And if they are technically feasible, then they are essentially inevitable.
What OS will they run? From what app store will I get my neural apps? Who will sell the advertising space in our brains? What else can we do with uber-powerful computers about the size of a penny?
The second way I like to use this technique is to check other people’s assertions. There’s a company called Lifenaut that is archiving data about people to provide a life-after-death personality simulation. It’s a wonderfully compelling idea, but it’s a little like video streaming in 1994: the hardware simply isn’t there yet. If the earliest we’re likely to see human-level AI is 2024, and even that would be on a cluster of 1,000+ computers, then it seems impossible that Lifenaut will be able to provide realistic personality simulation anytime before that.* On the other hand, if they have the commitment needed to keep working on this project for fifteen years, they may be excellently positioned when the necessary horsepower is available.
At a recent Science Fiction Science Fact panel, other panelists and most of the audience believed that strong AI was fifty years off, and brain augmentation technology was a hundred years away. That’s so distant in time that the ideas then become things we don’t need to think about. That seems a bit dangerous.
* The counter-argument frequently offered is “we’ll implement it in software more efficiently than nature implements it in a brain.” Sorry, but I’ll bet on millions of years of evolution.

How To Do It

This article is How To Predict The Future, so now we’ve reached the how-to part. I’m going to show some spreadsheet calculations and formulas, but I promise they are fairly simple. There are three parts to the process: calculate the annual increase in a technology trend, forecast the linear trend out, and then map future disruptions to the trend.
Step 1: Calculate the annual increase
It turns out that you can do this with just two data points, and it’s pretty reliable. Here’s an example using two personal computers, one from 1996 and one from 2011. You can see that cell B7 shows that computer processing power, in MIPS (millions of instructions per second), grew at a rate of 1.47x each year, over those 15 years.
     A                     B        C
1                          MIPS     Year
2    Intel Pentium Pro     541      1996
3    Intel Core i7 3960X   177730   2011
4
5    Gap in years          15       =C3-C2
6    Total Growth          328.52   =B3/B2
7    Rate of growth        1.47     =B6^(1/B5)
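The same step-1 calculation can be scripted instead of done in a spreadsheet. Here’s a sketch in Python using the MIPS figures from the table above:

    # Step 1: annual rate of growth from two data points.
    mips_1996 = 541        # Intel Pentium Pro
    mips_2011 = 177_730    # Intel Core i7 3960X

    gap_years = 2011 - 1996                   # 15
    total_growth = mips_2011 / mips_1996      # ~328.5x
    rate = total_growth ** (1 / gap_years)    # ~1.47x per year

    print(f"Total growth: {total_growth:.2f}x, rate of growth: {rate:.2f}x per year")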
I like to use data related to technology I have, rather than technology that’s limited to researchers in labs somewhere. Sure, there are supercomputers that are vastly more powerful than a personal computer, but I don’t have those, and more importantly, they aren’t open to crowdsourcing techniques.
I also like to calculate these figures myself, even though you can research similar data on the web. That’s because the same basic principle can be applied to many different characteristics.
Step 2: Forecast the linear trend
The second step is to take the technology trend and predict it out over time. In this case we take the annual increase in advancement (B$7 – previous screenshot), raised to an exponent of the number of elapsed years, and multiply it by the base level (B$11). The formula displayed in cell C12 is the key one.
      A       B                 C
10    Year    Expected MIPS     Formula
11    2011    177,730           =B3
12    2012    261,536           =B$11*(B$7^(A12-A$11))
13    2013    384,860
14    2014    566,335
15    2015    833,382
16    2020    5,750,410
17    2025    39,678,324
18    2030    273,783,840
19    2035    1,889,131,989
20    2040    13,035,172,840
21    2050    620,620,015,637
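And the step-2 forecast as a script, carrying forward the roughly 1.47x annual rate from step 1 (a sketch, so the projected values differ slightly from the table depending on rounding):

    # Step 2: forecast the trend forward from the 2011 baseline.
    base_year, base_mips = 2011, 177_730
    rate = (177_730 / 541) ** (1 / 15)   # annual rate of growth, from step 1

    for year in (2012, 2013, 2014, 2015, 2020, 2025, 2030, 2035, 2040, 2050):
        expected_mips = base_mips * rate ** (year - base_year)
        print(f"{year}: {expected_mips:,.0f} MIPS")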
I also like to use a sanity check to ensure that what appears to be a trend really is one. The trick is to pick two data points in the past: one is as far back as you have good data for, the other is halfway to the current point in time. Then run the forecast to see if the prediction for the current time is pretty close. In the bandwidth example, picking a point in 1986 and a point in 1998 exactly predicts the bandwidth I have in 2012. That’s the ideal case.
Step 3: Mapping non-linear events to linear trend
The final step is to map disruptions to enabling technology. In the case of the streaming video example, I knew that a minimal quality video signal was composed of a resolution of 320 pixels wide by 200 pixels high by 16 frames per second with a minimum of 1 byte per pixel. I assumed an achievable amount for video compression: a compressed video signal would be 20% of the uncompressed size (a 5x reduction). The underlying requirement based on those assumptions was an available bandwidth of about 1.6mb/sec, which we would hit in 2005.
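Worked through as a quick Python sketch, using the resolution, frame rate, and compression assumptions stated above:

    # Step 3 example: minimum bandwidth for minimal-quality streaming video.
    width, height = 320, 200     # pixels
    frames_per_sec = 16
    bytes_per_pixel = 1
    compressed_fraction = 0.20   # compressed video is ~20% of raw (a 5x reduction)

    raw_bits_per_sec = width * height * frames_per_sec * bytes_per_pixel * 8
    needed_bits_per_sec = raw_bits_per_sec * compressed_fraction

    print(f"~{needed_bits_per_sec / 1e6:.1f} Mbit/sec required")   # ~1.6 Mbit/sec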
In the case of implantable computers, I assume that a computer of the size of a pencil eraser (1/4” cube) could easily be inserted into a human’s skull. By looking at physical size of computers over time, we’ll hit this by 2030:
Year   Size (cubic inches)   Notes
1986   1782                  Apple //e with two disk drives
2012   6.125                 Motorola Droid 3

Elapsed years                26
Size delta                   290.94
Rate of shrinkage per year   1.24

Future Size
2012   6.13
2013   4.92
2014   3.96
2015   3.18
2020   1.07
2025   0.36
2030   0.12                  Less than 1/4 inch on a side. Could easily fit in your skull.
2035   0.04
2040   0.01
This is a tricky prediction: traditional desktop computers have tended to be big square boxes constrained by the standardized form factor of components such as hard drives, optical drives, and power supplies. I chose to use computers I owned that were designed for compactness for their time. Also, I chose a 1996 Toshiba Portege 300CT for a sanity check: if I project the trend between the Apple //e and the Portege forward, my Droid should be about 1 cubic inch, not 6. So this is not an ideal prediction to make, but it still clues us in about the general direction and timing.
The predictions for human-level AI are more straightforward, but more difficult to display, because there’s a range of assumptions for how difficult it will be to simulate human intelligence, and a range of projections depending on how many computers you can bring to bear on the problem. Combining three factors (time, brain complexity, available computers) doesn’t make a nice 2-axis graph, but I have made the full human-level AI spreadsheet available to explore.
I’ll leave you with a reminder of a few important caveats:
  1. Not everything in life is subject to exponential improvements.
  2. Some trends, even those that appear to be consistent over time, will run into limits. For example, it’s clear that the rate of settling new land in the 1800s (a trend that was increasing over time) couldn’t continue indefinitely since land is finite. But it’s necessary to distinguish genuine hard limits (e.g. amount of land left to be settled) from the appearance of limits (e.g. manufacturing limits for computer processors).
  3. Some trends run into negative feedback loops. In the late 1890s, when all forms of personal and cargo transport depended on horses, there was a horse manure crisis. (Read Gotham: The History of New York City to 1898.) Had you plotted the trend over time, it would have shown cities like New York soon buried under horse manure. Of course, that’s a negative feedback loop: if the horse manure had kept growing, at a certain point people would have left the city. As it turns out, the automobile solved the problem and enabled cities to keep growing.

 

So please keep in mind that this is a technique that works for a subset of technology, and it’s always necessary to apply common sense. I’ve used it only for information technology predictions, but I’d be interested in hearing about other applications.

This is a repost of an article I originally wrote for Feld.com. If you enjoyed this post, please check out my novels Avogadro Corp: The Singularity Is Closer Than It Appears and A.I. Apocalypse, near-term science-fiction novels about realistic ways strong AI might emerge. They’ve been called “frighteningly plausible”, “tremendous”, and “thought-provoking”.

Technological unemployment is the notion that even as innovation creates new opportunities, it destroys old jobs. Automobiles, for example, created entirely new industries (and convenience), but eliminated jobs like train engineers and buggy builders. As the pace of technology change grows faster, the impact of large scale job elimination increases, and some fear we’ve passed the point of peak jobs. This post explores the past and future of technological unemployment.

When I was growing up in Brooklyn, my friend Vito’s father spent as much time tinkering around his home as he did working. He was just around more than other dads. I found it quite puzzling until Vito explained that his father was a stevedore, or longshoreman, a worker who loaded and unloaded shipping vessels.

New York Shipyard

Shipping containers (specifically intermodal containers) started to be widely used in the late 1960s and early 1970s. They took far less time to load and unload than loose cargo. Longshoremen, represented by a strong union, opposed the intermodal containers until the union came to an agreement that longshoremen would be compensated for the loss of employment due to the container innovation. So longshoremen worked when ships came in, and received payment (whether partial or full, I’m not sure) for the time they didn’t work because of how quickly the containers could be unloaded.

As a result, Vito’s father was paid a full salary even though his job didn’t require him full time. With the extra time, he was able to be with his kids and work around the home.

Other industries have had innovations that led to unemployment, and in most cases those professions were not so protected. Blacksmiths are few and far between, and they didn’t get a stipend. Nor did wagon wheel makers, or train conductors, or cowboys. In fact, if we look at the professions of the 1800s, we can see many that are gone today. And though there may have been public outcry at the time, we recognize that times change, and clearly we couldn’t have protected those jobs forever, even if we had wanted to.

Victorian Blacksmiths

However, technology changed more slowly in the 1800s. It’s likely that wagon wheel makers and blacksmiths died out through attrition (fewer people entering the profession because they saw fewer opportunities, while existing, older practitioners retired or died) rather than through mass unemployment.

By comparison, in the 1900s, technology changed fast enough, and with enough disruption, that it routinely put people out of work. Washing machines put laundries out of business. Desktop publishing put typesetters out of work. (Desktop publishing created new jobs, new business, and new opportunities, but for people whose livelihood was typesetting: they were out of luck.) Travel websites put travel agents out of business. Telephone automation put operators out of work. Automated teller machines put many bank tellers out of work (and many more soon), and so on.

This notion that particular kinds of jobs cease to exist is known as technological unemployment. It’s been the subject of numerous articles lately. John Maynard Keynes used the term in 1930:

“We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come-namely, technological unemployment. This means unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.”

In December, Wired ran a feature-length article on how robots would take over our jobs. TechCrunch wrote that we’ve hit peak jobs. Andrew McAfee (of Enterprise 2.0 fame) and Erik Brynjolfsson wrote Race Against The Machine. There is a Google+ community dedicated to discussing the concept and its implications.

There are different ways to respond to technological unemployment.

One approach is to do nothing: let people who lose their jobs retrain on their own to find new jobs. Of course, this causes pain and suffering. It can be hard enough to find a job when you’re trained for one, let alone when your skills are obsolete. Meanwhile, poverty destroys individuals, children, and families through long-term financial problems and lack of healthcare, food, shelter, and sufficient material goods. I find this approach objectionable for that reason alone.

A little personal side story: several years ago a good friend broke his ankle and didn’t have health insurance. The resulting medical bills would be difficult to handle under any circumstances, but especially so in the midst of the Great Recession when his family’s income was low. We helped raise money through donations to the family to help pay for the medical expenses. Friends and strangers chipped in. But what was remarkable is that nearly all the strangers who chipped in were people who either didn’t have health insurance themselves or had extremely modest incomes. That is, although we made the same plea for help to everyone, aside from friends it was only people who had been in the same or similar situations who actually donated money, despite being the ones who could least afford to do it. I bring this up because I don’t think people who have a job and health insurance can really appreciate what it means to not have those things.

The other reason it doesn’t make sense to do nothing is that doing nothing can often become a roadblock to meaningful change. One example of this is the logging industry. Despite fairly broad public support for changes to clear-cutting policies, loggers and logging companies often fight back with claims about the number of jobs that will be lost. Whether this is true or not, the job-loss argument has stymied attempts to change logging policy to more sustainable practices. So even though we could get to a better long-term place through changes, the short-term fear of job loss can hold us back.

Similar arguments have often been made about military investments. Although this is a more complicated issue (it’s not just about jobs, but about overall global policy and positioning), I know particular military families that will consistently vote for the political candidate that supports the biggest military investment because that will preserve their jobs. Again, fear of job loss drives decision making, as opposed to bigger picture concerns.

Longshoremen circa 1912
by Lewis Hine

The longshoreman union agreement is what I’d call a semi-functional response to technological unemployment. Rather than stopping innovation, the compromise allowed innovation to happen while preserving the income of the affected workers. It certainly wasn’t a bad thing that Vito’s father was around more to help his family.

There are two small problems with this approach: it doesn’t scale, and it doesn’t enable change. Stevedores were a small number of workers in a big industry, one that was profitable enough to afford to continue equal pay for reduced work.

I started to think about technological unemployment a few years ago when I published my first book. I was shocked at the amount of manual work it took to transform a manuscript into a finished book: anywhere from 20 to 60 hours for a print book.

As a software developer who is used to repeatable processes, I found the manual work highly objectionable. One of the recent principles of software development is the agile methodology, where change is embraced and expected. If I discover a problem in a web site, I can fix that problem, test the software, and deploy it, in a fully automated way, within minutes. Yet if I found a problem in my manuscript, it was tedious to fix: it required changes in multiple places, handoffs between multiple people and computers and software programs. It would take days of work to fix a single typo, and weeks to do a few hundred changes. What I expected, based on my years of experience in the software industry, was a tool that would automatically transform my manuscript into an ebook, printed book, manuscript format, etcetera.
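To make that expectation concrete, here’s a rough sketch of what a one-click pipeline could look like, leaning on the off-the-shelf pandoc converter; the file names and output formats are just illustrative, and this is nowhere near a complete book-design tool.

```python
# Sketch of a one-click manuscript pipeline: convert a single source file
# into several distribution formats by shelling out to pandoc.
# Requires pandoc on the PATH; PDF output also needs a LaTeX engine installed.
import subprocess

SOURCE = "manuscript.md"    # hypothetical single source of truth

for output in ("book.epub", "book.docx", "book.pdf"):
    subprocess.run(["pandoc", SOURCE, "-o", output], check=True)
    print(f"built {output}")
```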

I envisioned creating a tool to automate this workflow, something that would turn novel manuscripts into print-ready books with a single click. I also realized that such a tool would eliminate hundreds or thousands of jobs: designers who currently do this would be put out of work. Of course it wouldn’t be all designers, and it wouldn’t be all books. But for many books, the $250 to $1,000 that might currently be paid to a designer would be replaced by $20 for the software or web service to do that same job.

It is progress, and I would love such a tool, and it would undoubtedly enable new publishing opportunities. But it has a cost, too.

Designers are intelligent people, and most would find other jobs. A few might eke out an existence creating book themes for the book-formatting services, but that would be a tiny opportunity compared to the earnings before, much the same way iStockPhoto changed the dynamics of photography. In essence, a little piece of the economic pie would be forever destroyed by that particular innovation.

When I thought about this, I realized that this was the story of the technology industry writ large: the innovations that have enabled new businesses, new economic opportunities, and more convenience all come at the expense of existing businesses and existing opportunities.

I like looking up and reserving my own flights and I don’t want to go backwards and have travel agents again. But neither do I want to live in a society where people can’t find meaningful work.

Meet your future boss.

Innovation won’t stop, and many of us don’t want it to. I think there is a revolution coming in artificial intelligence, and subsequently in robotics, and these will speed up the pace of change, rendering even more jobs obsolete. The technological singularity may bring many wondrous things, but change and job loss are an inevitable part of it.

If we don’t want to lose civilization to poverty, and the longshoreman approach isn’t scalable, then what do we do?

One thing that’s clear is that we can’t do it piecemeal: If we must negotiate over every class of work, we’ll quickly become overwhelmed. We can’t reach one agreement with loggers, another with manufacturing workers, another with construction workers, another with street cleaners, a different one for computer programmers. That sort of approach doesn’t work either, and it’s not timely enough.

I think one answer is that we provide education and transitional income so that workers can learn new skills. If a logger’s job is eliminated, then we should be able to provide a year of income at their current rate while they are trained in a new career. Neither benefit alone makes sense: simply giving someone unemployment benefits to look for a job in a dying field doesn’t produce long-term change, and we can’t send someone to school and expect them to learn something new if we don’t take care of their basic needs.

The shortcoming of the longshoreman solution is that the longshoremen were never trained in a new field. The expense of paying them for reduced work was never going to go away, because they were never going to make the move to a new career, so there would always be more of them than needed.

And rather than legislate which jobs receive these kinds of benefits, I think it’s easy to determine statistically. The U.S. government has thousands of job classifications: it can track which classifications are losing workers at a statistically significant rate, and automatically grant a “career transition” benefit if a worker loses a job in an affected field.
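A sketch of how that statistical trigger could work: fit a trend to each classification’s annual employment counts and flag the ones declining at a significant rate. The classification names and counts below are invented for illustration.

```python
# Flag job classifications whose employment is declining at a
# statistically significant rate. All data below is invented.
from scipy.stats import linregress

years = [2008, 2009, 2010, 2011, 2012]
employment = {
    "typesetters":    [40_000, 36_000, 31_000, 27_000, 23_000],
    "web developers": [90_000, 95_000, 101_000, 108_000, 115_000],
}

for job, counts in employment.items():
    fit = linregress(years, counts)
    declining = fit.slope < 0 and fit.pvalue < 0.05
    action = "grant career-transition benefit" if declining else "no action"
    print(f"{job}: {fit.slope:,.0f} jobs/year (p={fit.pvalue:.3f}) -> {action}")
```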

In effect, we’re adjusting the supply and demand of workers to match available opportunities. If logging jobs are decreasing, not only do you have loggers out of work, but you also have loggers competing for a limited number of jobs, in which case wages decrease, and even those workers with jobs are making so little money they soon can’t survive.

Even as many workers are struggling to find jobs, I see companies struggling to find workers. Skills don’t match needs, so we need to add to people’s skills.

Programmers need to eat too.

I use the term education, but I suspect there are a range of ways that retraining can happen besides the traditional education experience: unpaid work internships, virtual learning, and business incubators.

There is currently a big focus on high tech incubators like TechStars because of the significant return on investment in technology companies, but many firms from restaurants to farming to brick and mortar stores would be amenable to incubators. Incubator graduates are nearly twice as likely to stay in business as compared to the average company, so the process clearly works. It just needs to be expanded to new businesses and more geographic areas.

The essential attributes of entrepreneurs are the ability to learn quickly, respond to changing conditions, and think about the big picture. These will be vital skills in the fast-evolving future. It’s why, when my school-age kids talk about getting ‘jobs’ when they grow up, I push them toward thinking about starting their own businesses.

Still, I think a broad spectrum retraining program, including a greater move toward entrepreneurship, is just one part of the solution.

I think the other part of the solution is to recognize that even with retraining, there will come a time, whether in twenty-five or fifty years, when the majority of jobs are performed by machines and computers. (There may be a small subset of jobs humans do because they want to, but eventually all work will become a hobby: something we do because we want to, not because we need to.)

This job will be available
in the future.

The pessimistic view would be bad indeed: 99% of humanity scrambling for any kind of existence at all. I don’t believe it will end up like this, but clearly we need a different kind of economic infrastructure for the period when there are no jobs. Rather than wait until the situation is dire, we should start putting that infrastructure in place now.

We need a post-scarcity economic infrastructure. Here’s one example:

We have about 650,000 homeless people in the United States and foreclosures on millions of homes, yet about 11% of U.S. houses are empty. Empty! They are creating no value for anyone. Houses are not scarce, and we could reduce both suffering (homelessness) and economic strain (foreclosures) by recognizing this. We can give these non-scarce resources to people, and free up their money for actually scarce resources, like food and material goods.

Who wins when a home is foreclosed?

To weather the coming wave of joblessness, we need a combination of better redistribution of non-scarce resources and a basic living stipend. There are various models of this, from Alaska’s Permanent Fund to guaranteed basic income. Unlike full-fledged socialism, where everyone receives the same income regardless of their work (and can earn neither more nor less than this, and by traditional thinking may therefore have little motivation to work), a stipend or basic-income model provides a minimal level of income so that people can live humanely. It does not provide for luxuries: if you want to own a car, or a big-screen TV, or eat steak every night, you’re still going to have to work.

European Initiative for
Unconditional Basic Income

This can be dramatically less expensive than it might seem. When you realize that housing is often a family’s largest expense (consuming more than half of the income of a family at poverty level), and the marginal cost of housing is $0 (see above), and if universal healthcare exists (we can hope the U.S. will eventually reach the 21st century), then providing a basic living stipend is not a lot of additional money.

I think this is the inevitable future we’re marching towards. To reiterate:

  1. full income and retraining for jobs eliminated due to technological change
  2. redistribution of unused, non-scarce resources
  3. eventual move toward a basic living stipend

I think we can fight the future, perhaps delay it by a few years. If we do, we’ll cause untold suffering along the way as yet more people edge toward poverty, joblessness, or homelessness.

Or we can embrace the future quickly, and put in place the right structural supports that allow us to move to a post-scarcity economy with less pain.

What do you think?