I originally wrote the ten musings on AI risks as the first half of a talk I was planning to give at the Rise of AI conference in Berlin next month. Unfortunately, plans for speaking at the conference fell through. The speaking points have been languishing in a document since then, so I figured I’d share them as a blog post while they are still relevant. Please excuse any errors; this was just a rough draft.

10. There’s a balance between AI risk and AI benefits.

There’s too much polarization on AI risks. One camp says AI is going to be the end of all life, and that we must stop all development immediately. The other camp says AI poses no risk, so let’s go full speed ahead.

Neither statement is totally true.

AI poses some risks, as all things do. The first step to being able to have a discussion about those risks is to admit that they exist. Then we can have a more nuanced and educated conversation about those risks.

However, no matter how severe those risks are, there’s no way we can stop all AI development, because:

9. There’s no way to put the AI genie back in the bottle.

There’s too much economic and military advantage to artificial intelligence to stop AI development.

On a government level, no government would give up the advantages it could get. When nation states hack and surveil each other with little to no regard for laws, agreements, or ethics, there’s no way we could get them to limit AI development if it could give them an advantage over others.

On a corporate level, no corporation would willingly give up the economic advantages of artificial intelligence, either as a provider of AI, with the revenue it might stand to make, or as a consumer of AI, gaining efficiencies in manufacturing, business, personnel management, or communications.

Lastly, on an individual level, we cannot stop people from developing and releasing software. We couldn’t stop bitcoin, or hackers, or malware. We’re certainly not going to stop AI.

8. We must accelerate the development of safeguards, not slow the development of AI.

Because we can’t stop the development of AI, and because AI has many risks, the only option we have is to accelerate the development of safeguards, by thinking through risks and developing approaches to address them.

If a car had only an engine and wheels, we wouldn’t start driving it. We need, at a minimum, brakes and a steering wheel. Yet little investment is being made into mitigating risks with basic safeguards.

7. Manual controls are the most elementary form of safeguard.

Consider the Google Car. The interior has no steering wheel, no brake, no gas pedal. It makes sense that we would take out what isn’t needed.

But what happens if GPS goes down or if there’s a Google Car virus or anything else that renders the self-driving ability useless? Then this car is just a hunk of plastic and metal.

What if it isn’t just this car, but all cars? Not just all cars, but all trucks, including delivery trucks? Now suddenly our entire transportation infrastructure is gone, and along with it, our supply chains. Businesses stop, people can’t get necessary supplies, and they eventually starve.

It’s not just transportation we depend on. It’s payment systems, the electrical grid, and medical systems as well.

Of course, manual controls have a cost. Keeping people trained and available to operate buses and planes and medical systems has a cost. In the past, when we had new technical innovations, we didn’t keep the old tools and knowledge around indefinitely. Once we had automatic looms, we didn’t keep people working manual ones.

But one key difference is the mode of failure. If a piece of machinery fails, it’s just that one instance. If we have a catastrophic AI event, whether it’s a simple crash, unintentional behavior, or actual malevolence, it has the potential to affect every instance. It’s not one self-driving car breaking, it’s potentially all self-driving vehicles failing simultaneously.

6. Stupidity is not a form of risk mitigation.

I’ve heard people suggest that limiting AI to a certain level of intelligence or capability is one way to ensure safety.

But let’s imagine this scenario:

You’re walking down a dark street at night. Further on down the block, you see an ominous looking figure, who you worry might mug you or worse. Do you think to yourself, “I hope he’s stupid!”?

Of course not. An intelligent person is less likely to hurt you.

So why do we think that crippling AI can lead to good outcomes?

Even stupid AI has risks: AI can crash the global stock market, cripple the electrical grid, or make poor driving or flying decisions. All other things being equal, we would expect a more intelligent AI to make better decisions than a stupid one.

That being said, we need to embody systems of ethical thinking in AI.

5. Ethics is a two-way street.

Most often, when we think about ethics and AI, we think about guiding the behavior of AI towards humans.

But what about the behavior of humans towards AI?

Consider a parent and child standing together outside by their family car. The parent is frustrated because the car won’t start, and they kick the tire of the car. The child might be surprised by this, but they likely aren’t going to be traumatized by it. There’s only so much empathy they have for an inanimate machine.

Now imagine that the parent and child are standing together by their family dog. The dog has just had an accident on the floor in the house. The parent kicks the dog. The child will be traumatized by this behavior, because they have empathy for the dog.

What happens when we blur the line? What if we had a very realistic robotic dog? We could easily imagine the child being very upset if their robotic dog was attacked, because even though we adults know it is not alive, it will be alive for the child.

I see my kids interact with Amazon Alexa, and they treat her more like a person. They laugh at her, thank her, and interact with her in ways that they don’t interact with the TV remote control, for example.

Now what if my kids learned that Alexa was the result of evolutionary programming, and that thousands or millions of earlier versions of Alexa had been killed off in the process of making her? How would they feel? How would they feel if their robotic dog got recycled at the end of its life? Or if it “died” when the monthly fee wasn’t paid?

It’s not just children who are affected. We all have relationships with inanimate objects to some degree, objects we treat with reverence. That attachment will grow as those objects appear more intelligent.

My point is that how we treat AI will affect us emotionally, whether we want it to or not.

(Thanks and credit to Daniel H. Wilson for the car/dog example.)

4. How we treat AI is a model for how AI will treat us.

We know that if we want to teach children to be polite, we must model politeness. If we want to teach empathy, we must practice empathy. If we want to teach respect, we must be respectful.

So how we treat AI is critically important for how AI sees us. Now, clearly I’m talking about AGI, not narrow AI. But let’s say we have a history of using genetic programming techniques to breed better performing AI. The implication is that we kill off thousands of programs to obtain one good program.

If we run AI programs at our whim, and stop them or destroy them when we’re finished with them, we’re treating them in a way that would be personally threatening to a sufficiently advanced AGI.

It’s a poor ethical model for how we’d want an advanced AI to treat us.

The same goes for other assumptions that stem from treating AI as machines, such as assuming an AI would work 24 hours a day, 7 days a week on the tasks we want.

Now we can’t know how AI would want to be treated, but assuming we can treat them like machines is a bad starting point. So we either treat them like we would other humans and accord them similar rights, or better yet, we ask them how they want to be treated, and treat them accordingly.

Historically, though, there are those who aren’t very good at treating other people with the respect and rights they are due. They aren’t very likely to treat AI well, either. This could be dangerous, especially if we’re talking about AI with control over infrastructure or other important resources. We have to become even better at protecting the rights of people, so that we can apply those same principles to protecting the rights of AI, and codify them within our system of law.

3. Ethical behavior of AI towards people includes the larger environment in which we live and operate.

If we build artificial intelligence that optimizes for a given economic result, such as running a business to maximize profit, and we embody our current system of laws and trade agreements, then what we’ll get is a system that looks much like the publicly-traded corporation does today.

After all, the modern corporation is a form of artificial intelligence that optimizes for profit at the expense of everything else. It just happens to be implemented as a system of wetware, corporate rules, and laws that insist that it must maximize profit.

We can and must do better with machine intelligence.

We’re the ones building the AI, so we get to decide what we want. We want a system that recognizes that human welfare is more than just the money in our bank accounts, and that it includes free agency, privacy, respect, happiness, and other hard-to-define qualities.

We want an AI that recognizes that we live in a closed ecosystem, and if we degrade that ecosystem, we’re compromising our long-term ability to achieve those goals.

Optimizing for multiple values is difficult for people, but it should be easier for AGI over the long term, because it can evaluate and consider many more options to a far greater depth and at a far greater speed than people ever can.
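
As a rough illustration of what that could look like, here’s a toy Python sketch that combines several values into a single score an AI could optimize. The value names, numbers, and weights are all hypothetical; the point is only that multiple values can be weighed together, not that these particular weights are right.

```python
# Toy sketch (not any real system): combining multiple human values into one
# score an optimizer could maximize. Names, numbers, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Outcome:
    profit: float            # economic result, arbitrary units
    ecosystem_damage: float  # environmental harm (higher is worse)
    wellbeing: float         # agency, privacy, respect, happiness (higher is better)

def utility(o: Outcome, weights=(0.3, 0.4, 0.3)) -> float:
    """Weighted sum of values; ecosystem damage counts against the score."""
    w_profit, w_eco, w_well = weights
    return w_profit * o.profit - w_eco * o.ecosystem_damage + w_well * o.wellbeing

# An AI comparing candidate plans would pick the one with the highest score.
plans = [
    Outcome(profit=10.0, ecosystem_damage=8.0, wellbeing=2.0),
    Outcome(profit=6.0, ecosystem_damage=1.0, wellbeing=5.0),
]
print(max(plans, key=utility))
```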

An AI that simply obeys laws is never going to get us what we need. We can see many behaviors that are legal and yet still harmful.

The problem is not impossible to solve. You can ask any ten-year-old child what we should do, and they’ll almost always give you an ethically superior answer to the one a corporate CEO will give you.

2. Over the long run, the ethical behavior of AI toward people must include intent, not just rules.

In the next few years, we’ll see narrow AI solutions to ethical behavior problems.

When an accident is unavoidable, self-driving AI will choose whatever we decide is the best option.

It’s better to hit another car than a pedestrian, because the pedestrian will be hurt more. That’s ethically easy, and we’ll encode answers like that.

More difficult: do we hit the unattached adult or the single mother on whom two children depend?

We can come up with endless variations of the trolley dilemma, and depending on how likely they are, we’ll embody some of them in narrow AI.

But none of that can be generalized to solve other ethical problems.

  • How much carbon can we afford to emit?
  • Is it better to save 500 local manufacturing jobs, or to reduce the cost of the product by half, when the product will make people’s lives better?
  • Is it better to make a part out of metal, which has certain environmental impacts, or plastic, which has different ones?

These are really difficult questions. Some of them we attempt to answer today with techniques such as lifecycle analysis. AI will do that job far better than we can, conducting lifecycle analyses for many, many decisions.
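
To show the shape of that kind of calculation, here’s a toy lifecycle comparison in Python. Every number is invented for illustration; a real analysis would draw on actual emissions and recycling data.

```python
# Toy lifecycle comparison -- every number is an invented placeholder, shown only
# to illustrate the shape of a calculation an AI could repeat for many decisions.
materials = {
    "metal":   {"production_co2_kg": 12.0, "recycled_fraction": 0.9, "lifetime_years": 20},
    "plastic": {"production_co2_kg": 4.0,  "recycled_fraction": 0.3, "lifetime_years": 8},
}

def co2_per_year(m):
    # Credit recycling against production emissions, then amortize over lifetime.
    net_co2 = m["production_co2_kg"] * (1 - m["recycled_fraction"])
    return net_co2 / m["lifetime_years"]

for name, m in materials.items():
    print(f"{name}: ~{co2_per_year(m):.2f} kg CO2 per year of use")
```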

1. As we get closer to artificial general intelligence, we must consider the role of emotions in decision-making.

In my books, which span 30 years in my fictional, near-future world, AI start out emotionless, but gradually develop more emotions. I thought hard about that: was I giving them emotions because I wanted to anthropomorphize the AI, and make them easier characters to write, or was there real value to emotions?

People have multiple systems for decision-making. We have some autonomic reactions, like jerking our hand away from heat, which happen without involving the brain until after the fact.

We have some purely logical decisions, such as which route to take to drive home.

But most of our decisions are driven or guided by emotions. Love. Beauty. Anger. Boredom. Fear.

It would be a terrible thing if we needed to logically think through every decision: Should I kiss my partner now? Let me think through the pros and cons of that decision…. No, that’s a mostly emotional decision.

Others are a blend of emotion and logic: Should I take that new job? Is this the right person for me to marry?

I see emotions as a shortcut to decision-making, because it would take forever to reach every decision through a dispassionate, logical evaluation of options. And that’s the same reason why we have an autonomic system: to shortcut conscious decision making. I perceive this stove is hot. I perceive that my hand is touching the stove. This amount of heat sustained too long will damage my hand. Damaging my hand would be bad because it will hurt and because it will compromise my ability to do other things. Therefore, I conclude I shall withdraw my hand from the stove.

That’s a terrible approach to resolving a time-critical matter.

Emotions inform or constrain decision making. I might still think through things, but the decision I reach will differ depending on whether I’m angry and scared, or comfortable and confident.

As AI become more sophisticated and approach or exceed AGI, we should expect to see the equivalent of emotions that automate some lesser decisions and guide other, more complicated ones.

Research into AI emotions will likely be one of the signs that AGI is very, very near.

 

World’s shortest Interstellar review: Go see this movie right now.

Slightly longer review:

I got advance screening tickets to see Interstellar in 35mm at the Hollywood Theatre in Portland. I didn’t know that much about the movie, other than seeing the trailer and thinking it looked pretty good.

In fact, it was incredible. The trailer does not do it justice. I don’t want to give away the plot of the movie, so I’m not going to list all of the big ideas in this movie, but Erin and I went through the list on the drive home, and it was impressive. Easily the best movie I’ve seen in quite a while.

And this is one that really deserves being seen on a big screen, in a good theatre, on 35mm film if possible.

I only have a limited amount of writing time this week, and I want to focus that time on my next novel. (No, not book 4. That’s off with an editor right now. I’m drafting the novel after that, the first non-Avogadro Corp book.) But I feel compelled to briefly address the reaction to Elon Musk’s opinion about AI.

Brief summary: Elon Musk said that AI is a risk, and that the risks could be bigger than those posed by nuclear weapons. He compared AI to summoning a demon, using the comparison to illustrate the idea that although we think we’d be in control, AI could easily escape from that control.

Brief summary of the reaction: A bunch of vocal folks have ridiculed Elon Musk for raising these concerns. I don’t know how vocal they are, but there seem to be a lot of posts from them in my feeds.

I think I’ve said enough to make it clear that I agree that there is the potential for risk. I’m not claiming the danger is guaranteed, nor do I believe that it will come in the form of armed robots (despite the fiction I write). Again, to summarize very briefly, the danger of AI can come from many different dimensions:

  • accidents (a programming bug that causes the power grid to die, for example)
  • unintentional side effects (an AI that decides on the best path to fulfill its goal without taking into account the impact on humans: maybe an autonomous mining robot that harvests the foundations of buildings)
  • complex interactions (e.g. stock trading AI that nearly collapsed the financial markets a few years ago)
  • intentional decisions (an AI that decides humans pose a risk to AI, or an AI that is merely angry or vengeful)
  • human-driven terrorism (e.g. nanotechnology made possible by AI, but programmed by a person to attack other people)

Accidents and complex interactions have already happened. Programmers already don’t fully understand their own code, and AI are often written as black boxes that are even more incomprehensible. There will be more of these incidents, and they don’t require human-level intelligence. Once AI does achieve human-level intelligence, new risks become more likely.

What makes AI risks different from more traditional ones is their speed and scale. A financial meltdown can happen in seconds, and we humans would know about it only afterwards. Bad decisions by a human doctor could affect a few dozen patients. Bad decisions by a medical AI that’s installed in every hospital could affect hundreds of thousands of patients.

There are many potential benefits to AI. They are also not guaranteed, but they include things like more efficient production so that we humans might work less, greater advances in medicine and technology so that we can live longer, and reducing our impact on the environment so we have a healthier planet.

Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.

ROS, the open source robotics OS, is accelerating development in robotics because scientists don’t have to reinvent everything from scratch:

As an example of how ROS works, imagine you’re building an app. That app is useless without hardware and software – that is, your computer and operating system. Before ROS, engineers in different labs had to build that hardware and software specifically for every robotic project. As a result, the robotic app-making process was incredibly slow – and done in a vacuum.  

Now ROS, along with complementary robot prototypes, provides that supporting hardware and software. Robot researchers can shortcut straight to the app building. And since other researchers around the world are using the same tools, they can easily share their developments from one project to another.
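
To make the shared-tooling point concrete, here’s roughly what a minimal ROS node looks like in Python, following the standard rospy publisher tutorial pattern. The topic name and message are placeholders, not anything specific to the robots above.

```python
# A minimal ROS node in Python (rospy), following the standard publisher tutorial.
# The topic name and message are placeholders; the point is that every lab can
# build on the same node/topic plumbing instead of reinventing it.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data="hello from a reusable ROS node"))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```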

I wrote a similar article last year about how we should expect to see an acceleration in both AI and robotics due to this effect. The remaining barrier to participation is cost:

The reason we haven’t seen even greater amateur participation in robotics and AI, up until this point, has been the cost: whether it’s the $400,000 to buy a PR2, or the $3 million to replicate IBM’s Watson. This too is about to change.

It’s about to change because the cost of electronics declines quickly: by 2025, the same processing capacity it takes to run Watson will be available in a general-purpose personal computer. Robotics hardware might not decrease in cost as quickly as pure silicon, but it will surely come down. When it hits the price of a car ($25,000), I’m sure we’ll see hobbyists with them.
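
Here’s the back-of-the-envelope math behind that claim, assuming the ~$3 million Watson figure above and costs halving roughly every two years. It’s a Moore’s-law-style extrapolation, not a precise forecast.

```python
# Back-of-the-envelope: start from the ~$3M Watson figure and assume cost halves
# roughly every two years (a Moore's-law-style assumption, not a forecast).
watson_cost_2011 = 3_000_000  # dollars
halving_period = 2            # years

for year in (2015, 2020, 2025):
    halvings = (year - 2011) / halving_period
    cost = watson_cost_2011 / (2 ** halvings)
    print(f"{year}: ~${cost:,.0f}")

# By 2025: $3M / 2^7 is roughly $23,000 -- about the price of a car.
```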

PR2 fetches a beer from the fridge

This is a great post and video about two robots making pancakes together. What’s amazing is that it’s not all preprogrammed. They’re figuring this stuff out on the fly:

James uses the Web for problem solving, just like we would.  To retrieve the correct bottle of pancake mix from the fridge, it looks up a picture on the Web and then goes online to find the cooking instructions. 

Rosie makes use of gravity compensation when pouring the batter, with the angle and the time for pouring the pancake mix adjusted depending on the weight of the mix.  The manipulation of the spatula comes in to play when Rosie’s initially inaccurate depth estimation is resolved by sensors detecting contact with the pancake maker.

Technological unemployment is the notion that even as innovation creates new opportunities, it destroys old jobs. Automobiles, for example, created entirely new industries (and convenience), but eliminated jobs like train engineers and buggy builders. As the pace of technology change grows faster, the impact of large scale job elimination increases, and some fear we’ve passed the point of peak jobs. This post explores the past and future of technological unemployment.

Growing up in Brooklyn, my friend Vito’s father spent as much time tinkering around his home as he did working. He was just around more than other dads. I found it quite puzzling until Vito explained that his father was a stevedore, or longshoreman, a worker who loaded and unloaded shipping vessels.

New York Shipyard

Shipping containers (specifically intermodal containers) started to be widely used in the late 1960s and early 1970s. They took far less time to load and unload than un-containerized cargo. Longshoremen, represented by a strong union, opposed the intermodal containers until the union came to an agreement that longshoremen would be compensated for the loss of employment due to the container innovation. So longshoremen worked when ships came in, and received payment (partial or whole, I’m not sure) for the time they didn’t work because of how quickly the containers could be unloaded.

As a result, Vito’s father was paid a full salary, even though his job didn’t require him full time. With the extra time, he was able to be with his kids and work around the home.

Other industries have had innovations that led to unemployment, and in most cases, those professions were not so protected. Blacksmiths are few and far between, and they didn’t get a stipend. Nor did wagon wheel makers, or train conductors, or cowboys. In fact, if we look at the professions of the 1800s, we can see many that are gone today. And though there may have been public outcry at the time, we recognize that times change, and clearly we couldn’t protect those jobs forever, even if we wanted to.

Victorian Blacksmiths

However, technology changed more slowly in the 1800s. It’s likely that wagon wheel makers and blacksmiths died out through attrition (fewer people entering the profession because they saw fewer opportunities, while existing, older practitioners retired or died) rather than through mass unemployment.

By comparison, in the 1900s, technology changed fast enough, and with enough disruption, that it routinely put people out of work. Washing machines put laundries out of business. Desktop publishing put typesetters out of work. (Desktop publishing created new jobs, new business, and new opportunities, but for people whose livelihood was typesetting: they were out of luck.) Travel websites put travel agents out of business. Telephone automation put operators out of work. Automated teller machines put many bank tellers out of work (and many more soon), and so on.

This notion that particular kinds of jobs cease to exist is known as technological unemployment. It’s been the subject of numerous articles lately. John Maynard Keynes used the term in 1930:

We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come – namely, technological unemployment. This means unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.

In December, Wired ran a feature length article on how robots would take over our jobs. TechCrunch wrote that we’ve hit peak jobs. Andrew McAfee (of Enterprise 2.0 fame) and Erik Brynjolfsson wrote Race Against The Machine. There is a Google+ community dedicated to discussing the concept and its implications.

There are different ways to respond to technological unemployment.

One approach is to do nothing: let people who lose their jobs retrain on their own to find new jobs. Of course, this causes pain and suffering. It can be hard enough to find a job when you’re trained for one, let alone when your skills are obsolete. Meanwhile, poverty destroys individuals, children, and families through long-term financial problems and lack of healthcare, food, shelter, and sufficient material goods. I find this approach objectionable for that reason alone.

A little personal side story: Several years ago a good friend broke his ankle and didn’t have health insurance. The resulting medical bills would be difficult to handle under any circumstances, but especially so in the midst of the great recession when their income was low. We helped raise money through donations to the family to help pay for the medical expenses. Friends and strangers chipped in. But what was remarkable is that nearly all the strangers who chipped in were other folks who either didn’t have health insurance or had extremely modest incomes. That is, although we made the same plea for help to everyone, it was only people who had been in the same or similar situations (aside from friends) who actually donated money, despite being the ones who could least afford it. I bring this up because I don’t think people who have a job and health insurance can really appreciate what it means to not have those things.

The other reason it doesn’t make sense to do nothing is that doing nothing can often become a roadblock to meaningful change. One example of this is the logging industry. Despite fairly broad public support for changes in clear-cutting policies, loggers and logging companies often fight back with claims about the number of jobs that will be lost. Whether this is true or not, the job-loss argument has stymied attempts to change logging policy to more sustainable practices. So even though we could get to a better long-term place through change, the short-term fear of job loss can hold us back.

Similar arguments have often been made about military investments. Although this is a more complicated issue (it’s not just about jobs, but about overall global policy and positioning), I know particular military families that will consistently vote for the political candidate that supports the biggest military investment because that will preserve their jobs. Again, fear of job loss drives decision making, as opposed to bigger picture concerns.

Longshoremen circa 1912
by Lewis Hine

The longshoreman union agreement is what I’d call a semi-functional response to technological unemployment. Rather than stopping innovation, the compromise allowed innovation to happen while preserving the income of the affected workers. It certainly wasn’t a bad thing that Vito’s father was around more to help his family.

There are two problems with this approach: it doesn’t scale, and it doesn’t enable change. Stevedores were a small number of workers in a big industry, one that was profitable enough to afford to keep paying the same wages for reduced work.

I started to think about technological unemployment a few years ago when I published my first book. I was shocked at the amount of manual work it took to transform a manuscript into a finished book: anywhere from 20 to 60 hours for a print book.

As a software developer who is used to repeatable processes, I found the manual work highly objectionable. One of the recent principles of software development is the agile methodology, where change is embraced and expected. If I discover a problem in a web site, I can fix that problem, test the software, and deploy it, in a fully automated way, within minutes. Yet if I found a problem in my manuscript, it was tedious to fix: it required changes in multiple places, handoffs between multiple people and computers and software programs. It would take days of work to fix a single typo, and weeks to do a few hundred changes. What I expected, based on my years of experience in the software industry, was a tool that would automatically transform my manuscript into an ebook, printed book, manuscript format, etcetera.

I envisioned creating a tool to automate this workflow, something that would turn novel manuscripts into print-ready books with a single click. I also realized that such a tool would eliminate hundreds or thousands of jobs: designers who currently do this would be put out of work. Of course it wouldn’t be all designers, and it wouldn’t be all books. But for many books, the $250 to $1,000 that might currently be paid to a designer would be replaced by $20 for software or a web service to do that same job.
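
As a sketch of what such a tool might look like, here’s a hypothetical one-click wrapper around the open-source pandoc converter. The file names and metadata are placeholders, and a real print-ready book would still need typographic attention, but it shows how little glue the automation itself requires.

```python
# Hypothetical "manuscript in, books out" automation using the existing open-source
# pandoc converter (must be installed). File names and metadata are placeholders;
# a production tool would add real typesetting, styles, and validation.
import subprocess

MANUSCRIPT = "manuscript.md"  # placeholder input file

def build(target, extra_args=()):
    subprocess.run(["pandoc", MANUSCRIPT, "-o", target, *extra_args], check=True)

build("book.epub", ["--metadata", "title=My Novel"])  # ebook
build("book.pdf", ["--pdf-engine=xelatex"])           # PDF for print
build("book.docx")                                    # manuscript format for editors
```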

It is progress, and I would love such a tool, and it would undoubtedly enable new publishing opportunities. But it has a cost, too.

Designers are intelligent people, and most would find other jobs. A few might eke out an existence creating book themes for the book formatting services, but that would be a tiny opportunity compared to the earnings before, much the same way iStockPhoto changed the dynamics of photography. In essence, a little piece of the economic pie would be forever destroyed by that particular innovation.

When I thought about this, I realized that this was the story of the technology industry writ large: the innovations that have enabled new businesses, new economic opportunities, and more convenience all come at the expense of existing businesses and existing opportunities.

I like looking up and reserving my own flights and I don’t want to go backwards and have travel agents again. But neither do I want to live in a society where people can’t find meaningful work.

Meet your future boss.

Innovation won’t stop, and many of us don’t want it to. I think there is a revolution coming in artificial intelligence, and subsequently in robotics, and these will speed up the pace of change, rendering even more jobs obsolete. The technological singularity may bring many wondrous things, but change and job loss is an inevitable part of it.

If we don’t want to lose civilization to poverty, and the longshoreman approach isn’t scalable, then what do we do?

One thing that’s clear is that we can’t do it piecemeal: If we must negotiate over every class of work, we’ll quickly become overwhelmed. We can’t reach one agreement with loggers, another with manufacturing workers, another with construction workers, another with street cleaners, a different one for computer programmers. That sort of approach doesn’t work either, and it’s not timely enough.

I think one answer is that we provide education and transitional income so that workers can learn new skills. If a logger’s job is eliminated, then we should be able to provide a year of income at their current rate while they are trained for a new career. Neither benefit alone makes sense: simply giving someone unemployment benefits to look for a job in a dying career doesn’t produce long-term change, and we can’t send someone to school and expect them to learn something new if we don’t take care of their basic needs.

The shortcoming of the longshoreman solution is that the longshoremen were never trained in a new field. The expense of paying them for reduced work was never going to go away, because they were never going to make the move to a new career, so there would always be more of them than needed.

And rather than legislate which jobs receive these kinds of benefits, I think eligibility can be determined statistically. The U.S. government has thousands of job classifications: it can track which classifications are losing workers at a statistically significant rate, and automatically grant a “career transition” benefit when a worker loses a job in an affected field.
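
Here’s a sketch of what that statistical check might look like, with made-up employment numbers; a real system would use official employment statistics for each job classification.

```python
# Sketch of the statistical idea: flag job classifications whose employment counts
# show a significant downward trend. The data here is made up; a real system would
# use official employment statistics for each classification.
from scipy.stats import linregress

years = [2009, 2010, 2011, 2012, 2013]
employment = {
    "logging":      [52_000, 49_500, 47_000, 44_200, 41_800],            # hypothetical
    "software_dev": [900_000, 950_000, 1_010_000, 1_080_000, 1_150_000],  # hypothetical
}

for job, counts in employment.items():
    fit = linregress(years, counts)
    if fit.slope < 0 and fit.pvalue < 0.05:
        print(f"{job}: significant decline ({fit.slope:.0f}/yr) -> grant transition benefit")
    else:
        print(f"{job}: no significant decline")
```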

In effect, we’re adjusting the supply and demand of workers to match available opportunities. If logging jobs are decreasing, not only do you have loggers out of work, but you also have loggers competing for a limited number of jobs, in which case wages decrease, and even those workers with jobs are making so little money they soon can’t survive.

Even as many workers are struggling to find jobs, I see companies struggling to find workers. Skills don’t match needs, so we need to add to people’s skills.

Programmers need to eat too.

I use the term education, but I suspect there are a range of ways that retraining can happen besides the traditional education experience: unpaid work internships, virtual learning, and business incubators.

There is currently a big focus on high-tech incubators like TechStars because of the significant return on investment in technology companies, but many firms from restaurants to farming to brick-and-mortar stores would be amenable to incubators. Incubator graduates are nearly twice as likely to stay in business as the average company, so the process clearly works. It just needs to be expanded to new businesses and more geographic areas.

The essential attributes of entrepreneurs are the ability to learn quickly, to respond to changing conditions, and to think about the big picture. These will be vital skills in the fast-evolving future. It’s why, when my school-age kids talk about getting ‘jobs’ when they grow up, I push them toward thinking about starting their own businesses.

Still, I think a broad spectrum retraining program, including a greater move toward entrepreneurship, is just one part of the solution.

I think the other part of the solution is to recognize that even with retraining, there will come a time, whether in twenty-five or fifty years, when the majority of jobs are performed by machines and computers. (There may be a small subset of jobs humans do because they want to, but eventually all work will become a hobby: something we do because we want to, not because we need to.)

This job will be available
in the future.

The pessimistic view would be bad indeed: 99% of humanity scrambling for any kind of existence at all. I don’t believe it will end up like this, but clearly we need a different kind of economic infrastructure for the period when there are no jobs. Rather than wait until the situation is dire, we should start putting that infrastructure in place now.

We need a post-scarcity economic infrastructure. Here’s one example:

We have about 650,000 homeless people in the United States and foreclosures on millions of homes, yet about 11% of U.S. houses are empty. Empty! They are creating no value for anyone. Houses are not scarce, and we could reduce both suffering (homelessness) and economic strain (foreclosures) by recognizing this. We can give these non-scarce resources to people, and free up their money for actually scarce resources, like food and material goods.

Who wins when a home is foreclosed?

To weather the coming wave of joblessness, we need a combination of better redistribution of non-scarce resources and a basic living stipend. There are various models of this, from Alaska’s Permanent Fund to guaranteed basic income. Unlike full-fledged socialism, where everyone receives the same income regardless of their work (and can earn neither more nor less than this, and so, by traditional thinking, may have little motivation to work), a stipend or basic income model provides a minimal level of income so that people can live humanely. It does not provide for luxuries: if you want to own a car, or a big-screen TV, or eat steak every night, you’re still going to have to work.

European Initiative for
Unconditional Basic Income

This can be dramatically less expensive than it might seem. When you realize that housing is often a family’s largest expense (consuming more than half of the income of a family at the poverty level) and that the marginal cost of housing is $0 (see above), and if universal healthcare exists (we can hope the U.S. will eventually reach the 21st century), then providing a basic living stipend does not require a lot of additional money.
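
Purely as an illustration, with hypothetical numbers rather than actual poverty-line or housing statistics:

```python
# Illustration only -- every number is a hypothetical placeholder, not an actual
# poverty-line or housing statistic.
family_income_at_poverty = 24_000  # hypothetical annual figure
housing_share = 0.5                # assume housing eats about half of that

housing_cost = family_income_at_poverty * housing_share
cash_stipend_needed = family_income_at_poverty - housing_cost
print(f"Cash stipend once housing is covered: ~${cash_stipend_needed:,.0f} per year")
```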

I think this is the inevitable future we’re marching towards. To reiterate:

  1. full income and retraining for jobs eliminated due to technological change
  2. redistribution of unused, non-scarce resources
  3. eventual move toward a basic living stipend

We can fight the future, and perhaps delay it by a few years. But if we do, we’ll cause untold suffering along the way as yet more people edge toward poverty, joblessness, or homelessness.

Or we can embrace the future quickly, and put in place the right structural supports that allow us to move to a post-scarcity economy with less pain.

What do you think?

Some assorted news:
1) Avogadro Corp hit #2 on the technothriller list, and A.I. Apocalypse hit #5:
Both books catapulted to the top of the technothrillers category this week.
Thank you to everyone who bought a copy, posted reviews, or told friends! 
2) If you’ve been waiting to get A.I. Apocalypse via Smashwords for your iPad, Nook, or other device, it’s available now.
3) If you haven’t seen them, check out this week’s collection of robot videos from IEEE Spectrum. There are some amazing ones in there, including this one showing their dragonfly drone from the 1970s:

The Uncanny Valley by Masahiro Mori is a highly referenced 1970 paper about human reaction to realistic robots. The first authorized and reviewed English translation has been published in the IEEE Spectrum, and is available online. He starts the paper by explaining, “in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley.”

Here’s an excerpt:

Since creating an artificial human is itself one of the objectives of robotics, various efforts are underway to build humanlike robots. For example, a robot’s arm may be composed of a metal cylinder with many bolts, but by covering it with skin and adding a bit of fleshy plumpness, we can achieve a more humanlike appearance. As a result, we naturally respond to it with a heightened sense of affinity.

Many of our readers have experience interacting with persons with physical disabilities, and all must have felt sympathy for those missing a hand or leg and wearing a prosthetic limb. Recently, owing to great advances in fabrication technology, we cannot distinguish at a glance a prosthetic hand from a real one. Some models simulate wrinkles, veins, fingernails, and even fingerprints. Though similar to a real hand, the prosthetic hand’s color is pinker, as if it had just come out of the bath.

One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.

Here’s the diagram, as published on IEEE Spectrum:

Read the entire article.