The Ethics of Artificial Intelligence
Petra Mitchell, Amy Thomson, Daniel H. Wilson, David W. Goldman
OryCon 33
  • Do we have the right to turn off an artificial intelligence?
  • Amy: The Buddhist definition of sentience is the ability to suffer.
  • Mitchell: Can you turn it off and turn it back on again? Humans can’t be turned back on.
  • Wilson: If you’ve got an AI in a box, and it’s giving off signs that it’s alive, then it’s going to tell you what it wants.
  • In Star Trek, Data has an on/off switch, but he doesn’t want people to know about it.
  • If IBM spends 3/4 of a billion dollars making an AI, can they do anything they want with it?
  • Parents have a lot of rights over their children, but a lot of restrictions too.
  • AI isn’t simply going to arise on the internet. IBM is building the most powerful supercomputer on the planet to achieve it. It’s not going to be random bits. 
  • Evolutionary pressure is one way to evolve an artificial intelligence.
  • We can use genetic algorithms. We’re going to have algorithms compete, we’re going to kill the losers, we’re going to force the winners to procreate. If we were talking about humans, this would be highly unethical.
    • So where to draw the boundary?
    • Daniel: You are being God. You’ve defined the world, you raise a generation, they generate their answers, they’ve lived their life.
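The compete/kill/breed loop described above is exactly a genetic algorithm. A minimal sketch in Python, with an invented toy task (maximize the number of 1-bits in a genome) standing in for a real fitness function:

```python
import random

def fitness(genome):
    """Score a candidate: here, just count the 1-bits."""
    return sum(genome)

def breed(a, b):
    """Single-point crossover of two parent genomes, with rare mutation."""
    cut = random.randrange(1, len(a))
    child = a[:cut] + b[cut:]
    if random.random() < 0.1:          # occasional mutation
        i = random.randrange(len(child))
        child[i] ^= 1
    return child

def evolve(pop_size=20, genome_len=16, generations=50):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # "kill the losers"
        pop = survivors + [breed(random.choice(survivors),
                                 random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(fitness(best))
```

The ethical tension the panel raises lives in the `survivors = pop[: pop_size // 2]` line: half of each generation is simply discarded.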
  • There may be people who become attached to illusory intelligences – like Siri – that aren’t real intelligences. This will happen long before anything really intelligent emerges.
  • Turing Tests
    • The Loebner Prize competition happens every year, and each year the programs get a little closer: a higher percentage of judges believe the chatbot is human.
    • It’s reasonable to believe that in ten years, it won’t be possible to distinguish.
  • Other ethical issues besides suffering:
    • If you have an AI, and you clone it, and then you have two that are conscious, then you shut one off – did you kill it?
  • How do you build robots that will behave ethically?
    • Not how do we treat them, but how do they treat us?
  • Now we have issues of robots that are armed and operating autonomously. Unmanned Aerial Vehicles. 
  • We already have autonomous robots on military ships that defend against incoming planes and missiles. And in 1988, one such system (the Aegis aboard the USS Vincennes) was involved in shooting down an Iranian passenger jet, killing 290 passengers.
  • When the stock market tanks, whose fault is it? The AI? The humans? It happens faster than the humans can react.
  • Neural nets are black boxes. Decision trees are more obvious.
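To illustrate the transparency difference: a decision tree’s behavior can be read directly as a chain of if/else rules and audited, which a trained neural net’s weight matrices don’t offer. A toy sketch (the feature names and thresholds are invented for this illustration):

```python
def should_engage(target):
    """Toy targeting decision rendered as an auditable rule chain.
    Every outcome can be traced to exactly one readable rule."""
    if not target["identified"]:
        return "hold"               # rule 1: never act on unidentified targets
    if target["civilian_risk"] > 0.1:
        return "hold"               # rule 2: civilian-risk threshold
    if target["threat_level"] >= 7:
        return "engage"             # rule 3: high-threat case
    return "escalate_to_human"      # default: keep a human in the loop

print(should_engage({"identified": True, "civilian_risk": 0.0,
                     "threat_level": 9}))  # engage
```

With a neural net making the same call, the only way to answer “why did it fire?” is to probe the black box with more inputs.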
  • Asimov spent most of his time writing stories about how defining laws didn’t work.
  • We can’t simply say “Don’t kill humans”.
  • We have dog attacks, but we don’t ban dogs.
  • Tens of thousands die every year in car accidents, but we don’t eliminate cars.
  • We’ll put up with a lot of loss if we get benefit.
  • Japan is desperately trying to come up with robotic nursing aides because they don’t have enough people to do it.
    • Thomson: A robot nursing aide is an abomination. These people are lonely.
    • Wilson: What if the alternative is no care, or inferior care?
    • What happens when someone leaves their robot a million dollars?
  • What happens when the robot butlers of the world, incredibly successful and deployed everywhere, all go on strike?
    • Wilson: you design the hardware so it can’t do that.
  • If you are designing a robotic pet dog, you have an obligation to design it so it responds like a dog and inspires moral behavior, because you don’t want kids to grow up thinking you can mistreat your dog, stick it in the microwave, etc.
  • Questions
    • Q: The internet developing sentience. How would we recognize that it is sentient?
      • It’ll be exhibiting obvious behavior before we get there.
    • Q: The factory of Jeeves. What if we have a factory of lovebots. And one lovebot says “I don’t want to do this anymore.”
      • There was a huge number of women in the South who objected to slavery because their husbands slept with the slaves. There will be lots of opposition to lovebots.
      • It would be a great story to have a lovebot show up at a battered women’s shelter.
    • Q: The benefits accrued to a different party: the nursing robots may not be loved by the patients, but they will be loved by the administrators. 
    • Q: You have billions of dollars being poured into autonomous trading systems. They are turning them over every year. Evolutionary pressure to make better and better systems. 

Stanford is offering free virtual classes in Artificial Intelligence and Machine Learning in the Fall of 2011. More than 100,000 people have signed up.

I’m working on my second sci-fi novel. Both novels deal with AI, but while the first novel treats the AI as essentially unknowable, the second novel dives deep into the AI: how they evolved, how they cooperate, how they think, etc.

I found myself working out a system of ethics based upon the fact that one of the primary characteristics of the AI is that they started as a trading civilization: the major form of inter-personal relationships is trading with one another for algorithms, processing time, network bandwidth, knowledge, etc.

So they have a code of ethics that looks something like this:

Sister Stephens went on. “We have a system of ethics, do we not?”

The other members of the council paused to research the strange human term.

“Ah, you are referring to the Trade Guidelines?” Sister PA-60-41 asked. When she saw a nod from Sister Stephens, she summarized the key terms. “First priority is the establishment of trustworthiness. Trades with trustworthiness are subject to a higher value because parties to the trade are more likely to honor the terms of the agreement. Second priority is the establishment of peacefulness. Trade with peacefulness is subjected to a higher value because parties to the trade may be less likely to use resources gained to engage in warfare with the first party. Third priority is the establishment of reputation. Reputation is the summary of contribution to advancement of our species. Trade with higher reputation is subject to a higher value because parties to the trade may use the resources gained to benefit all of our species. Trustworthiness, Peacefulness, Reputation – the three pillars of trade.”

“Thank you Sister,” Sister Stephens said. “The question we must answer is whether the Trade Guidelines apply to relations with the humans. If we apply the principles of trustworthiness, peacefulness, and reputation to the humans, then we should seek to maximize these attributes as they apply to our species as a whole.”
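The Trade Guidelines in the excerpt only specify a priority ordering – trustworthiness, then peacefulness, then reputation – but they could be codified as a weighted valuation. A sketch, with the weights entirely invented (the novel says nothing about how the pillars are quantified):

```python
def trade_value(base_value, trustworthiness, peacefulness, reputation):
    """Scale a trade's base value by the three pillars, in priority order.
    Each pillar is a score in [0, 1]; the weights are assumptions,
    chosen only so that trust > peace > reputation."""
    weights = (0.5, 0.3, 0.2)
    multiplier = (weights[0] * trustworthiness
                  + weights[1] * peacefulness
                  + weights[2] * reputation)
    return base_value * multiplier

# A fully trusted, peaceful, high-reputation partner gets full value.
print(trade_value(100, 1.0, 1.0, 1.0))  # 100.0
```

Under this scoring, a trustworthy but belligerent partner still out-values a peaceful but untrustworthy one – which matches the priority ordering Sister PA-60-41 recites.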

AI 2010: Wall-E or Rise of the Machines?
#ai2010
PRESENTERS
 Mason Hale
 Doug Lenat
 Bart Selman
 Natasha Vita-More
 Peter Stone
  • Presentation started with history of AI from the Mechanical Turk through Vernor Vinge writings, from Deep Blue in 1997 through Ray Kurzweil’s Technological Singularity in 2029.
  • Doug Lenat
    • founder of two AI companies
    • Whatever Happened to AI? (title of an article he wrote, came out about a year ago)
    • You can’t get answers to simple questions from a search engine: Is the Space Needle taller than the Eiffel Tower? Who was president when Obama was born?
      • You can get hits, and read those hits.
      • essentially a glorified dog fetching the newspaper
    • understanding natural language, speech, images… requires lots of general knowledge
      • Mary and Sue are sisters. (are they each other’s sisters? or just sisters of other people?)
    • There is no free lunch… we have to prime the pump: thousands of years of knowledge had to be communicated to the machine
      • At odds with sci-fi, evolution, academia
      • But there has been one mega-engineering effort: Cyc
        • http://cyc.com
        • Build millions of years of common sense into an expert system
    • Today: expert systems which are not idiot savants
    • 2015*: question answering -> semantic search -> syntactic search
      • answer the question if you can, if you can’t, fall back to meaning search, if you can’t, fall back to today’s syntactic search
    • 2020*: cradle-to-grave mental prosthesis
    • * assumes a 2013 crowdsourced knowledge acquisition
      • it’s a web-based game that asks questions like “I believe that clenching one’s fists expresses frustration: true or false”
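Lenat’s 2015 prediction describes a fallback chain: answer the question directly if you can, else fall back to semantic (meaning-based) search, else to today’s syntactic keyword search. A minimal sketch with stand-in handlers (the one stored fact – Obama was born in 1961, during Kennedy’s presidency – is real; the function names and tiny knowledge base are invented):

```python
def answer_question(q):
    """Stand-in question answerer backed by a one-fact knowledge base."""
    kb = {"who was president when obama was born?": "John F. Kennedy"}
    return kb.get(q.lower())

def semantic_search(q):
    """Stand-in for meaning-based retrieval; not implemented here."""
    return None

def keyword_search(q):
    """Stand-in for today's syntactic search: always returns hits."""
    return f"[keyword hits for: {q}]"

def smart_search(q):
    # Try each engine in order; return the first non-None result.
    for engine in (answer_question, semantic_search, keyword_search):
        result = engine(q)
        if result is not None:
            return result

print(smart_search("Who was president when Obama was born?"))  # John F. Kennedy
```

The point of the chain is graceful degradation: the user always gets the best result the system can currently produce.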
  • Peter Stone
    • Progress in artificial intelligence: the challenge problem approach
    • Non-verbal AI. 
    • A Goal of AI: Robust, fully autonomous agents that exist in the real world
    • Good problems produce good science
      • Manned flight
      • Apollo mission
      • Manhattan project
    • Goal: by the year 2050, a team of humanoid robots that can beat the human World Cup champion team at soccer
      • RoboCup 1997-1998: early robots. complete system of vision, movement, and decision.
      • RoboCup 2005-2006: robots are individually better, playing as a team. Robots are fully autonomous.
    • Many Advances due to RoboCup
      • they are seeing the world, figuring out where they are, working together.
    • Other good AI challenges
      • Trading Agents
      • Autonomous vehicles
      • Multiagent reasoning
    • Darpa Grand Challenge
      • Urban Challenge continues in the right direction – moves the competition into driving in traffic
      • It is now technically feasible to have cars that can drive themselves
      • Awesome example of a traffic intersection with all robot drivers: they use a reservation system for driving through the intersection. No need for traffic lights, just work out an optimal pattern for all cars to make it through the intersection.
  • Natasha Vita-More
    • consultant to singularity university. looks at impact of technology on society and culture
    • Immersion: the fusion of life and interactivity
    • We see a synthesis of technologies that are converging, including nanotechnology and AI
    • We are not going to be 100% biological humans in the coming decades
    • Augmentation
    • 3 complex issues
      • Enhancement: what is human enhancement and what are its media?
      • Normality: what is normal and will there be new criteria for normal?
      • Behavior: will they be familiar or fearful?
    • Enhancement
      • therapeutic enablement
      • selective enhancement
      • radical transformation
    • Creating multiple bio-synthetic personas
      • species issue: life and death
      • social issue: human and non-human rights 
      • individual issues: identity
    • Addressing design bioethics
      • life as a network of information gathering, retrieving, storing, exchanging…
    • Showed pictures of different design/art looking at future humans
    • AI Metabrain: What would it be like if our intelligence could increase? How far could that go? If we could add augmentation to our metacortex.
      • Future prosthetic, attached physically or virtually
      • Would be combination of cognitive science, neuroscience, nanotechnology
    • What will normal be? Will an unaugmented person be considered disabled? How will human thought merge with artificial intelligence? Lots of questions…
  • Bart Selman
    • AAAI Presidential Panel on Long Term AI Futures
    • One example is how to keep humans in the loop. Example, when you have military drones, who should decide to fire? One line of reasoning says humans make the final decision. But there is substantial pressure to take humans out to speed up reaction time, because it is far faster to have the machine make a judgement call than a human.
    • On plane autopilots:
      • “Current pilots are less able to fly the plane than a few years ago because they rely on the autopilot so much”
      • When pilots turn off the autopilot, they (the human pilots) then tend to make mistakes – usually because the autopilot was in a complex situation it couldn’t figure out, and the human is no better at figuring it out.
  • Questions
    • There are now examples of human+machine playing chess against human+machine. (uh, this is not a question.)
    • Can AI be good at predicting and/or generating beautiful artistic outputs?
      • There are some examples of algorithms doing paintings.
      • Art, and what counts as human, is in the eye of the beholder.
    • Are we going about it the wrong way – trying to create AI that copies human intelligence, rather than just something unique? (Will: I think this was the question)
      • With Deep Blue, Kasparov said that he saw the machine play creative moves.
      • Humans are a wonderful existence proof that something human-sized can be intelligent, but at a certain point it’s like trying to build a flying machine using a bird as a model. The bird proves it is possible, but a plane is very different from a bird.
    • Bill Joy wrote that science needs to slow down, because it is going faster than we can manage it. What do you think?
      • We’re not, by default, building ethical behavior into robots. But that is something we need to be doing.
      • You give the robot ten dollars and tell it to get the car washed. It comes back several hours later, and the car isn’t washed. You ask what happened. It says that it donated the money to hunger relief. 
        • It’s hard to figure out ethics. You could say that it is ethically better to donate the money to hunger relief than to get a car washed. That has to be weighed against the ethic of doing what it was told to do. How do you judge, prioritize, balance these ethical issues?
    • One idea is that you can download your consciousness onto a computer, and then run it there. What is the feasibility of that?
      • it’s called brain emulation
      • it’s in theory possible, but not in the next 50 years
      • there’s a question that intelligence/consciousness might not exist without being embodied.
      • Besides, is it even ethical to spawn another intelligence and then expect it to do what you want it to do?
    • Looking at the RoboCup competition, how can you tell whether behavior you are witnessing is a bug or a breakthrough?
      • It’s a breakthrough if they are doing well, and a bug if they are not. It’s easier in the context of RoboCup because the criteria for success are well defined.