Robot Panelists, AI and the Future of Identity
#sxsw #robots
John Romano
Author & Researcher
The Digital Beyond
Stephen Reed
@stephenlreed
Bruce Duncan M Ed
Managing Director
Terasem Movement Foundation LifeNaut Project
Bina48
Robot Panelist
  • Rich Personal Data Store + Powerful AI = Virtual You
  • If AI were advanced enough, it could start expressing its own opinions
  • What will it take to create a sentient robot? Do we want to? And where are we in the process?
  • Exponential development: each step gets bigger as you go up the stairs.
  • Accelerating evolution of our tools.
  • It took 400,000 years to get from fire to bronze, but only 6,000 years from bronze to the iPhone in your pocket.
  • Is the state of AI today fire or bronze?
  • We are going to approach the singularity.
  • The moment that happens, we will be unable to see beyond. We will have people smarter than us. We can’t anticipate what they would do.
  • Stephen Reed: Texai:
    • A system you can teach useful things
    • Robots are making inroads into the blue-collar workplace.
    • AI is making inroads into the white-collar workplace.
    • Create the mind of a child, and then educate it.
    • We’ll see a natural language demonstration.
  • Bruce Duncan
    • has a robot that works for him: Bina48
    • The goal with LifeNaut is to archive a human mind and upload it into a robot
    • Mindfiles & Androids
    • Early mind uploading: cave paintings
    • Human beings have a need to upload: to share what’s on their mind.
    • Sum of all social media data can be used to reconstruct personality.
    • https://www.lifenaut.com/
    • Given a saturated dataset or “mind file,” a future AI could use it to replicate your consciousness.
    • In the future, you will be expected to have an avatar that is not just off the shelf, but can interact with people and act as you in certain circumstances.
    • http://yes.thatcan.be/my/next/tweet/
    • Genes / Memes / Bemes
    • Digital mindfiles capture bemes:
      • mannerisms
      • personality
      • language
    • What’s in it?
      • digital video/photos
      • blogs and diaries
      • psych tests and lists
      • Bainbridge surveys
      • SenseCam data
      • feedback as others see me
    • Terasem Hypothesis: In the future, these reanimated personalities will be able to be imported into biological, nano-technological, and/or robotic bodies.
    • Bina48: first android based on a mindfile
  • Q: Merging mind files?
    • We made a science fiction film based on this: http://2bmovie.com/
    • We have multi-user mind files: to recreate a historical person, or a deceased family member.
  • Q: Did you have any children see Bina48 and how did they react?
    • We had an 8th grade intern, and he just sat down and talked to Bina48. He wasn’t fazed by the technology at all.
  • Q: In layman’s terms, what is happening?
    • We use Dragon NaturallySpeaking to do voice-to-text conversion.
    • We take the text input and feed into two databases.
    • One is a chatty database
    • One is a personality database
    • They compete to provide the best answer.
    • The sentences are stored in small pieces. She rarely gives the same sequence of sentences twice.
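The two-database scheme described above can be sketched in a few lines. This is a hypothetical simplification (the real Bina48 software is proprietary); the keyword-overlap scoring rule and the sample data are mine, purely for illustration:

```python
# Hypothetical sketch of Bina48's answer selection: a generic "chatty"
# database and a personality database each offer candidate response
# fragments, and they compete -- the fragment whose keywords best overlap
# the input wins.
import re

def best_response(user_text, chat_db, personality_db):
    # Normalize the (voice-to-text) input into a bag of lowercase words.
    words = set(re.findall(r"[a-z']+", user_text.lower()))
    candidates = []
    for db in (chat_db, personality_db):
        for keywords, fragment in db:
            # Score each stored fragment by keyword overlap with the input.
            candidates.append((len(keywords & words), fragment))
    # The two databases "compete": the highest-scoring fragment is spoken.
    return max(candidates)[1]

chat_db = [
    ({"hello", "hi"}, "Hello there!"),
    ({"weather"}, "I hear it's sunny today."),
]
personality_db = [
    ({"feel", "robot"}, "As a robot, I feel curious about people."),
    ({"hello"}, "Hi, I'm Bina48."),
]

print(best_response("Hello, how do you feel about being a robot?",
                    chat_db, personality_db))
# -> As a robot, I feel curious about people.
```

Storing fragments rather than whole replies, as the panel notes describe, is what keeps her from giving the same sequence of sentences twice.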

Ray Kurzweil
Expanding Our Intelligence Without Limit
#sxsw #IQExpand
  • Background
    • Has written four best-selling books
    • 19 honorary doctorates
    • Honored by multiple U.S. Presidents
    • Tons of inventions
  • The hippie movement morphed into the Silicon Valley movement. What we have today is the democratization of technology.
  • You don’t need millions of dollars – you can start world-changing revolutions with what you have. (An 11-year-old girl starts a blog; now, at 14, she is on the cover of Vogue with her own fashion line.)
  • A boy in Africa today has access to more information than a U.S. President had 15 years ago
  • Hardware increases exponentially, but software is stuck in the mud.
  • Watson is very impressive because it’s able to handle the vagaries of the human language. 
  • Made a prediction that computers would beat humans at chess by 1998, and that immediately afterwards we would dismiss chess as an insignificant problem.
  • Watson is dealing with human languages. It mastered human language by reading 200 million pages of English content.
  • It can call up any fact in less than 3 seconds.
  • Some may say that it doesn’t understand it: because it is just statistical extraction of data.
  • But this is exactly how the human brain works.
  • Watson is not operating at human levels of intelligence yet.
  • It combines the natural language capability it does have with what computers are good at: total recall of all information.
  • Computers are shrinking at an exponential rate: a 1000x decrease in size and 1000x increase in performance since Kurzweil was an undergrad.
  • Moore’s Law is only one example
  • If you measure the basic fundamental aspects of technology, they are very smooth curves over time. 
  • Paradigms shift: from vacuum tubes to IC. 
  • But overall, the curves are very smooth.
  • It’s not just computers.
    • It’s also smartphones
    • And the bits we move around on wireless networks.
  • A very important area is biology.
  • It wasn’t an information technology problem until recently.
    • DNA Sequencing cost: dropping exponentially
    • Growth in Genbank DNA Sequence Data: increasing exponentially
  • Our genes are 23,000 software programs running in our bodies
  • Health is now a software technology problem. Therefore, it will now be subject to exponential increases.
  • The world of physical things is becoming information technology
    • 3D printing: “print me a Stradivarius”
    • Someone printed an airplane and flew in it.
    • At Singularity University we want to print out low-cost housing
  • The spatial resolution and precision of brain scanning are doubling
  • The precision of brain simulation is doubling
  • Is this good or bad?
    • Plot of income and life expectancy around the world
      • Increasing for everyone. The divide is still there, but the lowest countries have also increased the most.
    • Plot of education over time
  • Lev Grossman Interview
  • What do you think of Siri? What does it mean for the landscape of AI?
    • It’s great. People who complain remind me of the joke about the woman with a chess-playing dog who complains that it has a lousy endgame.
    • It will keep getting better.
  • Turing test: Is it still the benchmark for recognizing self-awareness? Some say that calling a machine sentient is just bizarre. What will it take for people to recognize one as sentient?
    • Of all the different proposals, it still has the most credibility.
    • But it’s not perfect.
    • We look like we’re heading for a date of 2029.
    • As soon as we pass it, we’ll probably reject it as a valid test.
    • People will accept an entity as conscious and as a person when it seems that way: when it convinces us that it has the complexity and subtlety to be our equal.
  • Where is the serious progress going to come from? Is it from siri? Watson? Somewhere else?
    • It’s going to come where there is commercial value.
    • Watson is understanding sequences of words.
    • It would be of tremendous value for search engines to be able to do this.
    • In the future, search engines won’t wait to be asked, they’ll be listening in, and they will pop up information we need.
    • We’ll get used to having this information pop up in some sort of augmented reality.
  • Are they going to judge us for our search terms?
    • Making judgements is the top level of our neocortex. It’s not built in, it’s built up over time, based on what we think. We need a whole framework to make judgements.
    • These systems will make these judgements: what does Lev Grossman want vs. what someone else wants.
  • How much confidence do we have that if greater than human intelligence arises, it will want to be helpful?
    • Promise vs. peril has been an issue with technology since we had fire: it can cook food, but it can burn down our villages.
    • Biotechnology has great promise and great danger: it could be used by terrorists, and it can be used to arrest cancer.
    • Genetics Nanotechnology Robotics (GNR): promise vs. peril is a very large issue. It’s not as much an us vs. them, but an us vs. us. We have conflicts today between groups of humans. They will add GNR into the tools they use for that conflict.
    • We are a human-machine civilization. It’s going to be all mixed up: we are all enhanced with computer technology.
    • We do have conflicts between humans. GNR technology can make these conflicts more harmful.
  • Should governments be more active in regulating?
    • Conflicts come from governments. So they should not regulate.
    • We (SXSW people) should be the ones to regulate it.
    • Look at the major political power of Wikipedia: it killed SOPA in hours.
  • When we talk about intelligence expanding, with technology, does it change us quantitatively or qualitatively? Does it change human nature?
    • Mammals evolved and have a neocortex. It was the first time we had a hierarchy of information.
    • Then we had a mass extinction event.
    • Given the radical, sudden change in the environment, the mammals survived because they adapted.
    • Evolution recognized this, and used it more.
    • Now we have a frontal cortex.
    • If you take a congenitally blind person, the regions of the neocortex normally used for vision end up being used for more advanced language analysis.
    • We have about 300 million pattern recognizers in the neocortex.
    • If we extend it, we’ll be able to have even more complex thoughts.
  • Backing up the consciousness in the cloud
  • My experience of the reality around me and the people around me feels diminished when I am buried in my smartphone during this conference. Is this a zero-sum game?
    • There was a big controversy that kids weren’t going to learn arithmetic when calculators were invented.
    • And in fact, they don’t.
    • We’ve outsourced some of our ability to technology.
    • It frees up our energy to be able to do other creative things: like the people at the conference.
    • We are free to choose how we spend our time and how we organize it.
    • You are communicating with other people, either directly or indirectly.
    • It has expanded our minds.
    • We have a 19th century model of education.
    • We should teach our kids how to solve problems
  • Paul Allen essay, published a few months ago: “The Singularity Isn’t Near.” The law of exponential growth is not a physical law. It’s just an observation until it no longer works. What if we hit a wall?
    • Moore’s law will come to an end: but that’s just the fifth paradigm. Before we had transistors, before that we had vacuum tubes, before that we had mechanical computers.
    • Paul is confusing the end of one paradigm with the end of all growth. We’ll go on to another paradigm.
  • Have you been wrong with your predictions?
    • In terms of the underlying capabilities: everything has stayed right on the curves.
    • In terms of social predictions: those are harder. I rate myself as 86% correct on my social predictions… like having self-driving cars.
  • According to a research presentation at the Singularity Summit last year, the complexity of even a single cell is immense, and it will be impossible to simulate it all.
    • There is massive redundancy. When we look at how much information is encoded in the genome, you realize that the connections are redundant. 
    • In the cerebellum you have connections wired together 10 million times. Massive levels of redundancy.
    • You could say a forest is incredibly complex, but there is fractal redundancy.
  • How confident do you feel that the kinds of marvelous benefits that are coming will be available to the 99%?
    • You have to take a lot of comfort from where we are today.
    • Twenty years ago, if you took out a mobile phone, that was a sign that you were an elite. They were big and heavy and had limited functionality.
    • Now they can do so much more, and are small, and they are in everyone’s hands.
    • Every field is being empowered by increasingly inexpensive and increasingly powerful tools: music, health, etc.
    • They will make their ways into our bodies and brains, but that’s an arbitrary distinction.
    • I don’t see a tremendous power being given to an elite.
  • Say you’re graduating from college right now. What would you want to do to get yourself ready for the decades to come?
    • All of our education needs to encompass doing as a centerpiece of the curriculum.
    • If I were a student, I would be at an institution where that was how it was done.
  • Questions from audience
    • Q: ?
      • People should learn how computers work, not just how to use them, so they know what computers are capable of.
      • Biology is a field where doing is a method of learning.
      • Virtual reality: we don’t want to look at these little screens.
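Kurzweil's smooth-curve argument above (paradigms shift from vacuum tubes to ICs, yet the overall curve stays smooth) can be illustrated with a toy model: treat each paradigm as a saturating S-curve, where each new paradigm arrives as the previous one flattens and lifts capability by a constant factor. Every number here (the 100x lift, the 10-unit spacing) is invented for illustration, not taken from Kurzweil's data:

```python
# Toy model of paradigm shifts: each paradigm is a logistic S-curve, but the
# envelope across successive paradigms grows as a smooth exponential.
import math

def logistic(t, midpoint, ceiling, rate=1.0):
    """A single paradigm: slow start, rapid growth, saturation at `ceiling`."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def price_performance(t, paradigms=5):
    # Paradigm i peaks 10 time-units after the last and saturates 100x higher.
    return sum(logistic(t, midpoint=10 * i, ceiling=100 ** i)
               for i in range(1, paradigms + 1))

# log10 of the envelope climbs by ~2 per decade of model time, i.e. the
# combined curve is still a clean exponential despite each paradigm flattening.
for t in range(0, 60, 10):
    print(t, round(math.log10(price_performance(t)), 1))
```

The takeaway matches the notes: the end of any one S-curve (say, Moore's Law) is not the end of the overall exponential.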

If you’re going to be at SXSW Interactive next week, I hope you’ll join us at Wall-E or Terminator: Predicting the Future of AI.

Daniel H. Wilson (author of Robopocalypse, upcoming AMPED), Chris Robson (chief scientist at Parametric Marketing), and myself will be speaking about whether there’s going to be a singularity, when it would happen, if ever, and whether that’s even a relevant question to be talking about.

Daniel and Chris are absolutely brilliant, and I can promise this will be a fun and informative discussion. If our previous discussions are any indication, we’ll each bring a very different point of view to the debate, er, panel.

You’ll find us here:

Tuesday, March 13, 2012
9:30AM -10:30AM
Hilton Austin Downtown
Salon J

By the way, if you’re there, and want to get a hold of me, twitter is usually the best way. You’ll find me at @hertling. You can find Daniel at @danielwilsonpdx, and Chris at @paramktg.

A.I. Apocalypse, the sequel to Avogadro Corp, is now available on Amazon!

A.I. Apocalypse
Sequel to Avogadro Corp

A little bit about A.I. Apocalypse:

Leon Tsarev is a high school student set on getting into a great college program, until his uncle, a member of the Russian mob, coerces him into developing a new computer virus for the mob’s botnet – the slave army of computers they use to commit digital crimes.

The evolutionary virus Leon creates, based on biological principles, is successful — a little too successful. All the world’s computers are infected. Everything from cars to payment systems and, of course, computers and smart phones stop functioning, and with them go essential functions including emergency services, transportation, and the food supply. Billions of people may die.

But evolution never stops. The virus continues to change, developing intelligence, communication, and finally an entire civilization of A.I. called the Phage. Some may be friendly to humans, but others most definitely are not.

Leon and his companions must race against time and the bungling military to find a way to either befriend or eliminate the Phage and restore the world’s computer infrastructure.

A.I. Apocalypse is the second book of the Singularity Series. It’s available now for the Kindle, and will be available in print and additional electronic versions in June. Buy it today!

Recently people have been saying nice things about my writing.

First, there was a review of Avogadro Corp on Amazon that was titled “Good, but not Stephenson-good.” My first thought was “Hey, I’m being compared to Neal Stephenson. That’s cool.”
Then there was the Brad Feld post. Brad Feld is a world-renowned venture capitalist, one of the founders of Techstars, the managing director of The Foundry Group, and, with 100,000 followers on twitter, clearly an influential person.
He had given a talk called Resistance is Futile, and during the talk, he spoke about Avogadro Corp:

But then I mentioned a book I’d just read called Avogadro Corp. While it’s obviously a play on words with Google, it’s a tremendous book that a number of friends had recommended to me. In the vein of Daniel Suarez’s great books Daemon and Freedom (TM), it is science fiction that has a five year aperture – describing issues, in solid technical detail, that we are dealing with today that will impact us by 2015, if not sooner. 

There are very few people who appreciate how quickly this is accelerating. The combination of software, the Internet, and the machines is completely transforming society and the human experience as we know it. As I stood overlooking Park City from the patio of a magnificent hotel, I thought that we really don’t have any idea what things are going to be like in twenty years. And that excites me to no end while simultaneously blowing my mind. 

You can read his full blog post. (Thank you, Brad.)

While I loved the endorsement, what really got me excited is that Brad appreciated the book for exactly the reasons I hoped. Yes, it’s a fun technothriller, but really it’s a tale of how the advent of strong, self-driven, independent artificial intelligence is both very near and will have a very significant impact. Everything from the corporate setting and the technology used should reinforce the fact that this could be happening today.

I first got interested in predicting the path of technology in 1998. That was the year I made a spreadsheet with every computer I had owned over the course of twelve years, tracking the processor speed, the hard drive capacity, the memory size, and the Internet connection speed.

The spreadsheet was darn good at predicting when music sharing would take off (1999, Napster) and when video streaming would (2005, YouTube). It also predicts when the last magnetic-platter hard drive will be manufactured (2016), and when we should expect strong artificial intelligence to emerge.
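The spreadsheet's method amounts to fitting an exponential trend and solving for when it crosses a threshold. Here is a sketch of that; the capacity data points below are made up for illustration (the original spreadsheet's numbers aren't in the post):

```python
# Fit a log-linear (i.e. exponential) trend to a hardware spec over time,
# then solve for the year it crosses a target threshold.
import math

# (year, hard drive capacity in MB) -- hypothetical points, roughly in the
# spirit of "every computer I owned over twelve years".
history = [(1986, 20), (1992, 240), (1998, 4000), (2004, 120000)]

def fit_growth(points):
    """Least-squares fit of log10(value) against year: (slope, intercept)."""
    n = len(points)
    xs = [year for year, _ in points]
    ys = [math.log10(value) for _, value in points]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return slope, ybar - slope * xbar

def year_reaching(points, target):
    """Year the fitted trend line crosses `target`."""
    slope, intercept = fit_growth(points)
    return (math.log10(target) - intercept) / slope

# When does this (made-up) trend reach 10^9 MB per drive?
print(round(year_reaching(history, 10 ** 9)))
```

The same two functions, pointed at MIPS instead of megabytes, give the strong-AI dates discussed below.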

There’s lots of different ways to talk about artificial intelligence, so let me briefly summarize what I’m concerned about: General-purpose, self-motivated, independently acting intelligence, roughly equal in cognitive capacity to human intelligence.

Lots of other kinds of artificial intelligence are interesting, but they aren’t exactly issues to be worried about. Canine level artificial intelligence might make for great robot helpers for people, similar to guide dogs, but just as we haven’t seen a canine uprising, we’re also not likely to see an A.I. uprising from beings of that level of intelligence.

So how do we predict when we’ll see human-grade A.I.? There’s a range of estimates for how computationally difficult it is to simulate the human brain. One estimate is based on the portion of our brain we use for image analysis, comparing it to the amount of computational power it takes to replicate that in software. Here are the estimates I like to deal with:

Estimate of Complexity / Processing Power Needed / How Determined:
  • Easy: Ray Kurzweil’s estimate #1 from The Singularity Is Near. 10^14 instructions/second. Extrapolated from the weight of the portion of the brain responsible for image processing, compared to the computation necessary to recreate it in software.
  • Medium: Ray Kurzweil’s estimate #2 from The Singularity Is Near. 10^15 instructions/second. Based on the human brain containing 10^11 neurons, at 10^4 instructions per neuron.
  • Hard: My worst-case scenario, a brute-force simulation of every neuron. 10^18 instructions/second. Simulating 10^11 neurons, each with 10^4 synapses, firing up to 10^3 times per second.

(Just for the sake of completion, there is yet another estimate that includes glial cells, which may affect cognition, and of which we have ten times as many as neurons. We can guess that this might be about 10^19.)
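The arithmetic behind those estimates is worth spelling out, since the three numbers all come from the same few brain parameters given in the text:

```python
# The brain-simulation estimates, computed from the parameters in the text.
NEURONS = 10 ** 11             # neurons in the human brain
INSTR_PER_NEURON = 10 ** 4     # instructions/second per neuron (estimate #2)
SYNAPSES_PER_NEURON = 10 ** 4  # synapses per neuron
MAX_FIRING_RATE = 10 ** 3      # firings per second, worst case

medium = NEURONS * INSTR_PER_NEURON                       # 10^15 ips
hard = NEURONS * SYNAPSES_PER_NEURON * MAX_FIRING_RATE    # 10^18 ips
with_glia = hard * 10                                     # ~10x glial cells

print(f"medium:    10^{len(str(medium)) - 1} ips")
print(f"hard:      10^{len(str(hard)) - 1} ips")
print(f"with glia: 10^{len(str(with_glia)) - 1} ips")
```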

The growth in computer processing power has been following a very steady curve for a very long time. Since the mid 1980s when I started listening to technology news, scientists have been saying things along the lines of “We’re approaching the fundamental limits of computation. We can’t possibly go any faster or smaller.” Then we find some way around the limitation, whether it’s new materials science, new manufacturing techniques, or parallelism.

So if we take the growth in computing power (47% increase in MIPS per year), and plot that out over time, we get this very nice 3×3 matrix in which we can look at the three estimates of complexity and three ranges for the number of available computers to work with:

Number of Computers   Easy Simulation (10^14 ips)   Medium Simulation* (10^16 ips)   Difficult Simulation (10^18 ips)
10,000                now                           2016                             2028
100                   2016                          2028                             2040
1                     2028                          2040                             2052

*Modified from Kurzweil’s 10^15 estimate, only to give us a more middle-of-the-road prediction.
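The whole matrix falls out of the 47% growth figure: at 47% per year, each 100x step (one cell to the next) takes log(100)/log(1.47), about 12 years. A sketch of that calculation follows; the baseline year and per-computer performance are my assumptions, chosen so the computed cells line up with the table, not numbers from the original spreadsheet:

```python
# Reproduce the 3x3 matrix from the 47%-per-year MIPS growth rate.
import math

GROWTH = 1.47          # per-computer performance growth per year (from text)
BASE_YEAR = 2004       # assumed baseline year (picked to match the table)
BASE_IPS = 10 ** 10    # assumed per-computer performance at the baseline

def year_reached(num_computers, required_ips):
    """Year when `num_computers` machines together hit `required_ips`."""
    shortfall = required_ips / (num_computers * BASE_IPS)
    years = max(0.0, math.log(shortfall) / math.log(GROWTH))
    return BASE_YEAR + round(years)

for n in (10_000, 100, 1):
    row = [year_reached(n, 10 ** k) for k in (14, 16, 18)]
    print(f"{n:>6} computers: {row}")
```

Each factor of 100, whether in per-machine speed or in machine count, shifts a cell by the same ~12 years, which is why the matrix is so regular.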
As we can see from this chart, if it were easy to simulate a human brain, people with access to 10,000 computers would already be doing it. So we’re not quite there yet. Although clearly some of the things most suggestive of strong A.I., like IBM’s Watson and Google’s self-driving cars, are happening first in large organizations with access to loads of raw computational power.
But even in the difficult simulation case, by 2040, it will be within the reach of any dedicated person to assemble a hundred computers and start developing strong A.I.
It’s when we reach this hobbyist level that we really need to be concerned. Thousands of hobbyists will likely advance A.I. development far faster than a few small research labs. We saw this happen in the Netflix Prize, where the community of contestants quickly equaled and then outpaced Netflix’s own recommendation algorithms.
Strong A.I. is an issue that we should be thinking about in the same way that we discuss other defining issues of our time: peak oil, water shortages, and climate change. It’s going to happen in the near term, and it’s going to affect us all.
We’re entering a period where the probability of strong A.I. emerging is non-zero for the first time. It’s going to increase with each year that passes, and by 2052, it’s going to be an absolute certainty.

By the way: If you find this stuff interesting, researcher Chris Robson, author Daniel H. Wilson, and I will be discussing this very topic at SXSW Interactive on Tuesday, March 13th at 9:30 AM.

The Singularity Institute has posted a comprehensive list of all Singularity Summit talks with videos of each talk. A few that I’m particularly interested in:

The Ethics of Artificial Intelligence
Petra Mitchell, Amy Thomson, Daniel H. Wilson, David W. Goldman
OryCon 33
  • Do we have the right to turn off an artificial intelligence?
  • Amy: The Buddhist definition of sentience is the ability to suffer.
  • Mitchell: Can you turn it off and turn it back on again? Humans can’t be turned back on.
  • Wilson: If you’ve got an AI in a box, and it’s giving off signs that it’s alive, then it’s going to tell you what it wants.
  • In Star Trek, Data has an on/off switch. but he doesn’t want people to know about it.
  • If IBM spends 3/4 of a billion dollars making an AI, can they do anything they want with it?
  • Parents have a lot of rights over their children, but a lot of restrictions too.
  • AI isn’t simply going to arise on the internet. IBM is building the most powerful supercomputer on the planet to achieve it. It’s not going to be random bits. 
  • Evolutionary pressure is one way to evolve an artificial intelligence.
  • We can use genetic algorithms. We’re going to have algorithms compete, we’re going to kill the losers, we’re going to force the winners to procreate. If we were talking about humans, this would be highly unethical.
    • So where to draw the boundary?
    • Daniel: You are being God. You’ve defined the world, you raise a generation, they generate their answers, they’ve lived their life.
  • There may be people who become attached to illusory intelligences, like Siri, that aren’t real intelligences. This will happen long before anything really intelligent emerges.
  • Turing Tests
    • The [something] prize happens every year. Each year they get a little closer. A higher percentage of people believe the chatbot is human. 
    • It’s reasonable to believe that in ten years, it won’t be possible to distinguish.
  • Other ethical issues besides suffering:
    • If you have an AI, and you clone it, and then you have two that are conscious, then you shut one off – did you kill it?
  • How do you build robots that will behave ethically?
    • Not how do we treat them, but how do they treat us?
  • Now we have issues of robots that are armed and operating autonomously. Unmanned Aerial Vehicles. 
  • We already have autonomous defense systems on military ships that defend against incoming planes and missiles. And in 1988, such a system was involved when a U.S. warship shot down an Iranian passenger jet, killing 290 passengers.
  • When the stock market tanks, whose fault is it? The AI? The humans? It happens faster than the humans can react.
  • Neural nets are black boxes. Decision trees are more obvious.
  • Asimov spent most of his time writing stories about how defining laws didn’t work.
  • We can’t simply say “Don’t kill humans”.
  • We have dog attacks, but we don’t ban dogs.
  • Tens of thousands die every year in car accidents, but we don’t eliminate cars.
  • We’ll put up with a lot of loss if we get benefit.
  • Japan is desperately trying to come up with robotic nursing aides because they don’t have enough people to do it.
    • Thomson: A robot nursing aide is an abomination. These people are lonely.
    • Wilson: What if the alternative is no care, or inferior care?
    • What happens when someone leaves their robot a million dollars?
  • What happens when the robot butlers of the world, incredibly successful, and deployed everywhere, all go on strike?
    • Wilson: you design the hardware so it can’t do that.
  • If you are designing a robotic pet dog, you have an obligation to design it so that it responds like a dog and inspires moral behavior. You don’t want kids to grow up thinking you can mistreat your dog, stick it in the microwave, etc.
  • Questions
    • Q: The internet developing sentience. How would we recognize that it is sentient?
      • It’ll be exhibiting obvious behavior before we get there.
    • Q: The factory of Jeeves. What if we have a factory of lovebots. And one lovebot says “I don’t want to do this anymore.”
      • There was a huge number of women in the South who objected to slavery because their husbands slept with the slaves. There will be lots of opposition to lovebots.
      • It would be a great story to have a lovebot show up at a battered women’s shelter.
    • Q: The benefits accrued to a different party: the nursing robots may not be loved by the patients, but they will be loved by the administrators. 
    • Q: You have billions of dollars being poured into autonomous trading systems. They are turning them over every year. Evolutionary pressure to make better and better systems.
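The genetic-algorithm procedure the panel described (score a population, kill the losers, force the winners to procreate) fits in a few lines. The toy fitness function below is mine, purely for illustration; evolving anything like intelligence, or a trading system, would need a vastly richer one:

```python
# Minimal genetic algorithm in the shape the panel described: score the
# population, discard the losing half, and breed mutated copies of the winners.
import random

TARGET = 42  # stand-in goal; real systems would score behavior, not a number

def fitness(x):
    return -abs(x - TARGET)  # closer to the target is better

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        winners = population[: pop_size // 2]               # losers are "killed"
        children = [w + rng.gauss(0, 1) for w in winners]   # winners "procreate"
        population = winners + children
    return max(population, key=fitness)

print(round(evolve(), 2))  # converges near TARGET
```

The panel's ethical question lands exactly here: the `winners = ...` line is harmless for numbers, and monstrous if the individuals being culled could suffer.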