This is an amazing deal: Audible just put the Avogadro Corp and A.I. Apocalypse audiobooks on sale for $1.99 each!

I don’t have any control over Audible.com pricing, so I don’t know how long this will last. It’s an exciting opportunity to pick up the audio editions at a significant discount from their usual price of $17.95, so take advantage of it while you can!

A Robot in Your Pocket
Jeff Bonforte, CEO Xobni, @bonforte
Amit Kapur, CEO Gravity, @amitk
#robotapps


- Marvin Minsky
  - In the 50s, he predicted robots would be everywhere in 5 years
  - In the 60s, it was 10 years
  - In the 70s, it was 20 years
  - In the 80s, it was 40 years
- It’s a fine line between tools and robots
  - Robota is Czech for “hard work”
  - It’s a fine line between a tool and the point where it becomes something that works for you.
- We think of robots as a hardware thing
  - We want R2-D2, Rosie, and Six.
  - What we have are vacuum cleaners and industrial robots.
- They’re here, and they’re software.
- What’s changed in the last decade?
  - Data
  - Smaller and cheaper sensors.
  - The more things we measure, the more accurately we can respond.
  - Smartphones are a collection of sensors we carry with us all the time.
- Software, too.
  - Natural Language Processing: understanding semantically what something is about.
  - Machine Learning: software can look at data, learn from it, and do intelligent tasks.
  - Distinct Ontologies: instead of a rigid taxonomy, … Humans don’t think in hierarchical structures. We think flexibly. An iRobot vacuum makes us think about things like chores, and how we don’t have time, and the cost of hiring a maid.
    - Machines need to be able to understand and combine things.
- More data than we know what to do with.
  - We start by measuring things we don’t know what to do with.
  - Will it rain today?
    - Deterministic: use the barometer, wind conditions, etc.
    - Stochastic: look at 10 million shoe selections by New Yorkers, and you can figure out if it’s going to rain.
    - The point of the stochastic approach is that no single data point matters, whereas a deterministic model can be crashed by one weird data point.
    - After 24 hours, shoe selection is no longer correlated with the weather.
    - The point is, we can correlate surprising things (a rough sketch follows below this bullet group).
  - Xobni does this with inboxes. The average inbox is a couple of megabytes; the Xobni version of that same inbox holds 40 MB of data.
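
A minimal sketch of that stochastic idea in Python. All of the shoe-choice probabilities and the decision threshold below are invented for illustration; they are not numbers from the talk.

```python
import random

# Hypothetical example: predict rain from an aggregate, noisy signal
# (what shoes lots of people chose this morning) rather than from a
# physical model. The base rates and threshold are made up.

def simulate_shoe_choices(raining, n_people=10_000):
    """Each entry is True if that person chose rain boots."""
    p_boots = 0.35 if raining else 0.08  # assumed base rates
    return [random.random() < p_boots for _ in range(n_people)]

def predict_rain(shoe_choices, threshold=0.20):
    """Stochastic prediction: no single person's choice matters,
    only the aggregate fraction wearing boots."""
    fraction_boots = sum(shoe_choices) / len(shoe_choices)
    return fraction_boots > threshold

choices = simulate_shoe_choices(raining=True)
# One weird data point (flip-flops in a storm) barely moves the aggregate,
# unlike a deterministic model where one bad sensor reading can break
# the whole prediction.
choices[0] = False
print("Rain predicted:", predict_rain(choices))
```
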
- Explicit versus implicit data
  - “I’m here at this restaurant”, or “this is my favorite person”
  - vs.
  - We look at your data and observe what you do. If you’ve texted a person 1,000 times at the same number, why does the phone still ask you which number to use?
  - Examples of implicit data:
    - Payment patterns from credit cards
    - Locations you pass when you drive, locations where you stay a long time.
    - You express your preferences and patterns through what you do every day.
  - For example: let’s say I get a text message from someone with a link. How often do I click on links from that person? If it’s high, then go fetch the page in the background, so that when I do click on it, the page is preloaded. (A rough sketch of this heuristic follows below this bullet group.)
  - Implicit systems are much more accurate, because they are based on current behavior and actual actions, rather than on what people think they are interested in, or what they explicitly said two years ago.
  - Features like Circles in Google+ are explicit, and they impose a high cognitive load.
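
A rough sketch of that prefetch heuristic, assuming a per-sender click history and an arbitrary 60% threshold (both invented for illustration; this is not how any particular phone actually implements it):

```python
from typing import Optional
from urllib.request import urlopen

# Hypothetical per-sender click history: sender -> (links clicked, links received)
click_history = {
    "+15035551234": (870, 1000),
    "+12125550000": (12, 400),
}

PREFETCH_THRESHOLD = 0.6  # assumed cutoff, not a number from the talk

def should_prefetch(sender: str) -> bool:
    clicked, received = click_history.get(sender, (0, 0))
    return received > 0 and clicked / received >= PREFETCH_THRESHOLD

def on_incoming_message(sender: str, url: str) -> Optional[bytes]:
    """If this sender's links usually get clicked, fetch the page in the
    background so it's already loaded when the user taps it."""
    if should_prefetch(sender):
        return urlopen(url, timeout=5).read()  # in practice, cache this
    return None
```
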
- Where giants tread
  - IBM’s Watson.
    - Smart AI can win Jeopardy.
    - Now, diagnosing cancer.
  - Google’s self-driving car.
    - Passes 300,000 miles driven.
  - Huge R&D budgets, years of effort.
- Startups coming into the equation
  - The cost of getting technology and processing data is going down
  - More tools are open source
- Big R&D innovations feel like they’re five years away, but it’s usually 10 years.
  - Example of iDrive: the cost and effort to build it ($5.7M for 16 terabytes of storage, a $1.5M monthly bandwidth bill, writing every component of the system) versus Dropbox ten years later (off-the-shelf components, cheap costs).
- Progression
  - Analog: Brakes
  - Digital: Antilock brakes
  - Robot: Crash avoidance
- Progression
  - Analog: Thermostat
  - Digital: Timer thermostat
  - Robot: Nest
- News
  - Analog: Newspapers
  - Digital: Online representation
  - Robot (Gravity): Personalized experience based on your preferences, derived from your past behavior
- Businesses
  - Analog: Yellow Pages
  - Digital: Yelp
  - Robot: Ness
- Information
  - Analog: Encyclopedia
  - Digital: Google Search
  - Robot: Google Now
- Contacts
  - Analog: Address book
  - Digital: Contacts / email
  - Robot: Xobni
- Objectives
  - Learn
  - Adapt
  - Implicit
  - Proactive
  - Personalized
- A spam filter that’s 95% accurate is totally unreliable: 0% adoption. At 98%, still bad. At 99%, still bad. You need to get to 99.8% before you get adoption.
  - But for restaurant selection, 95% is great.
  - Different levels of expected quality for different systems. (Back-of-envelope numbers below this bullet group.)
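
To make those thresholds concrete, a quick back-of-envelope calculation; the 200-messages-per-day volume is an assumed number, not one from the talk:

```python
# Why 95% accuracy is unusable for spam filtering but fine for restaurant
# recommendations: at realistic volumes, "wrong 5% of the time" means
# multiple misfiled messages every single day.
messages_per_day = 200  # assumed volume for illustration

for accuracy in (0.95, 0.98, 0.99, 0.998):
    mistakes_per_day = messages_per_day * (1 - accuracy)
    print(f"{accuracy:.1%} accurate -> ~{mistakes_per_day:.1f} misfiled messages/day")

# A restaurant recommender that misses once in twenty suggestions is a
# minor annoyance; a spam filter that misfiles ten messages a day is not.
```
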
- Gravity
  - Personalizing the Internet
  - Marissa Mayer is saying that Yahoo is going to be very focused on personalization.
  - Surrounding ourselves with experts in machine learning and natural language processing.
  - Mission: leverage the interest graph to personalize the internet
  - The more information that flows into a system, the harder it becomes to find great content. It’s a signal-to-noise problem.
  - The history of the internet is one of companies creating better filters to find great content.
  - Phases
    - Their web: directories, Google.
    - Our web: use the social graph, get content shared with us by friends
    - Your web: use technology to process data, understand the individual, and deliver an adaptive, personalized experience.
  - Interest Graphing
    - Semantic analysis of a webpage, matched against ontologies we’ve built.
    - Watch what people do, and match that against interests.
    - Then personalize what they see.
    - They showed examples of how, on sites filled with links (New York Times, Huffington Post), Gravity surfaces the top articles you’d be interested in. (A toy version of that scoring follows below this bullet group.)
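
A toy version of that kind of interest-graph scoring, assuming we already have per-topic scores for each article (from semantic analysis) and a behavior-derived interest profile for the user. Every topic, article, and weight here is invented; Gravity’s real system is obviously far more sophisticated.

```python
# Toy illustration of interest-graph personalization: each article carries
# topic scores from semantic analysis, each user has an interest profile
# learned from behavior, and articles are ranked by a simple dot product.

user_interests = {"robotics": 0.9, "machine learning": 0.7, "politics": 0.1}

articles = [
    ("Self-driving cars pass 300,000 miles", {"robotics": 0.8, "machine learning": 0.5}),
    ("Senate debates the budget",            {"politics": 0.9}),
    ("New robot vacuum reviewed",            {"robotics": 0.6}),
]

def score(article_topics, interests):
    return sum(weight * interests.get(topic, 0.0)
               for topic, weight in article_topics.items())

ranked = sorted(articles, key=lambda a: score(a[1], user_interests), reverse=True)
for title, topics in ranked:
    print(f"{score(topics, user_interests):.2f}  {title}")
```
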
- Xobni
  - Why “who” matters
    - It starts with the idea of attaching files to email. You know the sender, the receiver, and the email. Instead of presenting every file on the computer as a possible attachment, you can prefilter the list, and it’s a 3x reduction in possible files. (A minimal sketch of that prefilter follows below this bullet group.)
  - Super cool demo of voicemail.
    - Voicemail is transcribed and hotlinked to contacts, doing things like resolving references to email (“see the address I emailed you”), to people (the Venn diagram of contacts they have in common means they must be talking about this particular Chris), and to vocabulary (these two people use words like “dude” and “hey man”).
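
A minimal sketch of the attachment prefilter: only offer files already associated with someone on the email, falling back to everything if nothing matches. The file metadata and field names are invented; this isn’t Xobni’s actual data model.

```python
# Hypothetical prefiltering of attachment candidates by who is on the email.

all_files = [
    {"name": "q3_budget.xlsx", "shared_with": {"alice@example.com"}},
    {"name": "vacation.jpg",   "shared_with": set()},
    {"name": "spec_v2.docx",   "shared_with": {"alice@example.com", "bob@example.com"}},
]

def candidate_attachments(sender, recipients, files=all_files):
    """Only offer files already exchanged with someone on this email;
    fall back to the full list if nothing matches."""
    people = {sender, *recipients}
    filtered = [f for f in files if f["shared_with"] & people]
    return filtered or files

# Composing to Alice: only the two files she's been involved with show up.
for f in candidate_attachments("me@example.com", ["alice@example.com"]):
    print(f["name"])
```
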
- Future Applications
  - Trackers are digital. What’s the robot version? The equivalent of a check-engine light for your health.
  - Education: the creation of personalized education and teaching.
  - Finance: help for your particular financial situation.
- Often people are worried about privacy. Anytime you give people your data, you have to worry about what they’re going to do with it.

Look out ELOPe, you’ve got competition:

Via io9:

In what is the largest and most significant effort to re-create the human brain to date, an international group of researchers has secured $1.6 billion to fund the incredibly ambitious Human Brain Project. For the next ten years, scientists from various disciplines will seek to understand and map the network of over a hundred billion neuronal connections that elicit emotions, volitional thought, and even consciousness itself. And to do so, the researchers will be using a progressively scaled-up multilayered simulation running on a supercomputer.
And indeed, the project organizers are not thinking small. The entire team will consist of over 200 individual researchers in 80 different institutions across the globe. They’re even comparing it to the Large Hadron Collider in terms of scope and ambition, describing the Human Brain Project as “Cern for the brain.” The project, which will be based in Lausanne, Switzerland, is an initiative of the European Commission.

Read more at the Human Brain Project.

I read an interesting comment on a blog recently (although I can’t remember where) that made the point that as the pace of technology accelerates, we go through massive shifts more and more quickly. It becomes exceedingly difficult to predict the future beyond a certain point, and that point is coming closer and closer as time progresses.

A writer in 1850 could easily imagine out 100 years. They might not be right about what society would be like, but they could imagine. Writers in the early 1900s were imagining out about 75 years, and midcentury writers 50 years, and so on.

I’m writing now, and I enjoy the act of grounding the societies I write about in hard predictions, but it’s hard to go out beyond about 25 years: the pending changes in the technology landscape (artificial intelligence, nanotechnology) are so radical that it’s really hard to conceive of what life will be like 50 or 100 years from now and still have it be an extrapolation of current trends, rather than just wild-ass guesses, i.e. a fantasy of the future.

If it really is harder to extrapolate trends out any sort of meaningful distance, I wonder if that exerts a subtle effect on what people choose to write.

Via TechCrunch:

Famed inventor, entrepreneur, author, and futurist Ray Kurzweil announced this afternoon that he has been hired by search engine giant Google as a director of engineering focused on machine learning and language processing. He starts this upcoming Monday, according to a report issued on his website.

And from Ray’s website:

“In 1999, I said that in about a decade we would see technologies such as self-driving cars and mobile phones that could answer your questions, and people criticized these predictions as unrealistic. Fast forward a decade — Google has demonstrated self-driving cars, and people are indeed asking questions of their Android phones. It’s easy to shrug our collective shoulders as if these technologies have always been around, but we’re really on a remarkable trajectory of quickening innovation, and Google is at the forefront of much of this development.

“I’m thrilled to be teaming up with Google to work on some of the hardest problems in computer science so we can turn the next decade’s ‘unrealistic’ visions into reality.” 

The singularity is a little closer now. — Will

In one of my writers groups, we’ve been talking extensively about AI emergence. I wanted to share one thought around AI intelligence:

Many of the threats of AI originate from a lack of intelligence, not a surplus of it.

An example from my Buddhist mathematician friend Chris Robson: if you’re walking down a street late at night and see a thuggish-looking person walking toward you, you would never think to yourself, “Oh, I hope he’s not intelligent.” On the contrary: the more intelligent they are, the less likely they are to be a threat.

Similarly, we have stock-trading AIs right now. They aren’t very intelligent, yet they could easily cause a global economic meltdown, and they’d never understand the ramifications.

We’ll soon have autonomous military drones. They’ll kill people and obey orders without ever making a judgement call.

So the earliest AI problems are more likely to come from a lack of relevant intelligence than from a surplus of it.

On the flip side, Computer One by Warwick Collins is a good AI emergence novel that makes the reverse case: that preemptive aggression is a winning strategy, and any AI smart enough to see that it could be turned off will see people as a threat and preemptively eliminate us.

Cory Doctorow, author and internet activist, held an “Ask Me Anything” on Reddit last week. I took the opportunity to ask him two questions, which he answered. I’m reproducing them below, but you can read the entirety of the AMA on Reddit.

I asked:

I understand and agree with your arguments against Trusted Computing.
I also know that with the government taking an increasing role in underwriting viruses, and the looming specter of evolutionary viruses, it seems like maintaining a secure computing environment may become more and more difficult.
Is there any chance Trusted Computing could have a role to play in protecting us against a future onslaught of semi-sentient computer viruses, and if so, is it worth it?

He answered:

Yeah — I cover that in my Defcon talk:
http://www.youtube.com/watch?v=1Ogmy8XRXvo

I also asked:

Hi Cory, I love your work. How do you decide what level of technical detail to get into when you’re writing fiction? Do you get pushback from editors on the way you handle more complicated issues (e.g. what’s the right level of detail to include when discussing copyright law in Pirate Cinema), and if so, how do you handle that?

He answered:

Naw. I’ve got an AWESOME editor at Tor, Patrick Nielsen Hayden, who makes my books better. He got me to rewrite the dual-key crypto stuff in LB a couple times, but only to make it clearer, not less nerdy.

From Silas Beane, at the University of Bonn in Germany, comes new evidence that the universe we live in is indeed a computer simulation: 

It’s this kind of thinking that forces physicists to consider the possibility that our entire cosmos could be running on a vastly powerful computer. If so, is there any way we could ever know?

Today, we get an answer of sorts from Silas Beane, at the University of Bonn in Germany, and a few pals.  They say there is a way to see evidence that we are being simulated, at least in certain scenarios.

First, some background. The problem with all simulations is that the laws of physics, which appear continuous, have to be superimposed onto a discrete three-dimensional lattice which advances in steps of time.

The question that Beane and co ask is whether the lattice spacing imposes any kind of limitation on the physical processes we see in the universe. They examine, in particular, high-energy processes, which probe smaller regions of space as they get more energetic.

What they find is interesting. They say that the lattice spacing imposes a fundamental limit on the energy that particles can have. That’s because nothing can exist that is smaller than the lattice itself.

So if our cosmos is merely a simulation, there ought to be a cutoff in the spectrum of high-energy particles.

It turns out there is exactly this kind of cutoff in the energy of cosmic ray particles, a limit known as the Greisen–Zatsepin–Kuzmin or GZK cutoff.
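
For a rough sense of the numbers (a back-of-envelope estimate of my own, not the calculation from the paper): on a lattice with spacing a, momenta are confined to the first Brillouin zone, which caps particle energies, and equating that cap to the observed GZK cutoff gives an upper bound on how coarse any such lattice could be.

```latex
% Back-of-envelope estimate only; not the paper's actual calculation.
% A lattice with spacing a confines momenta to the first Brillouin zone,
% so particle energies are capped at roughly E_max ~ pi * hbar * c / a.
\[
  E_{\max} \sim \frac{\pi \hbar c}{a}, \qquad
  E_{\mathrm{GZK}} \approx 5 \times 10^{19}\,\mathrm{eV}
\]
% Equating the cap to the observed GZK cutoff bounds the lattice spacing:
\[
  a \lesssim \frac{\pi \hbar c}{E_{\mathrm{GZK}}}
    \approx \frac{\pi \times 1.97 \times 10^{-7}\,\mathrm{eV\,m}}{5 \times 10^{19}\,\mathrm{eV}}
    \approx 10^{-26}\,\mathrm{m}.
\]
```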

Let’s just hope they keep running the simulation through to completion. Also, this suggests all kinds of interesting Inception-style questions, e.g., at just what level of simulation are we running? Or Matrix-style: can we hack the simulation and modify our own limits?