The Pentagon’s research arm, DARPA, wants to crowdsource a fully automated cyber defense system, and they’re offering a two million dollar prize:

The so-called “Cyber Grand Challenge” will take place over the next three years, which seems like plenty of time to write a few lines of code. But DARPA’s not just asking for any old cyber defense system. They want one “with reasoning abilities exceeding those of human experts” that “will create its own knowledge.” They want it to deflect cyberattacks, not in a matter of days—which is how the Pentagon currently works—but in a matter of hours or even seconds. That’s profoundly difficult.

On the one hand, this is brilliant. I can easily imagine some huge leaps forward made as a result of the contest. The Netflix Prize advanced recommendation algorithms while the DARPA Grand Challenge gave us autonomous cars. Clearly competitions work, especially in this domain where the barrier to entry is low.

On the other hand, this is scary. They’re asking competitors to marry artificial intelligence with cyber defense systems. Cyber defense requires a solid understanding of cyber offense, and aggressive defensive capabilities could be nearly as destructive as offensive capabilities. Cyber defense software could decide to block a threatening virus with a counter-virus, or shut down parts of the Internet to stop or slow infection.

Artificial intelligence has taken over stock trading, and look where that’s gotten us. Trading AI has become so sophisticated that it’s described in terms of submarine warfare, with offensive and defensive capabilities.

I don’t doubt that the competition will advance cyber defense. But the side effect will be a radical increase in cyber offense, as well as a system in which both sides operate at algorithmic speeds.

Full information about the Cyber Grand Challenge, including rules and registration, is available on DARPA’s website.

I’d like to announce that The Last Firewall is available!

In the year 2035, robots, artificial intelligences, and neural implants have become commonplace. The Institute for Applied Ethics keeps the peace, using social reputation to ensure that robots and humans don’t harm society or each other. But a powerful AI named Adam has found a way around the restrictions. 

Catherine Matthews, nineteen years old, has a unique gift: the ability to manipulate the net with her neural implant. Yanked out of her perfectly ordinary life, Catherine becomes the last firewall standing between Adam and his quest for world domination. 

After two-plus years in the making, I’m thrilled to finally release this novel. As with my other novels, I explore themes of what life will be like with artificial intelligence, how we deal with the inevitable man-vs-machine struggle, and the repercussions of using online social reputation as a form of governmental control.

The Last Firewall joins its siblings. 
Buy it now: Amazon Kindle, in paperback, and Kobo eReader.
(Other retailers coming soon.)

I hope you enjoy it! Here is some of the early praise for the book:

“Awesome near-term science fiction.” – Brad Feld, Foundry Group managing director

“An insightful and adrenaline-inducing tale of what humanity could become and the machines we could spawn.” – Ben Huh, CEO of Cheezburger

“A fun read and tantalizing study of the future of technology: both inviting and alarming.” – Harper Reed, former CTO of Obama for America, Threadless

“A fascinating and prescient take on what the world will look like once computers become smarter than people. Highly recommended.” – Mat Ellis, Founder & CEO Cloudability

“A phenomenal ride through a post-scarcity world where humans are caught between rogue AIs. If you like having your mind blown, read this book!” – Gene Kim, author of The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

“The Last Firewall is like William Gibson had a baby with Tom Clancy and let Walter Jon Williams teach it karate. Superbly done.” – Jake F. Simons, author of Wingman and Train Wreck

Everyone would like a sure-fire way to predict the future. Maybe you’re thinking about startups to invest in, or making decisions about where to place resources in your company, or deciding on a future career, or where to live. Maybe you just care about what things will be like in 10, 20, or 30 years.

There are many techniques to think logically about the future, to inspire idea creation, and to predict when future inventions will occur.


I’d like to share one technique that I’ve used successfully. It’s proven accurate on many occasions. And it’s the same technique that I’ve used as a writer to create realistic technothrillers set in the near future. I’m going to start by going back to 1994.


Predicting Streaming Video and the Birth of the Spreadsheet
There seem to be two schools of thought on how to predict the future of information technology: looking at software or looking at hardware. I believe that looking at hardware curves is always simpler and more accurate.
This is the story of a spreadsheet I’ve been keeping for almost twenty years.
In the mid-1990s, a good friend of mine, Gene Kim (founder of Tripwire and author of When IT Fails: A Business Novel) and I were in graduate school together in the Computer Science program at the University of Arizona. A big technical challenge we studied was piping streaming video over networks. It was difficult because we had limited bandwidth to send the bits through, and limited processing power to compress and decompress the video. We needed improvements in video compression and in TCP/IP – the underlying protocol that essentially runs the Internet.
The funny thing was that no matter how many incremental improvements researchers made (there were dozens of people working on different angles of this), streaming video always seemed to be just around the corner. I heard “Next year will be the year for video” or similar refrains many times over the course of several years. Yet it never happened.
Around this time I started a spreadsheet, seeding it with all of the computers I’d owned over the years. I included their processing power, the size of their hard drives, the amount of RAM they had, and their modem speed. I calculated the average annual increase of each of these attributes, and then plotted these forward in time.
I looked at the future predictions for “modem speed” (as I called it back then; today we’d call it internet connection speed or bandwidth). By this time, I was tired of hearing that streaming video was just around the corner, so I decided to forget about trying to predict advancements in software compression and just look at the hardware trend. The hardware trend showed that internet connection speeds were increasing, and that by 2005 the speed of the connection would be sufficient to reasonably stream video in real time without resorting to heroic amounts of video compression or miracles in internet protocols. Gene Kim laughed at my prediction.
Nine years later, in February 2005, YouTube arrived. Streaming video had finally made it.
The same spreadsheet also predicted we’d see a music downloading service in 1999 or 2000. Napster arrived in June, 1999.
The data has proven surprisingly accurate over the long term. Using just two data points, the modem I had in 1986 and the modem I had in 1998, the spreadsheet predicted that I’d have a 25 megabit/second connection in 2012. As I currently have a 30 megabit/second connection, that’s a very accurate 15-year prediction.
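Here’s that two-point calculation as a quick sketch in code. The 1986 and 1998 modem speeds below (300 bit/s and 56 kbit/s) are illustrative assumptions, not necessarily the exact entries from my spreadsheet:

```python
# Two-point extrapolation: average annual growth between two data
# points, projected forward. Modem speeds here are assumed values.
def annual_rate(value_start, value_end, years):
    """Average annual multiplier between two data points."""
    return (value_end / value_start) ** (1 / years)

def forecast(base_value, rate, years_ahead):
    """Project a value forward along the exponential trend."""
    return base_value * rate ** years_ahead

bps_1986 = 300       # assumed: 300 bit/s modem in 1986
bps_1998 = 56_000    # assumed: 56 kbit/s modem in 1998

rate = annual_rate(bps_1986, bps_1998, 1998 - 1986)
bps_2012 = forecast(bps_1998, rate, 2012 - 1998)
print(f"{rate:.2f}x per year -> {bps_2012 / 1e6:.0f} Mbit/s in 2012")
# -> 1.55x per year -> 25 Mbit/s in 2012
```

With those assumed endpoints, the extrapolation lands right on the 25 megabit/second figure.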
Why It Works Part One: Linear vs. Non-Linear
Without really understanding the concept, it turns out that what I was doing was using linear trends (advancements that proceed smoothly over time), to predict the timing of non-linear events (technology disruptions) by calculating when the underlying hardware would enable a breakthrough. This is what I mean by “forget about trying to predict advancements in software and just look at the hardware trend”.
It’s still necessary to imagine the future development (although the trends can help inspire ideas). What this technique does is let you map an idea to the underlying requirements to figure out when it will happen.
For example, it answers questions like these:
When will the last magnetic platter hard drive be manufactured?
2016. I plotted the growth in capacity of magnetic platter hard drives and flash drives back in 2006 or so, and saw that flash would overtake magnetic media in 2016.
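The crossover arithmetic is simple: solve for the year the faster-growing trend passes the slower one. The circa-2006 capacities and growth rates below are illustrative assumptions, not my spreadsheet’s actual values:

```python
import math

def crossover_years(leader_base, leader_rate, chaser_base, chaser_rate):
    """Years until the faster-growing chaser trend passes the leader."""
    return math.log(leader_base / chaser_base) / math.log(chaser_rate / leader_rate)

disk_gb, disk_rate = 750, 1.3    # assumed: magnetic platter capacity and growth, 2006
flash_gb, flash_rate = 4, 2.2    # assumed: flash capacity and growth, 2006

t = crossover_years(disk_gb, disk_rate, flash_gb, flash_rate)
print(f"Flash passes magnetic media around {2006 + t:.0f}")  # -> around 2016
```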
When will a general purpose computer be small enough to be implanted inside your brain?
2030. Based on the continual shrinking of computers, by 2030 an entire computer will be the size of a pencil eraser, which would be easy to implant.
When will a general purpose computer be able to simulate human level intelligence?
Between 2024 and 2050, depending on which estimate of the complexity of human intelligence is selected, and the number of computers used to simulate it.
Wait a second: human-level artificial intelligence by 2024? Gene Kim would laugh at this. Isn’t AI a really challenging field? Haven’t people been predicting that artificial intelligence is just around the corner for forty years?
Why It Works Part Two: Crowdsourcing
At my panel on the future of artificial intelligence at SXSW, one of my co-panelists objected to the notion that exponential growth in computer power was, by itself, all that was necessary to develop human level intelligence in computers. There are very difficult problems to solve in artificial intelligence, he said, and each of those problems requires effort by very talented researchers.
I don’t disagree, but the world is a big place full of talented people. Open source and crowdsourcing principles are well understood: When you get enough talented people working on a problem, especially in an open way, progress comes quickly.
I wrote an article for the IEEE Spectrum called The Future of Robotics and Artificial Intelligence is Open. In it, I examine how the hobbyist community is now building inexpensive unmanned aerial vehicle auto-pilot hardware and software. What once cost $20,000 and was produced by skilled researchers in a lab, now costs $500 and is produced by hobbyists working part-time.
Once the hardware is capable enough, the invention is enabled. Before this point, it can’t be done.  You can’t have a motor vehicle without a motor, for example.
As the capable hardware becomes widely available, the invention becomes inevitable, because it enters the realm of crowdsourcing: now hundreds or thousands of people can contribute to it. When enough people had enough bandwidth for sharing music, it was inevitable that someone, somewhere was going to invent online music sharing. Napster just happened to have been first.
IBM’s Watson, which won Jeopardy, was built using three million dollars in hardware and had 2,880 processing cores. When that same amount of computer power is available in our personal computers (about 2025), we won’t just have a team of researchers at IBM playing with advanced AI. We’ll have hundreds of thousands of AI enthusiasts around the world contributing to an open source equivalent to Watson. Then AI will really take off.
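A back-of-the-envelope version of that timing estimate, using the roughly 1.47x annual growth figure calculated later in this article. Treating Watson’s core count as a proxy for total processing power and assuming an 8-core desktop in 2011 are both simplifications:

```python
import math

# How long until a single personal computer matches Watson's 2,880
# cores? Assumes ~1.47x annual growth in per-machine compute and an
# 8-core desktop baseline in 2011; core count stands in for total
# processing power, which is a simplification.
watson_cores = 2880
desktop_cores_2011 = 8
annual_growth = 1.47

years = math.log(watson_cores / desktop_cores_2011) / math.log(annual_growth)
print(f"Parity around {2011 + years:.0f}")  # -> roughly the mid-2020s
```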
(If you doubt that many people are interested, recall that more than 100,000 people registered for Stanford’s free course on AI and a similar number registered for the machine learning / Google self-driving car class.)
Of course, this technique doesn’t work for every class of innovation. Wikipedia was a tremendous invention in the process of knowledge curation, and it was dependent, in turn, on the invention of wikis. But it’s hard to say, even with hindsight, that we could have predicted Wikipedia, let alone forecast when it would occur.
(If you had the idea of a crowd-curated online knowledge system, you could apply the litmus test of internet connection rates to assess when there would be a viable number of contributors and users. A documentation system such as a wiki is useless without any way to access it. But I digress…)
Objection, Your Honor
A common objection is that linear trends won’t continue to increase exponentially because we’ll run into a fundamental limitation: e.g. for computer processing speeds, we’ll run into the manufacturing limits for silicon, or the heat dissipation limit, or the signal propagation limit, etc.
I remember first reading statements like the above in the mid-1980s about the Intel 80386 processor. I think the statement was that they were using an 800 nm process for manufacturing the chips, but they were about to run into a fundamental limit and wouldn’t be able to go much smaller. (Smaller equals faster in processor technology.)


(Chart: semiconductor manufacturing process sizes shrinking over time. Source: Wikipedia)
But manufacturing technology has proceeded to get smaller and smaller.  Limits are overcome, worked around, or solved by switching technology. For a long time, increases in processing power were due, in large part, to increases in clock speed. As that approach started to run into limits, we’ve added parallelism to achieve speed increases, using more processing cores and more execution threads per core. In the future, we may have graphene processors or quantum processors, but whatever the underlying technology is, it’s likely to continue to increase in speed at roughly the same rate.
Why Predicting The Future Is Useful: Predicting and Checking
There are two ways I like to use this technique. The first is as a seed for brainstorming. By projecting out linear trends and having a solid understanding of where technology is going, it frees up creativity to generate ideas about what could happen with that technology.
It never occurred to me, for example, to think seriously about neural implant technology until I was looking at the physical size trend chart, and realized that neural implants would be feasible in the near future. And if they are technically feasible, then they are essentially inevitable.
What OS will they run? From what app store will I get my neural apps? Who will sell the advertising space in our brains? What else can we do with uber-powerful computers about the size of a penny?
The second way I like to use this technique is to check other people’s assertions. There’s a company called Lifenaut that is archiving data about people to provide a life-after-death personality simulation. It’s a wonderfully compelling idea, but it’s a little like video streaming in 1994: the hardware simply isn’t there yet. If the earliest we’re likely to see human-level AI is 2024, and even that would be on a cluster of 1,000+ computers, then it seems impossible that Lifenaut will be able to provide realistic personality simulation anytime before that.* On the other hand, if they have the commitment needed to keep working on this project for fifteen years, they may be excellently positioned when the necessary horsepower is available.
At a recent Science Fiction Science Fact panel, other panelists and most of the audience believed that strong AI was fifty years off, and brain augmentation technology was a hundred years away. That’s so distant in time that the ideas then become things we don’t need to think about. That seems a bit dangerous.
* The counter-argument frequently offered is “we’ll implement it in software more efficiently than nature implements it in a brain.” Sorry, but I’ll bet on millions of years of evolution.

How To Do It

This article is How To Predict The Future, so now we’ve reached the how-to part. I’m going to show some spreadsheet calculations and formulas, but I promise they are fairly simple. There are three parts to the process: calculate the annual increase in a technology trend, forecast the linear trend out, and then map future disruptions to the trend.
Step 1: Calculate the annual increase
It turns out that you can do this with just two data points, and it’s pretty reliable. Here’s an example using two personal computers, one from 1996 and one from 2011. You can see that cell B7 shows that computer processing power, in MIPS (millions of instructions per second), grew at a rate of 1.47x each year, over those 15 years.
(Spreadsheet screenshot: Intel Pentium Pro vs. Intel Core i7 3960X, with gap in years, total growth, and rate of growth.)
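The spreadsheet’s step-1 arithmetic, written out in code. The MIPS figures are approximate published benchmark numbers for the two chips:

```python
# Average annual increase from two data points (step 1).
pentium_pro_mips = 541       # Intel Pentium Pro, 1996 (approximate)
core_i7_mips = 177_730       # Intel Core i7 3960X, 2011 (approximate)
gap_years = 2011 - 1996

total_growth = core_i7_mips / pentium_pro_mips
rate = total_growth ** (1 / gap_years)
print(f"Total growth {total_growth:.0f}x, or {rate:.2f}x per year")
# -> Total growth 329x, or 1.47x per year
```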
I like to use data related to technology I have, rather than technology that’s limited to researchers in labs somewhere. Sure, there are supercomputers that are vastly more powerful than a personal computer, but I don’t have those, and more importantly, they aren’t open to crowdsourcing techniques.
I also like to calculate these figures myself, even though you can research similar data on the web. That’s because the same basic principle can be applied to many different characteristics.
Step 2: Forecast the linear trend
The second step is to take the technology trend and predict it out over time. In this case we take the annual increase in advancement (B$7 – previous screenshot), raised to an exponent of the number of elapsed years, and multiply it by the base level (B$11). The formula displayed in cell C12 is the key one.
(Spreadsheet screenshot: expected MIPS, projected year by year.)
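The step-2 formula (base value times annual rate raised to the number of elapsed years) maps directly to code; this sketch reuses the step-1 numbers:

```python
# Step 2: forecast the trend forward from a base year.
base_year, base_mips = 2011, 177_730   # Core i7 3960X baseline (approximate)
rate = 1.47                            # annual growth rate from step 1

for year in range(2013, 2026, 4):
    expected = base_mips * rate ** (year - base_year)
    print(f"{year}: {expected:,.0f} expected MIPS")
```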
I also like to use a sanity check to ensure that what appears to be a trend really is one. The trick is to pick two data points in the past: one is as far back as you have good data for, the other is halfway to the current point in time. Then run the forecast to see if the prediction for the current time is pretty close. In the bandwidth example, picking a point in 1986 and a point in 1998 exactly predicts the bandwidth I have in 2012. That’s the ideal case.
Step 3: Mapping non-linear events to linear trend
The final step is to map disruptions to enabling technology. In the case of the streaming video example, I knew that a minimal quality video signal was composed of a resolution of 320 pixels wide by 200 pixels high by 16 frames per second with a minimum of 1 byte per pixel. I assumed an achievable amount for video compression: a compressed video signal would be 20% of the uncompressed size (a 5x reduction). The underlying requirement based on those assumptions was an available bandwidth of about 1.6 megabits/second, which we would hit in 2005.
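That requirement checks out with a few lines of arithmetic, using exactly the assumptions above:

```python
# Minimum bandwidth for streaming video: 320x200 pixels, 16 frames/sec,
# 1 byte per pixel, compressed to 20% of raw size.
width, height, fps, bytes_per_pixel = 320, 200, 16, 1
compression = 0.20

raw_bits_per_sec = width * height * fps * bytes_per_pixel * 8
needed_bits_per_sec = raw_bits_per_sec * compression
print(f"{needed_bits_per_sec / 1e6:.1f} Mbit/s")  # -> 1.6 Mbit/s
```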
In the case of implantable computers, I assume that a computer of the size of a pencil eraser (1/4” cube) could easily be inserted into a human’s skull. By looking at physical size of computers over time, we’ll hit this by 2030:
(Spreadsheet screenshot, sizes in cubic inches: Apple //e with two disk drives vs. Motorola Droid 3, with elapsed years, size delta, rate of shrinkage per year, and a projected future size of less than a 1/4-inch cube, which could easily fit in your skull.)
This is a tricky prediction: traditional desktop computers have tended to be big square boxes constrained by the standardized form factor of components such as hard drives, optical drives, and power supplies. I chose to use computers I owned that were designed for compactness for their time. Also, I chose a 1996 Toshiba Portege 300CT for a sanity check: if I project the trend between the Apple //e and Portege forward, my Droid should be about 1 cubic inch, not 6. So this is not an ideal prediction to make, but it still clues us in about the general direction and timing.
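For completeness, the shrinking-size projection follows the same two-point recipe, just with a rate below 1. The volumes below are rough guesses rather than my spreadsheet’s actual entries, so the crossover year lands later than the chart’s 2030; the method, not the exact year, is the point:

```python
import math

# Project shrinking computer volume and solve for when it fits a
# 1/4-inch cube. Both volumes are rough illustrative guesses.
target_cubic_inches = 0.25 ** 3    # a 1/4" cube
apple_iie = 1_600                  # assumed: Apple //e plus two drives, 1983
droid_3 = 6                        # assumed: Motorola Droid 3, 2011

shrink = (droid_3 / apple_iie) ** (1 / (2011 - 1983))  # per-year multiplier
years_to_target = math.log(target_cubic_inches / droid_3) / math.log(shrink)
print(f"Fits a 1/4-inch cube around {2011 + years_to_target:.0f}")
```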
The predictions for human-level AI are more straightforward, but more difficult to display, because there’s a range of assumptions for how difficult it will be to simulate human intelligence, and a range of projections depending on how many computers you can bring to bear on the problem. Combining three factors (time, brain complexity, available computers) doesn’t make a nice 2-axis graph, but I have made the full human-level AI spreadsheet available to explore.
I’ll leave you with a reminder of a few important caveats:
  1. Not everything in life is subject to exponential improvements.
  2. Some trends, even those that appear to be consistent over time, will run into limits. For example, it’s clear that the rate of settling new land in the 1800s (a trend that was increasing over time) couldn’t continue indefinitely since land is finite. But it’s necessary to distinguish genuine hard limits (e.g. amount of land left to be settled) from the appearance of limits (e.g. manufacturing limits for computer processors).
  3. Some trends run into negative feedback loops. In the late 1890s, when all forms of personal and cargo transport depended on horses, there was a horse manure crisis. (Read Gotham: The History of New York City to 1898.) Had one plotted the trend over time, it would have shown cities like New York soon buried under horse manure. Of course, that’s a negative feedback loop: if the horse manure had kept growing, at a certain point people would have left the city. As it turns out, the automobile solved the problem and enabled cities to keep growing.


So please keep in mind that this is a technique that works for a subset of technology, and it’s always necessary to apply common sense. I’ve used it only for information technology predictions, but I’d be interested in hearing about other applications.

This is a repost of an article I originally wrote for If you enjoyed this post, please check out my novels Avogadro Corp: The Singularity Is Closer Than It Appears and A.I. Apocalypse, near-term science-fiction novels about realistic ways strong AI might emerge. They’ve been called “frighteningly plausible”, “tremendous”, and “thought-provoking”.

Technological unemployment is the notion that even as innovation creates new opportunities, it destroys old jobs. Automobiles, for example, created entirely new industries (and convenience), but eliminated jobs like train engineers and buggy builders. As the pace of technology change grows faster, the impact of large scale job elimination increases, and some fear we’ve passed the point of peak jobs. This post explores the past and future of technological unemployment.

Growing up in Brooklyn, my friend Vito’s father spent as much time tinkering around his home as he did working. He was just around more than other dads. I found it quite puzzling until Vito explained that his father was a stevedore, or longshoreman, a worker who loaded and unloaded shipping vessels.

New York Shipyard

Shipping containers (specifically intermodal containers) started to be widely used in the late 1960s and early 1970s. They took far less time to load and unload than un-containerized cargo. Longshoremen, represented by a strong union, opposed the intermodal containers until the union reached an agreement that the longshoremen would be compensated for the loss of employment due to the container innovation. So longshoremen worked when ships came in, and received payment (partial or whole, I’m not sure) for the time they didn’t work because of how quickly the containers could be unloaded.

As a result, Vito’s father was paid a full salary even though his job didn’t require him full time. In the extra time, he was able to be with his kids and work around the home.

Other industries have had innovations that led to unemployment, and in most cases, those professions were not so protected. Blacksmiths are few and far between, and they didn’t get a stipend. Nor did wagon wheel makers, or train conductors, or cowboys. In fact, if we look at professions of the 1800s, we can see many that are gone today. And though there may have been public outcry at the time, we recognize that times change, and clearly we couldn’t protect their jobs forever, even if we wanted to.

Victorian Blacksmiths

However, technology changed more slowly in the 1800s. It’s likely that wagon wheel makers and blacksmiths died out through attrition (fewer people entering the profession as they saw opportunities dwindle, while existing, older practitioners retired or died) rather than through mass unemployment.

By comparison, in the 1900s, technology changed fast enough, and with enough disruption, that it routinely put people out of work. Washing machines put laundries out of business. Desktop publishing put typesetters out of work. (Desktop publishing created new jobs, new business, and new opportunities, but for people whose livelihood was typesetting: they were out of luck.) Travel websites put travel agents out of business. Telephone automation put operators out of work. Automated teller machines put many bank tellers out of work (and many more soon), and so on.

This notion that particular kinds of jobs cease to exist is known as technological unemployment. It’s been the subject of numerous articles lately. John Maynard Keynes used the term in 1930:

“We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come-namely, technological unemployment. This means unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.”

In December, Wired ran a feature length article on how robots would take over our jobs. TechCrunch wrote that we’ve hit peak jobs. Andrew McAfee (of Enterprise 2.0 fame) and Erik Brynjolfsson wrote Race Against The Machine. There is a Google+ community dedicated to discussing the concept and its implications.

There are different ways to respond to technological unemployment.

One approach is to do nothing: let people who lose their jobs retrain on their own to find new jobs. Of course, this causes pain and suffering. It can be hard enough to find a job when you’re trained for one, let alone when your skills are obsolete. Meanwhile, poverty destroys individuals, children, and families through long-term financial problems, lack of healthcare, food, shelter, and sufficient material goods. I find this approach objectionable for this reason alone.

A little personal side story: several years ago a good friend broke his ankle and didn’t have health insurance. The resulting medical bills would have been difficult to handle under any circumstances, but especially so in the midst of the great recession, when the family’s income was low. We helped raise money through donations to the family to help pay for the medical expenses. Friends and strangers chipped in. What was remarkable is that nearly all the strangers who chipped in were folks who either didn’t have health insurance themselves or had extremely modest incomes. That is, although we made the same plea for help to everyone, aside from friends it was only people who had been in the same or similar situations who actually donated money, despite their being the ones who could least afford it. I bring this up because I don’t think people who have a job and health insurance can really appreciate what it means to not have those things.

The other reason it doesn’t make sense to do nothing is that it can often become a roadblock to meaningful change. One example of this is the logging industry. Despite fairly broad public support for changes to clear-cutting policies, loggers and logging companies often fight back with claims of the number of jobs that will be lost. Whether or not this is true, the job-loss argument has stymied attempts to change logging policy to more sustainable practices. So even though we could get to a better long-term place through changes, the short-term fear of job loss can hold us back.

Similar arguments have often been made about military investments. Although this is a more complicated issue (it’s not just about jobs, but about overall global policy and positioning), I know particular military families that will consistently vote for the political candidate that supports the biggest military investment because that will preserve their jobs. Again, fear of job loss drives decision making, as opposed to bigger picture concerns.

Longshoremen circa 1912
by Lewis Hine

The longshoreman union agreement is what I’d call a semi-functional response to technological unemployment. Rather than stopping innovation, the compromise allowed innovation to happen while preserving the income of the affected workers. It certainly wasn’t a bad thing that Vito’s father was around more to help his family.

There are two problems with this approach: it doesn’t scale, and it doesn’t enable change. Stevedores were a small number of workers in a big industry, one that was profitable enough to continue paying full wages for reduced work.

I started to think about technological unemployment a few years ago when I published my first book. I was shocked at the amount of manual work it took to transform a manuscript into a finished book: anywhere from 20 to 60 hours for a print book.

As a software developer who is used to repeatable processes, I found the manual work highly objectionable. One of the recent principles of software development is the agile methodology, where change is embraced and expected. If I discover a problem in a web site, I can fix that problem, test the software, and deploy it, in a fully automated way, within minutes. Yet if I found a problem in my manuscript, it was tedious to fix: it required changes in multiple places, handoffs between multiple people and computers and software programs. It would take days of work to fix a single typo, and weeks to do a few hundred changes. What I expected, based on my years of experience in the software industry, was a tool that would automatically transform my manuscript into an ebook, printed book, manuscript format, etcetera.

I envisioned creating a tool to automate this workflow, something that would turn novel manuscripts into print-ready books with a single click. I also realized that such a tool would eliminate hundreds or thousands of jobs: designers who currently do this would be put out of work. Of course it wouldn’t be all designers, and it wouldn’t be all books. But for many books, the $250 to $1,000 that might currently be paid to a designer would be replaced by $20 for the software or web service to do that same job.

It is progress, and I would love such a tool, and it would undoubtedly enable new publishing opportunities. But it has a cost, too.

Designers are intelligent people, and most would find other jobs. A few might eke out an existence creating book themes for the book formatting services, but that would be a tiny opportunity compared to the earnings before, much the same way iStockPhoto changed the dynamics of photography. In essence, a little piece of the economic pie would be forever destroyed by that particular innovation.

When I thought about this, I realized that this was the story of the technology industry writ large: the innovations that have enabled new businesses, new economic opportunities, more convenience — they all come at the expense of existing businesses and existing opportunities.

I like looking up and reserving my own flights and I don’t want to go backwards and have travel agents again. But neither do I want to live in a society where people can’t find meaningful work.

Meet your future boss.

Innovation won’t stop, and many of us don’t want it to. I think there is a revolution coming in artificial intelligence, and subsequently in robotics, and these will speed up the pace of change, rendering even more jobs obsolete. The technological singularity may bring many wondrous things, but change and job loss is an inevitable part of it.

If we don’t want to lose civilization to poverty, and the longshoreman approach isn’t scalable, then what do we do?

One thing that’s clear is that we can’t do it piecemeal: If we must negotiate over every class of work, we’ll quickly become overwhelmed. We can’t reach one agreement with loggers, another with manufacturing workers, another with construction workers, another with street cleaners, a different one for computer programmers. That sort of approach doesn’t work either, and it’s not timely enough.

I think one answer is that we provide education and transitional income so that workers can learn new skills. If a logger’s job is eliminated, then we should be able to provide a year of income at their current income rate while they are trained in a new career. Either benefit alone doesn’t make sense: simply giving someone unemployment benefits to look for a job in a dying career doesn’t make a long term change. And we can’t send someone to school and expect them to learn something new if we don’t take care of their basic needs.

The shortcoming of the longshoreman solution is that the longshoremen were never trained in a new field. The expense of paying them for reduced work was never going to go away, because they were never going to make the move to a new career, so there would always be more of them than needed.

And rather than legislate which jobs receive these kinds of benefits, I think it’s easy to determine statistically. The U.S. government has thousands of job classifications: it can track which classifications are losing workers at a statistically significant rate, and automatically grant a “career transition” benefit if a worker loses a job in an affected field.
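The statistical trigger could be as simple as tracking year-over-year employment per classification code. Here's a minimal Python sketch of the idea; the classification codes, worker counts, and the 5% threshold are all invented for illustration (a real system would use a proper statistical significance test rather than a fixed cutoff):

```python
# Hypothetical sketch: automatically flag job classifications that are
# shedding workers, so displaced workers get the "career transition"
# benefit. Codes and counts are invented; a real system would apply a
# proper statistical significance test, not a fixed threshold.

def is_declining(counts, threshold=-0.05):
    """True if the average year-over-year change is below threshold
    (e.g., shrinking more than 5% per year)."""
    changes = [(b - a) / a for a, b in zip(counts, counts[1:])]
    return sum(changes) / len(changes) < threshold

# Yearly worker counts per (invented) classification code
employment = {
    "45-4021 Loggers": [62000, 58000, 54000, 50000],
    "15-1251 Programmers": [250000, 255000, 260000, 262000],
}

transition_eligible = [code for code, counts in employment.items()
                       if is_declining(counts)]
# Loggers qualify for the benefit; programmers don't.
```

The point is that eligibility falls out of the data automatically, with no per-industry negotiation.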

In effect, we’re adjusting the supply and demand of workers to match available opportunities. If logging jobs are decreasing, not only do you have loggers out of work, but you also have loggers competing for a limited number of jobs, in which case wages decrease, and even those workers with jobs are making so little money they soon can’t survive.

Even as many workers are struggling to find jobs, I see companies struggling to find workers. Skills don’t match needs, so we need to add to people’s skills.

Programmers need to eat too.

I use the term education, but I suspect there are a range of ways that retraining can happen besides the traditional education experience: unpaid work internships, virtual learning, and business incubators.

There is currently a big focus on high tech incubators like TechStars because of the significant return on investment in technology companies, but many firms from restaurants to farming to brick and mortar stores would be amenable to incubators. Incubator graduates are nearly twice as likely to stay in business as compared to the average company, so the process clearly works. It just needs to be expanded to new businesses and more geographic areas.

The essential attributes of entrepreneurs are the ability to learn quickly, respond to changing conditions, and think about the big picture. These will be vital skills in the fast-evolving future. It’s why, when my school age kids talk about getting ‘jobs’ when they grow up, I push them towards thinking about starting their own businesses.

Still, I think a broad spectrum retraining program, including a greater move toward entrepreneurship, is just one part of the solution.

I think the other part of the solution is to recognize that even with retraining, there will come a time, whether in twenty-five or fifty years, when the majority of jobs are performed by machines and computers. (There may be a small subset of jobs humans do because they want to, but eventually all work will become a hobby: something we do because we want to, not because we need to.)

This job will be available in the future.

The pessimistic view would be bad indeed: 99% of humanity scrambling for any kind of existence at all. I don’t believe it will end up like this, but clearly we need a different kind of economic infrastructure for the period when there are no jobs. Rather than wait until the situation is dire, we should start putting that infrastructure in place now.

We need a post-scarcity economic infrastructure. Here’s one example:

We have about 650,000 homeless in the United States and foreclosures on millions of homes, but about 11% of U.S. houses are empty. Empty! They are creating no value for anyone. Houses are not scarce, and we could reduce both suffering (homelessness) and economic strain (foreclosures) by realizing this. We can give these non-scarce resources to people, and free up their money for actually scarce resources, like food and material goods.

Who wins when a home is foreclosed?

To weather the coming wave of joblessness, we need a combination of better redistribution of non-scarce resources as well as a basic living stipend. There are various models of this, from Alaska’s Permanent Fund to guaranteed basic income. Unlike full-fledged socialism, where everyone receives the same income regardless of their work (and can earn neither more nor less than this, and by traditional thinking, therefore may have little motivation to work), a stipend or basic income model provides a minimal level of income so that people can live humanely. It does not provide for luxuries: if you want to own a car, or a big screen TV, or eat steak every night, you’re still going to have to work.

European Initiative for
Unconditional Basic Income

This can be dramatically less expensive than it might seem. When you realize that housing is often a family’s largest expense (consuming more than half of the income of a family at poverty level), and the marginal cost of housing is $0 (see above), and if universal healthcare exists (we can hope the U.S. will eventually reach the 21st century), then providing a basic living stipend is not a lot of additional money.

I think this is the inevitable future we’re marching towards. To reiterate:

  1. full income and retraining for jobs eliminated due to technological change
  2. redistribution of unused, non-scarce resources
  3. eventual move toward a basic living stipend

I think we can fight the future, perhaps delay it by a few years. If we do, we’ll cause untold suffering along the way as yet more people edge toward poverty, joblessness, or homelessness.

Or we can embrace the future quickly, and put in place the right structural supports that allow us to move to a post-scarcity economy with less pain.

What do you think?

This is an amazing deal: Audible just put the Avogadro Corp and A.I. Apocalypse audiobooks on sale for $1.99 each!

As I don’t have any control over pricing, this is an exciting opportunity to pick the audio editions up at a significant discount compared to their usual price of $17.95. I don’t know how long it will last, so take advantage of it while you can!

A Manhattan Project of the Mind (or Brain Wars)
Sharon Weinberger, @weinbergersa, Columnist
Presentation at SXSW

·      Background
o   Do a weekly column called “Code Red”
o   Write about the Pentagon’s role in neuroscience
o   For ten years I’ve written about the technology the Pentagon chooses to fund.
o   Started writing these articles about six years ago.
o   After writing these articles, started getting thousands of letters from people who claimed to be experimental test subjects.
§  Whether these people are right or wrong, they are googling what the Pentagon is doing, and finding out that in fact, the Pentagon does have technology to make voices in people’s heads.
o   This is partly about neuroscience as a weapon.
o   What are they really doing, and what are they not doing? What’s the hype and what’s the reality?
o   There’s some good science, and some bad science.
·      You can trace the Pentagon’s interest back to:
o    J.C.R. Licklider’s vision in 1960: a man-computer symbiosis.
§  Seems obvious today, but in 1960, the notion that a computer wouldn’t just crunch numbers, but would interact with you and help you make decisions.
§  The game Missile Command is similar to a real problem the air force had in the 1950s, and hence developed Sage, a system for monitoring and tracking incoming missiles.
o   Jacques Vidal’s “Toward direct brain-computer communication”
§  Got funding from DARPA as a basic science project to use observable electrical brain signals to control technology.
·      DARPA Director’s Vision 2002:
o   Imagine 25 years from now where old guys like me put on a pair of glasses or a helmet and open our eyes. Somewhere there will be a robot that will open its eyes…
·      Duke University Medical Center in 2003
o   Taught rhesus monkeys to consciously control the movement of a robot arm in real time, using only signals from their brains.
o   Crude approximation, takes a lot of training.
o   But it works
·      Augmented Cognition (AugCog) 2003/2004:
o   Goal for order of magnitude increase in mental capacity.
o   Want to help soldiers manage cognitive overload.
o   Vision of Augmented Cognition 2005
§  Video showing how sensors can be used to detect overload in the brain. When too many streams of information threaten to overload the user, the user interface is streamlined to highlight certain elements and reduce others (e.g. maximize text, minimize images).
·      Neurotechnology for intelligence analysts
o   They look at hundreds of images each day, trying to glean information.
o   Scientists wanted to watch the P300 signal (object recognition), to see if they could help the analysts better spot things.
o   In theory, they could detect the signal faster than the consciousness can interpret it. There’s a 300ms delay in the conscious brain.
§  We don’t totally understand why there is the delay.
·      In 2008, did a project called “Luke’s binoculars”
o   Wanted to use binoculars and P300 signals to help identify objects of interest.
·      In 2012, they actually have a system: a soldier with EEG in the lab. But actually developing technology for use in the field is much harder.
·      Neuroprosthetics: 2009
o   By 2004, 2005, and 2006, one of the biggest problems was roadside bombs. Lots of soldiers losing limbs.
o   Modern prosthetics are cable systems: you clench a muscle in your back, it is sensed, and moves a cable to move the arm.
o   It’s very, very hard.
o   Our understanding of which neurons do what is still crude…it’s probabilistic.
o   Mechanical arms are still more useful.
·      2013: Brain Net
o   brain-to-brain interface in rats
o   Same guys at Duke who did the rhesus monkey brain implant
o   They linked two rat brains… one is the encoder, and one is the decoder.
·      Other Directions: Narrative Networks
o   “Neuroscience for propaganda”
·      Future Attributes Screening Technology
o   When you go through the airport, a subset of agents are trained to specifically look for suspicious behaviors: facial expressions, body language.
o   DHS wants to use remote sensors to look at physiological indicators: heart rate, sweating, blood flow in the face, etc.
§  They want to identify “mal-intent”. Whether you harbor the intent to commit a crime.
o   See Homeland Security Youtube video
§  Future Attribute Screening Technology
§  Battelle / Farber
o   All sorts of issues: Why are people nervous? Because they are going to commit a crime? Or because they’re skipping work to go to an event? Or because they ate a ham sandwich?
§  Decades of research have shown we still can’t reliably detect lies.
§  We certainly can’t detect mal-intent.
·      Future Directions: Smart Drugs
o   No formal studies done.
o   Anecdotal reports: 25% of soldiers in field use a smart drug such as Ritalin or Adderall.
o   Should we test smart drugs?
o   Possibly the government is staying away from it because of the long history of problematic research done in the past (LSD experiments by CIA), plagued by ethical concerns.
·      President Obama’s Vision 2013
o   Unlocking and mapping the brain. Wants billions of dollars to flow into it. If that happens, DARPA will be one of the major sources of funding.
·      Hype vs. Reality
o   Brain controlled drones?
§  The technique is slow: tens of bits of information per minute, and subject to noise.
§  Not obvious that it can be used yet for field applications.
·      Where are we today?
o   Brain-computer interface already here in limited capacity: very limited, very crude, don’t work well.
o   Neuroprosthetics still years away.
o   Deception Detection: very little agreement, even on the basic science.
o   Mind-controlled drones: a generation away.
·      Implications
o   Technological: expands the battlefield through telepresence
o   Ethical: human testing questions…is it an even battlefield for augmented soldiers?
o   Do we have the right to brain privacy? Do soldiers?
o   It’s hard to have a serious debate about futuristic technology. (The giggle factor)
§  Will: interesting idea, given increasing pace of technology, we need to talk about the future, but it’s hard to do.
·      Was able to participate in an experiment using fMRI to try to read associated brain activity
o   In a two-hour session, surrounded by a ton of gear, they had a difficult time even detecting which part of the brain was associated with tapping her finger.
o   If it was that hard to detect such a simple thing, then reading deeper thoughts, reading minds, is very far off, even if it is ever possible.

A Robot in Your Pocket
Jeff Bonforte, CEO Xobni, @bonforte
Amit Kapur, CEO Gravity, @amitk

·      Marvin Minsky
o   In the 50s, predicted robots would be everywhere in 5 years
o   In the 60s, it was 10 years
o   In the 70s, it was 20 years
o   In the 80s, it was 40 years
·      It’s a fine line between tools and robots
o   Robota is Czech for “hard work”
o   It’s a fine line between a tool and a point where it becomes something that works for you.
·      We think of robots as a hardware thing
o   We want R2D2, Rosie, and Six.
o   What we have are vacuum cleaners and industrial robots.
·      They’re here, and they’re software.
·      What’s changed in the last decade?
o   Data
o   Smaller and cheaper sensors.
o   The more things we measure, the more accurately we can respond.
o   Smartphones are a collection of sensors we carry with us all the time.
·      Software, too.
o   Natural Language Processing: Understanding semantically what something is about.
o   Machine Learning: Software can look at data, learn from it, do intelligence tasks.
o   Distinct Ontologies: Instead of a rigid taxonomy, … Humans don’t think in hierarchical structures. We think flexibly. An iRobot vacuum makes us think about things like chores, and how we don’t have time, and the cost of hiring a maid.
§  Machines need to be able to understand and combine things.
·      More data than we know what to do with.
o   We start by measuring things we don’t know what to do with.
o   Will it rain today?
§  It’s a deterministic problem. Use barometer, wind conditions, etc.
§  Stochastic: Look at 10 million shoe selections of New Yorkers, and you can figure out if it’s going to rain.
·      The point of stochastic is that one data point doesn’t matter. Whereas in a deterministic model, you could crash your model with a weird data point.
·      After 24 hours, shoe selection is not correlated to weather.
§  The point is, we can correlate surprising things.
o   Xobni does this with inboxes. The average inbox is a couple of megabytes. The Xobni inbox has 40 MB of data.
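The shoe example can be sketched in a few lines: aggregate many noisy individual choices and check the correlation with the outcome. All the numbers here are invented:

```python
# Toy illustration of the stochastic approach: no single shoe choice
# matters, but the aggregate correlates strongly with the weather.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fraction of observed commuters wearing rain boots each morning,
# and whether it rained that day (1 = rain, 0 = dry).
boot_fraction = [0.02, 0.31, 0.04, 0.28, 0.35, 0.03, 0.30]
rained = [0, 1, 0, 1, 1, 0, 1]

r = pearson(boot_fraction, rained)  # close to 1.0: the aggregate predicts rain
```

Any one commuter's shoes tell you nothing; dropping a single data point barely moves `r`, which is exactly the robustness the deterministic model lacks.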
·      Explicit versus implicit data
o   “I’m here at this restaurant”, or “this is my favorite person”
o   vs.
o   We look at your data, and observe what you do. If you text a person 1,000 times to the same number, why does the phone still ask you which number to use?
o   Examples of implicit data:
§  Payment patterns from credit cards
§  Locations you pass when you drive, locations you stay a long time.
§  You express your preferences and patterns through what you do every day.
o   For example: let’s say I get a text message from someone with a link. How often do I click on links from that person? If it’s high, then go fetch the page in the background, so that when I do click on it, the page is preloaded.
o   Implicit systems are much more accurate, because they are related to current behavior and actual actions, rather than what people think they are interested in, or what they explicitly said 2 years ago.
o   Features like Circles in Google+ are explicit, and they cause high cognitive load.
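The link-prefetching example lends itself to a tiny sketch: track per-sender click-through rates implicitly, and prefetch only when the rate is high enough. The sender names, counts, and thresholds below are all made up:

```python
# Sketch of the implicit-data idea: if you almost always open links
# from a particular sender, fetch the page in the background before
# you even tap. No one ever tells the phone this explicitly; it falls
# out of observed behavior.

click_history = {
    # sender -> (links received, links clicked)
    "alice": (1000, 970),
    "weekly-newsletter": (500, 3),
}

def should_prefetch(sender, min_rate=0.8, min_samples=20):
    """Prefetch only when there's enough history and a high click rate."""
    received, clicked = click_history.get(sender, (0, 0))
    if received < min_samples:
        return False  # not enough implicit signal yet
    return clicked / received >= min_rate
```

Note the `min_samples` guard: implicit signals need volume before they're trustworthy, which is why this works better on a phone you carry everywhere than on a service you touch once a month.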
·      Where giants tread
o   IBM’s Watson.
§  Smart AI can win Jeopardy.
§  Now diagnose cancer.
o   Google’s self-driving car.
§  Passes 300,000 miles driven.
o   Huge R&D budgets, years of efforts.
·      Startups coming into the equation
o   The cost of getting technology and processing data is going down
o   More tools are open source
·      Big R&D innovations feel like they’re five years away, but it’s usually 10 years.
o   Example of iDrive: the cost and effort to do it ($5.7M for a 16-terabyte drive, $1.5M monthly bandwidth bill, writing every component of the system) versus Dropbox ten years later (off-the-shelf components, cheap costs).
·      Progression
o   Analog: Brakes
o   Digital: Antilock
o   Robot: Crash avoidance
·      Progression
o   Analog: thermostat
o   Digital: timer thermostat
o   Robotic: Nest
·      News
o   Analog: Newspapers.
o   Digital: Online representation.
o   Robot (gravity): Personalized experience based on their preferences, derived from their past behavior
·      Businesses
o   A: Yellow pages
o   D: Yelp
o   R: Ness
·      Information
o   A: Encyclopedia
o   D: Google Search
o   R: Google Now
·      Contacts
o   A: Address book
o   D: Contacts / Email
o   R: Xobni
·      Objectives
o   Learn
o   Adapt
o   Implicit
o   Proactive
o   Personalized
·      A spam filter that’s 95% accurate is totally unreliable. 0% adoption. At 98%, still bad. 99%, still bad. You need to get to 99.8% before you get adoption.
o   But for restaurant selection, 95% is great.
o   Different level of expected quality for different systems.
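The adoption-threshold point is just arithmetic: at realistic volumes, a few percent error compounds into daily annoyance. A quick back-of-the-envelope, with assumed volumes:

```python
# Why 95% accuracy is unusable for a spam filter but fine for
# restaurant picks: error rate times volume. Volumes are assumed.

def daily_errors(accuracy, items_per_day):
    return (1 - accuracy) * items_per_day

emails_per_day = 200                            # assumed email volume
spam_95 = daily_errors(0.95, emails_per_day)    # ~10 mistakes a day: unusable
spam_998 = daily_errors(0.998, emails_per_day)  # ~0.4 a day: tolerable

picks_per_day = 0.5                             # a few restaurant picks a week
rest_95 = daily_errors(0.95, picks_per_day)     # a bad pick every few weeks: fine
```

Same accuracy, wildly different experience, purely because of how often the system is exercised.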
·      Gravity
o   Personalizing the Internet
o   Marissa Mayer saying that Yahoo is going to be very focused on personalization.
o   Surrounding ourselves with the experts in machine learning, natural language processing.
o   Mission: leverage the interest graph to personalize the internet
o   The more information that flows into a system, the harder it becomes to find great content. It’s the signal to noise ratio.
o   The history of the internet is of companies creating better filters to find great content.
o   Phases
§  Their web: directories, google.
§  Our web: use social graph, get content shared with us from friends
§  Your web: using technology to process data to understand the individual, and have adaptive, personalized experience.
o   Interest Graphing
§  Semantic analysis of webpage. Match against ontologies we’ve built.
§  Watch what people do, match against interests.
§  Then personalize what they see.
§  Showed examples of how, for sites filled with links (New York Times, Huffington Post), Gravity will find the top articles you’d be interested in.
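A toy version of the interest-graph pipeline described here: map page terms to topics via a small hand-built ontology, then rank articles by overlap with a user's observed interests. The ontology, interests, and headlines are all invented (Gravity's real ontologies and matching are far richer):

```python
# Minimal sketch of interest-graph personalization: semantic matching
# reduced to a word-to-topic lookup, ranking reduced to set overlap.

ontology = {
    "robot": "robotics", "drone": "robotics", "ai": "robotics",
    "stock": "finance", "market": "finance",
}

def topics(text):
    """Map the words of a page onto the topics they belong to."""
    return {ontology[w] for w in text.lower().split() if w in ontology}

def rank(articles, user_interests):
    """Return articles overlapping the user's interests, best first."""
    scored = [(len(topics(a) & user_interests), a) for a in articles]
    return [a for score, a in sorted(scored, reverse=True) if score > 0]

user_interests = {"robotics"}  # inferred from reading behavior, not asked for
articles = ["Stock market rallies", "New AI drone unveiled"]
top = rank(articles, user_interests)  # only the robotics story surfaces
```

Even this crude version captures the "your web" phase: the filter comes from watching what you do, not from a directory or a friend's share.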
·      Xobni
o   Why who matters?
§  It starts with the idea of attaching files to email. You know the sender, the receiver, and the email. Instead of presenting all files on the computer for possible attachment, you can prefilter the list, and it’s a 3x reduction in possible files.
o   Super cool demo of voicemail.
§  Voicemail transcribes and hotlinks to contacts, doing things like resolving references to email (“see the address I emailed you”), to people (the Venn diagram of people they know in common means they must be talking about this particular Chris), and to vocabulary (these two people use words like “dude” and “hey man”)
·      Future Applications
o   Trackers are digital. What’s the robot version? The equivalent of a check engine light for your health.
o   Education: the creation of personalized education and teaching.
o   Finance: help for your particular financial situation.
·      Often people are worried about privacy. Anytime you give people data, you have to worry to what are they going to do.

Via Techcrunch:

Famed inventor, entrepreneur, author, and futurist Ray Kurzweil announced this afternoon that he has been hired by search engine giant Google as a director of engineering focused on machine learning and language processing. He starts this upcoming Monday, according to a report issued on his website.

And from Ray’s website:

“In 1999, I said that in about a decade we would see technologies such as self-driving cars and mobile phones that could answer your questions, and people criticized these predictions as unrealistic. Fast forward a decade — Google has demonstrated self-driving cars, and people are indeed asking questions of their Android phones. It’s easy to shrug our collective shoulders as if these technologies have always been around, but we’re really on a remarkable trajectory of quickening innovation, and Google is at the forefront of much of this development.

“I’m thrilled to be teaming up with Google to work on some of the hardest problems in computer science so we can turn the next decade’s ‘unrealistic’ visions into reality.” 

The singularity is a little closer now. — Will

In one of my writers groups, we’ve been talking extensively about AI emergence. I wanted to share one thought around AI intelligence:

Many of the threats of AI originate from a lack of intelligence, not a surplus of it.

An example from my Buddhist mathematician friend Chris Robson: If you’re walking down a street late at night and see a thuggish looking person walking toward you, you would never think to yourself “Oh, I hope he’s not intelligent.” On the contrary, the more intelligent, the less likely they are to be a threat.

Similarly, we have stock trading AIs right now. They aren’t very intelligent, yet they could easily cause a global economic meltdown, and they’d never understand the ramifications.

We’ll soon have autonomous military drones. They’ll kill people and obey orders without ever making a judgement call.

So the earliest AI problems are more likely to come from a lack of relevant intelligence than from a surplus of it.

On the flip side, Computer One by Warwick Collins is a good AI emergence novel that makes the reverse case: that preemptive aggression is a winning strategy, and any AI smart enough to see that it could be turned off will see people as a threat and preemptively eliminate us.