With so many discussions happening about the risks and benefits of artificial intelligence (AI), I really want to collect data from a broader population to understand what others think about the possible risks, benefits, and development path.

Take the Survey

I’ve created a nine-question survey that takes less than six minutes to complete. Please help contribute to our understanding of the perception of artificial intelligence by completing the survey.

Take the survey now

Share the Survey

I hope you’ll also share the survey with others. The more responses we get, the more useful the data becomes.

Share AI survey on Twitter

Share AI survey on Facebook

Share the link: https://www.surveymonkey.com/s/ai-risks

Thanks to Elon Musk’s fame and his concerns about the risks of AI, it seems like everyone’s talking about it.

One difficulty I’ve noticed is a lack of agreement on exactly what risk we’re talking about. I’ve had several discussions in just the last few days, both at the Defrag conference in Colorado and online.

One thing I’ve noticed is that the risk naysayers tend to say “I don’t believe there is risk due to AI”. But when you probe them further, what they are often saying is “I don’t believe there is existential risk from a Skynet scenario due to a super-intelligence created from existing technology.” The second statement is far narrower, so let’s dig into its components.

Existential risk is defined by Nick Bostrom as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Essentially, we’re talking about either the extinction of humankind, or something close to it. However, most of us would agree that there are very bad outcomes that are nowhere near an existential risk. For example, about 4% of the global population died in WWII. That’s not an existential risk, but it’s still horrific by anybody’s standards.

Runaway AI, accelerating super-intelligence, or hard takeoff are all terms that refer to the idea that once an artificial intelligence is created, it will recursively improve its own intelligence, becoming vastly smarter and more powerful in a matter of hours, days, or months. We have no idea if this will happen (I don’t think it’s likely), but simply because we don’t have a hard takeoff doesn’t mean that an AI would be stagnant or lack power compared to people. There are many different ways even a modest AI with the creativity, motivation, and drive equivalent to that of a human could affect a great deal more than a human could:

  • Humans can type 50 words a minute. AI could communicate with tens of thousands of computers simultaneously.
  • Humans can drive one car at a time. AI could fly all the world’s airplanes simultaneously.
  • Humans can trip one circuit breaker. AI could trip all the world’s circuit breakers.
  • Humans can reproduce a handful of times over the course of a lifetime. AI could reproduce millions of times over the course of a day.
  • Humans evolve over the course of tens of thousands of years or more. Computers become 50% more powerful each year.

So for many reasons, even if we don’t have a hard takeoff, we can still have AI actions and improvements that occur far faster, and with far wider effect, than we humans are adapted to handle.

Skynet scenario, Terminator scenario, or killer robots are terms that refer to the idea that AI could choose to wage open warfare on humans using robots. This is just one of many possible types of risk. Other ways that AI could harm us include deliberate mechanisms, like trying to manipulate us by controlling the information we see, killing off particular people who pose threats, or extorting us to deliver services it wants. This idea of manipulation is important, because while death is terrible, the loss of free will is pretty bad too.

Frankly, most of those seem silly or unlikely compared to unintentional harm that AI could cause: the electrical grid could go down, transportation could stop working, our home climate control could stop functioning, or a virus could crash all computers. If these don’t seem very threatening, consider…

  • What if one winter, for whatever reason, homes wouldn’t heat? How many people would freeze to death?
  • Consider that Google’s self-driving car doesn’t have any manual controls. It’s the AI or it’s no-go. More vehicles will move in this direction, especially all forms of bulk delivery. If all transportation stopped, how would people in cities get food when their 3-day supply runs out?
  • How long can those city dwellers last without fresh water if pumping stations are under computer control and they stop?

Existing technology: Some will argue that because we don’t have strong AI (i.e., human-level intelligence or better) now, there’s no point in even talking about risk. However, this sounds like “Let’s not build any asteroid defenses until we clearly see an asteroid headed for Earth”. It’s far too late by then. Similarly, once the AI is here, it’s too late to talk about precautions.

In conclusion, if you have a conversation about AI risks, be clear what you’re talking about. Frankly, all of humanity being killed by robots under the control of a super-intelligent AI doesn’t even seem worth talking about compared to all of the more likely risks. A better conversation might start with a question like this:

Are we at risk of death, manipulation, or other harm from future AI, whether deliberate or accidental, and if so, what can we do to decrease those risks?

I only have a limited amount of writing time this week, and I want to focus that time on my next novel. (No, not book 4. That’s off with an editor right now. I’m drafting the novel after that, the first non-Avogadro Corp book.) But I feel compelled to briefly address the reaction to Elon Musk’s opinion about AI.

Brief summary: Elon Musk said that AI is a risk, and that the risks could be bigger than those posed by nuclear weapons. He compared AI to summoning a demon, using the comparison to illustrate the idea that although we think we’d be in control, AI could easily escape from that control.

Brief summary of the reaction: A bunch of vocal folks have ridiculed Elon Musk for raising these concerns. I don’t know how vocal they are, but there seem to be a lot of posts in my feeds from them.

I think I’ve said enough to make it clear that I agree that there is the potential for risk. I’m not claiming the danger is guaranteed, nor do I believe that it will come in the form of armed robots (despite the fiction I write). Again, to summarize very briefly: the risks of AI can come from many different dimensions:

  • accidents (a programming bug that causes the power grid to die, for example)
  • unintentional side effects (an AI that decides on the best path to fulfill its goal without taking into account the impact on humans: maybe an autonomous mining robot that harvests the foundations of buildings)
  • complex interactions (e.g. stock trading AI that nearly collapsed the financial markets a few years ago)
  • intentional decisions (an AI that decides humans pose a risk to AI, or an AI that is merely angry or vengeful)
  • human-driven terrorism (e.g. nanotechnology made possible by AI, but programmed by a person to attack other people)

Accidents and complex interactions have already happened. Programmers already struggle to fully understand their own code, and AI systems are often built as black boxes that are even more incomprehensible. There will be more of these failures, and they don’t require human-level intelligence. Once AI does achieve human-level intelligence, new risks become more likely.

What makes AI risks different from more traditional ones is their speed and scale. A financial meltdown can happen in seconds, and we humans would know about it only afterwards. Bad decisions by a human doctor could affect a few dozen patients. Bad decisions by a medical AI that’s installed in every hospital could affect hundreds of thousands of patients.

There are many potential benefits to AI. They are also not guaranteed, but they include things like more efficient production so that we humans might work less, greater advances in medicine and technology so that we can live longer, and reducing our impact on the environment so we have a healthier planet.

Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.

This year is a bonanza for singularity movies, starting with Her (great), Transcendence (fun, but didn’t deliver on expectations), and now Lucy.

Overall, I liked a lot of things about Lucy although it has a few shortcomings.

Lucy spoilers ahead. Spoilers. Did you hear that? Now is your chance to stop reading.

The basic plot from Wikipedia: Lucy (Scarlett Johansson) is a woman living in Taipei, Taiwan who is forced to work as a drug mule for the mob. A drug implanted in her body inadvertently leaks into her system, which allows her to use more than the “normal” 10% of her brain’s capacity, thus changing her into a superhuman. As a result, she can absorb information instantaneously, is able to move objects with her mind, and can choose not to feel pain or other discomforts, in addition to other abilities.

I was expecting two things from this movie:

Great action scenes. This is a Luc Besson movie. Think Fifth Element, Taxi, District 13. Great action scenes and car chases are staples. Delivered as expected.

Good movie visualizations of posthumanism. The movie description and trailer indicate that Lucy gains superhuman abilities, starting with the ability to control her own body, then other humans, and then basic matter. I think this was done very well. The progression over the course of the movie feels logical, and the ending in particular was spectacular. What happens to Lucy after she meets the professor felt spot on.

Those are the strengths. There are a few weaknesses.

10% of the brain. Lucy stumbled when it chose this concept of “humans only use 10% of their brain” as a way to describe what was happening as Lucy progressed to greater and greater capabilities. We know this is scientifically false. A freak nanotechnology accident would be more plausible.

However, I think it’s more useful to see this as metaphor: I’m guessing Luc Besson wanted an easily-understood gauge that ran from human to ultimate-posthuman. And what we got was a percentage number to stand in for that. So ignore the scientific correctness, and just think of it as a power gauge.

Philosophy. But there’s a bigger area in which the movie fell down. That’s in the philosophical underpinnings, which take up a significant amount of time, but don’t make a lot of sense. io9 described it this way:

When you’ve got a badass superhero with evil futuristic drug lord enemies, you’d better have a damn good theory about the meaning of existence if we’re going to take lots of time out to talk about it. And Lucy doesn’t. It’s like Besson read about the superintelligence explosion and the singularity, then decided to slather some soundbytes from What the Bleep Do We Know?! on top of what would otherwise have been a really compelling superhero story.

By comparison, The Matrix does plenty of philosophy about existence, but it’s tightly woven into the story and conflict. In Lucy, the philosophy has nothing to do with the conflict (e.g. the drug lords chasing her), so it can only be taken as a commentary on our world, and in that context, it fizzles out.

In Rolling Stone, Luc Besson said he wanted to do something more than just the usual shoot ’em up:

The bait-and-switch aspects of Lucy — make viewers think they’re watching a trashy action flick, then thrust them into 2001: A Space Odyssey territory — shows the evolution of the 50-year-old Besson, who says he’s grown tired of the shoot-’em-up genre. “I’m not the same moviegoer or moviemaker as I was 10 years ago,” he says. “There are action films made now that are really well done, but after 40 minutes, I get bored. It’s all the same.”

Overall, Lucy was a lot of fun, and what happens to the character Lucy at the end is more plausible than what happened to Dr. Will Caster at the end of Transcendence.

Now I can’t help but imagine Luc Besson directing The Last Firewall.

I love trying to extrapolate trends and seeing what I can learn from the process. This past weekend I spent some time thinking about the size of computers.

From 1986 (Apple //e) to 2012 (Motorola Droid 4), my “computer” shrank 290-fold, or about 19% per year. I know, you can argue about my choices of what constitutes a computer, and whether I should be including displays, batteries, and so forth. But the purpose isn’t to be exact, but to establish a general trend. I think we can agree that, for some definition of computer, they’re shrinking steadily over time. (If you pick different endpoints, using an IBM PC, a Macbook Air, or a Mac Mini, for example, you’ll still get similar sorts of numbers.)

So where does that leave us going forward? To very small places:

Year Cubic volume of computer (cubic inches)
2020 1.07
2025 0.36
2030 0.12
2035 0.04
2040 0.01
2045 0.0046
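
These numbers fall out of simple compound shrinkage. Here’s a minimal sketch that roughly reproduces the table, assuming the 2012 device occupies about 6.1 cubic inches and keeps shrinking at the ~19.6% annual rate implied by a 290-fold reduction over 26 years (both assumptions are mine, not values from the original spreadsheet):

```python
# Minimal sketch of the volume projection. Assumed starting point and rate,
# not taken from the post's spreadsheet.
annual_factor = (1 / 290) ** (1 / 26)    # ~0.804, i.e. ~19.6% smaller each year
volume_2012 = 6.1                        # cubic inches, rough volume of a 2012 smartphone

for year in range(2020, 2050, 5):
    volume = volume_2012 * annual_factor ** (year - 2012)
    print(f"{year}: {volume:.4f} cubic inches")
```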

In a spreadsheet right next to the sheet entitled “Attacking nanotech with nuclear warheads,” I have another sheet called “Data center size” where I’m trying to calculate how big a data center will be in 2045.

A stick of gum is “2-7/8 inches in length, 7/8 inch in width, and 3/32 inch” in thickness, or about 0.23 cubic inches, and we know this thanks to the military specification on chewing gum. According to the chart above, computers will get smaller than that around 2030, or certainly by 2035. They’ll also be about 2,000 times more powerful than one of today’s computers.

Imagine today’s blade computers used in data centers, except shrunk to the size of sticks of gum. If they’re spaced 1″ apart horizontally, and 2″ apart vertically (like a DIMM memory module plugged in on its end), a backplane could hold about 72 of these for every square foot. A “rack” would hold something like 2,800 of these computers. That’s assuming we would even want them to be human-replaceable. If they’re all compacted together, it could be even denser.
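
As a rough check on those densities (the rack dimensions below are my assumptions; they aren’t specified above):

```python
# Gum-stick volume and packing density, recomputed from the figures above.
gum_stick_in3 = 2.875 * 0.875 * 0.09375      # ~0.23 cubic inches

columns_per_ft = 12 // 1                     # sticks spaced 1 inch apart horizontally
rows_per_ft = 12 // 2                        # 2 inches apart vertically
per_sq_ft = columns_per_ft * rows_per_ft     # 72 computers per square foot of backplane

backplane_sq_ft = (19 / 12) * (73.5 / 12)    # assumed 19"-wide, 42U-tall backplane (~9.7 sq ft)
layers = 4                                   # assumed backplane layers front-to-back in one rack
per_rack = per_sq_ft * backplane_sq_ft * layers

print(f"{gum_stick_in3:.2f} cubic inches per stick")
print(f"{per_sq_ft} computers per square foot, ~{per_rack:,.0f} per rack")
```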

It turns out my living room could hold something like 100,000 of these computers, each 2,000 times more powerful than one of today’s computers, for the equivalent of about two million 2014 computers. That’s roughly all of Google’s computing power. In my living room.

I emailed Amber Case and Aaron Parecki about this, and Aaron said “What happens when everyone has a data center in their pockets?”

Good question.

You move all applications to your pocket, because latency is the one thing that doesn’t benefit from technology gains. It’s largely limited by speed of light issues.
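
To make the speed-of-light point concrete, here’s a small sketch (the distances are illustrative) of the best-case round trip to a remote data center over fiber:

```python
# Best-case network round-trip time, bounded by the speed of light in fiber.
# Real latency is worse: routing, queuing, and processing all add to this floor.
SPEED_OF_LIGHT_KM_S = 300_000        # in vacuum
FIBER_FRACTION = 0.67                # light in optical fiber travels at roughly 2/3 c

def min_round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION) * 1000

for km in (100, 1_600, 8_000):       # nearby city, cross-country, intercontinental
    print(f"{km:>5} km: at least {min_round_trip_ms(km):.1f} ms round trip")
```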

If I’ve got a data center in my pocket, I put all the data and applications I might possibly want there.

Want Wikipedia? (14GB) — copy it locally.

Want to watch a movie? It’s reasonable to have the top 500,000 movies and TV shows of all time (2.5 petabytes) in your pocket by 2035, when you’ll have about 292 petabytes of solid-state storage. (I know 292 petabytes seems incredible, but the theoretical maximum data density is 10^66 bits per cubic inch.)
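
A quick back-of-the-envelope check on those storage figures (the ~5 GB per title is my assumed average, not a number from the paragraph above):

```python
# 500,000 titles at an assumed ~5 GB each comes to the 2.5 PB quoted above,
# which is under 1% of the projected 292 PB of pocket storage.
titles = 500_000
gb_per_title = 5                      # assumed average size of a compressed movie
library_pb = titles * gb_per_title / 1_000_000
pocket_pb = 292

print(f"Library size: {library_pb:.1f} PB")
print(f"Fraction of pocket storage: {library_pb / pocket_pb:.1%}")
```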

Want to run a web application? It’s instantiated on virtual machines in your pocket. Long before 2035, even if a web developer needs redis, mysql, mongodb, and rails, it’s just a provisioning script away… You could have a cluster of virtual machines, an entire cloud infrastructure, running in your pocket.

Latency goes to zero, except when you need to do a transactional update of some kind. Most data updates could be done through lazy data coherency.

It doesn’t work for real-time communication with other people. Except possibly in the very long term, when you might run a copy of my personality upload locally, and I’d synchronize memories later.

This also has interesting implications for global networking. It becomes more important to have a high bandwidth net than a low latency net, because the default strategy becomes one of pre-fetching anything that might be needed.

Things will be very different in twenty years. All those massive data centers we’re building out now? They’ll be totally obsolete in twenty years, replaced by closet-sized data centers. How we deploy code will change. Entire new strategies will develop. Today we have DOS-box and NES emulators for legacy software, and in twenty years we might have AWS-emulators that can simulate the entire AWS cloud in a box.

I was honored to be interviewed by the inimitable Nikola Danaylov (aka Socrates) for the Singularity 1 on 1 podcast.

In our 45 minute discussion, we covered the technological singularity, the role of open source and the hacker community in artificial intelligence, the risks of AI, mind-uploading and mind-connectivity, my influences and inspirations, and more. You can watch the video version below, or hop over to the Singularity 1 on 1 blog for audio and download options.

Ramez Naam, author of Nexus and Crux (two books I enjoyed and recommend), has recently put together a few guest posts for Charlie Stross (another author I love). The posts are The Singularity Is Further Than It Appears and Why AIs Won’t Ascend in the Blink of an Eye.

They’re both excellent posts, and I’d recommend reading them in full before continuing here.

I’d like to offer a slight rebuttal and explain why I think the singularity is still closer than it appears.

But first, I want to say that I very much respect Ramez, his ideas and writing. I don’t think he’s wrong and I’m right. I think the question of the singularity is a bit more like Drake’s Equation about intelligent extraterrestrial life: a series of probabilities, the values of which are not known precisely enough to determine the “correct” output value with strong confidence. I simply want to provide a different set of values for consideration than the ones that Ramez has chosen.

First, let’s talk about definitions. As Ramez describes in his first article, there are two versions of singularity often talked about.

The hard takeoff is one in which an AI rapidly creates newer, more intelligent versions of itself. Within minutes, days, or weeks, the AI has progressed from a level 1 AI to a level 20 grand-wizard AI, far beyond human intellect and anything we can comprehend. Ramez doesn’t think this will happen for a variety of reasons, one of which is the exponential difficulty involved in creating successively more complex algorithms (the argument he lays out in his second post).

I agree. I don’t see a hard takeoff. In addition to the reasons Ramez stated, I also believe it takes so long to test and qualify candidates for improvement that successive iteration will be slow.

Let’s imagine the first AI is created and runs on an infrastructure of 10,000 computers. Let’s further assume the AI is composed of neural networks and other similar algorithms that require training on large pools of data. The AI will want to test many ideas for improvements, each requiring training. The training will be followed by multiple rounds of successively more comprehensive testing: first the AI needs to see if the algorithm appears to improve a select area of intelligence, but then it will want to run regression tests to ensure no other aspect of its intelligence or capabilities is adversely impacted. If the AI wants to test 1,000 ideas for improvements, and each idea requires 10 hours of training, 1 hour of assessment, and an average of 1 hour of regression testing, it would take 1.4 years to complete a single round of improvements. Parallelism is the alternative, but remember that this first AI is likely to be a behemoth, requiring 10,000 computers to run. It’s not possible to get that much parallelism.
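
The 1.4-year figure is straightforward to check:

```python
# Serial improvement cycle: 1,000 candidate improvements, each needing
# training, assessment, and regression testing.
ideas = 1_000
hours_per_idea = 10 + 1 + 1           # training + assessment + regression testing
total_hours = ideas * hours_per_idea

print(f"{total_hours} hours = {total_hours / 24:.0f} days = {total_hours / (24 * 365):.1f} years")
```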

The soft takeoff is one in which an artificial general intelligence (AGI) is created and gradually improved. As Ramez points out, that first AI might be on the order of human intellect, but it’s not smarter than the accumulated intelligence of all the humans that created it: many tens of thousands of scientists will collaborate to build the first AGI.

This is where we start to diverge. Consider a simple domain like chess playing computers. Since 2005, chess software running on commercially available hardware can outplay even the strongest human chess players. I don’t have data, but I suspect the number of very strong human chess players is somewhere in the hundreds or low thousands. However, the number of computers capable of running the very best chess playing software is in the millions or hundreds of millions. The aggregate chess playing capacity of computers is far greater than that of humans, because the best chess playing program can be propagated everywhere.

So too, AGI will be propagated everywhere. But I just argued that the first AI will require tens of thousands of computers, right? Yes, except that thanks to Moore’s Law (the observation that computing power tends to double every 18 months), the same AI that required 10,000 computers will need a mere 100 computers ten years later, and just a single computer another ten years after that. Or an individual AGI could run up to 10,000 times faster. That speedup alone means something different when it comes to intelligence: a single being with 10,000 times the experience, learning, and practice that a human has.
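
The hardware arithmetic behind that claim is just repeated doubling:

```python
# Doubling every 18 months is roughly a 100x gain per decade, so an AI that
# needs 10,000 computers today needs ~100 in ten years and ~1 in twenty.
computers_needed = 10_000
for years in (10, 20):
    gain = 2 ** (years * 12 / 18)     # number of doublings in `years` years
    print(f"After {years} years: {gain:,.0f}x faster -> {computers_needed / gain:,.1f} computers")
```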

Even Ramez agrees that it will be feasible to have destructive human brain uploads approximating human intelligence around 2040: “Do the math, and it appears that a super-computer capable of simulating an entire human brain and do so as fast as a human brain should be on the market by roughly 2035 – 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human’s.”

This is the soft takeoff: from a single AGI at some point in time to an entire civilization of that AGI twenty years later, all running at faster-than-human speeds. A race consisting of an essentially alien intelligence, cohabiting the planet with us. Even if they don’t experience an intelligence explosion as Vernor Vinge described, the combination of fast speeds, aggregate intelligence, and inherently different motivations will create an unknowable future that is likely out of our control. And that’s very much a singularity.

But Ramez questions whether we can even achieve an AGI comparable to a human in the first place. There’s this pesky question of sentience and consciousness. Please go read Ramez’s first article in full; I don’t want you to think I’m summarizing everything he said here, but he basically cites three points:

1) No one’s really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience.

This is a difficulty. One analogy that comes to mind is the history of aviation. For nearly a hundred years prior to the Wright Brothers, heavier-than-air flight was being studied, with many different gliders created and flown. It was the innovation of powered engines that made heavier-than-air flight practically possible, and which led to rapid innovation. Perhaps we just don’t have the equivalent yet in AI. We’ve got people learning how to make airfoils and control surfaces and airplane structures, and we’re just waiting for the engine to show up.

We also know that nature evolved sentience without any theory of how to do it. Having a proof point is powerful motivation.

2) There’s a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go?

There’s no lack of incentive. As James Barrat detailed in Our Final Invention, there are billions of dollars being poured into building AGI, both in high-profile projects like the US BRAIN Initiative and Europe’s Human Brain Project, as well as in countless smaller AI companies and research projects.

There’s plenty of human incentive, too. How many people were inspired by Star Trek’s Data? At a recent conference, I asked attendees who would want Data as a friend, and more than half the audience’s hands went up. Among the elderly, loneliness is a very real issue that could be helped with AGI companionship, and many people might choose an artificial psychologist for reasons of confidence, cost, and convenience. All of these require at least the semblance of opinions.

More than that, we know we want initiative. If we have a self-driving car, we expect that it will use that initiative to find faster routes to destinations, possibly go around dangerous neighborhoods, and take necessary measures to avoid an accident. Indeed, even Google Maps has an “opinion” of the right way to get somewhere that often differs from my own. It’s usually right.

If we have an autonomous customer service agent, we’ll want it to flexibly meet business goals including pleasing the customer while controlling cost. All of these require something like opinions and sentience: goals, motivation to meet those goals, and mechanisms to flexibly meet those goals.

3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we’ll suddenly be faced with very real ethical issues. Can we turn it off? 

I absolutely agree that we’ve got ethical issues with AGI, but that hasn’t stopped us from creating other technologies (nuclear bombs, bio-weapons, the internal combustion engine, the transportation system) that also have ethical issues.

In sum, Ramez brings up great points, and he may very well be correct: the singularity might be a hundred years off instead of twenty or thirty.

However, the discussion around the singularity is also one about risk. Having artificial general intelligence running around, potentially in control of our computing infrastructure, may be risky. What happens if the AI has different motivations than us? What if it decides we’d be happier and less destructive if we’re all drugged? What if it just crashes and accidentally shuts down the entire electrical grid? (Read James Barrat’s Our Final Invention for more about the risks of AI.)

Ramez wrote The Infinite Resource: The Power of Ideas on a Finite Planet, a wonderful and optimistic book about how science and technology are solving many resource problems around the world. I think it’s a powerful book because it gives us hope and proof points that we can solve the problems facing us.

Unfortunately, I think the argument that the singularity is far off is different and problematic, because it denies the possibility of the problems facing us. Instead of encouraging us to use technology to address the issues that could arise with the singularity, the argument concludes that the singularity is either unlikely or simply a long time away. With that mindset, we’re less likely as a society to examine AI progress and to take steps to reduce the risks of AGI.

On the other hand, if we can agree that the singularity is a possibility, even just a modest possibility, then we may spur more discussion and investment into the safety and ethics of AGI.

Here’s a scary paragraph from a longer article about Google’s acquisition of AI company DeepMind:

One of DeepMind’s cofounders, Demis Hassabis, possesses an impressive resume packed with prestigious titles, including software developer, neuroscientist, and teenage chess prodigy among the bullet points. But as the Economist suggested, one of Hassabis’s better-known contributions to society might be a video game; a niche but adored 2006 simulator called Evil Genius, in which you play as a malevolent mastermind hell-bent on world domination.

That sounds just like the plot of Daniel Suarez’s Daemon:

When a designer of computer games dies, he leaves behind a program that unravels the Internet’s interconnected world. It corrupts, kills, and runs independent of human control. It’s up to Detective Peter Sebeck to wrest the world from the malevolent virtual enemy before its ultimate purpose is realized: to dismantle society and bring about a new world order.

I’m reading Our Final Invention by James Barrat right now, about the dangers of artificial intelligence. I just got to a chapter in which he discusses how any reasonably complex artificial general intelligence (AGI) is going to want to control its own resources: e.g., if it has a goal, even a simple goal like playing chess, it will be able to achieve its goal better with more computing resources, and won’t be able to achieve its goal at all if it’s shut off. (Similar themes exist in all of my novels.)

This made me snap back to a conversation I had last week at my day job. I’m a web developer, and my current project, without giving too much away, is a RESTful web service that runs workflows composed of other RESTful web services.

We’re currently automating some of our operational tasks. For example, when our code passes unit tests, it’s automatically deployed. We’d like to expand on that so that after deployment, it will run integration tests, and if those pass, deploy up to the next stack, and then run performance tests, and so on.

Although we’re running on a cloud provider, it’s not AWS, and they don’t support autoscaling, so another automation task we need is to roll our own scaling solution.

Then we realized that running tests, deployments, and scaling all require calling RESTful JSON APIs, and that’s exactly what our service is designed to do. So the logical solution is that our software will test itself, deploy itself, and autoscale itself.
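
For illustration, here’s a minimal sketch of that kind of self-operating workflow. The endpoints and payloads are hypothetical, not our actual service’s API:

```python
# A workflow service using its own REST API to test, deploy, and scale itself.
import requests

BASE = "https://workflow.example.com/api"    # hypothetical workflow service

def run_step(name: str, payload: dict) -> bool:
    """Kick off one workflow step (itself a REST call) and report success."""
    response = requests.post(f"{BASE}/workflows/{name}", json=payload, timeout=30)
    return response.ok

if run_step("integration-tests", {"stack": "staging"}):
    if run_step("deploy", {"stack": "production"}):
        run_step("scale", {"stack": "production", "min_instances": 4})
```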

That’s an awful lot like the kind of resource control that James Barrat was writing about.

I was having a discussion with a group of writers about the technological singularity, and several asserted that the rate of increasing processor power was declining. They backed it up with a chart showing that the increase in MIPS per unit of clock speed stalled about ten years ago.

If computer processing speeds fail to increase exponentially, as they have for the last forty years, this will throw off many different predictions for the future and dramatically decrease the likelihood of human-grade AI arising.

I did a bit of research last night and this morning. Using the chart of historical computer speeds from Wikipedia, I placed a few key intervals in a spreadsheet and found:

  • From 1972 to 1985: MIPS grew by 19% per year.
  • From 1985 to 1996: MIPS grew by 43% per year.
  • From 1996 to 2003: MIPS grew by 51% per year.
  • From 2003 to 2013: MIPS grew by 29% per year.

By no means is the list of MIPS ratings exhaustive, but it does give us a general idea of what’s going on. The data shows the rate of CPU speed increases has declined in the last ten years.
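
For reference, each of those percentages is just a compound annual growth rate between two MIPS data points. A sketch, with illustrative values rather than the exact entries from the Wikipedia table:

```python
# Compound annual growth rate (CAGR) between two benchmark data points.
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Annualized growth as a fraction: 0.29 means 29% per year."""
    return (end_value / start_value) ** (1 / years) - 1

# e.g. a processor rated ~9,700 MIPS in 2003 versus ~124,000 MIPS in 2013
print(f"{cagr(9_700, 124_000, 10):.0%} per year")    # ~29% per year
```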

I split up the last ten years:

  • From 2003 to 2008: MIPS grew by 53% per year.
  • From 2008 to 2013: MIPS grew by 9% per year.

According to that, the decline in processing rate increases is isolated to the last five years.

Five years isn’t much of a long-term trend, and there are some processors missing from the end of the table. The Intel Xeon X5675, a six-core, 12-thread processor, isn’t shown, and it’s twice as powerful as the Intel Core i7 4770k that’s the bottom row on the MIPS table. If we substitute the Xeon processor, we find the growth rate from 2008 to 2012 was 31% annually, a more respectable improvement.

However, I’ve been tracking technology trends for a while (see my post on How to Predict the Future), and I try to use only those computers and devices I’ve personally owned. There’s always something faster out there, but it’s not what people have in their home, which is what I’m interested in.

I also know that my device landscape has changed over the last five years. In 2008, I had a laptop (Windows Intel Core 2 T7200) and a modest smartphone (a Treo 650). In 2013, I have a laptop (MBP 2.6 GHz Core i7), a powerful smartphone (Nexus 5), and a tablet (iPad Mini). I’m counting only my own devices and excluding those from my day job as a software engineer.

It’s harder to do this comparison, because there’s no one common benchmark among all these processors. I did the best I could to determine DMIPS for each, converting Geekbench scores for the Mac, and using the closest available processor with a MIPS rating for the mobile devices.

When I compared my personal device growth in combined processing power, I found it increased 51% annually from 2008 to 2013, essentially the same rate as for the longer period 1996 through 2011 (47%), which is what I use for my long-term predictions.

What does all this mean? Maybe there is a slight slow-down in the rate at which computing processing is increasing. Maybe there isn’t. Maybe the emphasis on low-power computing for mobile devices and server farms has slowed down progress on top-end speeds, and maybe that emphasis will contribute to higher top-end speeds down the road. Maybe the landscape will move from single-devices to clouds of devices, in the same way that we already moved from single cores to multiple cores.

Either way, I’m not giving up on the singularity yet.