I only have a limited amount of writing time this week, and I want to focus that time on my next novel. (No, not book 4. That’s off with an editor right now. I’m drafting the novel after that, the first non-Avogadro Corp book.) But I feel compelled to briefly address the reaction to Elon Musk’s opinion about AI.
Brief summary: Elon Musk said that AI is a risk, and that the risks could be bigger than those posed by nuclear weapons. He compared AI to summoning a demon to illustrate that although we think we’d be in control, AI could easily escape that control.
Brief summary of the reaction: a bunch of vocal folks have ridiculed Elon Musk for raising these concerns. I don’t know how numerous they are, but there seem to be a lot of posts from them in my feeds.
I think I’ve said enough to make it clear that I agree there is the potential for risk. I’m not claiming the danger is guaranteed, nor do I believe it will come in the form of armed robots (despite the fiction I write). Again, to summarize very briefly, AI risk can come from many different directions:
- accidents (a programming bug that takes down the power grid, for example)
- unintentional side effects (an AI that decides on the best path to fulfill its goal without taking into account the impact on humans: maybe an autonomous mining robot that harvests the foundations of buildings)
- complex interactions (e.g. the stock-trading AI that nearly collapsed the financial markets a few years ago)
- intentional decisions (an AI that decides humans pose a risk to AI, or an AI that is merely angry or vengeful)
- human-driven terrorism (e.g. nanotechnology made possible by AI, but programmed by a person to attack other people)
Accidents and complex interactions have already happened. Programmers already struggle to fully understand their own code, and AI is often written as a black box that is even more incomprehensible. There will be more of these incidents, and they don’t require human-level intelligence. Once AI does achieve human-level intelligence, new risks become more likely.
What makes AI risks different from more traditional ones is their speed and scale. A financial meltdown can happen in seconds, and we humans would know about it only afterwards. Bad decisions by a human doctor could affect a few dozen patients. Bad decisions by a medical AI installed in every hospital could affect hundreds of thousands of patients.
There are many potential benefits to AI. They are also not guaranteed, but they include things like more efficient production so that we might work less, greater advances in medicine and technology so that we can live longer, and a reduced impact on the environment so that we have a healthier planet.
Because of these many potential benefits, we probably don’t want to stop work on AI. But since almost all research effort is going into creating AI and very little is going into reducing the risks of AI, we have an imbalance. When Elon Musk, who has a great deal of visibility and credibility, talks about the risks of AI, this is a very good thing, because it will help us address that imbalance and invest more in risk reduction.