The real world’s closest thing to Tony Stark told the National Governors Association that artificial intelligence (AI) is “summoning the demon.” The Hill reported Elon Musk’s remarks:
“With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like — yeah, he’s sure he can control the demon. Doesn’t work out,” said Musk.
This kind of fear-mongering summons up images of Skynet, or The Matrix, where self-aware machines decide (on their own) to put the muzzle on humans and take away our bite. But the real issue is much more mundane, and it’s related to people, not machines.
A fascinating interview with computer scientist and author Jaron Lanier unpacks the issue in painstaking detail. Lanier’s main point is that American law recognizes corporations as “persons,” capable of executing agency (legal, even moral) that’s typically reserved for individual human beings.
He calls AI “fake” in the sense that the scary language wraps the technology in “a layer of religious thinking,” while what actually happens is that human agency is removed and replaced with algorithms.
I’ll quote a little bit from it.
Since our economy has shifted to what I call a surveillance economy, but let’s say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there’s no empirical alternative to compare it to, there’s no baseline. It’s bad personal science. It’s bad self-understanding.
In other words: big data is based on watching people make choices, then using that data to suggest future choices. It allows Amazon, for instance, to be efficient: it steers consumers toward items in immediate stock by autocompleting search requests, then stocks more of the items bought most often. It allows Netflix to be efficient by operating with an incredibly small catalog of available content (compared to, say, iTunes) while using suggestions to steer viewing habits.
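The mechanism behind these suggestions can be as simple as counting which choices occur together. A minimal sketch in Python (the product names and purchase data are invented for illustration, not anything Amazon actually does):

```python
from collections import Counter, defaultdict

# Invented purchase histories, purely for illustration.
histories = [
    ["toaster", "bread knife", "butter dish"],
    ["toaster", "bread knife"],
    ["bread knife", "cutting board"],
    ["toaster", "butter dish"],
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(Counter)
for basket in histories:
    for item in basket:
        for other in basket:
            if other != item:
                co_counts[item][other] += 1

def recommend(item, n=2):
    """Suggest the n items most often bought alongside `item`."""
    return [other for other, _ in co_counts[item].most_common(n)]

print(recommend("toaster"))
```

Watch enough baskets and the counts steer future shoppers toward the already-stocked, already-popular items, which is exactly the efficiency (and the feedback loop) described above.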
The one thing I want to say about this is I’m not blaming Netflix for doing anything bad, because the whole point of Netflix is to deliver theatrical illusions to you, so this is just another layer of theatrical illusion—more power to them. That’s them being a good presenter. What’s a theater without a barker on the street? That’s what it is, and that’s fine. But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there’s not much choice anyway.
When these algorithms are translated into more serious real-world decisions, they tend to skew toward bias, and maybe that is the problem Musk is so worried about.
An algorithm that predicts baseball outcomes (there is a whole field devoted to this, called sabermetrics) might suggest the game would be better with a pitch clock, because fans complain that games are too long and getting longer. Sabermetrics is, ironically, partly responsible for games getting longer. But the algorithm doesn’t account for fans’ deeper preferences: baseball is an institution that resists change, and that is part of the game’s charm and attraction.
Implementing a pitch clock would surrender some of our human agency to a computer, just as automating the calls of balls and strikes, fair and foul balls, tennis balls in or out, or touchdowns in the end zone or out of bounds would. Measurement and agency can be human things with AI helpers, or they can be AI things with human participants.
Moving even deeper into the “real world” is something Elon Musk knows much about: self-driving cars. If automobile algorithms can effectively drive as well as, or better than, humans (as Google’s can), what happens when an algorithm avoids an accident with a human driver, causing that driver to hit another driver, with injury or death as the outcome? Is the algorithm responsible for the moral choice of swerving around a baby carriage and into a bicyclist?
These are human questions, and they do tend to slow down the pace of adoption.
When AI diagnoses illnesses or prioritizes care, hospitals and doctors can certainly feel better about using time and resources more efficiently, but the biases in those doctors’ choices can be amplified into “bad algorithms” that are not legitimate in the sense of working toward meaningful truth. As Lanier put it:
In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don’t work, which is very different from a virginal and empirically careful system that’s trying to tell what recommendations would work had it not intervened. That’s a pretty clear thing. What’s not clear is where the boundary is.
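Lanier’s “observatory” point can be made concrete with a toy simulation (the items and all numbers here are hypothetical): give two equally appealing items to a system that always shows whatever its own data says works best, and to a system that observes “in peace” by showing items uniformly at random.

```python
import random

random.seed(0)

# Two hypothetical items with identical true appeal (numbers invented).
TRUE_APPEAL = {"A": 0.5, "B": 0.5}

def run(policy, rounds=10_000):
    """Show one item per round, record clicks, return (rates, show counts)."""
    shows = {"A": 1, "B": 1}    # smoothed counts avoid division by zero
    clicks = {"A": 1, "B": 1}
    for _ in range(rounds):
        item = policy(shows, clicks)
        shows[item] += 1
        if random.random() < TRUE_APPEAL[item]:
            clicks[item] += 1
    return {i: clicks[i] / shows[i] for i in shows}, shows

def greedy(shows, clicks):
    # The manipulating system: show whatever its own data says works best.
    return max(shows, key=lambda i: clicks[i] / shows[i])

def uniform(shows, clicks):
    # The "observatory": shows items uniformly, unsullied by its own advice.
    return random.choice(["A", "B"])

greedy_rates, greedy_shows = run(greedy)
uniform_rates, uniform_shows = run(uniform)
print("greedy shows:", greedy_shows)     # tends to pile onto one item
print("uniform rates:", uniform_rates)   # hovers near the true appeal, 0.5
```

The greedy system’s skewed data measures which of its own manipulations worked; the uniform observatory recovers something close to the truth. Scale the same feedback loop up to diagnoses or parole recommendations and the stakes change entirely.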
Where reality gets closer to Musk’s nightmare is a scenario (a thought experiment) Lanier describes. Let’s say someone comes up with a way to 3-D print a little assassination drone that can buzz around and kill somebody: a cheap, easy-to-make assassin.
I’m going to give you two scenarios. In one scenario, there’s suddenly a bunch of these, and some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly. There’s so many of them that it’s hard to find all of them to shut it down, and there keep on being more and more of them. That’s one scenario; it’s a pretty ugly scenario.
There’s another one where there’s so-called artificial intelligence, some kind of big data scheme, that’s doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people. The question is, does it make any difference which it is?
Musk, like many technologists with little policy experience, conflates the technical fact that someone could build this kind of killer tech with the policy question of what to do about cheap killer drones. Lanier spends a few thousand words delving into the topic (which I won’t do, for the reader’s sake; I’m already running long here).
The key is using smart policy to prevent the bad end result without throwing away the benefits of AI. It’s the same as with baseball, or self-driving cars, or counterfeiting currency. Scanners and color copiers have long had the resolution to produce fairly good counterfeit currency, but legitimate manufacturers comply with laws by building in safeguards that block the attempt. Try copying a $20 bill on your scanner.
There’s no reason similar rules can’t be applied to 3-D printers, or other devices that “make” things in the real world, or to medical software, or (as a hot-button issue) to AI that recommends sentences and parole for convicted criminals.
Lawmakers and politicians need to be aware of these real issues, and the limitations of AI in replacing human agency. These are the actual problems we face, versus the dystopian Everybody Dies™ apocalyptic warnings by people like Musk.
If Google and Netflix are corporate persons, which in turn own AI algorithms built on human choices and imbued with the power to suggest future choices, that does not foreshadow the end of the world. But it does raise some serious issues. Most of these will take care of themselves (people tend to change faster than algorithms can predict, leading to disappointment with the algorithms).
It’s the legal, human, and social issues raised by AI we need to focus on. In the end, people, not machines, are the demons we summon.