
Elon Musk’s nightmare is way overblown: AI isn’t the demon, people are


The real world’s closest thing to Tony Stark told the National Governors Association that artificial intelligence (AI) is “summoning the demon.” The Hill reported Elon Musk’s remarks:

“With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like — yeah, he’s sure he can control the demon. Doesn’t work out,” said Musk.

This kind of fear-mongering summons up images of Skynet, or The Matrix, where self-aware machines decide (on their own) to put the muzzle on humans and take away our bite. But the real issue is much more mundane, and it’s related to people, not machines.

A fascinating interview with computer scientist and author Jaron Lanier unpacks the issue in painstaking detail. Lanier’s main point is that American law recognizes corporations as “persons,” capable of executing agency (legal, even moral) that’s typically reserved for individual human beings.

He calls AI “fake” in the sense that the scary language is “a layer of religious thinking” draped over the technology: the notion that actual human agency is being removed and replaced with algorithms.

I’ll quote a little bit from it.

Since our economy has shifted to what I call a surveillance economy, but let’s say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there’s no empirical alternative to compare it to, there’s no baseline. It’s bad personal science. It’s bad self-understanding.

In other words: big data is based on watching people make choices and using that data to suggest future choices. It allows Amazon, for instance, to be efficient: it steers consumers toward items in immediate stock by autocompleting search requests, then stocks more of the items bought most. It allows Netflix to be efficient by running with an incredibly small catalog of available content (compared to, say, iTunes) while using suggestions to steer viewing habits.
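The mechanics behind this kind of suggestion engine are mundane. One common approach is simple co-occurrence counting: tally which items get chosen together, then recommend whatever co-occurs most with what you just picked. A minimal sketch (the baskets and item names here are invented for illustration; real systems add weighting and operate at vastly larger scale):

```python
from collections import Counter
from itertools import permutations

# Invented purchase histories: each list is one customer's basket.
purchases = [
    ["bread", "jam", "toaster"],
    ["bread", "jam", "butter"],
    ["bread", "jam"],
    ["bread", "butter"],
]

# Count how often each ordered pair of items shares a basket.
co_counts = {}
for basket in purchases:
    for a, b in permutations(set(basket), 2):
        co_counts.setdefault(a, Counter())[b] += 1

def recommend(item, n=2):
    """Suggest the n items most often bought alongside `item`."""
    return [other for other, _ in co_counts.get(item, Counter()).most_common(n)]

print(recommend("bread"))  # jam (3 shared baskets) ranks above butter (2)
```

Note that the algorithm never asks why people buy jam with bread; it only watches past choices and steers future ones, which is exactly the “bad personal science” Lanier describes.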

The one thing I want to say about this is I’m not blaming Netflix for doing anything bad, because the whole point of Netflix is to deliver theatrical illusions to you, so this is just another layer of theatrical illusion—more power to them. That’s them being a good presenter. What’s a theater without a barker on the street? That’s what it is, and that’s fine. But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there’s not much choice anyway.

When these algorithms are translated into more serious real-world decisions, they do tend to skew toward bias, and maybe that is the problem Musk is so worried about.

An algorithm that predicts baseball outcomes (there is a whole field devoted to this, called sabermetrics) might suggest the game would be better with a pitch clock, because fans complain that games are too long and getting longer. Sabermetrics is, ironically, partly responsible for games getting longer. But the algorithm doesn’t always account for fans’ deeper preferences: baseball is an institution that resists change. That’s part of the charm and attraction of the game.

If the pitch clock is implemented, it will surrender some of our human agency to a computer, as would automating calls of balls and strikes, fair and foul balls, tennis balls in or out, or touchdowns in the end zone or out of bounds. Measurement and agency can be human things with AI helpers, or they can be AI things with human participants.

Moving even deeper into the “real world” is something Elon Musk knows much about: self-driving cars. If automotive algorithms can drive as well as, or better than, humans (as Google’s can), what happens when an algorithm avoids an accident with a human driver, causing that driver to hit another car, with injuries or death as the outcome? Is the algorithm responsible for the moral choice of swerving away from a baby carriage and into a cyclist?

These are human questions, and they do tend to slow down the pace of adoption.

When AI diagnoses illnesses or prioritizes care, hospitals and doctors can certainly feel better about using time and resources more efficiently. But the biases in those doctors’ choices can be amplified into “bad algorithms” that are not legitimate in the sense of working toward meaningful truth. As Lanier put it:

In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don’t work, which is very different from a virginal and empirically careful system that’s trying to tell what recommendations would work had it not intervened. That’s a pretty clear thing. What’s not clear is where the boundary is.
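The feedback loop Lanier describes — a system that measures which of its own manipulations work, rather than observing in peace — is easy to demonstrate. In this toy simulation (all numbers invented), two identical items compete, users follow the recommendation 90% of the time, and the recommender trains on clicks it caused, so whichever item gets an early lead ends up dominating:

```python
import random

random.seed(0)  # deterministic run for illustration

# Two identical items; the recommender just counts past clicks.
clicks = {"A": 1, "B": 1}

def recommend():
    # Recommend whichever item has been clicked more so far.
    return max(clicks, key=clicks.get)

for _ in range(1000):
    rec = recommend()
    # Users follow the recommendation 90% of the time, regardless of merit.
    if random.random() < 0.9:
        choice = rec
    else:
        choice = "B" if rec == "A" else "A"
    clicks[choice] += 1

# One item ends up with the vast majority of clicks even though A and B were
# identical: the system measured its own manipulations, not any preference.
print(clicks)
```

There is no “virginal” baseline here: once the recommender starts steering, its data says more about its steering than about the users.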

Where reality gets closer to Musk’s nightmare is a scenario (a thought experiment) Lanier describes. Let’s say someone comes up with a way to 3-D print a little assassination drone that can buzz around and kill somebody: a cheap, easy-to-make assassin.

I’m going to give you two scenarios. In one scenario, there’s suddenly a bunch of these, and some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly. There’s so many of them that it’s hard to find all of them to shut it down, and there keep on being more and more of them. That’s one scenario; it’s a pretty ugly scenario.

There’s another one where there’s so-called artificial intelligence, some kind of big data scheme, that’s doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people. The question is, does it make any difference which it is?

Musk, like many technologists with little policy experience, conflates the technical fact that someone could build this kind of killer tech with the policy question of what to do about cheap killer drones. Lanier spends a few thousand words delving into the topic (which I won’t do, for the reader’s sake; I’m already way long here).

The key is using smart policy to prevent the end result without throwing away the benefits of AI. It’s the same as baseball, or self-driving cars, or counterfeiting currency. Scanners and color copiers have long had the resolution to produce fairly good counterfeit currency, but legitimate manufacturers comply with laws that kill attempts to actually do it: many scanners and image-editing programs detect the security patterns printed on banknotes and simply refuse to process them. Try copying a $20 bill on your scanner.

There’s no reason similar rules can’t be applied to 3-D printers or other devices that “make” things in the real world. Or to medical software, or, as a hot-button issue, to AI that recommends sentences and parole for convicted criminals.

Lawmakers and politicians need to be aware of these real issues, and the limitations of AI in replacing human agency. These are the actual problems we face, versus the dystopian Everybody Dies™ apocalyptic warnings by people like Musk.

If Google and Netflix are corporate persons, which in turn own AI algorithms based on human choices, imbued with the power to suggest future choices, that does not foreshadow the end of the world. But it does raise some serious issues. Most of these will take care of themselves (people have a tendency to change faster than algorithms can predict, leading to disappointment with the algorithms).

It’s the legal, human, and social issues raised by AI we need to focus on. In the end, people, not machines, are the demons we summon.


Democrats

Kyrsten Sinema’s socialist thoughts now exemplify over half of Arizona


Arizona can no longer be considered a red state. As the Senate election vote counts finish up, Democrat Kyrsten Sinema appears poised to win. It isn’t that a Democrat won that makes me move Arizona from red to purple. It’s that a socialist in moderate clothing was able to pull the wool over the eyes of Arizona voters so easily.

Just an hour of research is enough to break through the Arizona mainstream media’s false narrative that Sinema is a moderate. She is anti-capitalism, in favor of open borders, and had the lowest Liberty Score of anyone in the House representing Arizona.

Then, there’s this:

“A huge dollar bill is the most accurate way to teach children the real motto of the United States: In the Almighty Dollar We Trust… Until the average American realizes that capitalism damages her livelihood while augmenting the livelihoods of the wealthy, the Almighty Dollar will continue to rule. It certainly is not ruling in our favor.”

Arizona chose poorly.


Guns and Crime

Trust in Chicago area police was already low. Then they killed Jemel Roberson.


An armed security guard prevented anyone from getting killed when gunmen returned to his bar after getting thrown out. He subdued them without using deadly force and was restraining one of the alleged assailants when police arrived. That’s when a resolved situation turned ugly.

A Midlothian police officer shot and killed Jemel Roberson, 26, while responding to a shooting inside Manny’s Blue Room Bar in Robbins, Illinois, about 4 a.m. Sunday. Roberson was pronounced dead at the scene.

This appears to be a case of a truly decent person doing his job and losing his life as a result.

Security guard killed by police in Robbins bar wanted to be a cop, friends say

https://wgntv.com/2018/11/12/officer-responds-to-gunfire-fatally-shoots-security-guard-at-robbins-bar/

Friends said Roberson was an upstanding guy who had plans to become a police officer. He was also a musician, playing keyboard and drums at several Chicago-area churches.

“Every artist he’s ever played for, every musician he’s ever sat beside, we’re all just broken because we have no answers,” the Rev. Patricia Hill from Purposed Church said. “He was getting ready to train and do all that stuff, so the very people he wanted to be family with, took his life.”

“Once again, it’s the continued narrative that we see of shoot first, ask questions later,” the Rev. LeAundre Hill said.

My Take

Chicago area residents have had many reasons not to trust the men and women charged with keeping them safe. Controversial police-involved shootings, rising crime rates, and tone-deaf leadership in city, county, and state governments have been pushing people in the area to give up on law enforcement.

This will make matters much worse.

The optics on this couldn’t get much uglier, especially if the unnamed police officer who shot Roberson turns out to be Caucasian. Roberson, an African-American, was able to detain four assailants without anyone getting fatally wounded. The fact that he was then fatally shot by police adds a new dimension to the rift between police and the people.

In most incidents where police are believed to have used deadly force unnecessarily, it’s a matter of them shooting an alleged criminal when other means of subduing them could have been used. Such is the case with Jason Van Dyke who fatally shot Laquan McDonald. Nobody argued that McDonald wasn’t dangerous. He was high on PCP, had a knife, and was walking in the middle of the street despite police warnings for him to drop the weapon and get on the ground.

Roberson’s situation is the opposite. He was doing his duty as a security guard and very likely saved lives in the process. His death is almost certainly going to start another round of racial tensions and anti-police protests that could cause tremendous turmoil throughout the Chicagoland area.

There is usually gray area in police shootings, but this seems pretty black and white to me. Jemel Roberson acted heroically. Instead of a happy ending for the day and a bright future in law enforcement, he’s gone.


Entertainment and Sports

Stan Lee’s 10 greatest comics


Stan Lee has died. While modern audiences probably know much more about the Marvel movies and television shows that dominate our viewing pleasures, it was his genius in creating so many beloved comic book characters decades ago that fuels Hollywood today.

Looper put out a video with his greatest comics. These subjective lists are usually fodder for debate, but I was so pleasantly surprised by their choices that I decided to post it here. It may be the first time I agree with nearly everything in a video top-10 list. Fitting that it surrounds an icon like Lee.

From his quirky cameos in every Marvel movie to his down-to-earth perspectives present in every interview, there’s plenty to love about Stan Lee. But it was his comic book creations that have made a permanent mark on American culture.


Copyright © 2018 NOQ Report