Elon Musk’s nightmare is way overblown: AI isn’t the demon, people are

The real world’s closest thing to Tony Stark told the National Governors Association that artificial intelligence (AI) is “summoning the demon.” The Hill reported Elon Musk’s remarks:

“With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like — yeah, he’s sure he can control the demon. Doesn’t work out,” said Musk.

This kind of fear-mongering summons up images of Skynet, or The Matrix, where self-aware machines decide (on their own) to put the muzzle on humans and take away our bite. But the real issue is much more mundane, and it’s related to people, not machines.

A fascinating interview with computer scientist and author Jaron Lanier unpacks the issue in painstaking detail. Lanier’s main point is that American law recognizes corporations as “persons,” capable of executing agency (legal, even moral) that’s typically reserved for individual human beings.

He calls AI “fake” in the sense that the scary language around it is “a layer of religious thinking”: mythology that obscures how technology removes actual human agency and replaces it with algorithms.

I’ll quote a little bit from it.

Since our economy has shifted to what I call a surveillance economy, but let’s say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there’s no empirical alternative to compare it to, there’s no baseline. It’s bad personal science. It’s bad self-understanding.

In other words: big data is based on watching people make choices, then using that data to suggest future choices. It allows Amazon, for instance, to be efficient: it steers consumers toward items in immediate stock by autocompleting search requests, then stocks more of the items bought most often. It allows Netflix to be efficient by operating with an incredibly small catalog of available content (compared to, say, iTunes), using suggestions to steer viewing habits.

The one thing I want to say about this is I’m not blaming Netflix for doing anything bad, because the whole point of Netflix is to deliver theatrical illusions to you, so this is just another layer of theatrical illusion—more power to them. That’s them being a good presenter. What’s a theater without a barker on the street? That’s what it is, and that’s fine. But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there’s not much choice anyway.
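The mechanics behind such suggestion engines are simpler than the mystique suggests. Here is a minimal sketch in Python, with made-up basket data (the item names and the `recommend` helper are illustrative, not any company’s actual system): it counts which items have been bought together, then suggests the most frequent co-purchases.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase history: each inner list is one customer's basket.
baskets = [
    ["book", "lamp"],
    ["book", "lamp", "pen"],
    ["book", "pen"],
    ["lamp", "pen"],
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_counts[(a, b)] += 1

def recommend(item, k=2):
    """Suggest the k items most often bought alongside `item`."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [it for it, _ in scores.most_common(k)]

print(recommend("book"))  # items most often bought with "book"
```

That is the whole trick: past choices in, suggested future choices out. Nothing in the loop understands books or lamps; it only counts.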

When these algorithms are translated into more serious real-world decisions, they tend to skew toward bias, and maybe that is the problem Musk is so worried about.

An algorithm that predicts baseball outcomes (there is a whole field of this called sabermetrics) might suggest the game would be better with a pitch clock, because fans complain that games are too long and getting longer. Sabermetrics is, ironically, partly responsible for games getting longer. But the algorithm doesn’t always account for fans’ inner preferences: baseball is an institution that resists change, and that’s part of the charm and attraction of the game.

If the pitch clock is implemented, it will surrender some of our human agency to a computer, like calling balls and strikes, fair and foul balls, tennis balls in or out, or touchdowns in the end zone or out of bounds. Measurement and agency can be human things with AI helpers, or they can be AI things with human participants.

Moving even deeper into the “real world” is something Elon Musk knows much about: self-driving cars. If automobile algorithms can drive as well as, or better than, humans (as Google’s can), what happens when an algorithm avoids an accident with a human driver, causing that driver to hit another car, with injuries or death as the outcome? Is the algorithm responsible for the moral choice of swerving around a baby carriage and into a cyclist?

These are human questions, and they do tend to slow down the pace of adoption.

When AI diagnoses illnesses or prioritizes care, certainly hospitals and doctors can feel better about using time and resources more efficiently, but then the biases of those doctors’ choices can be amplified into “bad algorithms” that are not legitimate in the sense of working toward meaningful truth. As Lanier wrote:

In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don’t work, which is very different from a virginal and empirically careful system that’s trying to tell what recommendations would work had it not intervened. That’s a pretty clear thing. What’s not clear is where the boundary is.
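Lanier’s distinction between observing preferences and manufacturing them can be demonstrated with a toy simulation. Everything here is hypothetical: two items that users actually like equally, and a recommender that mostly pushes whichever item already has more recorded clicks.

```python
import random

random.seed(0)

# Two items the population likes equally (true preference = 50/50).
true_pref = {"A": 0.5, "B": 0.5}
clicks = {"A": 1, "B": 1}  # arbitrary starting counts

for step in range(10_000):
    if random.random() < 0.9:
        # 90% of the time, recommend the currently most-clicked item.
        shown = max(clicks, key=clicks.get)
    else:
        # Otherwise, explore at random.
        shown = random.choice(["A", "B"])
    # The user clicks with probability equal to their true preference.
    if random.random() < true_pref[shown]:
        clicks[shown] += 1

total = clicks["A"] + clicks["B"]
print({k: round(v / total, 2) for k, v in clicks.items()})
```

The click log ends up heavily lopsided even though the true preferences were identical. The system measured its own manipulation, which is exactly the “sullied observatory” problem Lanier describes.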

Where reality gets closer to Musk’s nightmare is a scenario (a thought experiment) Lanier describes. Let’s say someone comes up with a way to 3-D print a little assassination drone that can buzz around and kill somebody: a cheap, easy-to-make assassin.

I’m going to give you two scenarios. In one scenario, there’s suddenly a bunch of these, and some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly. There’s so many of them that it’s hard to find all of them to shut it down, and there keep on being more and more of them. That’s one scenario; it’s a pretty ugly scenario.

There’s another one where there’s so-called artificial intelligence, some kind of big data scheme, that’s doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people. The question is, does it make any difference which it is?

Musk, like many technologists with little policy experience, conflates the fact that someone could build this kind of killer tech with the separate policy question of what to do about cheap killer drones. Lanier spends a few thousand words delving into the topic (which I won’t do, for the reader’s sake; I’m already running long here).

The key is using smart policy to prevent the worst outcomes without throwing away the benefits of AI. It’s the same as baseball, or self-driving cars, or counterfeiting currency. Scanners and color copiers have long had the resolution to produce fairly good counterfeit currency, but legitimate manufacturers comply with laws and build in safeguards that kill attempts to actually do it. Try copying a $20 bill on your scanner.
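The currency example works because compliance is baked into the devices themselves: imaging software looks for a known marker pattern printed on banknotes (the EURion constellation) and refuses to proceed. Here is a toy Python sketch of that refusal logic (the detection itself is faked with a flag; real detectors analyze pixel patterns):

```python
# Toy sketch of compliance logic, not a real detector. Real software
# searches the image for the EURion constellation, a specific
# arrangement of small circles printed on many banknotes.

def contains_currency_marker(image) -> bool:
    """Hypothetical stand-in for real pattern detection."""
    return image.get("eurion_pattern", False)

def scan(image):
    if contains_currency_marker(image):
        raise PermissionError("Reproduction of banknotes is not permitted.")
    return "scanned"

print(scan({"eurion_pattern": False}))  # prints "scanned"
# scan({"eurion_pattern": True}) would raise PermissionError
```

The policy lives in software shipped by legitimate manufacturers, which is exactly the kind of rule that could be extended to 3-D printers.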

There’s no reason similar rules can’t be applied to 3-D printers, or other devices that “make” things in the real world. Or to medical software, or, as a hot-button issue, the use of AI to recommend sentences and parole for convicted criminals.

Lawmakers and politicians need to be aware of these real issues, and the limitations of AI in replacing human agency. These are the actual problems we face, versus the dystopian Everybody Dies™ apocalyptic warnings by people like Musk.

If Google and Netflix are corporate persons, which in turn own AI algorithms based on human choices, imbued with the power to suggest future choices, that does not foreshadow the end of the world. But it does raise some serious issues. Most of these will take care of themselves (people have a tendency to change faster than algorithms can predict, leading to disappointment with the algorithms).

It’s the legal, human, and social issues raised by AI we need to focus on. In the end, people, not machines, are the demons we summon.

Managing Editor of NOQ Report. Serial entrepreneur. Faith, family, federal republic. One nation, under God, indivisible, with liberty and justice for all.

Copyright © 2017 NOQ Report.