NOQ Report - Conservative Christian News, Opinions, and Quotes
Elon Musk's nightmare is way overblown: AI isn't the demon, people are

by Steve Berman
July 17, 2017
in News

The real world’s closest thing to Tony Stark told the National Governors Association that artificial intelligence (AI) is “summoning the demon.” The Hill reported Elon Musk’s remarks:

“With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like — yeah, he’s sure he can control the demon. Doesn’t work out,” said Musk.

This kind of fear-mongering summons up images of Skynet, or The Matrix, where self-aware machines decide (on their own) to put the muzzle on humans and take away our bite. But the real issue is much more mundane, and it’s related to people, not machines.
A fascinating interview with computer scientist and author Jaron Lanier unpacks the issue in painstaking detail. Lanier’s main point is that American law recognizes corporations as “persons,” capable of executing agency (legal, even moral) that’s typically reserved for individual human beings.
He calls AI “fake” in the sense that the scary language builds “a layer of religious thinking” around technology, imagining that actual human agency has been removed and replaced with algorithms.
I’ll quote a bit from the interview.

Since our economy has shifted to what I call a surveillance economy, but let’s say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there’s no empirical alternative to compare it to, there’s no baseline. It’s bad personal science. It’s bad self-understanding.

In other words: big data is based on watching people make choices, then using that data to suggest future choices. It allows Amazon, for instance, to be efficient: it steers consumers toward items it has in immediate stock by autocompleting their search requests, then stocks more of whatever sells most. It allows Netflix to be efficient by operating with an incredibly small catalog of available content (compared to, say, iTunes) while using suggestions to steer watching habits.
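The mechanics described above are simple enough to sketch. This is a toy illustration, not Amazon’s or Netflix’s actual system: it “watches” which items were chosen together in the past and suggests the most frequent co-occurrences.

```python
from collections import Counter
from itertools import combinations

# Purchase histories the system has "watched" (invented toy data).
baskets = [
    {"tent", "stove", "lantern"},
    {"tent", "lantern"},
    {"tent", "stove"},
    {"novel", "bookmark"},
]

# Count how often each ordered pair of items shares a basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Suggest the k items most often chosen alongside `item`."""
    scores = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return [other for other, _ in scores.most_common(k)]

print(recommend("tent"))  # items most often co-bought with a tent
```

Note what this toy does not contain: any model of why people bought what they bought. It simply replays past behavior as future suggestion, which is Lanier’s point.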

The one thing I want to say about this is I’m not blaming Netflix for doing anything bad, because the whole point of Netflix is to deliver theatrical illusions to you, so this is just another layer of theatrical illusion—more power to them. That’s them being a good presenter. What’s a theater without a barker on the street? That’s what it is, and that’s fine. But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there’s not much choice anyway.

When you translate these algorithms into more serious real-world decisions, they do tend to skew toward bias, and maybe that is the problem Musk is so worried about.
An algorithm that predicts baseball outcomes (there is a whole field devoted to this, called sabermetrics) might suggest the game would be better with a pitch clock, because fans complain that games are too long and getting longer. Sabermetrics is, ironically, partly responsible for games getting longer. But the algorithm doesn’t always account for fans’ inner preferences: baseball is an institution that resists change, and that is part of the charm and attraction of the game.
If a pitch clock is implemented, it will surrender some of our human agency to a computer, as would automated calls of balls and strikes, fair and foul balls, tennis balls in or out, or touchdowns in the end zone or out of bounds. Measurement and agency can be human things with AI helpers, or they can be AI things with human participants.
Moving even deeper into the “real world” is something Elon Musk knows much about: self-driving cars. If automotive algorithms can drive as well as, or better than, humans (as Google’s can), what happens when an algorithm avoids an accident with a human driver, causing that driver to hit another car, with injuries or death as the outcome? Is the algorithm responsible for the moral choice of swerving away from a baby carriage and into a bicyclist?
These are human questions, and they do tend to slow down the pace of adoption.
When AI diagnoses illnesses or prioritizes care, hospitals and doctors can certainly feel better about using time and resources more efficiently, but the biases in those doctors’ past choices can be amplified into “bad algorithms” that are not legitimate in the sense of working toward meaningful truth. As Lanier put it:


In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don’t work, which is very different from a virginal and empirically careful system that’s trying to tell what recommendations would work had it not intervened. That’s a pretty clear thing. What’s not clear is where the boundary is.
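Lanier’s “observatory” point can be demonstrated with a small, entirely hypothetical simulation. Below, users have no real preference among three items. One group is nudged by a recommender that always pushes the currently most-clicked item; a held-out control group (the “observatory”) chooses unassisted. The recommender’s own logs end up making one arbitrary item look dominant; the observatory’s logs stay roughly uniform.

```python
import random

random.seed(7)
ITEMS = ["a", "b", "c"]

def choose(nudge=None, follow_prob=0.5):
    """A user with no real preference, who sometimes follows a recommendation."""
    if nudge is not None and random.random() < follow_prob:
        return nudge
    return random.choice(ITEMS)

recommended_clicks = {i: 0 for i in ITEMS}  # system that recommends
observatory_clicks = {i: 0 for i in ITEMS}  # held-out "observatory"

for _ in range(10_000):
    # The recommender pushes whatever its own logs say is most popular,
    # so its measurements are sullied by its own interventions.
    top = max(recommended_clicks, key=recommended_clicks.get)
    recommended_clicks[choose(nudge=top)] += 1
    # The observatory just watches unassisted choices.
    observatory_clicks[choose()] += 1

print(recommended_clicks)  # heavily skewed toward one arbitrary item
print(observatory_clicks)  # roughly uniform: the actual (non-)preference
```

The skewed log is exactly the “system that measures which manipulations work”; the uniform log is the virginal baseline that, in a real surveillance economy, usually doesn’t exist.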

Where reality gets closer to Musk’s nightmare is a scenario (a thought experiment) Lanier describes. Let’s say someone comes up with a way to 3-D print a little assassination drone that can buzz around and kill somebody: a cheap, easy-to-make assassin.

I’m going to give you two scenarios. In one scenario, there’s suddenly a bunch of these, and some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly. There’s so many of them that it’s hard to find all of them to shut it down, and there keep on being more and more of them. That’s one scenario; it’s a pretty ugly scenario.

There’s another one where there’s so-called artificial intelligence, some kind of big data scheme, that’s doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people. The question is, does it make any difference which it is?

Musk, like many technologists with little policy experience, conflates the fact that someone could build this kind of killer tech with the policy problem of cheap killer drones. Lanier spends a few thousand words delving into the topic (which I won’t do, for the reader’s sake; I’m already running long here).
The key is using smart policy to prevent the end result without throwing away the benefits of AI. It’s the same as baseball, or self-driving cars, or counterfeiting currency. Scanners and color copiers have long had the resolution to produce fairly good counterfeit currency. But legitimate manufacturers have complied with laws that kill attempts to actually do it. Try copying a $20 bill on your scanner.
There’s no reason that certain rules can’t be applied to 3-D printers, or other devices that “make” things in the real world. Or to medical software, or–as a hot-button issue–using AI to recommend sentences and parole for convicted criminals.
Lawmakers and politicians need to be aware of these real issues, and the limitations of AI in replacing human agency. These are the actual problems we face, versus the dystopian Everybody Dies™ apocalyptic warnings by people like Musk.
If Google and Netflix are corporate persons, which in turn own AI algorithms based on human choices, imbued with the power to suggest future choices, that does not foreshadow the end of the world. But it does raise some serious issues. Most of these will take care of themselves (people have a tendency to change faster than algorithms can predict, leading to disappointment with the algorithms).
It’s the legal, human, and social issues raised by AI we need to focus on. In the end, people, not machines, are the demons we summon.


Tags: AI, Elon Musk

© 2021 NOQ Report
