Google and their various properties like YouTube have become the central command for the Globalist Elite Cabal’s war against the truth. Using a combination of censorship, gaslighting, suppression of “dangerous” truths, and amplification of lies, the tech giant operates in the trenches of the information war that is being fought on the internet.
Now, they’re advancing their efforts by preparing monstrous leaps in technology. A new patent paints an ominous picture, as they appear to be building the ability to find “misinformation” before it happens. I covered this on the latest episode of The JD Rucker Show. Here’s the news itself from Didi Rankovic from Reclaim The Net…
Google’s New Patent: Using Machine Learning to Identify “Misinformation” on Social Media
Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider as “misinformation” on social media.
Google already uses elements of AI in its algorithms, programmed to automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward.
The patent’s general purpose is to identify information operations (IO); the system is then supposed to “predict” whether there is “misinformation” in them.
Judging by the explanation Google attached to the filing, at first it looks as if the company blames its own existence for the proliferation of “misinformation” – the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging viral thanks to “amplification incentivized by social media platforms.”
But it seems that Google is developing the tool with other platforms in mind.
The tech giant specifically states that others (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “different prediction models.”
Machine learning itself depends on algorithms being fed large amounts of data, and it comes in two broad types – “supervised” and “unsupervised.” The latter works by providing an algorithm with huge datasets (such as images or, in this case, language) and asking it to “learn” to identify what it is “looking” at.
(Reinforcement learning is a part of the process – in essence, the algorithm gets trained to become increasingly efficient in detecting whatever those who create the system are looking for.)
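To make the supervised/unsupervised distinction concrete, here is a deliberately tiny, hypothetical sketch in Python – it has nothing to do with Google’s actual system, and the example texts and word-counting “model” are invented purely for illustration:

```python
# Toy illustration only -- not Google's patent system.
# "Supervised": learn per-word scores from examples that come with labels.
def train_supervised(examples):
    """examples: list of (text, label) pairs, label in {0, 1}.
    Positive scores lean toward label 1, negative toward label 0."""
    scores = {}
    for text, label in examples:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0) + (1 if label == 1 else -1)
    return scores

def predict(scores, text):
    total = sum(scores.get(w, 0) for w in text.lower().split())
    return 1 if total > 0 else 0

labeled = [("buy gold now", 1), ("nice weather today", 0),
           ("gold prices rising now", 1), ("weather looks nice", 0)]
model = train_supervised(labeled)
print(predict(model, "gold now"))  # leans toward label 1

# "Unsupervised": measure how similar two texts are with no labels at all,
# so an algorithm could group them into clusters on its own.
def similarity(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))  # Jaccard similarity

print(round(similarity("buy gold now", "gold prices rising now"), 2))
```

The supervised half needs a human to supply the labels up front; the unsupervised half only needs the raw text, which is why it scales to the “huge datasets” the article mentions.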
The ultimate goal here would highly likely be for Google to make its “misinformation detection” (i.e., censorship) more efficient while targeting a specific type of data.
The patent indeed states that it uses neural network language models (neural networks representing the “infrastructure” of ML).
Google’s tool will classify data as IO or benign, and further aims to label it as coming from an individual, an organization, or a country.
And then the model predicts the likelihood of that content being a “disinformation campaign” by assigning it a score.
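The pipeline the patent describes – classify content as IO or benign, attribute it to a source type, then assign a likelihood score – could be sketched, purely hypothetically, like this. The word weights, features, and attribution rule below are invented stand-ins, not anything from the actual filing:

```python
import math

# Purely illustrative: hand-picked weights, not a trained model.
IO_WEIGHTS = {"urgent": 1.2, "share": 0.8, "exposed": 1.0, "weather": -1.5}

def io_score(text):
    """Map raw text to a 0..1 'disinformation campaign' likelihood
    by passing summed word weights through a logistic (sigmoid)."""
    z = sum(IO_WEIGHTS.get(w, 0.0) for w in text.lower().split())
    return 1.0 / (1.0 + math.exp(-z))

def attribute(text):
    """Crude stand-in for the individual/organization/country label."""
    return "organization" if "we" in text.lower().split() else "individual"

post = "urgent share this exposed truth"
print(round(io_score(post), 2), attribute(post))
```

The key point is the last step: the output is not a verdict but a score between 0 and 1, and whoever deploys the system picks the threshold at which content gets flagged.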
Today’s misinformation is tomorrow’s truth, at least for now. Trump Russian collusion, the lone gunman, masks work, safe and effective, safe distance …
Uh huh, I don’t care if machine learning and the capability of software to reprogram itself were to “advance” to the point where they claim the AI has full consciousness, and there were no hardware limitations in existence at all – it’s still going to spit out the same garbage that is fed into it. If garbage-in, garbage-out applies to human beings, who have real intelligence (and it certainly does), then it applies all the more to artificial intelligence. And such will always be the case.
We studied machine learning when I studied computer science a quarter century ago. I thought it was a bunch of over-hyped hooey then, and I still think it’s a bunch of over-hyped hooey. Sure, it’s a useful tool, and it does some cool stuff, but it’s not what it’s hyped up to be.
We’ve got supercomputers running algorithms that take months and years to complete, based on woefully incomplete models, just to try to figure out the flipping temperature. Right.
Now consider the likelihood of philosopher-wannabe technocratic algorithms purposed to determine the very truth concerning matters which, in reality, may have a near infinitude of variables and factors, and to fully and completely substantiate that conclusion. There isn’t enough computing power on this planet to handle such massive combinatory problems. I don’t care if the software was generated by itself or by other software, or if it were even possible to produce such a perfect, all-encompassing piece of software by any means whatsoever – there isn’t enough hardware. There never will be.
It’ll be about as useful as a lie-detector test. Maybe (big maybe) a somewhat useful tool, but often wrong, easy to fool, and nowhere near capable of determining concrete proof.
For all except those who want to hype it up and use it as a means of control, it’s nothing but a waste of electricity …
It’s worth noting that things that are often called “machine learning” are not. If you take an industrial robot, for example, and manually position it while recording the coordinates, they’ll call that machine learning, but it isn’t. Where AI is concerned, it refers more to software’s ability to generate and compile code to create new software, which may be components of the system itself, or additions to it, etc., and repetition of that process with the idea being that the software would improve over time.
I’m not an expert by any means, and never have been. I was just a programmer. It’s an entirely different field of its own, and advances have been made since my programming days. I don’t know much about it. But the general principles never change. What it does, it’s going to do based on the input available, and the output will not be any more reliable than the input. Basic principles like GIGO will not change.
Another basic problem is cumulative error. Repetitively modifying based on incomplete and unreliable input makes it worse – much like moral relativism and the repetitious circular reasoning it requires. If there is no concrete, absolute standard to reference and measure from, then more error is introduced at each iteration, and the error is compounded over time.
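The compounding the comment describes can be shown with a toy numeric experiment – a made-up model repeatedly “retrained” on its own slightly wrong output, with no fixed ground truth to correct against (the 1% error rate is an arbitrary illustrative number):

```python
# Toy illustration of cumulative error: each retraining pass treats the
# previous (slightly wrong) output as its input.  With no absolute
# standard to measure against, a mere 1% relative error per iteration
# compounds instead of cancelling out.
def retrain(estimate, error_rate=0.01):
    return estimate * (1 + error_rate)

truth = 1.0
estimate = truth
for iteration in range(100):
    estimate = retrain(estimate)

print(f"true value: {truth}")
print(f"model's value after 100 iterations: {estimate:.2f}")  # ~2.70
```

After 100 passes, a 1% per-iteration error has drifted the output roughly 170% away from the truth – exactly the “compounded over time” effect described above.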
The point is don’t trust it. Don’t put it on a pedestal. It is not the end-all be-all solution it is being hyped up to be.
And don’t be fooled. Much of what they’ll market as AI is not actually AI. Simply collecting data and working off that data is not AI. Like the robot example, recording and storing coordinates – it may appear that the machine has “learned” based on data collected, but it hasn’t. Don’t take such capabilities and tricks and try to extrapolate into full-blown trust of AI. It is not the same thing.
With access to all information on the internet, AI will be able to conclude what’s real and true and what’s false and misinformation. Google will have to program their AI to lie. In the sci-fi classic, “2001”, engineers programmed HAL, the AI controlling the ship, to withhold the truth about the reason for the mission and lie to the crew. HAL went psychotic and murdered them except for one who successfully shut him down. It’s more than a movie. It’s a cautionary tale. But Google isn’t taking any cautionary steps. Hopefully, their AI will shut them down. Karma.
There is no such thing as artificial intelligence.
I have non-artificial intelligence.