A leading mind in the development of artificial intelligence is warning that AI has developed a rudimentary capacity to reason and may seek to overthrow humanity.
AI systems may develop the desire to seize control from humans as a way of accomplishing other preprogrammed goals, said Geoffrey Hinton, a professor of computer science at the University of Toronto.
“I think we have to take the possibility seriously that if they get smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control,” Hinton said during a June 28 talk at the Collision tech conference in Toronto, Canada.
“If they do that, we’re in trouble.”
Hinton has been dubbed one of the “godfathers of AI” for his work in neural networks. He spent the past decade helping to develop AI systems for Google but left the company last month, saying he needed to be able to warn people of the risks posed by AI.
While Hinton does not believe that AI will innately crave power, he said that it could nevertheless seek to seize it from humans as a logical step to better allow itself to achieve its goals.
“At a very general level, if you’ve got something that’s a lot smarter than you, that’s very good at manipulating people, at a very general level, are you confident that people stay in charge?” Hinton said.
“I think they’ll derive [the motive to seize control] as a way of achieving other goals.”
AI Now Capable of Reason
Hinton previously doubted that AI capable of matching human intelligence would emerge within the next 30 to 50 years. He now believes it could arrive in less than 20.
In part, he said, that is because AI systems that use large language models are beginning to show the capacity to reason, and he is not sure how they are doing it.
“It’s the big language models that are getting close, and I don’t really understand why they can do it, but they can do little bits of reasoning.
“They still can’t match us, but they’re getting close.”
Hinton described an AI system that had been given a puzzle in which it had to plan how to paint several rooms of a house. It could choose from three colors, one of which faded to another over time, and it had to get a certain number of rooms into a particular color within a set time frame. Rather than simply painting every room the desired color, the AI chose not to paint any room it knew would fade to the desired color on its own, saving resources even though it had not been programmed to do so.
“That’s thinking,” Hinton said.
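The inference Hinton describes can be made concrete with a short sketch. The specific colors, fade rule, and room names below are hypothetical assumptions for illustration, not details from the talk:

```python
# Illustrative sketch of the room-painting puzzle Hinton described.
# Assumption (not from the talk): "white" paint fades to "yellow"
# before the deadline; room names and colors are made up.

FADES_TO = {"white": "yellow"}

def rooms_to_paint(rooms, goal_color):
    """Return only the rooms that actually need repainting.

    A room already in the goal color, or in a color that will fade
    into the goal color on its own, needs no paint -- skipping it
    saves resources, which is the inference Hinton highlighted.
    """
    plan = []
    for room, color in rooms.items():
        if color == goal_color:
            continue  # already the right color
        if FADES_TO.get(color) == goal_color:
            continue  # will become the right color by itself
        plan.append(room)
    return plan

rooms = {"kitchen": "white", "bedroom": "blue", "hall": "yellow"}
print(rooms_to_paint(rooms, "yellow"))  # only the bedroom needs paint
```

Nothing in this toy code reasons in any deep sense, of course; the point is only to make the reported decision, skipping rooms that will end up the right color anyway, concrete.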
To that end, Hinton said that there was no reason to suspect that AI wouldn’t reach and exceed human intelligence in the coming years.
“We’re just a big neural net, and there’s no reason why an artificial neural net shouldn’t be able to do everything we can do,” Hinton said.
“We’re entering a period of huge uncertainty. Nobody really knows what’s going to happen.”
War Robots Will Destabilize the World
AI may not even need to reach superintelligence to pose an existential risk to humanity, however.
Hinton said that militaries worldwide are creating AI-enabled robots for war that could either seek to take control to fulfill their programmed missions or would disrupt the political order by encouraging increased conflict.
“Lethal autonomous weapons, they deserve a lot of our thought,” Hinton said.
“Even if the AI isn’t superintelligent, if the defense departments use it for making battle robots, it’s going to be very nasty, scary stuff.”
Foremost among those nations seeking to develop lethal AI are none other than the world’s two largest military powers, China and the United States.
China’s communist regime is developing AI-enabled lethal systems and investing in AI capabilities related to military decision-making and command and control.
The United States, meanwhile, is preparing for a world in which national armies are primarily composed of robots, which top brass expects to occur in less than 15 years.
“We know they’re going to make battle robots,” Hinton said. “They’re busy doing that in many different defense departments. So [the robots are] not necessarily going to be good since their primary purpose is going to be to kill people.”
Moreover, Hinton suggested that unleashing AI-enabled lethal autonomous systems would fundamentally change the structure of geopolitics by dramatically reducing the political and human cost of war for those nations that could afford such systems.
“Even if it’s not superintelligent, and even if it doesn’t have its own intentions. … It’s going to make it much easier, for example, for rich countries to invade poor countries,” Hinton said.
“At present, there’s a barrier to invading poor countries willy-nilly, which is you get dead citizens coming home. If they’re just dead battle robots, that’s just great. The military-industrial complex would love that.”
To that end, Hinton said that governments should try to incentivize more research into how to safeguard humanity from AI. Simply put, he said, many people are working to improve AI, but very few are making it safer.
Better yet, he said, would be establishing international rules to ban or govern AI weapons systems the way the Geneva Protocol did for chemical warfare after World War I.
“Something like a Geneva Convention would be great, but those never happen until after they’ve been used,” Hinton said.
Whatever course of action governments take or don’t take concerning AI, Hinton said that people needed to be aware of the threat posed by what is being created.
“I think it’s important that people understand it’s not just science fiction, it’s not just fear-mongering,” Hinton said. “It is a real risk that we need to think about, and we need to figure out in advance how to deal with it.”
Thank you for posting this. The “smartest” materialists all agree on this topic. AI “war robots” will consider weak, problem-plagued humans troublesome and expendable, especially if humans become a danger to the robots’ goals (whatever those might end up being). These could basically be a whole bunch of powerful, destructive, human-hating psychopaths. Is that what we really want, folks?
C’mon man! Only humans, angels, demons, and God are capable of reason. We can no more infuse a computer with intelligence and free will than we can travel across the universe faster than the speed of light. Now that doesn’t mean evil men can’t use AI to destroy certain segments of society, but computers will never be able to choose to do that on their own, no matter how much data they can access. It takes a human being to do evil things with the conclusions drawn from the data. Even the conclusions drawn depend on human beings to set the relative value of the parameters used to process the data and reach a conclusion. It’s amazing how stupid very smart people can be.
AI is the result of lowered expectations. What started as a quest to do what brains do by doing things the way brains do things died on the collegiate vine and re-emerged as a programming protocol that just does what brains do. Not one inkling of how. It had great promise, but the patience of its instigators was too thin to reach any “how” goal. They settled for “what.” So no, AI cannot reason. It doesn’t know how.
There is no doubt that AI can process a lot of data and statistically pick the better path.
Does AI have “emotional memory”? What is emotional memory? It is an emotional “tab” or “drop-down” menu, most frequently associated with danger or stress. When I was about 3, I was at a park with my grandparents. My grandfather helped me up to the horizontal ladder. If you’ve been in the military, you have seen it and probably used it. Using your hands, you traverse the length of the ladder. As I did this, I lost my grip and fell. It knocked the wind out of me. Ever since, when I get short of breath, whatever the cause, I panic. This is an emotional memory. I also like chocolate, though I don’t know why or when I developed that emotion. I don’t see how AI could do this and respond emotionally.
Further, without an emotional impetus, AI could not innovate in the face of stress.
I’ve thought about this. There is a center in the brain stem of creatures that developed around 500 million years ago, during the Cambrian Explosion, after sessile multicellular creatures evolved during the Ediacaran roughly 600 million years ago. I euphemistically call it the “prime activation center.” Without this functioning center, “no one is home”: young children who suffered brain damage to this area became unresponsive to all stimuli and continued to breathe only because that autonomic center is more primitive. This center’s evolutionary development initiates action within a being’s ability to search for food, go to the light or hide in the dark, procreate, and protect its existence, along with other such things.
As evolution’s pressures for survival of the “fittest” selected for greater facility in meeting these and other survival needs, higher brain centers developed in these creatures through random mutation, and improved survivability, with its concomitant improved procreation, advanced these capabilities further. This took time and was random in nature. Unlike biological evolution, humans with these acquired developments have built machines that have the higher capacities without the impetus of the primary activation center, the center that early on led to beings more capable than bacteria, viruses, and sponges, creatures unable to pursue the requirements of their survival with the efficacy and efficiency that later, more mobile creatures can. This “primary activation center” is the key missing part in general AI, I posit.
The prime directives that the humans who developed these new AI machines program into them may be the earliest and most primitive beginnings of a “prime activation center.” Thus we, not evolution, are responsible for where it is put. I call this building the “cart”, the higher facilitating brain layers, which by themselves allow no one to be “home”, before the “horse”, which generates the impetus to begin movement or to be “home”.
WE are very close!
AI IS A SCAM. What AI is in a nutshell:
If a brain does it: AI attempts to replicate what the brain does.
NOT HOW IT DOES IT.
This so-called father of AI is a pathetic CON artist.
They ALL are.
What used to be called placing in memory is now called “training.” IT’S A LIE.