(Natural News)—Amazon’s fledgling generative AI assistant, Q, has been struggling with factual inaccuracies and privacy issues, according to leaked internal communications.
The chatbot was recently announced by Amazon’s cloud computing division and will be aimed at businesses. A company blog post says it was built to help employees write emails, troubleshoot, code, research and summarize reports and will provide users with helpful answers that relate only to the content that “each user is permitted to access.”
It was promoted as a safer and more secure offering than ChatGPT. However, leaked documents show that it is not performing up to standards, experiencing “severe hallucinations” and leaking confidential data.
According to Platformer, who obtained the leaked documents, one incident was flagged as “sev 2.” This designation is reserved for events deemed serious enough to page Amazon engineers overnight and have them work on the weekend to correct them. The publication revealed that the tool leaked unreleased features and shared the locations of Amazon Web Services data centers.
One employee wrote in the company’s Slack channel that Q could provide advice that is so bad that it could “potentially induce cardiac incidents in Legal.”
An internal document referring to the wrong answers and hallucinations of the AI assistant noted: “Amazon Q can hallucinate and return harmful or inappropriate responses. For example, Amazon Q might return out of date security information that could put customer accounts at risk.”
These are worrying problems for a chatbot that the company is gearing toward businesses, which will likely have data protection and compliance concerns. It also doesn’t bode well for Amazon’s efforts to prove that it is not falling behind competitors in the AI sphere, such as OpenAI and Microsoft.
Amazon has denied that Q leaked confidential information. A spokesperson for the company noted: “Some employees are sharing feedback through internal channels and ticketing systems, which is standard practice at Amazon. No security issue was identified as a result of that feedback.”
The company said it became interested in developing Q after many businesses banned AI assistants from workplace use over privacy and security concerns. Q was essentially built to serve as a more private and secure alternative, and these leaks suggest Amazon is falling short of that objective.
AI chatbots are prone to hallucinations
Q is far from the only generative AI chatbot to encounter major issues like hallucinations, the term given to the tendency of AI models to present inaccurate information as fact. However, some experts argue this characterization is misleading, as these language models are trained to provide plausible-sounding answers to user prompts rather than correct ones. As far as the models are concerned, any answer that sounds plausible is acceptable, whether it is factual or not.
Although some companies have taken steps to rein in these hallucinations, some computer scientists believe the problem simply cannot be solved.
When Google unveiled its ChatGPT competitor Bard, the chatbot gave a wrong answer to a question about the James Webb Space Telescope during a public demo. In another high-profile incident, the tech news site CNET had to issue corrections after an article produced with an AI tool gave readers highly inaccurate financial advice. On another occasion, a New York lawyer got in trouble after using ChatGPT to conduct legal research and submitting a brief containing a series of cases that the chatbot had invented.
There are so many ways that relying on this technology can go wrong, particularly when people use answers from chatbots to make decisions about their health, finances, who to vote for and other sensitive topics.