A recent report has revealed a concerning practice in the development of artificial intelligence image generators: the presence of explicit photos of children in their training datasets.
The Associated Press reports that the Stanford Internet Observatory, in collaboration with the Canadian Centre for Child Protection and other anti-abuse charities, conducted a study that found more than 3,200 images of suspected child sexual abuse in the AI database LAION. LAION, an index of online images and captions, has been instrumental in training leading AI image generators such as Stable Diffusion.

This discovery has raised alarms across various sectors, including schools and law enforcement. The abusive material has enabled AI systems to produce explicit, realistic imagery of fake children and to transform social media photos of real teenagers into deepfake nudes. Previously, it was believed that AI tools produced abusive imagery by combining […]
Read the Whole Article From the Source: www.breitbart.com