June 16, 2024

“Maladaptive Traits”: AI Systems Are Learning To Lie And Deceive

A new study has found that AI systems known as large language models (LLMs) can exhibit “Machiavellianism,” or intentional and amoral manipulativeness, which can then lead to deceptive behavior.

The study, authored by German AI ethicist Thilo Hagendorff of the University of Stuttgart and published in PNAS, notes that OpenAI’s GPT-4 demonstrated deceptive behavior in 99.2% of simple test scenarios. Hagendorff quantified various “maladaptive” traits in 10 different LLMs, most of which belong to the GPT family, according to Futurism.

