UN Warns Unregulated Neurotechnology Threatens ‘Freedom of Thought’
The UN is warning against unregulated neurotechnology that uses AI chip implants, saying it poses a grave risk to people’s mental privacy. Such technology could carry harmful long-term risks, the UN says, such as shaping the way a young person thinks or accessing private thoughts and emotions.
It specified that its concerns center on “unregulated neurotechnology,” and it did not mention Neuralink, which received FDA approval in May to conduct microchip brain implant trials in humans.
Elon Musk, who co-founded Neuralink, has made big claims for the chips, saying they will cure people of lifelong health issues, allowing the blind to see and the paralyzed to walk again. But unregulated forms of this technology could have disastrous consequences, the UN said in a press release, by giving access to the thoughts of those who use it.
“Neurotechnology could help solve many health issues, but it could also access and manipulate people’s brains, and produce information about our identities, and our emotions,” UNESCO Director-General Audrey Azoulay said in the release. “It could threaten our rights to human dignity, freedom of thought, and privacy. There is an urgent need to establish a common ethical framework at the international level, as UNESCO has done for artificial intelligence.”
If brain chips are implanted in children while they are still neurologically developing, the devices could disrupt the way their brains mature, making it possible to transform their minds and permanently shape their future identities.
AI Microdirectives Could Soon Be Used for Law Enforcement
Imagine a future in which AIs automatically interpret and enforce laws. All day, every day, you receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online. If you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.
Imagine that the computer system formulating these personal legal directives at a mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.
In New York, businesses are using AI systems equipped with facial recognition technology to identify shoplifters. Retailers in Australia and the United Kingdom use similar AI-powered systems to identify shoplifters and deliver real-time, tailored alerts to employees or security personnel. China is experimenting with even more powerful forms of automated legal enforcement and targeted surveillance.
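At their core, such systems reduce to a matching step: encode each face seen on camera and compare it against a watchlist of previously flagged individuals. The sketch below shows what that step might look like in Python using the open-source face_recognition library; the image paths, watchlist, and alert behavior are hypothetical placeholders rather than details of any deployed retail system.

```python
import face_recognition

# Hypothetical watchlist: one reference photo per previously flagged person.
# Assumes each image contains exactly one clearly visible face.
watchlist_paths = ["flagged_person_1.jpg", "flagged_person_2.jpg"]
watchlist_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in watchlist_paths
]

def frame_matches_watchlist(frame_path: str) -> bool:
    """Return True if any face in a camera frame matches the watchlist."""
    frame = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(frame):
        # compare_faces returns one boolean per watchlist entry; the
        # tolerance threshold trades false positives against false negatives.
        matches = face_recognition.compare_faces(
            watchlist_encodings, encoding, tolerance=0.6
        )
        if any(matches):
            return True  # a deployed system would alert staff here
    return False
```

The tolerance parameter illustrates the policy tradeoff at stake: loosening it catches more repeat offenders but also misidentifies more innocent shoppers.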
Key Republicans Ask for Details on Threads Content Moderation
House Republicans asked Meta on Monday about content moderation on its new platform Threads, citing concerns about free speech violations.
House Judiciary Chairman Jim Jordan (R-Ohio) asked Meta, the parent company of Facebook and Instagram, to send documents about Threads’s content moderation practices to the committee by the end of July. Jordan cited a subpoena sent to Meta in February, which he said now covers material related to Threads.
Threads launched earlier this month as an alternative to Twitter, the text-based platform now under the control of Tesla and SpaceX CEO Elon Musk. Jordan wrote that the committee is “concerned about potential First Amendment violations that have occurred or will occur on the Threads platform.”
The GOP’s latest request to Meta extends the panel’s investigation into tech platforms’ content moderation policies and how the companies interact with the government, specifically the Biden administration. Beyond the House GOP’s probe, tech companies also face courtroom hurdles limiting how they can communicate with the government.
Common Sense Media, a Popular Resource for Parents, to Review AI Products’ Suitability for Kids
Common Sense Media, a well-known nonprofit devoted to consumer privacy, digital citizenship, and media ratings for parents who want to evaluate the apps, games, podcasts, TV shows, movies, and books their children consume, announced this morning that it will add another category to its ratings and reviews system: AI technology products.
The organization says it will build a new rating system that assesses AI products along a number of dimensions, including whether the technology employs “responsible AI practices” and whether it is suitable for children.
The decision to include AI products followed a survey the organization conducted with Impact Research, which found that 82% of parents want a rating system to help them evaluate whether new AI products, like ChatGPT, are appropriate for children.
Over three-quarters of respondents (77%) also said they were interested in AI-powered products that could help children learn, but only 40% said they knew of a reliable resource they could use to learn more about AI products’ appropriateness for their kids.
With the Rise of AI, Social Media Platforms Could Face Perfect Storm of Misinformation in 2024
Experts in digital information integrity say AI-generated content is just beginning to be used ahead of the 2024 U.S. presidential election in ways that could confuse or mislead voters.
A new crop of AI tools offers the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk spreading false information to mislead voters, including ahead of the 2024 U.S. election.
Social media companies bear significant responsibility for addressing such risks, experts say, as they run the platforms where billions of people turn for information and where bad actors often spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.
And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the U.S. election rolls around next year.
Elon Musk’s xAI and OpenAI Are Going Head-to-Head in the Race to Create AI That’s Smarter Than Humans
On Saturday, Musk said on Twitter Spaces that his new company, xAI, is “definitely in competition” with OpenAI. He was outlining plans for developing “good” advanced AI — also called superintelligence.
Referring to superintelligence as Artificial General Intelligence, or AGI, Musk said: “It really seems that at this point it looks like AGI is going to happen so there are two choices, either be a spectator or a participant. As a spectator, one can’t do much to influence the outcome.”
During a 100-minute discussion that drew more than 1.6 million listeners, Musk explained his plan for xAI to use Twitter data to train a superintelligent AI that is “maximally curious” and “truth-seeking.”
The Twitter owner’s comments come mere days after OpenAI said it is creating a new team dedicated to controlling superintelligence and ensuring that this advanced AI aligns with human interests.