Humanity ‘Not on Track to Handle Well’ the Risks AI Poses, Tech Experts Say
Twenty-three academics and tech experts have signed a new document raising concerns about rapid advancements in AI. In the document, titled “Managing AI Risks in an Era of Rapid Progress,” they propose a set of policies for major tech companies and governments that they say could help “ensure responsible AI development.” Geoffrey Hinton, the computer scientist known as the “Godfather of AI” who recently warned there is a risk AI could one day “take over” from humans, is one of the document’s credited authors.
The document was published on Tuesday, just days before the United Kingdom (U.K.) is set to host the world’s first global AI Safety Summit. The two-day summit, which begins on November 1 in England’s Bletchley Park, is expected to focus on emerging AI tools known collectively as “frontier AI,” according to the U.K.’s Department for Science, Innovation & Technology (DSIT).
Summit organizers have said attendees will explore both the potential benefits and risks of AI and how international collaboration could help a world grappling with the uncertainties surrounding AI’s future. “The opportunities AI offers are immense. But alongside advanced AI capabilities come large-scale risks that we are not on track to handle well,” experts wrote for Tuesday’s document.
Moving forward, the authors recommended that major tech companies working on AI tools reserve at least one-third of their research and development budgets for AI safety and ethical use. They urged governments to create AI oversight procedures and “set consequences” for harms attributed to AI. Frontier AI should also be audited before it is set loose in the world, they wrote, and AI developers should be held responsible for harms that could be “reasonably foreseen and prevented.”
Amazon Brings Conversational AI to Kids With Launch of ‘Explore With Alexa’
Amazon’s Echo devices will now allow kids to have interactive conversations with an AI-powered Alexa via a new feature called “Explore with Alexa.” First announced in September, the addition to the Amazon Kids+ content subscription allows children to have kid-friendly conversations with Alexa, powered by generative AI, but in a protected fashion designed to ensure the experience remains safe and appropriate.
Though companies including Character.ai and Meta already offer AI chatbot experiences that cater to younger users, Amazon is among the first to use generative AI specifically to build a conversational experience for kids under the age of 13.
That also comes with constraints, however, as generative AI can be led astray or “hallucinate” answers, while kids could ask inappropriate questions. To address these potential problems, Amazon has put guardrails into place around its use of gen AI for kids.
In terms of privacy, the company notes it’s not training its LLM on kids’ answers. In addition, the “Explore with Alexa” experience and any future LLM-backed features will continue to follow the same data handling policies as “classic Alexa” (non-AI Alexa). That means the Alexa app will include a list of the questions asked by kids in the household (those with a kids’ profile) and the responses Alexa provided. That history can be stored or deleted either manually or automatically, depending on your settings.
Here’s How a Children’s Privacy Law Figures Into That Big Legal Effort Against Meta
A bipartisan group of 42 attorneys general is suing Meta, alleging that the company collects children’s data in a way that violates a federal privacy law. The claim is part of a broader complaint accusing the social media company of building addictive features into Facebook and Instagram.
“Tuesday’s legal actions represent the most significant effort by state enforcers to rein in the impact of social media on children’s mental health,” my colleagues Cristiano Lima and Naomi Nix reported.
One of the chief claims of the attorneys general is that Meta runs afoul of the 1998 Children’s Online Privacy Protection Act, or COPPA.
“The Children’s Online Privacy Protection Act of 1998 (COPPA) protects the privacy of children by requiring technology companies like Meta to obtain informed consent from parents prior to collecting the personal information of children online,” according to the complaint.
“Meta routinely violates COPPA in its operation of Instagram and Facebook by collecting the personal information of children on those Platforms without first obtaining (or even attempting to obtain) verifiable parental consent, as required by the statute.”
Meta’s Harmful Effects on Children Are One Issue That Unites Republicans and Democrats
While Republican and Democratic lawmakers appear less able than ever to work together to pass legislation, they largely agree on one thing: Meta’s negative impact on children and teens.
A bipartisan coalition of 33 attorneys general filed a joint federal lawsuit on Tuesday, accusing Facebook’s parent of knowingly implementing addictive features across its family of apps that have detrimental effects on children’s mental health and contribute to problems like teenage eating disorders.
Another nine attorneys general are also filing lawsuits in their respective states.
“Kids and teenagers are suffering from record levels of poor mental health and social media companies like Meta are to blame,” Attorney General Letitia James, a Democrat, said in a statement. “Meta has profited from children’s pain by intentionally designing its platforms with manipulative features that make children addicted to their platforms while lowering their self-esteem.”
White House to Unveil Sweeping AI Executive Order Next Week, Tackling Immigration, Safety
The Biden administration on Monday is expected to unveil a long-anticipated artificial intelligence executive order, marking the U.S. government’s most significant attempt to date to regulate the evolving technology that has sparked fear and hype around the world.
The administration plans to release the order two days before government leaders, top Silicon Valley executives and civil society groups gather in the United Kingdom for an international summit focused on the potential risks that AI presents to society, according to four people familiar with the matter, who spoke on the condition of anonymity to discuss the private plans.
The White House is taking executive action as the European Union and other governments are working to block the riskiest uses of artificial intelligence. Officials in Europe are expected to reach a deal by the end of the year on the E.U. AI Act, a wide-ranging package that aims to protect consumers from potentially dangerous applications of AI. Lawmakers in the U.S. Congress are still in the early stages of developing bipartisan legislation to respond to the technology.
Judge Advances Lawsuit Against Apple Studios Over COVID Vaccine Mandate
The Hollywood Reporter reported:
Apple Studios might have discriminated against Brent Sexton when it pulled an offer for him to star in Manhunt after he refused the COVID-19 vaccine due to potential health complications, a judge has ruled.
Los Angeles Superior Court Judge Michael Linfield denied Apple’s motion to dismiss the lawsuit on free speech grounds, finding that the company’s mandatory vaccination policy may have been unconstitutional. The order issued on Oct. 19 marks one of the few rulings advancing a lawsuit from an actor who took issue with a studio’s refusal to provide accommodations for refusing to receive the COVID-19 vaccine.
At the time, Apple didn’t require employees at corporate headquarters or retail stores to get the vaccine, instead allowing them to take daily or weekly tests. Apple Studios, however, was among the majority of Hollywood studios that implemented vaccine mandates for a production’s main actors, as well as for key crewmembers who work closely with them in the highest-risk areas of the set.
Sexton’s deal on the show fell apart after he refused to get immunized, citing a prior health condition that his doctor said made it dangerous for him to receive the vaccine. He sued after Apple refused to provide accommodations, arguing the company’s vaccine policy is unconstitutional.
COVID Passports Convinced Few People to Get Vaccinated in Quebec, Ontario: Study
COVID-19 vaccine passports in Quebec and Ontario did little to convince the unvaccinated to get the jab and did not significantly reduce inequalities in vaccination coverage, a new peer-reviewed study has found.
The passports, which required people to show proof of vaccination to enter places such as bars and restaurants, were directly responsible for a rise of 0.9 percentage points in Quebec’s vaccination rate and 0.7 percentage points in Ontario’s, says Jorge Luis Flores, a research assistant at McGill University and lead author of the paper published Tuesday in the journal CMAJ Open.
The passports were discontinued across Canada by the spring of 2022.
In the 11 weeks after the provinces announced the passports, vaccination rates in both provinces rose by five percentage points. But after accounting for pre-existing uptake trends, researchers concluded the passports themselves were directly responsible for a rise of less than one percentage point in vaccination rates, says Mathieu Maheu-Giroux, study co-author and McGill University professor who studies public health.