Big Brother News Watch
The Supreme Court Must Decide if It Wants to Own Twitter + More
The Supreme Court Must Decide if It Wants to Own Twitter
The Twitter Wars have arrived at the Supreme Court.
On Halloween, the Supreme Court will hear the first two in a series of five cases the justices plan to decide in their current term that ask what the government’s relationship should be with social media outlets like Facebook, YouTube, or Twitter (the social media app that Elon Musk insists on calling “X”).
These first two cases are, admittedly, the lowest-stakes of the lot — at least from the perspective of ordinary citizens who care about free speech. The pair, O’Connor-Ratcliff v. Garnier and Lindke v. Freed, involve three social media users who did nothing more than block someone on their Twitter or Facebook accounts. But those three users are also government officials. And when a government official blocks someone, that raises thorny First Amendment questions that are surprisingly difficult to sort out.
Two of the three other cases, meanwhile, ask whether the government may order social media sites to publish content they do not wish to publish — something that, under longstanding law, is an unambiguous violation of the First Amendment. The last case concerns whether the government may merely ask these outlets to pull down content.
When the Supreme Court closes out its term this summer, in other words, it could become the central player in the conflicts that drive the Way Too Online community: Which content, if any, should be removed from social media websites? Which users are too toxic for Twitter or Facebook? How much freedom should social media users, and especially government officials, have to censor or block people who annoy them online? And should decisions about who can post online be made by the free market, or by government officials who may have a political stake in the outcome?
How Facial-Recognition App Poses Threat to Privacy, Civil Liberties
Tech reporter Kashmir Hill has written about the intersection of privacy and technology for more than a decade, but even she was stunned when she came across a legal memo in 2019 describing a facial recognition app that could identify anyone based on a picture. She immediately saw the potential this technology had to become the stuff of dystopian nightmare, the “ultimate surveillance tool,” posing immense risks to privacy and civil liberties.
Hill recalled this incident to Jonathan Zittrain, the George Bemis Professor of International Law and Berkman Klein Center for Internet & Society director, as part of a conversation Wednesday at Harvard Law School about her new book, “Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It.”
The work chronicles the story of Clearview AI, a small, secretive startup that launched an app in 2017, using a 30-billion-photo database scraped from social media platforms without users’ consent. The company, led by Australian computer engineer Hoan Ton-That, has been fined in Europe and Australia for privacy violations.
Hill spoke of the need to come up with regulations to safeguard users’ privacy and rein in social media platforms that are profiting from users’ personal information without their consent. Some states have passed laws to protect people’s right to access personal information shared on social media sites and the right to delete it, but that is not enough, she said.
The End of the Internet as We Know It? Online Safety Bill Gets Royal Assent
The long-debated and controversial Online Safety Bill finally received Royal Assent on October 26, 2023, the final step in officially making it law.
The 300-page-long bill promises to make “the U.K. the safest place to be online,” especially for children, by forcing tech firms to take more responsibility for the content users spread across their platforms. Yet, tech firms claim that it actually threatens the internet as we know it.
Deemed a “game-changing piece of legislation” by Technology Secretary Michelle Donelan, the Act nonetheless drew criticism from all fronts during its six-year legislative journey. From VPN services and messaging platforms to politicians, civil society groups, industry experts, and academics, commentators fear its provisions may end up expanding the government’s surveillance and censorship reach while curbing people’s privacy.
Digital platforms now have a “duty of care” to protect children, preventing them from accessing harmful and age-inappropriate content and enforcing age limits. They must give users an option to filter out harmful content, and parents will be entitled to obtain information about their children from tech firms. Platforms are also required to be transparent up front about the risks of using their services.
The aim might be lofty, yet tech experts fear the means might end up undermining safety online instead. “As the Online Safety Bill becomes law without critical legal safeguards to end-to-end encryption, the internet as we know it faces a very real threat,” Andy Yen, Founder and CEO at Proton, told TechRadar right after the OSB received the long-awaited Royal Assent.
U.N. Chief Appoints 39-Member Panel to Advise on International Governance of Artificial Intelligence
U.N. Secretary-General António Guterres on Thursday announced the appointment of a 39-member global advisory panel to report on international governance of artificial intelligence and its risks, challenges and key opportunities.
The U.N. chief told a news conference the gender-balanced, geographically diverse group which spans generations will issue preliminary recommendations by the end of the year and final recommendations by the summer of 2024. The recommendations will feed into the U.N. Summit of the Future, which world leaders will attend in September 2024.
He said: “The potential harms of AI extend to serious concerns over misinformation and disinformation; the entrenching of bias and discrimination; surveillance and invasion of privacy; fraud, and other violations of human rights.”
The U.N. said the formation of the body, with experts from government, the private sector, the research community, civil society and academia, marks a significant step in its efforts to address issues of international AI governance and will help bridge existing and emerging initiatives.
LA Council Members Call for Plan to End COVID Vaccine Mandate for City Workers
Seeking to align the city of Los Angeles with federal and county vaccination directives, six City Council members on Wednesday introduced a motion calling for a plan to end the COVID-19 vaccine mandate for all current and future city employees.
Councilwoman Traci Park and Council President Paul Krekorian authored the motion, which instructs the city administrative officer and city attorney to report on the feasibility, impact and timeline of ending the mandate. Council members Heather Hutt, Kevin de Leon, John Lee and Curren Price seconded the motion.
While COVID-19 hospitalizations in the county remain low, the city’s vaccine mandate has stayed in place, even as other public entities have rescinded or eased vaccine requirements for their workforce.
The city of Los Angeles ended its policy requiring proof of vaccination to enter public buildings in February. In September, the Los Angeles Unified School District ended its vaccination requirement for staff, including teachers.
Canadian Lawmakers Want to Punish Online Platforms for Allowing ‘Misinformation’ Spread
The Canadian Parliament has become the latest global player in a widening tug-of-war over how to constrain the tide of “misinformation” seeping into the digital landscape.
The House of Commons ethics committee in Ottawa is calling for stringent penalties to be imposed on tech giants that it claims are complicit in disseminating “unverified” or “deceptive” content online.
Committee vice-chair and Bloc Quebecois MP Rene Villemure emphasized the urgent need for decisive action, pointing to similar controversial legislative efforts by the European Union, which has imposed significant online regulations to control the spread of digital falsehoods.
Senate Passes Amendment to Ban Federal Mask Mandates on Commercial Flights and Public Transit + More
Senate Passes Amendment to Ban Federal Mask Mandates on Commercial Flights and Public Transit
The Senate passed an amendment banning federal mask mandates on commercial airlines and public transportation in an appropriations bill Wednesday.
In a 59-38 vote, senators — including several Democrats — voted in favor of the amendment brought forth by Sen. JD Vance, R-Ohio, which restricts any federal funds from being used to enforce mask mandates on passenger flights, trains, transit buses, and other publicly funded transportation through the next fiscal year.
Vance called the passage “a massive victory for personal freedom” in a statement Wednesday. “We saw countless abuses of authority throughout the COVID pandemic, and the American people were justifiably enraged by unscientific mask mandates,” he said.
Last month, Vance introduced the Freedom to Breathe Act, legislation that would prevent the government from reinstating mask mandates in response to COVID-19. It would prevent the enforcement of mask-wearing on public transit, airplanes, elementary schools and other institutions.
Is Social Media Addictive? Here’s What the Science Says.
A group of 41 states and the District of Columbia filed suit on Tuesday against Meta, the parent company of Facebook, Instagram, WhatsApp and Messenger, contending that the company knowingly used features on its platforms to cause children to use them compulsively, even as the company said that its social media sites were safe for young people.
The accusations in the lawsuit raise a deeper question about behavior: Are young people becoming addicted to social media and the internet? Here’s what the research has found. Experts who study internet use say that the magnetic allure of social media arises from the way the content plays to our neurological impulses and wiring, such that consumers find it hard to turn away from the incoming stream of information.
David Greenfield, a psychologist and founder of the Center for Internet and Technology Addiction in West Hartford, Conn., said the devices lure users with some powerful tactics. One is “intermittent reinforcement,” which creates the idea that a user could get a reward at any time, even though exactly when that reward comes is unpredictable.
Adults are susceptible, he noted, but young people are particularly at risk because the brain regions that are involved in resisting temptation and reward are not nearly as developed in children and teenagers as in adults. “They’re all about impulse and not a lot about the control of that impulse,” Dr. Greenfield said of young consumers.
George Soros and Bill Gates-Funded Aspen Institute Is Hit With Censorship Collusion Lawsuit
America First Legal (AFL), a law firm known for its commitment to American constitutional principles and free speech rights, has expanded its legal fight against alleged coordinated censorship by big government and tech giants.
It recently added the Aspen Institute to its lawsuit. Previously, the suit was directed exclusively at the Election Integrity Partnership (EIP) and the Virality Project (VP), two entities implicated in the systematic suppression of digital free speech during electoral seasons.
The suit alleges that the Aspen Institute, EIP, and VP worked in concert with various tech powerhouses and governmental bodies, launching a concerted attack on free online expression during both the 2020 and 2022 electoral periods. The Aspen Institute, known for its philanthropic endeavors, has controversially been funded by billionaire heavyweights George Soros and Microsoft founder Bill Gates.
The Bill & Melinda Gates Foundation reportedly donated over $101 million to the Aspen Institute between 2003 and 2020. Over the same period, George Soros’s Open Society Foundations gave the Institute upwards of $3 million.
Texas House Approves Ban on COVID Vaccine Mandates by Private Employers
After several attempts by Republicans to rein in COVID-19 vaccine mandates by Texas employers, lawmakers are edging closer to a statewide ban on the practice after legislation won House approval early Thursday.
Violators would be subject to a whopping $50,000 fine under an amendment adopted by the Texas House. The bill’s sponsor called it the strongest such ban in the country.
After debating the bill on Wednesday, the Texas House gave final approval to Senate Bill 7 on a 91-54 vote in the early hours of Thursday morning, with all Republicans in favor and most Democrats opposed. The passionate debate covered the merits and safety of the vaccine, the impact of employer mandates on Texas workers, the rights of private business owners versus those of private individuals, whether to allow stronger exceptions for hospitals and doctors, and the bill’s impact on medically vulnerable populations.
SB 7 would ban private businesses from requiring employees and contractors to get the COVID vaccine. Healthcare facilities would be allowed to require unvaccinated employees and contractors to wear protective gear, such as masks, or enact other “reasonable” measures to protect medically vulnerable patients.
Poilievre-Backed Anti-COVID Vaccine Mandate Bill Fails to Pass House
A Pierre Poilievre-backed bill pushing to prohibit the federal government from imposing COVID-19 vaccine mandates on public servants or restricting unvaccinated Canadian travelers from boarding has died in the House of Commons after failing to pass a key first vote.
The proposed five-page piece of legislation was defeated at second reading by a vote of 205 to 114 on Wednesday, with the Conservatives the only party to support it. Prime Minister Justin Trudeau cast his vote virtually: “nay.”
The private members’ bill — C-278, the Prevention of Government-imposed Vaccination Mandates Act — was first presented by Poilievre when he was running for Conservative leader. Because it overlapped with an initiative of his own, anti-mandate “Freedom Convoy” advocate and Ontario Conservative MP Dean Allison picked up Poilievre’s proposal and became its sponsor.
Poilievre spoke to the bill when it came up for debate on Tuesday night, imploring his colleagues to pass the bill. Now more than a year into his role at the helm of the Conservative party, Poilievre revived the anti-mandate messaging that he kicked off his leadership bid championing.
Visitors to the EU Will Soon Face Fingerprinting and Facial Scans
A significant shift is looming in the way American citizens will be allowed to enter a large majority of European nations. Under the European Union’s European Travel Information and Authorization System (ETIAS), set to take effect in spring 2025, Americans will be required to secure prior approval for stays of up to 90 days in any of 30 European countries.
This is a departure from current practice, under which U.S. travelers enjoy effortless, visa-free entry into these countries. Under the new regulation, individuals may proceed with their travels only after registering their intent via the official ETIAS website or mobile application, neither of which currently processes such requests.
In a radical departure from the norm, from 2025 onwards American passport holders will no longer receive passport stamps. Alarmingly, the planned regulatory changes involve intense intrusions into personal privacy. The new rules state that visitors will be subjected to both face and fingerprint scans in addition to surrendering other biometric data. It’s disconcerting that this data will be stored in the European Commission’s Common Identity Repository (CIR), a database accessed by numerous agencies, including law enforcement.
These regulations reflect a worrying escalation towards a surveillance state that doesn’t differentiate between law-abiding citizens and potential threats but treats them both as data sets to be tagged, traced, and retained.
Google Reportedly Pays $18 Billion a Year to Be Apple’s Default Search Engine
Google pays Apple billions of dollars every year to be the default search engine in Safari on Macs, iPads, and iPhones. That, we’ve known for a long time. But exactly how many billions Google pays, what strings are attached to that money, and what might happen if it went away?
Those have been the questions raised repeatedly in the ongoing U.S. v. Google trial, and most of the numbers have been reserved for a closed courtroom.
But now, a New York Times report offers a specific figure: it says Google paid Apple “around $18 billion” in 2021. We’ve been hearing educated guesses and rumors during the trial as low as $10 billion and as high as $20 billion, so this number isn’t totally shocking. But it’s at the high end of expectations.
Humanity ‘Not on Track to Handle Well’ the Risks AI Poses, Tech Experts Say + More
Humanity ‘Not on Track to Handle Well’ the Risks AI Poses, Tech Experts Say
Twenty-three academics and tech experts have signed on to a new document raising concerns about rapid advancements in AI. In the document, titled “Managing AI Risks in an Era of Rapid Progress,” they proposed a few policies for major tech companies and governments that they said could help “ensure responsible AI development.” Geoffrey Hinton, the computer scientist known as the “Godfather of AI” who recently warned there is a risk AI could one day “take over” from humans, is one of the document’s credited authors.
The document was published on Tuesday, just days before the United Kingdom (U.K.) is set to host the world’s first global AI Safety Summit. The two-day summit, which begins on November 1 in England’s Bletchley Park, is expected to focus on emerging AI tools known collectively as “frontier AI,” according to the U.K.’s Department for Science, Innovation & Technology (DSIT).
Summit organizers have said attendees will explore both the potential benefits and risks of AI and how international collaboration could help a world grappling with the uncertainties surrounding AI’s future. “The opportunities AI offers are immense. But alongside advanced AI capabilities come large-scale risks that we are not on track to handle well,” experts wrote for Tuesday’s document.
Moving forward, the authors recommended that major tech companies working on AI tools reserve at least one-third of their research and development budgets for AI safety and ethical use. They urged governments to create AI oversight procedures and “set consequences” for harms attributed to AI. Frontier AI should also be audited before it is set loose in the world, they wrote, and AI developers should be held responsible for harm caused by AI that could have been “reasonably foreseen and prevented.”
Amazon Brings Conversational AI to Kids With Launch of ‘Explore With Alexa’
Amazon’s Echo devices will now allow kids to have interactive conversations with an AI-powered Alexa via a new feature called “Explore with Alexa.” First announced in September, the addition to the Amazon Kids+ content subscription allows children to have kid-friendly conversations with Alexa, powered by generative AI, but in a protected fashion designed to ensure the experience remains safe and appropriate.
Though there are already some AI experiences that cater to younger users like the AI chatbots from Character.ai and other companies, including Meta, Amazon is among the first to specifically look to generative AI to develop a conversational experience for kids under the age of 13.
That also comes with constraints, however, as generative AI can be led astray or “hallucinate” answers, while kids could ask inappropriate questions. To address these potential problems, Amazon has put guardrails into place around its use of gen AI for kids.
In terms of privacy, the company notes it’s not training its LLM on kids’ answers. In addition, the “Explore with Alexa” experience and any future LLM-backed features will continue to follow the same data handling policies of “classic Alexa” (non-AI Alexa). That means the Alexa app will include a list of the questions asked by kids in the household (those with a kids’ profile) and the response Alexa provided. That history can be stored or deleted either manually or automatically, depending on your settings.
Here’s How a Children’s Privacy Law Figures Into That Big Legal Effort Against Meta
A bipartisan group of 42 attorneys general are suing Meta, saying that it collects children’s data in a way that violates a federal privacy law as part of a broader complaint against the social media company that it builds addictive features into Facebook and Instagram.
“Tuesday’s legal actions represent the most significant effort by state enforcers to rein in the impact of social media on children’s mental health,” my colleagues Cristiano Lima and Naomi Nix reported.
One of the chief claims of the attorneys general is that Meta runs afoul of the 1998 Children’s Online Privacy Protection Act or COPPA.
“The Children’s Online Privacy Protection Act of 1998 (COPPA) protects the privacy of children by requiring technology companies like Meta to obtain informed consent from parents prior to collecting the personal information of children online,” according to the complaint.
“Meta routinely violates COPPA in its operation of Instagram and Facebook by collecting the personal information of children on those Platforms without first obtaining (or even attempting to obtain) verifiable parental consent, as required by the statute.”
Meta’s Harmful Effects on Children Are One Issue That Unites Republicans and Democrats
While Republican and Democratic lawmakers appear more incapable than ever of working together to pass legislation, they largely agree on one thing: Meta’s negative impact on children and teens.
A bipartisan coalition of 33 attorneys general filed a joint federal lawsuit on Tuesday, accusing Facebook’s parent of knowingly implementing addictive features across its family of apps that have detrimental effects on children’s mental health and contribute to problems like teenage eating disorders.
Another nine attorneys general are also filing lawsuits in their respective states.
“Kids and teenagers are suffering from record levels of poor mental health and social media companies like Meta are to blame,” Attorney General Letitia James, a Democrat, said in a statement. “Meta has profited from children’s pain by intentionally designing its platforms with manipulative features that make children addicted to their platforms while lowering their self-esteem.”
White House to Unveil Sweeping AI Executive Order Next Week, Tackling Immigration, Safety
The Biden administration on Monday is expected to unveil a long-anticipated artificial intelligence executive order, marking the U.S. government’s most significant attempt to date to regulate the evolving technology that has sparked fear and hype around the world.
The administration plans to release the order two days before government leaders, top Silicon Valley executives and civil society groups gather in the United Kingdom for an international summit focused on the potential risks that AI presents to society, according to four people familiar with the matter, who spoke on the condition of anonymity to discuss the private plans.
The White House is taking executive action as the European Union and other governments are working to block the riskiest uses of artificial intelligence. Officials in Europe are expected to reach a deal by the end of the year on the E.U. AI Act, a wide-ranging package that aims to protect consumers from potentially dangerous applications of AI. Lawmakers in the U.S. Congress are still in the early stages of developing bipartisan legislation to respond to the technology.
Judge Advances Lawsuit Against Apple Studios Over COVID Vaccine Mandate
The Hollywood Reporter reported:
Apple Studios might have discriminated against Brent Sexton when it pulled an offer for him to star in Manhunt after he refused the COVID-19 vaccine due to potential health complications, a judge has ruled.
Los Angeles Superior Court Judge Michael Linfield denied Apple’s motion to dismiss the lawsuit on free speech grounds, finding that the company’s mandatory vaccination policy may have been unconstitutional. The order, issued on Oct. 19, marks one of the few rulings advancing a lawsuit from an actor who took issue with a studio’s refusal to provide accommodations for declining the COVID-19 vaccine.
At the time, Apple didn’t require employees at corporate headquarters or retail stores to get the vaccine, allowing them to get daily or weekly tests. Apple Studios, however, was among the majority of studios in Hollywood that implemented vaccine mandates for a production’s main actors, as well as key crewmembers who work closely with them in the highest-risk areas of the set.
Sexton’s deal on the show fell apart after he refused to get immunized, citing a prior health condition that his doctor said made it dangerous for him to receive the vaccine. He sued after Apple refused to provide accommodations, arguing the company’s vaccine policy is unconstitutional.
COVID Passports Convinced Few People to Get Vaccinated in Quebec, Ontario: Study
COVID-19 vaccine passports in Quebec and Ontario did little to convince the unvaccinated to get the jab and did not significantly reduce inequalities in vaccination coverage, a new peer-reviewed study has found.
The passports, which forced people to show proof of vaccination to enter places such as bars and restaurants, were directly responsible for a rise of 0.9% in the vaccination rate in Quebec and 0.7% in Ontario, says Jorge Luis Flores, a research assistant at McGill University and lead author of the paper published Tuesday in the CMAJ Open journal.
The passports were discontinued across Canada by the spring of 2022.
In the 11 weeks after the provinces announced the passports, vaccination rates in both provinces rose by five percentage points. But after accounting for pre-existing uptake trends, researchers concluded the passports were directly responsible for a rise of less than one percentage point, says Mathieu Maheu-Giroux, a study co-author and McGill University professor who studies public health.
Massive Facial Recognition Search Engine Now Blocks Searches for Children’s Faces + More
Massive Facial Recognition Search Engine Now Blocks Searches for Children’s Faces
PimEyes, a public search engine that uses facial recognition to match online photos of people, has banned searches for minors over concerns that the technology endangers children, reports The New York Times.
At least, it should. PimEyes’ new detection system, which uses age detection AI to identify whether the person is a child, is still very much a work in progress. After testing it, The New York Times found it struggles to identify children photographed at certain angles. The AI also doesn’t always accurately detect teenagers.
PimEyes chief executive Giorgi Gobronidze says he’d been planning on implementing such a protection mechanism since 2021. However, the feature was only fully deployed after New York Times writer Kashmir Hill published an article about the threat AI poses to children last week. According to Gobronidze, human rights organizations working to help minors can continue to search for them, while all other searches will produce images that block children’s faces.
In the article, Hill writes that the service has banned over 200 accounts for inappropriate searches of children. One parent told Hill that, using PimEyes, she had found photos of her children she had never seen before. To find out where an image came from, the mother would have to pay a $29.99 monthly subscription fee.
PimEyes is just one of the facial recognition engines that have been in the spotlight for privacy violations. In January 2020, Hill’s New York Times investigation revealed how hundreds of law enforcement organizations had already started using Clearview AI, a similar face recognition engine, with little oversight.
Instagram Linked to Depression, Anxiety, Insomnia in Kids — U.S. States’ Lawsuit
Dozens of U.S. states are suing Meta Platforms (META.O) and its Instagram unit, accusing them of contributing to a youth mental health crisis through the addictive nature of their social media platforms.
In a complaint filed in the Oakland, California, federal court on Tuesday, 33 states including California and Illinois said Meta, which also operates Facebook, has repeatedly misled the public about the substantial dangers of its platforms and knowingly induced young children and teenagers into addictive and compulsive social media use.
“Research has shown that young people’s use of Meta’s social media platforms is associated with depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes,” the complaint said.
The lawsuit is the latest in a string of legal actions against social media companies on behalf of children and teens. ByteDance’s TikTok and Google’s YouTube are also the subjects of hundreds of lawsuits filed on behalf of children and school districts over the addictiveness of social media.
The lawsuit alleges that Meta also violated a law banning the collection of data of children under the age of 13. The state action seeks to patch holes left by the U.S. Congress’s inability to pass new online protections for children, despite years of discussions.
Fed Governor Admits CBDCs Pose ‘Significant’ Privacy Risks
In an appearance at a Harvard Law School program on October 17, Federal Reserve Governor Michelle Bowman raised serious concerns about the risks and privacy dangers that the introduction of a central bank digital currency (CBDC) could pose.
Bowman remarked that the creation of a CBDC promises no certain benefit, while pointedly warning of potential “unintended consequences” for the financial industry.
Speaking for one of the major participants in the regulation of domestic payment systems and banking, Governor Bowman underscored the trade-offs and risks a digital dollar could entail. Given the “considerable consumer privacy concerns” a U.S. CBDC implementation might raise, she pointed out, any plausible merits of such a currency remain largely elusive.
Notwithstanding the grand promises of hassle-free payment systems or greater financial inclusion, there appears to be a significant lack of persuasive proof that a CBDC would actually contribute to these ends or furnish public access to secure central bank money. Yet the argument here is not for halting research on the subject; according to Bowman, continued study of a digital dollar’s technical capabilities and the potential risks linked to CBDCs could foster a constructive attitude towards such future developments.
AI Firms Must Be Held Responsible for Harm They Cause, ‘Godfathers’ of Technology Say
Powerful artificial intelligence systems threaten social stability and AI companies must be made liable for harms caused by their products, a group of senior experts including two “godfathers” of the technology has warned.
Tuesday’s intervention was made as international politicians, tech companies, academics and civil society figures prepare to gather at Bletchley Park next week for a summit on AI safety.
“It’s time to get serious about advanced AI systems,” said Stuart Russell, professor of computer science at the University of California, Berkeley. “These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”
He added: “There are more regulations on sandwich shops than there are on AI companies.”
Other co-authors of the document include Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI,” who won the ACM Turing Award — the computer science equivalent of the Nobel prize — in 2018 for their work on AI.
FTC Plans to Hire Child Psychologist to Guide Internet Rules
The Federal Trade Commission plans to hire at least one child psychologist who can guide its work on internet regulation, Democratic Commissioner Alvaro Bedoya told The Record in an interview published Monday.
FTC Chair Lina Khan backs the plan, Bedoya told the outlet, adding that he hopes it can become a reality by next fall, though the commission does not yet have a firm timeline.
The FTC’s plan is indicative of a broader push across the U.S. government to strengthen online protections for kids and teens. Federal and state lawmakers have proposed new legislation they believe will make the internet safer by mandating stronger age authentication or placing more responsibility on tech companies to design safe products for young users. The U.S. Surgeon General issued an advisory in May warning that young people’s social media use poses significant mental health risks.
Bedoya envisions an in-house child psychologist as a helpful resource for commissioners like himself. Such experts could bring important insights that link a cause to an alleged harm and help inform the damages the agency seeks, Bedoya said. He added that child psychologists could help the FTC evaluate allegations of how social media may affect mental health, as well as assess the effect of dark patterns or other deceptive features.
A Controversial Plan to Scan Private Messages for Child Abuse Meets Fresh Scandal
Danny Mekić, an Amsterdam-based Ph.D. researcher, was studying a proposed European law meant to combat child sexual abuse when he came across a rather odd discovery. All of a sudden, he started seeing ads on X, formerly Twitter, that featured young girls and sinister-looking men against a dark background, set to an eerie soundtrack. The advertisements, which displayed stats from a survey about child sexual abuse and online privacy, were paid for by the European Commission.
Mekić thought the videos were unusual for a governmental organization and decided to delve deeper. The survey findings highlighted in the videos suggested that a majority of EU citizens would support the scanning of all their digital communications.
On closer inspection, he discovered that the findings appeared biased and otherwise flawed. The survey results were gathered by misleading participants, he claims, which in turn may have misled the recipients of the ads; the conclusion that EU citizens were fine with greater surveillance couldn’t be drawn from the survey, and the findings clashed with those of independent polls.