October 24, 2023

Big Brother News Watch

Massive Facial Recognition Search Engine Now Blocks Searches for Children’s Faces + More

The Defender’s Big Brother NewsWatch brings you the latest headlines related to governments’ abuse of power, including attacks on democracy, civil liberties and use of mass surveillance. The views expressed in the excerpts from other news sources do not necessarily reflect the views of The Defender.

Massive Facial Recognition Search Engine Now Blocks Searches for Children’s Faces

The Verge reported:

PimEyes, a public search engine that uses facial recognition to match online photos of people, has banned searches of minors’ faces over concerns that such searches endanger children, The New York Times reports.

At least, it should. PimEyes’ new detection system, which uses age detection AI to identify whether the person is a child, is still very much a work in progress. After testing it, The New York Times found it struggles to identify children photographed at certain angles. The AI also doesn’t always accurately detect teenagers.

PimEyes chief executive Giorgi Gobronidze says he’d been planning on implementing such a protection mechanism since 2021. However, the feature was only fully deployed after New York Times writer Kashmir Hill published an article about the threat AI poses to children last week. According to Gobronidze, human rights organizations working to help minors can continue to search for them, while all other searches will produce images that block children’s faces.

In the article, Hill writes that the service has banned over 200 accounts for inappropriate searches of children. One parent told Hill that, using PimEyes, she had found photos of her children she’d never seen before. To find out where an image came from, the mother would have to pay a $29.99 monthly subscription fee.

PimEyes is just one of the facial recognition engines that have been in the spotlight for privacy violations. In January 2020, Hill’s New York Times investigation revealed how hundreds of law enforcement organizations had already started using Clearview AI, a similar face recognition engine, with little oversight.

Instagram Linked to Depression, Anxiety, Insomnia in Kids — U.S. States’ Lawsuit

Reuters reported:

Dozens of U.S. states are suing Meta Platforms (META.O) and its Instagram unit, accusing them of contributing to a youth mental health crisis through the addictive nature of their social media platforms.

In a complaint filed in the Oakland, California, federal court on Tuesday, 33 states including California and Illinois said Meta, which also operates Facebook, has repeatedly misled the public about the substantial dangers of its platforms and knowingly induced young children and teenagers into addictive and compulsive social media use.

“Research has shown that young people’s use of Meta’s social media platforms is associated with depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes,” the complaint said.

The lawsuit is the latest in a string of legal actions against social media companies on behalf of children and teens. ByteDance’s TikTok and Google’s YouTube are also the subjects of hundreds of lawsuits filed on behalf of children and school districts over the addictiveness of social media.

The lawsuit alleges that Meta also violated a law banning the collection of data of children under the age of 13. The state action seeks to patch holes left by the U.S. Congress’s inability to pass new online protections for children, despite years of discussions.

Fed Governor Admits CBDCs Pose ‘Significant’ Privacy Risks

Reclaim the Net reported:

In an appearance at a Harvard Law School program on October 17, Federal Reserve Governor Michelle Bowman raised concerns about the risks and privacy dangers that the introduction of a central bank digital currency (CBDC) could pose.

A CBDC promises no certain benefit, Bowman remarked, while carrying potential “unintended consequences” for the financial industry.

Speaking as one of the major participants in the regulation of domestic payment systems and banking, Bowman underscored the trade-offs and risks a digital dollar could entail. Given the “considerable consumer privacy concerns” a U.S. CBDC would raise, she pointed out, any plausible merits of such a currency remain largely elusive.

Notwithstanding promises of hassle-free payments or greater financial inclusion, there is little persuasive evidence that a CBDC would actually serve these ends or furnish public access to secure central bank money. Bowman is not arguing for a halt to research on the subject, however; she said continued study of a digital dollar’s technical capabilities and of the risks linked with CBDCs could inform future developments.

AI Firms Must Be Held Responsible for Harm They Cause, ‘Godfathers’ of Technology Say

The Guardian reported:

Powerful artificial intelligence systems threaten social stability and AI companies must be made liable for harms caused by their products, a group of senior experts including two “godfathers” of the technology has warned.

Tuesday’s intervention was made as international politicians, tech companies, academics and civil society figures prepare to gather at Bletchley Park next week for a summit on AI safety.

“It’s time to get serious about advanced AI systems,” said Stuart Russell, professor of computer science at the University of California, Berkeley. “These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”

He added: “There are more regulations on sandwich shops than there are on AI companies.”

Other co-authors of the document include Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI,” who won the ACM Turing Award — the computer science equivalent of the Nobel prize — in 2018 for their work on AI.

FTC Plans to Hire Child Psychologist to Guide Internet Rules

CNBC reported:

The Federal Trade Commission plans to hire at least one child psychologist who can guide its work on internet regulation, Democratic Commissioner Alvaro Bedoya told The Record in an interview published Monday.

FTC Chair Lina Khan backs the plan, Bedoya told the outlet, adding that he hopes it can become a reality by next fall, though the commission does not yet have a firm timeline.

The FTC’s plan is indicative of a broader push across the U.S. government to strengthen online protections for kids and teens. Federal and state lawmakers have proposed new legislation they believe will make the internet safer by mandating stronger age authentication or placing more responsibility on tech companies to design safe products for young users. The U.S. Surgeon General issued an advisory in May warning that young people’s social media use poses significant mental health risks.

Bedoya envisions an in-house child psychologist as a helpful resource for commissioners like himself. Such experts could bring important insights that link a cause to an alleged harm and inform the damages the agency seeks, Bedoya said. He added that child psychologists could help the FTC evaluate allegations of how social media may affect mental health, as well as assess the effect of dark patterns or other deceptive features.

A Controversial Plan to Scan Private Messages for Child Abuse Meets Fresh Scandal

Wired reported:

Danny Mekić, an Amsterdam-based Ph.D. researcher, was studying a proposed European law meant to combat child sexual abuse when he came across a rather odd discovery. All of a sudden, he started seeing ads on X, formerly Twitter, that featured young girls and sinister-looking men against a dark background, set to an eerie soundtrack. The advertisements, which displayed stats from a survey about child sexual abuse and online privacy, were paid for by the European Commission.

Mekić thought the videos were unusual for a governmental organization and decided to delve deeper. The survey findings highlighted in the videos suggested that a majority of EU citizens would support the scanning of all their digital communications.

Following closer inspection, he discovered that these findings appeared biased and otherwise flawed. The survey results were gathered by misleading the participants, he claims, which in turn may have misled the recipients of the ads; the conclusion that EU citizens were fine with greater surveillance couldn’t be drawn from the survey, and the findings clashed with those of independent polls.
