January 18, 2024

Big Brother NewsWatch

Inside Biden’s Secret Surveillance Court + More

The Defender’s Big Brother NewsWatch brings you the latest headlines related to governments’ abuse of power, including attacks on democracy, civil liberties and use of mass surveillance. The views expressed in the excerpts from other news sources do not necessarily reflect the views of The Defender.


Inside Biden’s Secret Surveillance Court

Politico reported:

At an undetermined date, in an undisclosed location, the Biden administration began operating a secretive new court to protect Europeans’ privacy rights under U.S. law.

Officially known as the Data Protection Review Court, it was authorized in an October 2022 executive order to fix a collision of European and American law that had been blocking the lucrative flow of consumer data between American and European companies for three years.

The court’s eight judges were named last November, including former U.S. Attorney General Eric Holder. Its existence has allowed companies to resume the lucrative transatlantic data trade with the blessing of EU officials.

The details get blurry after that. And critics worry it will tie the hands of U.S. intelligence agencies, because it holds an unusual power: It can make binding decisions about federal agencies’ surveillance practices, and those agencies won’t be able to challenge those decisions.

The court’s creation is also raising fears within U.S. circles that Europeans could get certain privacy protections that American citizens lack. U.S. residents who suspect they are under improper surveillance cannot go to the Data Protection Review Court. Under U.S. law, they can go to a federal court — but only if they can show a concrete wrong or harm that gives them legal standing, which presents a Catch-22 since they can’t prove what they don’t know.

Children on Instagram and Facebook Were Frequent Targets of Sexual Harassment, State Says

The Wall Street Journal reported:

Children using Instagram and Facebook have been frequent targets of sexual harassment, according to a 2021 internal Meta Platforms presentation that estimated that 100,000 minors each day received photos of adult genitalia or other sexually abusive content.

That finding is among newly unredacted material about the company’s child-safety policies in a lawsuit filed last month by New Mexico that alleges Meta’s platforms recommend sexual content to underage users and promote underage accounts to predatory adult users.

In one 2021 internal document described in the now unredacted material, Meta employees noted that one of its recommendation algorithms, called “People You May Know,” was known among employees to connect child users with potential predators. The New Mexico lawsuit says the finding had been flagged to executives several years earlier, and that they had rejected a staff recommendation that the company adjust the design of the algorithm, known internally as PYMK, to stop it from recommending minors to adults.

New Mexico alleges that Meta has failed to address widespread predation on its platform or limit design features that recommended children to adults with malicious intentions. Instead of publicly acknowledging internal findings such as the 100,000-children-a-day scale of harassment on its platforms, the suit alleges, Meta falsely assured the public that its platforms were safe.

Much of the internal discussion described in the newly unredacted material focused on Instagram. In an internal email in 2020, employees reported that the prevalence of “sex talk” to minors was 38 times greater on Instagram than on Facebook Messenger in the U.S. and urged the company to enact more safeguards on the platform, according to documents cited in the lawsuit.

Mother Whose Child Died in TikTok Challenge Urges U.S. Court to Revive Lawsuit

Reuters reported:

A U.S. appeals court on Wednesday wrestled with whether the video-based social media platform TikTok could be sued for causing a 10-year-old girl’s death by promoting a deadly “blackout challenge” that encouraged people to choke themselves.

Members of a three-judge panel of the Philadelphia-based 3rd U.S. Circuit Court of Appeals noted during oral arguments that a key federal law typically shields internet companies like TikTok from lawsuits for content posted by users.

But some judges questioned whether Congress, in adopting Section 230 of the Communications Decency Act in 1996, could have imagined the growth of platforms like TikTok that do not just host content but recommend it to users via complex algorithms.

Tawainna Anderson sued TikTok and its Chinese parent company ByteDance after her daughter Nylah in 2021 attempted the blackout challenge using a purse strap hung in her mother’s closet. She lost consciousness, suffered severe injuries, and died five days later. Anderson’s lawyer, Jeffrey Goodman, told the court that while Section 230 provides TikTok some legal protection, it does not bar claims that its product was defective and that its algorithm pushed videos about the blackout challenge to the child.

‘Fundamentally Against Their Safety’: The Social Media Insiders Fearing for Their Kids

The Guardian reported:

Parents working for tech companies have a first-hand look at how the industry works — and the threats it poses to child safety.

Arturo Bejar would not have let his daughter use Instagram at the age of 14 if he’d known then what he knows now. Bejar left Facebook in 2015, after spending six years making it easier for users to report problems they encountered on the platform. But it wasn’t until after his departure that he witnessed what he described in recent congressional testimony as the “true level of harm” the products his former employer built are inflicting on children and teens – his own included.

Bejar discovered his then 14-year-old daughter and her friends were routinely subjected to unwanted sexual advances, harassment and misogyny on Instagram, according to his testimony.

But it wasn’t his daughter’s experience on Instagram alone that convinced Bejar that the social network is unsafe for kids younger than 16; it was the company’s meager response to his concerns. Ultimately, he concluded, companies like Meta will need to be “compelled by regulators and policymakers to be transparent about these harms and what they are doing to address them.”

Iowa Sues TikTok, Alleging App Misleads Parents About Inappropriate Content

The Hill reported:

Iowa sued TikTok on Wednesday, accusing the video-based social media app of misrepresenting the prevalence of inappropriate content on the platform to avoid parental controls.

The lawsuit alleges that TikTok falsely claims there is only infrequent or mild sexual content and nudity, profanity or crude humor, mature and suggestive themes, and alcohol, tobacco or drug use references on the platform to obtain a “12+” rating in Apple’s App Store.

“TikTok knows and intends to evade the parental controls on Apple devices by rating its app ‘12+,’” the complaint reads. “If TikTok correctly rated its app, it would receive a ‘17+’ age rating, and parental restrictions on phones would prevent many kids from downloading it.”

“TikTok has kept parents in the dark,” Iowa Attorney General Brenna Bird (R) said in a statement. “It’s time we shine a light on TikTok for exposing young children to graphic materials such as sexual content, self-harm, illegal drug use, and worse.”

Watch Out Windows 11 Users: Microsoft May Be Sharing Your Outlook Emails Without You Knowing — Here’s How to Stop It

TechRadar reported:

It looks like Microsoft’s penchant for collecting its users’ data may get it in more trouble, with a worrying new report suggesting that it’s sharing more information from emails sent by the new Outlook for Windows app than people may know.

This is particularly concerning because most people check their email daily, whether to keep up with friends and family or to send important documents and information at work. With the Outlook for Windows app now the default email program in Windows 11, this discovery could affect a lot of people.

MSPoweruser reports that the team behind ProtonMail, an end-to-end encrypted email service and competitor to Microsoft Outlook, has discovered the worrying scale of user data being collected by Outlook for Windows, which reportedly includes your emails, contacts, browsing history, and possibly even location data.

ProtonMail’s blog post goes so far as to call Outlook for Windows “a surveillance tool for targeted advertising.” That is a harsh assessment, certainly, but people who have downloaded the new Outlook for Windows app have encountered a disclaimer explaining how Microsoft and hundreds of third parties will be helping themselves to your data.

EU Set to Allow Draconian Use of Facial Recognition Tech, Say Lawmakers

Politico reported:

Last-minute tweaks to the European Union’s Artificial Intelligence Act will allow law enforcement to use facial recognition technology on recorded video footage without a judge’s approval — going further than what was agreed by the three EU institutions, according to European lawmaker Svenja Hahn.

The German member of the European Parliament said the final text of the bloc’s new rules on artificial intelligence, obtained by POLITICO, was “an attack on civil rights” and could enable “irresponsible and disproportionate use of biometric identification technology, as we otherwise only know from authoritarian states such as China.”

The wording also made it to the full legal text, which the Spanish Council presidency put together on December 22. The current presidency of the EU Council, held by Belgium, is working with Parliament to finalize bits of interpretative text known as recitals.

The Davos Elite Embraced AI in 2023. Now They Fear It.

The Washington Post reported:

ChatGPT was the breakout star of last year’s World Economic Forum, as the nascent chatbot’s ability to code, draft emails and write speeches captured the imaginations of the leaders gathered in this posh ski town.

But this year, tremendous excitement over the nearly limitless economic potential of the technology is coupled with a more clear-eyed assessment of its risks. Heads of state, billionaires and CEOs appear aligned in their anxieties, as they warn that the burgeoning technology might supercharge misinformation, displace jobs and deepen the economic gap between wealthy and poor nations.

In contrast to far-off fears of the technology ending humanity, a spotlight is on concrete hazards borne out last year by a flood of AI-generated fakes and the automation of jobs in copywriting and customer service. The debate has taken on new urgency amid global efforts to regulate the swiftly evolving technology.

China-Made Drones Pose Significant Risk to U.S. Data, Security Agencies Say

Newsweek reported:

China-made drones pose a significant risk to American data, critical infrastructure and national security, two federal security agencies said this week in an official cybersecurity guidance urging U.S. professional and hobby users to transition to safer alternatives.

The use of Chinese-manufactured unmanned aircraft systems (UAS), more commonly known as drones, in critical infrastructure “risks exposing sensitive information to PRC authorities, jeopardizing U.S. national security, economic security, and public health and safety,” the Cybersecurity and Infrastructure Security Agency and the Federal Bureau of Investigation said on Wednesday.

The warning comes amid a deepening technological and national security competition between the United States and China, with concerns growing among lawmakers and officials over a range of technologies that are made in China or by Chinese companies with U.S. branches, widely sold in the U.S., and are even in the heart of critical infrastructure systems.

“Central to this strategy is the acquisition and collection of data — which the PRC views as a strategic resource and growing arena of geopolitical competition,” said the CISA and the FBI, using the acronym for the People’s Republic of China. All drones collect information and could have vulnerabilities that compromise networks, thus enabling data theft by companies or governments, the guidance said.
