December 20, 2023

Big Brother NewsWatch

Clear Wants to Scan Your Face at Airports. Privacy Experts Are Worried. + More

The Defender’s Big Brother NewsWatch brings you the latest headlines related to governments’ abuse of power, including attacks on democracy, civil liberties and use of mass surveillance. The views expressed in the excerpts from other news sources do not necessarily reflect the views of The Defender.


Clear Wants to Scan Your Face at Airports. Privacy Experts Are Worried.

The Washington Post reported:

The private security screening company Clear is rolling out facial recognition technology at its expedited airport checkpoints in 2024, replacing the company’s iris-scanning and fingerprint-checking measures. With a presence at more than 50 U.S. airports, Clear’s update is the latest sign in a broader shift toward biometrics in air travel that is raising concerns from some privacy experts and advocates.

Clear’s shift to its new screening technology, which the company is calling NextGen Identity Plus, also includes stronger verification of identity documents by comparing them “back to the issuing source,” the company told The Washington Post. Clear said it has been collaborating with the Department of Homeland Security and TSA since 2020 to make these changes. Members who pay $189 a year for a Clear Plus subscription will be moved to the new technology free of charge.

Clear’s system differs from other airport facial recognition programs, the company told The Post, in that it compares live snapshots of travelers using the designated Clear airport lane only to data from their enrollment in NextGen Identity Plus. Moving from iris and fingerprint scanning to facial scanning should help customers get through Clear’s checkpoints faster.

Clear has long been in the business of biometrics in its screening practices at airports, arenas and other public venues. But a turn to facial recognition may lead to increased risk of surveillance and reduced privacy for travelers, privacy advocates say.

“As someone who flies constantly, I’m really disturbed to see the transformation of airports into biometric surveillance centers,” said Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project (STOP).

This Scary AI Breakthrough Means You Can Run but Not Hide — How AI Can Guess Your Location From a Single Image

TechRadar reported:

There’s no question that artificial intelligence (AI) is in the process of upending society, with ChatGPT and its rivals already changing the way we live our lives. But a new AI project has just emerged that can pinpoint where almost any photo was taken, and it has the potential to become a privacy nightmare.

The project, dubbed Predicting Image Geolocations (PIGEON for short), was created by three students at Stanford University and was designed to help find where images from Google Street View were taken. But when fed personal photos it had never seen before, it was able to find their locations, usually with a high degree of accuracy.

Jay Stanley of the American Civil Liberties Union says that has serious privacy implications, including government surveillance, corporate tracking and stalking, according to NPR. For instance, a government could use PIGEON to find dissidents or see whether you have visited places it disapproves of. Or a stalker could employ it to work out where a potential victim lives. In the wrong hands, this kind of tech could wreak havoc.

Motivated by those concerns, the student creators have decided against releasing the tech to the wider world. But as Stanley points out, that might not be the end of the matter: “The fact that this was done as a student project makes you wonder what could be done by, for example, Google.”

Rite Aid Banned From Using AI Facial Recognition

Reuters reported:

Bankrupt U.S. pharmacy chain Rite Aid will be prohibited from using facial recognition technology for surveillance purposes for five years to settle U.S. Federal Trade Commission charges that it harmed consumers, the FTC said on Tuesday.

Rite Aid deployed artificial intelligence-based facial recognition technology from 2012 to 2020 to identify shoplifters, but the company falsely flagged some consumers as matching someone who had previously been identified as a shoplifter, the FTC said.

FTC Unveils Sweeping Plan to Boost Children’s Privacy Online

The Washington Post reported:

The Federal Trade Commission on Wednesday unveiled a major proposal to expand protections for children’s personal data and limit what information companies can collect from kids online, marking one of the U.S. government’s most aggressive efforts to create digital safeguards for children.

Under the proposal, digital platforms would be required to turn off targeted ads to children under 13 by default and prohibited from using certain data to send kids push notifications or “nudges” to encourage them to keep using their products.

The plan, which still needs to be adopted, marks one of the most significant attempts by U.S. regulators to broaden their oversight over children’s online privacy, an issue that has gained bipartisan traction across states and the federal government.

The proposed rulemaking seeks to update the Children’s Online Privacy Protection Act (COPPA), a landmark 1998 law requiring websites and other digital service providers to obtain consent from parents before collecting data from users under 13, among other safeguards. The agency unveiled the long-awaited plan on Wednesday along with a call for public comment.

TikTok Allowing Under-13s to Keep Accounts, Evidence Suggests

The Guardian reported:

TikTok faces questions over safeguards for child users after a Guardian investigation found that moderators were being told to allow under-13s to stay on the platform if they claimed their parents were overseeing their accounts.

In one example seen by the Guardian, a user who declared themselves to be 12 in their account bio, under TikTok’s minimum age of 13, was allowed to stay on the platform because their user profile stated the account was managed by their parents.

Suspected cases of underage account holders are sent to an “underage” queue for further moderation. Moderators have two options: to ban, which would mean the removal of the account, or to approve, allowing the account to stay on the platform.

A staff member at TikTok said they believed it was “incredibly easy to avoid getting banned for being underage. Once a kid learns that this works, they will tell their friends.”

Missouri Supreme Court Strikes Down Law Against Homelessness, COVID Vaccine Mandates

Associated Press reported:

The Missouri Supreme Court on Tuesday struck down a law that threatened homeless people with jail time for sleeping on state land.

The sweeping 64-page bill also dealt with city and county governance and banned COVID-19 vaccine requirements for public workers in Missouri.

The court ruled the law “invalid in its entirety,” Judge Paul Wilson wrote in the decision.

AI Image Generators Trained on Pictures of Child Sexual Abuse, Study Finds

The Guardian reported:

Hidden inside the foundation of popular artificial intelligence (AI) image generators are thousands of images of child sexual abuse, according to new research published on Wednesday. The operators of some of the largest and most widely used image sets for training AI shut off access to them in response to the study.

The Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. More than 1,000 of the suspected images were confirmed as child sexual abuse material.

The response was immediate. On the eve of the Wednesday release of the Stanford Internet Observatory’s report, LAION said it was temporarily removing its datasets. The non-profit LAION, which stands for Large-scale Artificial Intelligence Open Network, said in a statement that it “has a zero-tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them.”

While the images account for just a fraction of LAION’s index of about 5.8 billion images, the Stanford group says the material is probably influencing the ability of AI tools to generate harmful outputs and reinforcing the prior abuse of real victims who appear multiple times.
