The Defender Children’s Health Defense News and Views

December 19, 2023

Big Brother NewsWatch

EU Fingerprint Checks for British Travelers to Start in 2024 + More

The Defender’s Big Brother NewsWatch brings you the latest headlines related to governments’ abuse of power, including attacks on democracy, civil liberties and use of mass surveillance. The views expressed in the excerpts from other news sources do not necessarily reflect the views of The Defender.


EU Fingerprint Checks for British Travelers to Start in 2024

The Guardian reported:

A new EU digital border system that will require fingerprints and facial scans to be taken from British travelers on first use is expected to launch next autumn, according to reports. The entry/exit system (EES) is earmarked to start on October 6, 2024, according to the i and Times newspapers, citing Getlink, the owner of Eurotunnel. The Guardian has contacted Getlink for comment.

Eurotunnel, which runs a car transport service between Folkestone and Calais, is said to be testing the technology, in which personal data will be collected at borders and entered into an EU-wide database.

Under the EES, passengers would have to agree to fingerprinting and facial image capture the first time they arrived on the continent. After that, the data, including any record of refused entry, should allow quicker processing, according to travel bosses.

According to the European Commission, the system will apply when entering 25 EU countries (all member states apart from Cyprus and Ireland) and four non-EU countries (Norway, Iceland, Switzerland and Liechtenstein) that, along with most EU member states, are part of the border-free Schengen area.

Courts Are Choosing TikTok Over Children

The Atlantic reported:

Some court decisions are bad; others are abysmal. The bad ones merely misapply the law; abysmal decisions go a step further and elevate abstract principles over democratic will and basic morality. The latter’s flaw is less about legal error and more about “a judicial system gone wrong,” as the legal scholar Gerard Magliocca once put it.

In our times, some of the leading candidates for the “abysmal” category are the extraordinarily out-of-touch decisions striking down laws protecting children from social-media harms. The exemplar is NetChoice v. Bonta, in which a U.S. district court in California struck down the state’s efforts to protect children from harm arising from TikTok, Instagram, and other social media firms. In its insensitivity to our moment and elevation of conjectural theory over consequence, NetChoice is a true heir to the Dagenhart tradition.

Social media presents an undoubted public health crisis for the country’s preteens and teens. A surgeon-general report released earlier this year noted that, per a recent study, “adolescents who spent more than 3 hours per day on social media faced double the risk of experiencing poor mental health outcomes including symptoms of depression and anxiety,” compared with their peers who spent less time on such platforms. A particular concern is algorithms that serve content that promotes eating disorders, suicide, and substance abuse, based on close surveillance of a given teenager.

The California law, passed last year, seeks to make social media companies “prioritize the privacy, safety, and well-being of children over commercial interests.” It may not have been a perfect work of draftsmanship, but in its basic form, it sought to protect children by barring companies such as TikTok from profiling children, excessively collecting data, and using those data in ways that are harmful to children.

After the law’s enactment, big tech firms and their lawyers, apparently unafraid of bad publicity, sued the state through an industry group, NetChoice. Their lawyers advanced a theory that collecting data from children is “speech” protected by the First Amendment. To her lasting disgrace, Judge Beth Freeman bought that ridiculous proposition.

Group Representing Social Media Giants Sues Utah Over Parental Consent Law

The Hill reported:

A group representing several social media giants, including Google, Meta, TikTok and X, sued Utah on Monday over the state’s new social media law that requires platforms to verify user ages and obtain parental consent for minors.

The Utah Social Media Regulation Act, which is set to go into effect in March, also requires social media companies to restrict minor access to accounts between 10:30 p.m. and 6:30 a.m. and bars advertising and data collection on their accounts.

NetChoice, a trade association of nearly three dozen internet companies, argued in Monday’s filing that the Utah law violates the First Amendment, representing an “unconstitutional attempt to regulate both minors’ and adults’ access to — and ability to engage in — protected expression.”

The group also argues that the law singles out certain websites, such as YouTube, Facebook and X, for regulation based on a series of “vague definitions and exceptions with arbitrary thresholds.”

Healthcare Industry Fights Back Against Crackdowns on Health Data Tracking

STAT News reported:

Wherever you go on the internet, trackers follow. These ubiquitous bits of code, invisibly embedded in most websites, are powerful tools that can reveal the pages you visit, the buttons you click, and the forms you fill to help advertisers tail and target you across the web.

But put those trackers on a healthcare website, and they have the potential to leak sensitive medical information — a risk that, in the last year, has driven the Department of Health and Human Services and the Federal Trade Commission to crack down on trackers in the websites of hospitals, telehealth companies, and more.

Health systems and companies have scrambled to adapt, many removing the trackers entirely in the face of regulatory enforcement and a growing set of class action lawsuits alleging the disclosure of patients’ protected health information. But another contingent is steeling itself for a fight, arguing that regulators have overstepped their authority and hobbled critical healthcare infrastructure by targeting trackers.

Which Apps Are Collecting the Most Data on You?

Fox News reported:

The number of apps that collect detailed personal data might surprise you. That includes some of the top apps on the App Store and Google Play Store. What I love doing most as the CyberGuy is making a difference in informing you about the power you need to protect yourself, especially your privacy.

AtlasVPN has released a new report that lists the shopping apps that collect the most data on you. Topping the chart was eBay. According to AtlasVPN, eBay’s Android app grabs 28 data points. Here’s a look at the top 10:

1. eBay
2. Amazon Shopping
3. Afterpay
4. Lowe’s
5. iHerb
6. Vinted
7. The Home Depot
8. Alibaba
9. Poshmark
10. Nike

All of these apps collect at least 18 data points on you. While some of that information, such as performance data or app activity, helps developers, some apps collect financial and personal data.

Think Tank Tied to Tech Billionaires Played Key Role in Biden’s AI Order

Politico reported:

The RAND Corporation — a prominent international think tank that has recently been tied to a growing influence network backed by tech billionaires — played a key role in drafting President Joe Biden’s new executive order on artificial intelligence, according to an AI researcher with knowledge of the order’s drafting and a recording of an internal RAND meeting obtained by POLITICO.

The provisions advanced by RAND in the October executive order included a sweeping set of reporting requirements placed on the most powerful AI systems, ostensibly designed to lessen the technology’s catastrophic risks. Those requirements hew closely to the policy priorities pursued by Open Philanthropy, a group that pumped over $15 million into RAND this year.

Financed by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, Open Philanthropy is a major funder of causes associated with “effective altruism” — an ideology, made famous by disgraced FTX founder Sam Bankman-Fried, that emphasizes a data-driven approach to philanthropy.

EU Parliament Supports Granting Access to Sensitive Health Data Without Asking Patients

Reclaim the Net reported:

The latest European Parliament (EP) effort, grappling with how to allow access to sensitive medical records without patients’ permission while still maintaining a semblance of caring for privacy, has had an update.

Last week, a plenary vote in this EU institution revealed that while most EP members (MEPs) want to allow that access — and in that manner — they are also opposed to wholesale, mandatory creation of electronic records for every person in the EU. The scheme is known as the European Health Data Space, and it received support from an EP majority.

As privacy and digital security advocate, lawyer and MEP Patrick Breyer notes on his blog, this database would be accessible remotely and would contain health records of every medical treatment.

It was only thanks to an amendment accepted at the last minute (proposed by Breyer, of Germany’s Pirate Party, and several other EP groups that do not form the EP majority) that nation-states will be able to let their citizens object to having their sensitive health data harvested into this interconnected system of medical records.

TikTok’s Chinese Owner Stole ChatGPT Secrets to Make Copycat AI — Report

Newsweek reported:

ByteDance, the Beijing-based parent company behind the short video app TikTok, has been accused of secretly using ChatGPT to develop its own commercial artificial intelligence for the Chinese market.

OpenAI, which owns the chatbot, suspended the Chinese tech giant’s developer account following the alleged violation of its terms of service, The Verge reported on Saturday.

The technology in question is the application programming interface, or API, behind ChatGPT, which enables coders to incorporate the AI into other apps. In its terms of service, OpenAI forbids clients from using the chatbot to “develop models that compete with OpenAI.”

The Verge’s Alex Heath said he obtained access to internal ByteDance documents that showed the company had used ChatGPT at almost every stage of its own chatbot’s development, from training to performance evaluation.
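For readers unfamiliar with what "the API behind ChatGPT" means in practice, the sketch below shows the kind of JSON request a developer's app sends to a chat-completions-style endpoint. The field names follow OpenAI's published request shape, but this is only an illustrative example: it builds and serializes the payload locally and makes no network call, and the prompt and default model name are placeholders.

```python
import json


def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Build the JSON body an app would POST to a chat-completions endpoint.

    This mirrors the documented OpenAI request structure (a model name plus
    a list of role/content messages) but performs no actual API call.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(payload)


# Example: the serialized request an integrating app would send.
body = build_chat_request("Summarize today's privacy headlines.")
print(body)
```

Because every integrated app funnels requests through this one interface, OpenAI can enforce its terms of service, such as the ban on using outputs to "develop models that compete with OpenAI," simply by suspending a developer account, as it reportedly did with ByteDance.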
