The Defender: Children’s Health Defense News and Views


December 4, 2023

Big Brother NewsWatch

‘Medical Freedom’ Activists Take Aim at New Target: Childhood Vaccine Mandates + More

The Defender’s Big Brother NewsWatch brings you the latest headlines related to governments’ abuse of power, including attacks on democracy, civil liberties and use of mass surveillance. The views expressed in the excerpts from other news sources do not necessarily reflect the views of The Defender.

‘Medical Freedom’ Activists Take Aim at New Target: Childhood Vaccine Mandates

The New York Times via Yahoo!News reported:

For more than 40 years, Mississippi had one of the strictest school vaccination requirements in the nation, and its high childhood immunization rates have been a source of pride. But in July, after a federal judge sided with a “medical freedom” group, the state began excusing children from vaccination if their parents cited religious objections.

Mississippi is not an isolated case. Buoyed by their success at overturning coronavirus mandates, medical and religious freedom groups are taking aim at a new target: childhood school vaccine mandates, long considered the foundation of the nation’s defense against infectious disease.

Until the ruling, Mississippi was one of only six states that refused to excuse students from vaccination for religious or philosophical reasons. Similar legal challenges have been filed in the five remaining states: California, Connecticut, Maine, New York and West Virginia. The ultimate goal, according to advocates behind the lawsuits, is to undo vaccine mandates entirely by getting the issue before a Supreme Court that is increasingly sympathetic to religious freedom arguments.

The legal push comes as childhood vaccine exemptions have reached a new high in the United States, according to a report released last month by the Centers for Disease Control and Prevention. Three percent of children who entered kindergarten last year received an exemption, the CDC said, up from 1.6% in the 2011-12 school year.

The Internet Enabled Mass Surveillance. A.I. Will Enable Mass Spying.

Slate reported:

Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.

Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read.

That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we’re doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has become the business model of the internet, and there’s no reasonable way for us to opt out of it.

Knowing that they are under constant surveillance changes how people behave. They conform. They self-censor, with the chilling effects that brings. Surveillance facilitates social control, and spying will only make this worse. Governments around the world already use mass surveillance; they will engage in mass spying as well.

Key Congress Staffers in AI Debate Are Funded by Tech Giants Like Google and Microsoft

Politico reported:

Top tech companies with major stakes in artificial intelligence are channeling money through a venerable science nonprofit to help fund fellows working on AI policy in key Senate offices, adding to the roster of government staffers across Washington whose salaries are being paid by tech billionaires and others with direct interests in AI regulation.

The new “rapid response cohort” of congressional AI fellows is run by the American Association for the Advancement of Science, a Washington-based nonprofit, with substantial support from Microsoft, OpenAI, Google, IBM and Nvidia, according to the AAAS. It comes on top of the network of AI fellows funded by Open Philanthropy, a group financed by billionaire Facebook co-founder Dustin Moskovitz.

Alongside the Open Philanthropy fellows — and hundreds of outside-funded fellows throughout the government, including many with links to the tech industry — the six AI staffers in the industry-funded rapid response cohort are helping shape how key players in Congress approach the debate over when and how to regulate AI, at a time when many Americans are deeply skeptical of the industry.

The apparent conflict of tech-funded figures working inside the Capitol Hill offices at the forefront of AI policy worries some tech experts, who fear Congress could be distracted from rules that would protect the public from biased, discriminatory or inaccurate AI systems.

How Meta Can — or Can Be Forced to — Avoid Addicting Kids

The Washington Post reported:

When 41 state attorneys general and the District of Columbia sued Meta, Facebook’s parent company, more than a month ago, their complaint was redacted to the point of illegibility. Now, an unredacted version has emerged, and it’s well worth the read.

The state AGs claim, essentially, that Meta is exploiting its younger users for profit, privately prioritizing growth over teens’ well-being, even as it claims publicly that safety is paramount: “The lifetime value of a 13 y/o teen is roughly $270,” one internal company email counsels. This mindset, the AGs say, informed the very design of the company’s products.

And it has led executives, counter to internal research, to back off proposals that would improve those products by discouraging “problematic use” — a jargony way of saying addiction. (Meta has disputed that characterization.)

Obviously, Meta is a business, and moneymaking is what businesses do. Current law does little to restrain social media services from luring users down the rabbit hole, and, for the most part, that’s how it should be. Yet there is leeway for the government to place restrictions on products that harm children’s health. The type of problematic use the complaint describes, hours spent scrolling, is precisely the kind that research shows damages minds not yet fully formed.

Houston Methodist Hospital Lifts Employee COVID Vaccine Mandate in Light of New State Law

The Texan reported:

Houston Methodist Hospital has lifted its COVID-19 vaccine mandate for hospital employees in light of a new state law, recently passed by the Texas Legislature, prohibiting such policies. The mandate, implemented in 2021 by the hospital, which cited studies claiming the vaccine was “as much as 95% protective against the virus,” cost over 150 healthcare workers their jobs.

Those claims are now the subject of a lawsuit brought by the Office of the Texas Attorney General (OAG) against pharmaceutical giant Pfizer, alleging the drug maker violated the Deceptive Trade Practices Act by claiming their COVID-19 vaccine had “95% efficacy” against the virus when they knew it did not and withheld information from the public that undermined the claims.

A lawsuit brought by hospital employees in 2021 against Houston Methodist over the mandate was rejected by a federal judge, who wrote that the mandate violated no laws and allowed it to remain in effect.

Houston Methodist CEO David Bernard reportedly told the hospital’s employees they were replaceable, writing, “100% vaccination is more important than your freedom. Every one of you is replaceable. If you do not like what you’re doing you can leave, and we will replace your spot.”

Misinformation Expert Says She Was Fired by Harvard Under Meta Pressure

The Guardian reported:

One of the world’s leading experts on misinformation says she was fired by Harvard University for criticizing Meta at a time when Mark Zuckerberg’s charity was pledging $500 million to the school.

Joan Donovan says her funding was cut off, she could not hire assistants and she was made the target of a smear campaign by Harvard employees. In a legal filing with the U.S. Education Department and the Massachusetts attorney general first published by the Washington Post, she said her right to free speech had been abrogated.

The controversial claims stem in part from Donovan’s publication of the Facebook papers, a bombshell leak of 22,000 pages of Facebook’s internal documents by the whistleblower Frances Haugen, who used to work at the company. Donovan, believing them to be of huge public interest, began publishing them on Harvard’s website for anyone to access. “From that very day forward, I was treated differently by the university to the point where I lost my job,” Donovan told The Logic.

Donovan claims that Zuckerberg and his wife, Priscilla Chan, both Harvard alumni, have given the university hundreds of millions of dollars, including a promised $500 million to the school’s Kempner Institute for the Study of Natural and Artificial Intelligence.

Medical AI Tools Can Make Dangerous Mistakes. Can the Government Help Prevent Them?

The Wall Street Journal reported:

Doctors have started using artificial intelligence in novel ways to communicate with patients and help make diagnoses. Now the government is wrestling with how to ensure the tools do not cause harm.

Federal regulators are proposing a new labeling system for AI healthcare apps designed to make it easier for clinicians to spot the pitfalls and shortcomings of these tools. The Biden administration has proposed that these apps come with a “nutrition label” that discloses how the app was trained, how it performs, how it should be used and how it shouldn’t.

The labeling rule, which could be finalized before year’s end, represents one of Washington’s first tangible attempts to impose new safety requirements on artificial intelligence. Healthcare and technology companies are pushing back on it, saying the rule could compromise proprietary information and hurt competition, in a sign of how difficult it is for the government to police rapidly evolving AI systems.

23andMe Hackers Accessed a Whole Lot of Users’ Personal Data

TechRadar reported:

Biogenetics company 23andMe has submitted a new filing with the U.S. Securities and Exchange Commission (SEC) detailing the data breach it suffered in early October 2023.

In the filing, the company said that the threat actors accessed data on 0.1% of its customer base — roughly 14,000 individuals, based on the company’s claim in a recent annual earnings report to have “more than 14 million customers worldwide.”

But it gets a bit more complicated than that. 23andMe is a genetic testing and ancestry company, and users sometimes share this data with other accounts via the “DNA Relatives” feature. Consequently, the attackers accessed “a significant number of files containing profile information about other users’ ancestry that such users chose to share.”

Europe’s World-Leading Artificial Intelligence Rules Are Facing a Do-or-Die Moment

Associated Press reported:

Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative AI that produces human-like work.

First suggested in 2019, the EU’s AI Act was expected to be the world’s first comprehensive AI regulations, further cementing the 27-nation bloc’s position as a global trendsetter when it comes to reining in the tech industry.

But the process has been bogged down by a last-minute battle over how to govern systems that underpin general-purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot. Big tech companies are lobbying against what they see as overregulation that stifles innovation, while European lawmakers want added safeguards for the cutting-edge AI systems those companies are developing.

Meanwhile, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have joined the race to draw up guardrails for the rapidly developing technology, underscored by warnings from researchers and rights groups of the existential dangers that generative AI poses to humanity as well as the risks to everyday life.
