February 8, 2024 Big Tech News

Biden Administration Funding AI Tools to Censor Americans’ Speech

The Biden administration is funding research on artificial intelligence (AI) tools to identify and censor “misinformation,” “disinformation” and “malinformation” online, according to a U.S. House of Representatives interim report released Monday.

“The purpose of these taxpayer-funded projects is to develop AI-powered censorship and propaganda tools that can be used by governments and Big Tech to shape public opinion,” the Subcommittee on the Weaponization of the Federal Government said Tuesday in a press release.

The report was followed by a contentious hearing before the subcommittee Tuesday that focused on the use of AI for censorship.

On Monday, Rep. Jim Jordan (R-Ohio), chair of the subcommittee and the House Judiciary Committee, released the “Amazon Files” on X, formerly known as Twitter. The files contain documents highlighting how the Biden administration exerted pressure on Amazon to censor books critical of COVID-19 vaccines.

The subcommittee’s report cites non-public documents and focuses on grants the National Science Foundation (NSF) provided to nonprofit and academic institutions to develop AI tools.

The NSF funded the projects under its “Trust & Authenticity in Communication Systems” track, part of a larger NSF initiative launched in 2021 that aims to identify “misinformation” and develop “education and training materials” for those with “vulnerabilities to disinformation methods.”

According to the report, these tools would allow for faster and more expansive content moderation than can be accomplished by humans. The tools could be made available to social media giants to aid them in efforts to remove non-establishment narratives, such as posts questioning the safety and efficacy of COVID-19 vaccines.

The efforts are part of a reported $38.8 million the NSF spent on “misinformation” efforts during the Biden administration up to November 2022, according to a report published last month by the Foundation for Freedom Online.

The new revelations were the subject of debate during Tuesday’s hearing, which included testimony from former U.S. Ambassador to the Czech Republic Norman Eisen, investigative journalist Lee Fang, President and CEO of the Foundation for Individual Rights and Expression Greg Lukianoff and Daily Caller News Foundation journalist Katelynn Richardson.

In previous hearings, the subcommittee heard testimony on other aspects of what has been described as the “censorship-industrial complex,” including testimony in July 2023 by Children’s Health Defense’s (CHD) chairman on leave, Robert F. Kennedy Jr.

Experts who spoke with The Defender said the latest revelations are a sign the federal government feels it is losing control of the narrative on key issues.

“Social media is one of the best things to ever happen to mankind, because it means now that we are the news,” attorney Greg Glaser said.

“Anyone with a popular social media channel can be as influential as a major news network, and that scares the daylights out of the old guard … Predictably, the censors are trying to scare us too by using AI censorship tools,” Glaser told The Defender.

During last month’s World Economic Forum annual meeting, panelists expressed unease over the legacy media’s loss of control over information and public opinion and over the potential of AI to help spread so-called “misinformation” and “disinformation,” leading to the election of the “wrong leaders.”

Independent journalist Paul D. Thacker, who previously released “Twitter Files” documents revealing government censorship, said, “Documents continue to come out showing that the federal government has an expansive program across multiple agencies to censor Americans and fund academic research in this area.”

“The federal funding for censorship science is most interesting because this research will help the government understand how to censor more effectively while buying off universities so professors won’t criticize censorship policies,” Thacker said.

Mark Crispin Miller, Ph.D., an author and professor of media studies at New York University whose research focuses on propaganda, told The Defender:

“While bellowing non-stop, in perfect unison, that Trump poses a clear and present danger of fascism, the ‘liberals’ backing Biden all throughout the government and media — and Biden himself — have turned a big blind eye to the fascist practices of this administration, since such state and corporate collusion against our free speech rights is a classic sign of fascist rule.”

This week’s revelations will likely have legal implications for multiple parties. On Tuesday, the House Judiciary Committee subpoenaed NSF Director Sethuraman Panchanathan, Ph.D., demanding he personally turn over all internal records regarding the restriction or suppression of online content by Feb. 28 at 9 a.m.

“NSF still has not adequately complied with a request for relevant documents” and is “attempting to stonewall Congressional investigations,” according to the interim report.

On Tuesday, the House Judiciary Committee also sued FBI Agent Elvis Chan, claiming he didn’t comply with a subpoena regarding Biden administration censorship practices.

The ongoing Murthy et al. v. Missouri et al. lawsuit and previous “Twitter Files” releases have implicated Chan in Biden administration efforts to coerce social media platforms to censor content that opposed its positions on COVID-19 and election interference.

Murthy v. Missouri — previously known as Missouri et al. v. Biden et al. — alleges government violations of First Amendment free speech protections and may itself be affected by this week’s revelations, according to STAT News. The U.S. Supreme Court is expected to issue a decision in that case by June.

Kennedy et al. v. Biden et al., filed by Kennedy and CHD, makes similar First Amendment claims. It was consolidated with Missouri et al. v. Biden et al. in July 2023.

‘Egregious’ First Amendment violations

According to the interim report, the Biden administration has pursued “collusion with third-party intermediaries, including universities, non-profits, and businesses, to censor protected speech on social media,” in an attempt to sidestep the First Amendment’s prohibition of government censorship of speech.

The report described this as an “egregious” violation of the First Amendment. But the report also said human censorship of online content was a slow and incomplete process, due to the cost and manpower limitations of using human content moderators. AI tools can significantly speed up and expand such efforts, the report said.

The report asks:

“What happens if the censorship is automated and the censors are machines? There is no need for shifts or huge teams of people to identify and flag problematic online speech. AI-driven tools can monitor online speech at a scale that would far outmatch even the largest team of ‘disinformation’ bureaucrats and researchers. …

“The NSF-funded projects threaten to help create a censorship regime that could significantly impede the fundamental First Amendment rights of millions of Americans, and potentially do so in a manner that is instantaneous and largely invisible to its victims.”

Kim Mack Rosenberg, general counsel for CHD, told The Defender these grants “come dangerously close to crossing the line of the government itself engaging in censorship. That taxpayer dollars are paying for these schemes should give everyone pause.”

The report names the NSF as “a key player in the ‘censorship industrial complex,’” as in recent years, “under the guise of combatting so-called misinformation, NSF has been funding AI-driven tools and other new technologies that can be used to censor or propagandize online speech.”

The NSF, established in 1950, long focused on science and engineering. However, according to the report, the agency’s mission “has shifted over the years to encompass social and behavioral sciences.”

The report specifically highlights the NSF’s Convergence Accelerator Grant Program, launched in 2019, which seeks “to bring together multiple disciplines, ideas, approaches and technologies to solve ‘national-scale societal challenges’ aligned with specific research ‘tracks’ that ‘have the potential for significant national impact.’”

The program has 13 funding tracks, one of which is “Track F,” the “Trust & Authenticity in Communication Systems” track, established in 2021. The NSF granted $21 million in funding to this track — described by the report as “the Censorship Program” — to “address the manipulation or ‘unanticipated negative effects’ of communication systems.”

In an email cited in the report, NSF staffer Michael Pozmantier, Track F’s program manager, described it as the track “focused on combating mis/disinformation.”

According to the report, in March 2021, the NSF issued a funding opportunity for Track F, soliciting proposals for “solutions involving AI-powered tools to help Big Tech combat misinformation and provide ‘education and training materials’ for school children and communities that might ‘exhibit different vulnerabilities to disinformation methods.’”

By September 2022, six applicants had received Track F funding for their projects. The report focused on four of these projects: the University of Michigan’s WiseDex tool, Meedan’s Co-Insights tool, the University of Wisconsin-Madison’s CourseCorrect tool and the Massachusetts Institute of Technology’s (MIT) Search Lit platform.

According to the report, University of Michigan researchers told the NSF the WiseDex tool sought to “develop processes that would have public legitimacy [and] which social media platforms could use for taking enforcement action against misinformation,” including a service that would tell platforms which content “deserves enforcement” and how “true” any content item is.

The University of Michigan described WiseDex as a tool policymakers at social media platforms could use to “externaliz[e] the difficult responsibility of censorship.” A later presentation described WiseDex as a tool enabling the “scaling-up enforcement of misinformation policies.”

Meedan’s Co-Insights is described in the report as a tool “to counter misinformation online” and “advance the state-of-art in misinformation research” that would leverage Meedan’s “relationships and experience” with platforms like WhatsApp, Telegram and Signal to build approaches that “identify and limit susceptibility to misinformation” and “pseudoscientific information.”

A Meedan pitch to the NSF claimed that it used AI to “monitor 750,000 blogs and media articles daily as well as mine data from the major social media platforms” and that the tool used the “world’s best system for matching social media posts to fact-checks.”

Scott A. Hale, Ph.D., Meedan’s director of research, is quoted in the report as telling NSF in an email that, in his “dream world,” Big Tech would be able to develop “automated detection” tools to automatically censor content — and any similar speech.

CourseCorrect was developed as a tool to “empower efforts by journalists, developers, and citizens to fact-check” what was described as “delegitimizing information” about “election integrity and vaccine integrity” on social media, by allowing “fact-checkers to perform rapid-cycle testing of fact-checking messages and monitor[ing] their real-time performance among online communities at risk of misinformation exposure.”

The report notes that CourseCorrect, which also used AI and machine learning technologies, was “specifically focused on ‘address[ing] two democratic and public health crises facing the U.S.: skepticism regarding the integrity of U.S. elections and hesitancy related to the COVID-19 vaccines.’”

MIT’s Search Lit tool sought to develop “effective interventions” which, according to the report, were intended to “educate Americans — specifically, those that the MIT researchers alleged ‘may be more vulnerable to misinformation campaigns’ — on how to discern fact from fiction online.”

The communities targeted by Search Lit included “conservatives, minorities, and veterans,” as well as rural and Indigenous communities and older adults, who, according to the report, were viewed as being “uniquely incapable of assessing the veracity of content online” and susceptible to “dangerous digital content.”

The Search Lit research team also highlighted Americans who considered the Constitution and the Bible “sacred,” suggesting that such “everyday people” tend to “distrust … journalists and academics” and, accordingly, seek primary sources of information instead of trusting the “professional consensus.”

According to the report, the findings “demonstrate that … the ‘disinformation’ academics understood their work as part of a partisan project; and … the bureaucrats and so-called ‘experts’ in this space have complete disdain for most of the American population.”

One disinformation researcher cited in the report, Renee DiResta of the Stanford Internet Observatory — also implicated in the “Twitter Files” — acknowledged the constitutional murkiness of such practices, saying at a 2021 presentation that there were “[u]nclear legal authorities including very real 1st amendment questions.”

“Examples like these illustrate the tremendous sway these so-called ‘disinformation’ researchers hold over social media platforms and why the federal government often turns to these unaccountable academics when seeking a proxy for their censorship activities,” the report states.

‘The government is not the arbiter of truth’

Tuesday’s hearing, which built on the contents of the interim report, quickly turned contentious.

“They’re going to put your tax dollars into developing software to censor your speech,” Jordan said. “AI, which can censor in real time and at scale, should scare us all.”

But according to the Daily Caller, ranking member Rep. Stacey Plaskett (D-V.I.) “spent nearly seven minutes … to warn about the ‘dictatorial’ threat supposedly posed by former President Donald Trump.” Such remarks were echoed by other Democratic members of the subcommittee, and by Eisen, the sole witness for the Democrats.

Some of the interim report’s findings were released by the Daily Caller News Foundation in February 2023. In her testimony Tuesday, Richardson said her investigation revealed “a multi-million dollar effort to build … a Censorship Industrial Complex, using taxpayer dollars as seed funding for various projects.”

“The effort fits within the broader trend of the federal government’s increasing involvement in online censorship, from the Centers for Disease Control flagging posts during COVID-19 to the FBI working with social media companies to suppress the Hunter Biden laptop story,” she said.

Richardson also said:

“The government is not the arbiter of truth. Our founders understood this, which is why we have a First Amendment. They understood the danger of the government telling people what they should believe and targeting opinions that cut against the official narrative. Pursuing information control by funding outside organizations is no less a threat to free speech and freedom of the press than a tyrannical government.”

Fang, in his testimony, said his investigative reports have “shed light on private entities’ attempts to control and curtail public discourse on major areas of public policy.”

These efforts included the Public Good Projects — funded by “biopharma lobbyists that represent Moderna and Pfizer” — who “collaborated with Twitter during the pandemic to censor specific social media accounts because of their criticism of establishment views around COVID-19 vaccines while amplifying accounts supportive of vaccines and government viewpoints,” Fang said.

Fang, who previously released “Twitter Files” documents, said the Public Good Projects are continuing “efforts to influence vaccine discourse,” working with Moderna and AI firm Talkwalker “to monitor vaccine-related conversations across 150 million websites.”

He also highlighted Logically, a British AI firm that received contracts from the U.K. government “to combat misinformation about the COVID-19 pandemic” and partnered with Meta “to automatically suppress and label content they deemed as misinformation.”

Noting that censorship “affects dissenting voices of all ideological stripes,” Fang “implore[d] this committee to rise above partisanship and treat the threat posed by online surveillance and censorship as an American issue, affecting all of us equally.”

Michael Rectenwald, Ph.D., author of “Google Archipelago: The Digital Gulag and the Simulation of Freedom,” told The Defender that “In the hands of the state, AI represents a sure path to totalitarianism,” adding that “AI is code primarily designed to target certain viewpoints deemed by its writers to be ‘false’ or ‘harmful.’”

Attorney Richard Jaffe told The Defender “The government shouldn’t be in the business of regulating the content or viewpoint of speech, certainly not about something as fast-changing and uncertain like COVID, [and] shouldn’t be in the business of funding private entities to develop more efficient tools [that are] doing the censoring.”

White House complained about ‘high levels’ of ‘misinformation’ on Amazon

On Monday, Jordan released the “Amazon Files” in a thread on X. According to his revelations, the Biden administration pressured Amazon to censor books critical of COVID-19 vaccine safety and efficacy and the company “bowed down” to this pressure.

The documents, which the House Judiciary Committee obtained via subpoena, included a March 2021 internal email from Amazon questioning whether the Biden administration was “asking us to remove books.”

These requests came from Andy Slavitt, the White House’s former COVID-19 adviser, who is implicated in the “Twitter Files.” In March 2021, he contacted Amazon about “the high levels of propaganda and misinformation and disinformation of Amazon.”

Amazon initially decided not to perform “a manual intervention” to target specific books available on its platform — on the basis that such an action would be “too visible” to the public and because “retailers are different than social media communities.”

By March 9, 2021, though, Amazon representatives met with White House officials, according to Jordan, who cited an internal Amazon email stating that the company was “feeling pressure from the White House.”

On the same day as the meeting, “Amazon enabled ‘Do Not Promote’ for books that expressed the view that vaccines were not effective,” Jordan wrote, adding that the company also “considered other ways ‘to reduce the visibility’” of the books in question.

Rosenberg said the “Amazon Files” are “concerning” and “deeply disturbing, both as a lawyer and as a U.S. citizen.”

The Defender on occasion posts content related to Children’s Health Defense’s nonprofit mission that features Mr. Kennedy’s views on the issues CHD and The Defender regularly cover. In keeping with Federal Election Commission rules, this content does not represent an endorsement of Mr. Kennedy, who is on leave from CHD and is running as an independent for president of the U.S.
