
A team of researchers has developed a “powerful new tool in artificial intelligence” (AI) that can predict whether someone is likely to get a COVID-19 vaccine.

According to the University of Cincinnati, the new AI tool “uses a small set of data from demographics and personal judgments such as aversion to risk or loss” to identify “reward and aversion judgment” patterns in humans that may help explain one’s willingness to get vaccinated.

The researchers presented their findings in a study published Tuesday in the Journal of Medical Internet Research Public Health and Surveillance.

The study’s findings “could have broad applications for predicting mental health and result in more effective public health campaigns,” the university said.

According to the study, “Despite COVID-19 vaccine mandates, many chose to forgo vaccination, raising questions about the psychology underlying how judgment affects these choices.”

The researchers claim their findings “demonstrate the underlying importance of judgment variables for vaccine choice and uptake, suggesting that vaccine education and messaging might target varying judgment profiles to improve uptake.”

But critics like Brian Hooker, Ph.D., chief scientific officer for Children’s Health Defense, said that the new technology implies that those who question vaccines have mental health problems:

“The whole implication here is that nonconformity to the government propaganda machine’s standard of care makes one some type of mental case or extreme outlier. The whole thing smacks of a Brave New World where potentially non-compliant individuals are targeted with messaging based on fear and irrationality.”

Hooker said the new technology “is a prefabricated substitute to what Big Pharma and government health agencies avoid: rational discussions of science and medicine that might expose the truth about vaccine adverse events.”

Using AI to target the ‘vaccine-hesitant’?

Nicole Vike, Ph.D., senior research associate at the University of Cincinnati’s College of Engineering and Applied Science, was the paper’s lead author.

“COVID-19 is unlikely to be the last pandemic we see in the next decades,” Vike said. “Having a new form of AI for prediction in public health provides a valuable tool that could help prepare hospitals for predicting vaccination rates and consequential infection rates.”

The study’s authors said the technology also could be used to “aid vaccine rollouts and health care preparedness by providing location-specific details” — in other words, identifying geographic areas that may experience low vaccination and high hospitalization rates, according to the study.

Critics questioned the study’s claims and also said they were worried about the potential adverse uses of this technology.

“The main problem with research like this is the underlying premise: Vaccine hesitancy must be accounted for in terms of the (aberrant) psychology of the subjects and not with reference to the efficacy and safety of the vaccine(s) in question,” said Michael Rectenwald, Ph.D., author of “Google Archipelago: The Digital Gulag and the Simulation of Freedom.”

As a result, Rectenwald said, it’s implied that “if people are vaccine-hesitant, the fault is endemic to them rather than to the vaccine itself. From this premise, the research seeks to justify vaccination as normal by linking anomalous mental and psychological characteristics with vaccine hesitancy.”

This may lead to individuals being targeted, Rectenwald said:

“Using AI to predict vaccine hesitancy on these terms might include mobilizing AI programs to target and even identify individually vaccine-hesitant subjects. We might also expect AI programs that seek to overcome vaccine hesitancy with attempts to ‘reprogram’ said defective subjects.

“At the very least, identifying, targeting and re-educating vaccine hesitant subjects is in the offing.”

Scott C. Tips, president of the National Health Federation, said the new technology poses privacy concerns.

Tips said:

“It is nobody’s business but that of the individual as to whether he or she wants to be vaccinated. Why does anyone need to predict health decisions? ‘Predictive’ AI on this issue is nothing but a solution looking for a problem. There is no problem here. In fact, we should be glad that there are people who do not want to be vaccinated.”

Dr. Kat Lindley, president of the Global Health Project and director of the Global COVID Summit, agreed: “There are many reasons why someone may be vaccine-hesitant, and relying on a program, no matter how intelligent, to predict the outcome, I fear will underestimate the human element and individual experiences.”

Critics also question claims about the technology’s effectiveness. “AI is only as good as the programmer and the parameters it was given, which also includes the biases with which it was created,” Lindley said.

Tim Hinchliffe, editor of The Sociable, said, “We’ve seen how ChatGPT spits out nonsense and we’ve seen the diversity disaster that was Google Gemini, so it’d be best to approach the results with caution. And when there is AI-human teaming, the results can still be biased.”

“‘Garbage in, garbage out’ applies equally to AI-driven decisions and results every bit as much as it applies to any other decisions or results made by humans and ‘dumb’ computers,” Tips said. “If the AI is searching through mainstream-only files and data for its answers, then it will come up with incorrect and biased results.”

‘Who will be the next targets of this attitude-predicting apparatus?’

Other experts suggested governments could abuse the technology and weaponize it against the public.

“It’s indicative of the state of medicine and the priorities of our federal government to see more research being done on how to increase uptake of whatever product they’re defining as a vaccine, than to do the safety studies the public has been crying out for,” said Valerie Borek, associate director and lead policy analyst for Stand For Health Freedom.

“This study fits the decades-long approach to using psychology and our subconscious to push products and agendas,” she said. “There is already technology that can assess biometric data such as heart rate, temperature and eye movements, combined with audio and location information.”

Citing an example, Borek said the Centers for Disease Control and Prevention “already has a record of using cellphone data for public health surveillance.”

Borek added:

“The government has too much data to comb through, so the use of AI is inevitable for public health surveillance. How long before the devices we voluntarily wear and carry are used for AI predictions of our health choices?

“Will those predictions lead to any governmental interventions? We need to ask these questions of our lawmakers and do what we can to minimize our digital footprint.”

According to Hinchliffe:

“If ‘AI can predict people’s attitudes,’ then predicting so-called vaccine hesitancy would be just the start. What comes next? Predicting who is a climate denier? What about predicting people’s attitudes toward presidential candidates and who they’ll likely vote for? Who needs elections when the AI already knows who will win?

“What happens when the re-education and propaganda schemes don’t work? Will data on predicting people’s attitudes go to governments so they can crack down on dissidents? Who will be the next targets of this attitude-predicting apparatus? My guess would be people who are ‘hesitant’ about the climate change narrative.”

Study claims AI can ‘make accurate predictions about human attitudes’

According to the University of Cincinnati’s announcement, the development of the new AI tool was based on a survey conducted in the U.S. in 2021, involving a representative sample of 3,476 adults. Respondents “provided information such as where they live, income, highest education level completed, ethnicity and access to the internet.”

Participants were asked if they had received a COVID-19 vaccine, with approximately 73% of respondents reporting they were vaccinated, “slightly more than the 70% of the nation’s population that had been vaccinated in 2021,” according to the study.

They were then “asked to rate how much they liked or disliked a randomly sequenced set of 48 pictures on a seven-point scale of 3 to -3,” to quantify “mathematical features of people’s judgments as they observe mildly emotional stimuli.”

“The judgment variables and demographics were compared between respondents who were vaccinated and those who were not. Three machine learning approaches were used to test how well the respondents’ judgment, demographics and attitudes toward COVID-19 precautions predicted whether they would get the vaccine,” the announcement states.

According to the study, “a small set of demographic variables and 15 judgment variables” were identified, which “predict vaccine uptake with moderate to high accuracy and high precision.”
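To give a sense of what this kind of setup looks like in practice, here is a minimal, purely illustrative sketch. It is not the study’s actual code or data: the feature counts are taken loosely from the article’s description, the data is randomly generated, and scikit-learn’s logistic regression stands in for the three unnamed machine learning approaches.

```python
# Illustrative sketch only -- synthetic data, assumed model choice.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3476  # sample size reported in the announcement

# Synthetic stand-ins: a few demographic variables plus 15 "judgment" variables
demographics = rng.normal(size=(n, 5))
judgments = rng.normal(size=(n, 15))
X = np.hstack([demographics, judgments])

# Synthetic vaccination label, loosely tied to the features so there is signal
y = (X @ rng.normal(size=X.shape[1]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Fit a simple classifier and report the two metrics the study emphasizes
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.2f}")
print(f"precision: {precision_score(y_test, pred):.2f}")
```

On synthetic data with built-in signal, a model like this scores well by construction; whether real survey responses carry comparable signal is precisely what the study claims and critics dispute.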

The announcement says these findings show “that artificial intelligence can make accurate predictions about human attitudes with surprisingly little data or reliance on expensive and time-consuming clinical assessments.”

The same announcement quoted Aggelos Katsaggelos, Ph.D., endowed professor of electrical engineering and computer science at Northwestern University, who claimed “The study is anti-big-data” because the new technology “can work very simply” and without the need for “super-computation.”

“It’s inexpensive and can be applied with anyone who has a smartphone. We refer to it as computational cognition AI. It is likely you will be seeing other applications regarding alterations in judgment in the very near future,” Katsaggelos said.

Lindley disagreed. She told The Defender, “Calling this anti-big-data is an oxymoron, because to be able to claim a high level of accuracy, the program would have to encompass a high level of understanding of the hesitancy itself.”

“The problem with this AI initiative is the population-wide approach, which disregards any individual concerns and experiences,” Lindley said. “If I have learned anything practicing medicine these past 20 years, it is that the human element matters and it’s unpredictable by its nature.”

‘Tip of the iceberg’: AI may also be used for rapid development of vaccines

Other AI-related technologies in the healthcare realm have recently been introduced.

At the annual meeting of the World Economic Forum (WEF) in January, Pfizer CEO Albert Bourla praised the role of AI in the development of Paxlovid, a prescription oral medication marketed as a treatment for COVID-19.

“It was developed in four months,” Bourla said, whereas development of such a drug “usually takes four years.” He said AI helped significantly reduce the amount of time needed for the “drug discovery” process, where you “really synthesize millions of molecules and then you try to discover within them, which one works.”

Bourla credited this breakthrough with saving “millions of lives” and predicted more such developments in the future. “Our job is to make breakthroughs that change patients’ lives,” Bourla said. “With AI, I can do it faster and I can do it better.”

“I truly believe that we are about to enter a scientific renaissance in life sciences because of this coexistence of advancements in technology and biology,” Bourla added. “AI is a very powerful tool. In the hands of bad people [it] can do bad things for the world, but in the hands of good people [it] can do great things for the world.”

During the same WEF panel discussion, Jeremy Hunt, the United Kingdom’s chancellor of the Exchequer, said AI could lead to the rapid development and deployment of vaccines.

“When we have the next pandemic, we don’t want to have to wait a year before we get the vaccine,” he said. “If AI can shrink the time it takes to get that vaccine to a month, then that is a massive step forward for humanity.”

A WEF project, first announced in 2019, is funding research on the use of “synthetic” AI-generated “patients” and “synthetic” clinical trial data.

Concerns over AI’s predictive ability have led to some action from lawmakers around the world. On March 13, the European Parliament passed the Artificial Intelligence Act, which contains several restrictions and prohibitions on the use of AI in various contexts.

According to Greece’s Business Daily, “Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities” are forbidden by this legislation.

Yet, for Hinchliffe, “Using AI to predict people’s attitudes towards vaccines and vaccine hesitancy is just the tip of the iceberg,” as AI technology can then “be used to predict attitudes on just about anything.”

“If successful, predicting people’s attitudes will lead to predicting their behavior. Predicting their behavior means knowing more about them than they know about themselves,” he said. “Once humans are ‘hackable,’ then all bets are off: They can be manipulated and controlled in the most nefarious of ways.”