Big Tech companies are pushing back against proposed federal rules to increase oversight of artificial intelligence (AI) tools used in patient care, STAT reported.

The companies, including Google, Amazon and Epic, stand to make windfall profits from the expansion of AI in healthcare by selling insurers and providers a wide variety of tools for everything from automating clinical documentation to assessing insurance claims to providing virtual care.

The tech industry was joined by large healthcare providers, major private insurers and medical software vendors in opposing the Office of the National Coordinator for Health Information Technology’s (ONC) proposal to place restrictions on and expand the transparency of the AI and machine learning (ML) technologies that support decision-making in healthcare — what ONC calls “predictive decision support interventions,” or DSI.

The ONC, which is housed within the U.S. Department of Health and Human Services and oversees electronic health records, in April proposed rules for “Health Data, Technology, and Interoperability.”

The rules are open to public comment through July 20.

The proposed rules, which update regulations implementing the 21st Century Cures Act, seek to support the Biden administration’s mandate to increase public trust in AI by giving clinicians more information in their electronic health record systems about the AI algorithms, so they can better assess the technology’s applicability to a particular patient population or circumstance, the ONC told reporters.

A study published last year in the Journal of Medical Internet Research found that “despite the plethora of claims for the benefits of AI in enhancing clinical outcomes, there is a paucity of robust evidence.”

But that lack of evidence hasn’t stopped tech giants from developing AI tools, or private healthcare companies from increasingly using them to “help make life-altering decisions with little independent oversight,” STAT determined, after reviewing secret corporate documents and hundreds of pages of federal records and court filings.

Proponents tout the expansion of AI into healthcare as an easy way to cut costs and improve care by, for example, cutting down on patient face time with doctors, particularly in primary care.

But critics, such as lawyer and healthcare campaigner Ady Barkan, say there’s growing evidence that AI decision-making in healthcare has harmed patients. “Robots should not be making life-or-death decisions,” he said.

In proposing the new rules, the ONC also cited growing evidence that predictive models can both introduce new risks and amplify existing ones, with negative impacts on patients and communities.

In response, the ONC said it is proposing a new “decision support intervention” criterion that would create new transparency and risk management expectations for AI and ML technology that aids decision-making in healthcare.

The new regulations also would apply to healthcare information technology models that interface with AI.

The key changes proposed by the ONC would require all of these systems to make information about how the algorithms work available to users — in plain language — similar to a “nutrition label.”

The rules would also require new risk management practices to be developed for all of these systems, and information about how risk is managed would have to be made publicly available.

Developers also would be required to provide a mechanism for public feedback.

“If finalized,” according to the National Law Review, the proposed DSI regulations “will significantly impact the development, deployment and use of AI/ML tools in healthcare.”

Tech giants say transparency rules would force them to disclose ‘trade secrets’

Over 200 public comments filed with ONC on the proposal — mostly by the tech and healthcare companies that will be subject to the regulations — “make plain the battle about to unfold as businesses race to deploy increasingly powerful AI tools in safety-critical sectors such as healthcare, where potential profit margins are much wider than the margin for error,” STAT said.

Many of Google’s own scientists, along with top tech billionaires and AI developers, have publicly voiced concerns that AI poses an existential threat to humanity and immediate risks to human health and well-being.

Yet in their letters to the ONC, tech companies, insurers and provider associations expressed “concerns” about the proposed regulations. They said the proposed rules are “overly broad” and that the required transparency would force them to disclose “intellectual property” and “trade secrets.”

In its comments, Google objected to what it called “an overbroad or unnecessary disclosure obligation” that would increase costs for developers and therefore for consumers.

Google also said that including generative AI and large language models such as Bard, alongside what it called “low-risk” AI tools, would place an undue burden on the companies.

But even the World Health Organization (WHO) has said that failure to adequately regulate artificial intelligence large language model (LLM) tools is jeopardizing human well-being, calling out programs such as Bard and ChatGPT specifically.

Amazon also wrote that by applying its rules to a broad class of technology that relies on predictive AI/ML, the ONC was reaching “broadly beyond” what would be appropriate to regulate the industry and mitigate risk.

Epic Systems, the nation’s largest electronic health record vendor, also argued in its comments that the ONC’s proposed regulations applied too broadly, stating that the ONC should not make public disclosure of how AI works a requirement for health IT system certification.

“Considerable imbalance in marketplace transparency — and, ultimately, incentive to innovate — would be created if certified health IT developers have to disclose publicly their intellectual property, while non-certified predictive model developers are not required to make the same disclosure,” the company wrote.

Lack of oversight lets companies manipulate algorithms to boost profits

Critics of the growing, unaccountable and unregulated role of AI/ML in healthcare, including researchers who develop AI as well as healthcare providers, argued the proposed rules don’t go far enough to inform and protect the public.

A letter submitted to the ONC by an interdisciplinary group of 34 scholars at the University of Michigan — with co-signatories from more than 30 other health research institutions across the U.S. and Europe — urged the ONC to apply more stringent regulations.

They argued the public should also be informed of what data go into these models, including which variables the models use to make their predictions and where the data used to train and evaluate the AI/ML systems come from.

Failing to inform the public of this information has led to “recent backlash and harm from the use of predictive (models) in healthcare,” they wrote, such as racial, gender and class bias, and flawed predictions based on poorly selected data that led to worse health outcomes.

But new evidence also shows how the lack of transparency and regulation makes it possible for companies to manipulate algorithms to increase their bottom line, with serious consequences for human health.

A STAT investigation in March found AI is driving skyrocketing levels of medical claim denials in Medicare Advantage. The investigation revealed that insurers were using unregulated predictive algorithms created by naviHealth to determine the point at which they could plausibly cut off payment for older patients’ treatment.

Last week, STAT reported that after UnitedHealth Group acquired naviHealth in 2020, the problem got worse, and even clinicians could not override the algorithm’s care determinations without serious pushback from management.

In its comments to the ONC, UnitedHealthcare also vehemently opposed several aspects of the proposed rules, which would increase regulations on precisely these types of algorithms.

The company said the ONC’s definition of DSI was “vague and overly broad,” and that it would move the meaning of DSI “beyond clinical intended uses” and impose onerous requirements on low-risk tools.

It also opposed the proposed ONC requirements to disclose information about how the algorithms work, citing “risks to the developer” of being required to share sensitive or proprietary information with users.

Some activists have been mobilizing the Stop #DeathByAI campaign to stop abuses by AI in Medicare Advantage, Common Dreams reported.

But that is just the tip of the iceberg in this exploding market, which MarketsandMarkets predicts will be worth over $102 billion by 2028 — up from its current value of $14.6 billion.