Your Doctor’s Office Might Be Bugged. Here’s Why

Forbes reported:

It used to be safe to assume your doctor’s visit was a completely private affair between you and your physician. This is changing with ambient artificial intelligence, a new technology that listens to your conversation and processes information. Think Amazon’s Alexa, but in your doctor’s office. An early use case is ambient AI scribing: it listens and then writes a clinical note summarizing your visit. Clinical notes are used to communicate diagnostic and treatment plans within electronic health records, and as a basis to generate your bill.

A recent report in NEJM Catalyst described the deployment of ambient AI in The Permanente Medical Group, Kaiser’s Northern California physician group. Since October 2023, ambient AI scribes have been used by more than 3,400 doctors in more than 300,000 encounters.

In the study, the doctors cited many benefits, including more meaningful interactions and reductions in after-hours note-writing. Patients also reportedly liked it. Some described their physicians as more attentive, possibly because ambient AI spares doctors from writing their notes during the visit, which is distracting.

Okay, your conversation just got recorded. But where does it go? Is it stored somewhere? How is it used beyond writing your note? The AI technology companies need to address these questions and comply with Health Insurance Portability and Accountability Act laws. Additionally, new regulations may be needed as the technology evolves.

Moms’ Group Launches Grassroots Fight Against Social Media ‘Addiction’

The Washington Post reported:

A group of mothers is launching a grass-roots initiative aimed at combating what they call rising “addiction” among kids to social media and other digital tools, bringing a well-connected new entrant into the contentious debate over children’s online safety.

Parental groups have become a major political force in those discussions, making their presence felt in federal talks over potential child protection rules and at the recent blockbuster hearing with Meta’s Mark Zuckerberg and other tech CEOs.

But the group — Mothers Against Media Addiction, or MAMA — is looking to bring more parents into the fold at the local and state level through grass-roots organizing.

The new initiative is financially backed by the Center for Humane Technology (CHT) nonprofit — led by influential and outspoken social media critic Tristan Harris — and is allying with key legislators driving kids’ online safety efforts at the state level.

Harris, a former Google ethicist, has emerged as a key player in federal discussions about social media’s perceived harms after appearing in “The Social Dilemma” film, which accused digital platforms of fueling tech addiction for profit.

House Bill Would Force ByteDance to Divest TikTok or Face Ban

The Hill reported:

A bipartisan House bill unveiled Tuesday would force ByteDance, the China-based parent company of TikTok, to divest the short-form video app or face a ban of the platform in the U.S.

Introduced by Reps. Mike Gallagher (R-Wis.) and Raja Krishnamoorthi (D-Ill.), the top lawmakers on the House Select Committee on the Chinese Communist Party, the bill is the latest effort to ban TikTok over concerns about potential national security threats posed by ByteDance.

The “Protecting Americans From Foreign Adversary Controlled Applications Act” specifically defines ByteDance and TikTok as foreign adversary-controlled applications. The bill also creates a broader framework that would allow the president to designate other foreign adversary-controlled applications.

The bill would give ByteDance more than five months after the law goes into effect to divest TikTok. If the company does not divest the app, it would become illegal to distribute it through an app store or web hosting platform in the U.S., effectively banning it even for current users.

However, there are still political concerns with banning the app given its growing popularity with U.S. users, and practical concerns about the loopholes users could exploit to access TikTok even if it were effectively banned.

White House Lifts COVID Testing Rule for People Around President Biden

U.S. News & World Report reported:

In a move that acknowledges that COVID-19 is no longer the danger it once was, the White House on Monday lifted a COVID testing requirement for anyone who plans to be near President Joe Biden, Vice President Kamala Harris and their spouses.

The change follows the relaxation of COVID isolation policies announced by the U.S. Centers for Disease Control and Prevention last week, the Associated Press reported.

How the Government Used ‘Track F’ to Fund Censorship Tools: Report

The Epoch Times reported:

Officials from the National Science Foundation tried to conceal the spending of millions of taxpayer dollars on research and development for artificial intelligence tools used to censor political speech and influence the outcome of elections, according to a new congressional report.

The report looking into the National Science Foundation (NSF) is the latest addition to a growing body of evidence that critics claim shows federal officials — especially at the FBI and the CIA — are creating a “censorship-industrial complex” to monitor American public expression and suppress speech disfavored by the government.

“In the name of combatting alleged misinformation regarding COVID-19 and the 2020 election, NSF has been issuing multimillion-dollar grants to university and nonprofit research teams,” states the report by the House Judiciary Committee and its Select Subcommittee on the Weaponization of the Federal Government.

Researchers Create AI ‘Worms’ Able to Spread Between Systems — Stealing Private Data as They Go

TechRadar reported:

A team of researchers has created a self-replicating computer worm that wriggles through the web to target Gemini Pro, ChatGPT 4.0, and LLaVA AI-powered apps. The researchers developed the worm to demonstrate the risks and vulnerabilities of AI-enabled applications, particularly how the links between generative AI systems can help to spread malware.

In their report, the researchers — Stav Cohen of the Israel Institute of Technology, Ben Nassi of Cornell Tech and Ron Bitton of Intuit — named the worm ‘Morris II’ after the original worm that wreaked havoc on the internet in 1988.

In testing performed by the researchers, the worm was able to steal social security numbers and credit card details.

The researchers sent their paper to Google and OpenAI to raise awareness about the potential dangers of these worms, and while Google did not comment, an OpenAI spokesperson told Wired that, “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered.”

OpenAI’s Growing List of Legal Headaches

Axios reported:

Elon Musk‘s lawsuit against OpenAI is adding to a large and growing list of legal actions that could impair the company as it seeks to maintain its lead in the fast-changing world of generative AI.

Why it matters: Several of the lawsuits threaten to upend the way the company does business and, even if they aren’t successful, the courtroom battles could distract the company and take energy away from its business efforts.

Driving the news: Elon Musk’s suit, filed Thursday, claims that OpenAI has abandoned its mission by pursuing profit over its stated mission of delivering artificial general intelligence for the benefit of humanity.

While Musk’s suit got all the attention, OpenAI faces other fresh challenges, including a continuing investigation in Italy. The FTC has also opened an inquiry into whether OpenAI’s business arrangements with Microsoft violate antitrust law.

BlackCat Ransomware Gang Shuts Down Servers After Multi-Million Dollar UnitedHealth Payout — but Is This Really the End?

TechRadar reported:

The notorious BlackCat ransomware operator (also known as ALPHV) has apparently shut down its entire infrastructure, including its servers and websites.

The circumstances leading up to the decision are unclear, but some things point to a possible exit scam.

The attack, reported in late February this year, forced some of Change Healthcare’s services offline and even affected local pharmacies. The company merged with Optum two years ago in a $7.8 billion deal. Following the ransomware attack, the affiliate criminals claim, Optum paid $22 million in bitcoin (roughly 350 BTC) for sensitive data not to be released online and for the group to provide the decryption key.

American Express Confirms Customer Details Exposed — Third-Party Data Breach Sees Info Leaked Online

TechRadar reported:

Some American Express card users may have had their sensitive data exposed to hackers, the company has confirmed.

In a breach notification letter sent to affected customers, the credit card giant claimed it wasn’t American Express infrastructure that was breached, but rather systems belonging to a third-party service provider, which works with “numerous merchants.”

The data stolen included customer names, card numbers, and expiry dates. That is more than enough information to perform wire fraud or, at the least, identity theft and impersonation.