The Defender Children’s Health Defense News and Views

September 22, 2023

Big Brother NewsWatch

What Big Tech Knows About Your Body + More

The Defender’s Big Brother NewsWatch brings you the latest headlines related to governments’ abuse of power, including attacks on democracy, civil liberties and use of mass surveillance. The views expressed in the excerpts from other news sources do not necessarily reflect the views of The Defender.


What Big Tech Knows About Your Body

The Atlantic reported:

If you were seeking online therapy from 2017 to 2021 — and a lot of people were — chances are good that you found your way to BetterHelp, which today describes itself as the world’s largest online therapy purveyor, with more than 2 million users. Once you were there, after a few clicks, you would have completed a form — an intake questionnaire, not unlike the paper one you’d fill out at any therapist’s office: Are you new to therapy? Are you taking any medications? Having problems with intimacy? Experiencing overwhelming sadness? Thinking of hurting yourself? BetterHelp would have asked you if you were religious, if you were LGBTQ, if you were a teenager. These questions were just meant to match you with the best counselor for your needs, small text would have assured you. Your information would remain private.

Except BetterHelp isn’t exactly a therapist’s office, and your information may not have been completely private. In fact, according to a complaint brought by federal regulators, for years, BetterHelp was sharing user data — including email addresses, IP addresses, and questionnaire answers — with third parties, including Facebook and Snapchat, for the purposes of targeting ads for its services.

It was also, according to the Federal Trade Commission, poorly regulating what those third parties did with users’ data once they got them. In July, the company finalized a settlement with the FTC and agreed to refund $7.8 million to consumers whose privacy, regulators claimed, had been compromised. (In a statement, BetterHelp admitted no wrongdoing and described the alleged sharing of user information as an “industry-standard practice.”)

All of this information is valuable to advertisers and to the tech companies that sell ad space and targeting to them. It’s valuable precisely because it’s intimate: More than perhaps anything else, our health guides our behavior. And the more these companies know, the more easily they can influence us. Over the past year or so, reporting has found evidence of a Meta tracking tool collecting patient information from hospital websites, and apps from Drugs.com and WebMD sharing search terms such as herpes and depression, plus identifying information about users, with advertisers.

Amazon’s Generative-AI-Powered Alexa Is as Big a Privacy Red Flag as Old Alexa

Ars Technica reported:

Amazon is trying to make Alexa simpler and more intuitive for users through the use of a new large language model (LLM). During its annual hardware event on Wednesday, Amazon demoed the generative AI-powered Alexa that users can soon preview on Echo devices. But in all its talk of new features and a generative-AI-fueled future, Amazon barely acknowledged the longstanding elephant in the room: privacy.

Amazon’s devices event featured a new Echo Show 8, updated Ring devices, and new Fire TV sticks. But most interesting was a look at how the company is trying to navigate generative AI hype and the uncertainty around the future of voice assistants. Amazon said users will be able to start previewing Alexa’s new features via any Echo device, including the original, in a few weeks.

One development with an immediately noticeable impact is Alexa learning to listen without the user needing to say “Alexa” first. A device will be able to use its camera, a user’s pre-created visual ID, and a previous setup with Alexa to determine when someone is speaking to it.

All this points to an Alexa that listens and watches with more intent than ever. But Amazon’s presentation didn’t detail any new privacy or security capabilities to make sure this new power isn’t used maliciously or in a way that users don’t agree with.

COVID Surge Shouldn’t Close Schools, Says Biden Education Secretary: ‘I Worry About Government Overreach’

The Hill reported:

Education Secretary Miguel Cardona said schools should not be shutting down due to surges in COVID-19 and expressed worry about government overreach.

“I worry about government overreach, sending down edicts that will lead to school closures because either folks are afraid to go in or are infected and can’t go,” Cardona told The Associated Press in an interview.

Despite the new wave of COVID-19 cases, “schools should be open, period,” Cardona said, according to the AP.

Cardona told the AP that in-person instruction “should not be sacrificed for ideology” and that school closures harmed community relationships.

AI Might Be Listening During Your Next Health Appointment

Axios reported:

Your doctor or therapist might not be the only one listening in during your next visit. Artificial intelligence may be tuning in as well.

Why it matters: Healthcare is racing to incorporate generative AI and natural language processing to help wrangle patient information, provide reliable care summaries and flag health risks. But the efforts come with quality and privacy concerns that people developing these tools acknowledge.

Driving the news: On Thursday, digital health company Hint Health announced a product in collaboration with OpenAI that will allow doctors to record an appointment, automatically transcribe the notes from it and generate a summary that can be embedded directly in the patient’s medical record.

Between the lines: It’s also among a growing number of AI applications interacting directly with patients.

What to watch: The use of AI in patient encounters raises a number of privacy concerns, as well as worries about the accuracy of the data and potential biases.

Americans Deeply Dissatisfied With Government and Both Parties: Study

Newsweek reported:

Public trust in the federal government has reached a historic low, and most Americans say they’re deeply unhappy with both major political parties and their choices in 2024 presidential candidates, according to a new report by the Pew Research Center.

Americans’ approval of Congress, the Supreme Court and other political institutions has been declining for years, driven by a sharp rise in political polarization as Democrats and Republicans increasingly view the opposite party with skepticism or outright disgust.

But the public’s trust in the American political system has hit a low not seen in several decades, the new Pew study found. Just 16% of U.S. adults said they trust the federal government, the lowest trust level in nearly 70 years of polling, according to Pew.

A Flood of New AI Products Just Arrived — Whether We’re Ready or Not

The Washington Post reported:

Big Tech launched multiple new artificial intelligence products this week, capable of reading emails and documents or conversing in a personal way. But even in their public unveilings, these new tools were already making mistakes — inventing information or getting basic facts confused — a sign that the tech giants are rushing out their latest developments before they are fully ready.

Google said its Bard chatbot can summarize files from Gmail and Google Docs, but users showed it falsely making up emails that were never sent. OpenAI heralded its new Dall-E 3 image generator, but people on social media soon pointed out that the images in the official demos missed some requested details. And Amazon announced a new conversational mode for Alexa, but the device repeatedly messed up in a demo for The Washington Post, including recommending a museum in the wrong part of the country.

Spurred by a hypercompetitive race to dominate the revolutionary “generative” AI technology that can write humanlike text and produce realistic-looking images, the tech giants are fast-tracking their products to consumers. Getting more people to use them generates the data needed to make them better, an incentive to push the tools out to as many people as they can. But many experts — and even tech executives themselves — have cautioned against the dangers of releasing largely new and untested technology.

The Great AI ‘Pause’ That Wasn’t

Axios reported:

The organizers of a high-profile open letter last March calling for a “pause” in work on advanced artificial intelligence lost that battle, but they could be winning a longer-term fight to persuade the world to slow AI down.

The big picture: Almost exactly six months after the Future of Life Institute’s letter — signed by Elon Musk, Steve Wozniak and more than 1,000 others — called for a six-month moratorium on advanced AI, the work is still charging ahead. But the ensuing massive debate deepened public unease with the technology.

Between the lines: In recent months, the AI conversation around the world has intensely focused on the social, political and economic risks associated with generative AI, and voters have been vocal in telling pollsters about their AI concerns.

Driving the news: The British government is gathering a who’s who of deep thinkers on AI safety at a global summit Nov. 1-2. The event is “aimed specifically at frontier AI,” U.K. Deputy Prime Minister Oliver Dowden told a conference in Washington on Thursday afternoon.
