
Fertility App Fined $200,000 for Leaking Customers’ Health Data

CNN Business reported:

The company behind a popular fertility app has agreed to pay $200,000 in federal and state fines after authorities alleged that it had shared users’ personal health information for years without their consent, including to Google and to two companies based in China.

The app, known as Premom, will also be banned from sharing personal health information for advertising purposes and must ensure that the data it shared without users’ consent is deleted from third-party systems, according to the Federal Trade Commission, along with the attorneys general of Connecticut, the District of Columbia and Oregon.

The sharing of personal data allegedly affected Premom’s hundreds of thousands of users from at least 2018 until 2020, and violated a federal regulation known as the Health Breach Notification Rule, according to an FTC complaint against Easy Healthcare, Premom’s parent company.

Montana Is First State to Ban TikTok Over National Security Concerns

Ars Technica reported:

Montana became the first state to ban TikTok yesterday. In a press release, the state’s Republican governor, Greg Gianforte, said the move was a necessary step to keep Montanans safe from Chinese Communist Party surveillance. The ban will take effect on January 1, 2024.

Before Gianforte signed Montana Senate Bill 419 into law, critics argued that banning TikTok in the state would likely be both technically and legally unfeasible. Technically, since Montana doesn’t control all Internet access in the state, the ban may be difficult to enforce. And legally, it must hold up to First Amendment scrutiny, because Montanans should have the right to access information and express themselves using whatever communications tool they prefer.

There are also possible complications with the ban because it prevents “mobile application stores from offering TikTok within the state.” Under the law, app stores like Google Play or the Apple App Store could be fined up to $10,000 a day for allowing TikTok downloads in the state. To many critics, that seems like Montana is trying to illegally regulate interstate commerce. And a trade group that Apple and Google help fund recently argued that preventing access to TikTok in a single state would be impossible, The New York Times reported.

Supreme Court Hands Twitter, Google Wins in Internet Liability Cases

The Hill reported:

The Supreme Court on Thursday punted the issue of determining when internet companies are protected under a controversial liability shield, instead resolving the case on other grounds. The justices were considering two lawsuits in which families of terrorist attack victims said Google and Twitter should be held liable for aiding and abetting ISIS, leading to their relatives’ deaths.

Google asserted that Section 230 of the Communications Decency Act, enacted in 1996 to prevent internet companies from being held liable for content posted by third parties, protected the company from all of the claims.

But rather than wading into the weighty Section 230 dispute — which internet companies say allows them to serve users and offers protection from a deluge of litigation — the court Thursday found neither company had any underlying liability to need the protections.

Section 230 protects internet companies, of all sizes, from being held legally responsible for content posted by third parties. The protection has faced criticism from both sides of the aisle, with Democrats largely arguing it allows tech companies to host hate speech and misinformation without consequences and some Republicans alleging it allows tech companies to make content moderation decisions with an anti-conservative bias.

AI Is Getting Better at Reading Our Minds

Mashable reported:

AI is getting way better at deciphering our thoughts, for better or worse. Scientists at the University of Texas published a study in Nature describing how they used functional magnetic resonance imaging (fMRI) and GPT-1, an AI system that preceded ChatGPT, to create a non-invasive mind decoder that can detect brain activity and capture the essence of what someone is thinking.

To train the AI, researchers placed three people in fMRI scanners and played entertaining podcasts for them to listen to, including The New York Times’ Modern Love and The Moth Radio Hour. The scientists used transcripts of the podcasts to track brain activity and figure out which parts of the brain were activated by different words.

The decoder, however, is not fully developed yet. The AI only works if it’s trained on brain-activity data from the person it is used on, which limits its distribution possibilities. There’s also a barrier with the fMRI scanners, which are big and expensive. Plus, scientists found that the decoder can get confused if people decide to ‘lie’ to it by choosing to think about something different from what is required.

These obstacles may be a positive, as the potential to create a machine that can decode people’s thoughts raises serious privacy concerns; there’s currently no way to limit the tech’s use to medicine, and just imagine if the decoder could be used for surveillance or interrogation. So, before AI mind-reading develops further, scientists and policymakers need to seriously consider the ethical implications and enact laws that protect mental privacy, to ensure this kind of tech is only used to benefit humanity.

How Addictive Tech Hacks Your Brain

Gizmodo reported:

Addictive cravings have become an everyday part of our relationship with technology. At the same time, comparing these urges to drug addictions can seem like hyperbole. Addictive drugs, after all, are chemical substances that need to physically enter your body to get you hooked — but you can’t exactly inject an iPhone. So how similar could they be?

More than you might think. From a neuroscientific perspective, it’s not unreasonable to draw parallels between addictive tech and cocaine. That’s not to say that compulsively checking your phone is as harmful as a substance use disorder, but the underlying neural circuitry is essentially the same, and in both situations, the catalyst is — you guessed it — dopamine.

We’ve established that the cycle of cocaine addiction is kicked off by chemically hotwiring our reward system. But your phone and computer (hopefully) aren’t putting any substances into your body. They can’t chemically hotwire anything.

Instead, they hook us by targeting our natural triggers for dopamine release. In contrast to hotwiring a car, this is like setting up misleading road signs, tricking a driver into unwittingly turning in the direction you indicate. Research into how this works on a biological level is still in its infancy, but there are a number of mechanisms that seem plausible.

In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels

The New York Times reported:

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its A.I. crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an A.I. technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the system’s underlying computer code into the wild. Academics, government researchers and others who gave their email address to Meta could download the code once the company had vetted the individual.

Essentially, Meta was giving its A.I. technology away as open-source software — computer code that can be freely copied, modified and reused — providing outsiders with everything they needed to quickly build chatbots of their own.

Its actions contrast with those of Google and OpenAI, the two companies leading the new A.I. arms race. Worried that A.I. tools like chatbots will be used to spread disinformation, hate speech and other toxic content, those companies are becoming increasingly secretive about the methods and software that underpin their A.I. products.

Colorado Senator Proposes Special Regulator for Big Tech and AI

The Hill reported:

Colorado Sen. Michael Bennet (D) will introduce legislation Thursday to establish a new regulator for the tech industry and the development of artificial intelligence (AI), which experts predict will have wide-ranging impacts on society.

Bennet introduced his Digital Platform Commission Act a year ago, but he has updated it to make its coverage of AI even more explicit.

The updated bill requires the commission to establish an age-appropriate design code and age verification standards for AI.

It would establish a Federal Digital Platform Commission to regulate digital platforms consistent with the public interest: to encourage the creation of new online services, provide consumer benefits, prevent harmful concentrations of private power, and protect consumers from deceptive, unfair or abusive practices.

AI Pioneer Yoshua Bengio: Governments Must Move Fast to ‘Protect the Public’

Financial Times reported:

Advanced artificial intelligence systems such as OpenAI’s GPT could destabilize democracy unless governments take quick action and “protect the public”, an AI pioneer has warned.

Yoshua Bengio, who won the Turing Award alongside Geoffrey Hinton and Yann LeCun in 2018, said the recent rush by Big Tech to launch AI products had become “unhealthy,” adding he saw a “danger to political systems, to democracy, to the very nature of truth.”

Bengio is the latest in a growing faction of AI experts ringing alarm bells about the rapid rollout of powerful large language models. His colleague and friend Hinton resigned from Google this month to speak more freely about the risks AI poses to humanity.

In an interview with the Financial Times, Bengio pointed to society’s increasingly indiscriminate access to large language models as a serious concern, noting the lack of scrutiny currently being applied to the technology.

Aviation Advocacy Group Files Class Action Lawsuit Against Feds Over Mandatory COVID Shots

The Epoch Times reported:

Free to Fly Canada, an advocacy group for pilots and aviation employees, has filed a class action lawsuit against the federal government over mandatory workplace vaccination policies.

The organization said in a May 17 news release that it has chosen representative plaintiffs Greg Hill, Brent Warren, and Tanya Lewis, and will argue that the rights of thousands of Canadian aviation employees were violated by Transport Canada’s regulation, Interim Order Respecting Certain Requirements for Civil Aviation Due to COVID-19, No. 43. The legal action names as defendants the federal government and the minister of transportation.

The class action is open to unvaccinated employees who were affected by the Transport Canada Order, whether they were suspended, put on unpaid leave, fired, or coerced into early retirement, said Free to Fly. Interested aviation employees can sign up on the Free to Fly website.

The group said this is the first time a case of this type has been brought in Canadian courts. The class action now has to be certified by a court to proceed, which is standard procedure.