An AI Chatbot May Be Your Next Therapist. Will It Actually Help Your Mental Health?

KFF Health News reported:

In the past few years, 10,000 to 20,000 apps have stampeded into the mental health space, offering to “disrupt” traditional therapy. With the frenzy around AI innovations like ChatGPT, the prospect of chatbots providing mental healthcare is on the horizon.

The numbers explain why: Pandemic stresses led to millions more Americans seeking treatment. At the same time, there has long been a shortage of mental health professionals in the United States; more than half of all counties lack psychiatrists. Given the Affordable Care Act’s mandate that insurers offer parity between mental and physical health coverage, there is a gaping chasm between demand and supply.

Unfortunately, in the mental health space, evidence of effectiveness is lacking. Few of the many apps on the market have independent outcomes research showing they help; most haven’t been scrutinized at all by the FDA. Though marketed to treat conditions such as anxiety, attention-deficit/hyperactivity disorder, and depression, or to predict suicidal tendencies, many warn users (in small print) that they are “not intended to be medical, behavioral health or other healthcare service” or “not an FDA cleared product.”

There are good reasons to be cautious in the face of this marketing juggernaut.

YouTube’s Recommendations Send Violent and Graphic Gun Videos to 9-Year-Olds, Study Finds

Associated Press reported:

When researchers at a nonprofit that studies social media wanted to understand the connection between YouTube videos and gun violence, they set up accounts on the platform that mimicked the behavior of typical boys living in the U.S.

They simulated two nine-year-olds who both liked video games. The accounts were identical, except that one clicked on the videos recommended by YouTube, and the other ignored the platform’s suggestions.

The account that clicked on YouTube’s suggestions was soon flooded with graphic videos about school shootings, tactical gun training videos and how-to instructions on making firearms fully automatic. One video featured an elementary school-age girl wielding a handgun; another showed a shooter using a .50 caliber gun to fire on a dummy head filled with lifelike blood and brains. Many of the videos violate YouTube’s own policies against violent or gory content.

Along with TikTok, the video-sharing platform is one of the most popular sites for children and teens. Both sites have been criticized in the past for hosting, and in some cases promoting, videos that encourage gun violence, eating disorders and self-harm. Critics of social media have also pointed to the links between social media, radicalization and real-world violence.

OpenAI CEO Sam Altman Raises $100 Million for Worldcoin Crypto Project, Which Uses ‘Orb’ to Scan Your Eye: Report

FOXBusiness reported:

OpenAI CEO Sam Altman has reportedly raised nearly $100 million for his next big project, a cryptocurrency called Worldcoin that will verify users’ unique identities by scanning their eyes.

After revolutionizing artificial intelligence with ChatGPT, Altman has set his sights on creating an “inclusive” global cryptocurrency that will be available to anyone who verifies their “unique personhood” with the “Orb,” an imaging device that takes a picture of an iris pattern.

The company says its crypto token will be “globally and freely distributed” to users who sign up for a wallet — a sort of universal basic income with crypto. Worldcoin hopes that giving away free coins will incentivize people to adopt its currency, which, it claims, will in turn make the coins more valuable and useful if they become widely adopted.

Users who submit to biometric iris scans are assigned a “World ID” that enables them to receive 25 free Worldcoin tokens at launch — provided they are located in a place where the Worldcoin token is available.

Musk: There’s a Chance AI ‘Goes Wrong and Destroys Humanity’

The Hill reported:

Tesla CEO Elon Musk is warning that it’s possible emerging artificial intelligence (AI) technology “goes wrong and destroys humanity.”

“There’s a strong probability that it will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity,” Musk told CNBC anchor David Faber.

“Hopefully, that chance is small, but it’s not zero. And so I think we want to take whatever actions we can think of to minimize the probability that AI goes wrong.”

Musk called the tech a “double-edged sword” and stressed it’s hard to predict what happens next with the new tools.

Excelsior Pass Costs Ballooned to $64 Million and Keep Rising

Times Union reported:

They called it the Excelsior Pass. The first-in-the-nation app would provide a “secure and streamlined” way for people to enter live events and restaurants without digging out their vaccine cards. It would be built by IBM, and it would cost $2.5 million.

The state decided early on to outsource the work on the app. While that aspect of the project didn’t change, the stated cost to taxpayers definitely did, and quickly: In June 2021, the New York Times noted that the pass would actually cost $17 million; a follow-up report two months later indicated that price tag had grown to as much as $27 million.

More than two years after Gov. Andrew M. Cuomo’s initial announcement, the payments to private companies for the app have multiplied well beyond that figure, even as the waning of the pandemic means the Excelsior Pass is rarely if ever used — which raises the related question of how many booster shots are needed to be “up to date” according to the app.

The current cost is $64 million, a previously unreported sum that includes funds paid to IBM as well as to two consultants on the project, Boston Consulting Group and Deloitte, according to records obtained by the Times Union.

The state continues to pay IBM $200,000 a month for data storage services related to the Excelsior Pass. In addition, in March the state spent $2.2 million for “application development” of the Excelsior Pass.

This Lawmaker Stands Out for His AI Expertise. Can He Help Congress?

The Washington Post reported:

Rep. Jay Obernolte has said it many times: The biggest risk posed by artificial intelligence is not “an army of evil robots with red laser eyes rising to take over the world.”

Instead, it’s the “less obvious” and more mundane issues, such as data privacy, antitrust and AI’s potential to influence human behavior. All these take precedence over the hypothetical notion of AI ending humanity, the California Republican says.

Obernolte would know: He’s one of a handful of lawmakers with a computer science degree — including graduate research on AI in some of its earliest stages. With the rise of generative AI applications like ChatGPT — what some observers have dubbed a “big bang” moment — Obernolte has emerged as a leading expert in Congress on how the technology works and what lawmakers should worry about.

More broadly, he is blunt about the paucity of tech-savvy lawmakers. “We need more computer science professionals in Congress given the complexity of some of the technological issues we grapple with,” he told The Post.

Police to Use Live Facial Recognition in Cardiff During Beyoncé Concert

The Guardian reported:

Police will use live facial recognition technology in Cardiff during the Beyoncé concert on Wednesday, despite concerns about racial bias and human rights.

A spokesperson for the force said the technology would be used in the city center, not at the concert itself. In the past, police use of live facial recognition (LFR) in England and Wales had been limited to special operations such as football matches or the coronation, when there was a crackdown on protesters.

Daragh Murray, a senior lecturer in law at Queen Mary University of London, said the normalization of invasive surveillance capability at events such as a concert was concerning and was taking place without any real public debate.

“I think things like live facial recognition are the first step, but I think they’re opening the doors to the use of permanent facial recognition across city-wide surveillance camera networks,” he said.