
Russell Brand Accuses Government of Bypassing Judicial Process to Censor Him on Social Media

Sky News reported:

Russell Brand has accused the government of trying to “bypass” the judicial system after his YouTube channel was demonetised in the wake of sexual abuse allegations against him.

In a livestream video on Rumble, the comedian also accused the “legacy media” of being in “lockstep” with each other to “support a state agenda” and “stamp on independent media voices.”

It comes after four women made allegations of rape, sexual assault and abuse against the star between 2006 and 2013 as part of an investigation by The Times, The Sunday Times and Channel 4’s Dispatches.

The 48-year-old denies the allegations.

ChatGPT Can Now Talk Back to You With an Eerily Human-Like Voice

Business Insider reported:

OpenAI is introducing a new feature to ChatGPT that could make the artificial intelligence (AI) tool feel even more human: the ability to talk to you.

The AI company announced Monday that, over the next two weeks, paying users of ChatGPT will be able to start interacting with the popular chatbot by voice so they can “engage in a back-and-forth conversation.”

The feature, enabled by a new text-to-speech model, allows users to choose from five different voices — named Juniper, Sky, Cove, Ember and Breeze — developed by work done with professional voice actors, the company said.

In a review of the new feature, the Wall Street Journal’s Joanna Stern described the voices as eerily human. In demos, the voices sound responsive and smooth, unlike the occasionally stilted responses given by smartphone assistants.

OpenAI warned that although the new voice technology, which creates “synthetic voices from just a few seconds of real speech,” offers a new tool for creativity, the feature can present risks such as “the potential for malicious actors to impersonate public figures or commit fraud.”

Supreme Court Considers Limits on White House Contacts With Social Media

Ars Technica reported:

The Supreme Court on Friday extended a stay of a lower-court order that would limit the Biden administration’s contacts with social media firms, giving justices a few more days to consider whether to block the ruling entirely. The court could rule by the middle of this week on the Biden administration motion in a case in which the states of Missouri and Louisiana allege that speech related to COVID-19 and other topics was illegally suppressed at the behest of government officials.

A stay issued Sept. 14 was scheduled to expire on Friday, but Justice Samuel Alito ordered that it be extended until Wednesday, Sept. 27, at 11:59 p.m. ET. Alito is the justice assigned to the 5th Circuit, the circuit in which an appeals court ruled that the White House and U.S. Federal Bureau of Investigation likely violated the First Amendment by coercing social media platforms into moderating content and changing their moderation policies.

While most of the original injunction’s restrictions were eliminated, the Biden administration asked the Supreme Court to block the one surviving prohibition. Under the recently revised injunction, Biden administration officials would be barred from taking any action to directly or indirectly “coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech.”

Cash Will Be No Refuge Under CBDCs

ZeroHedge reported:

The world is headed toward Central Bank Digital Currencies (CBDCs) and everybody knows it, even the people who don’t want them (which at the moment looks to be most people).

But the policy-makers have decreed it be so, and CBDCs provide such a compelling opportunity for surveillance and social control that they are irresistible. That the fiat currency system is in the process of imploding makes it an imperative.

A tweet recently drew my attention to the timeline for Australia going cashless, and while that timeline doesn’t coincide with the launch of an Australian CBDC, the RBA is diligently headed there (as are nearly all central banks globally).

One area of focus in The Bitcoin Capitalist is tracking all the national CBDC deployments and the myriad supranational policies and aspirations that go into them (we call it “Eye on EvilCoin,” for the Mr. Robot fans out there).

With this tweet, I became more intrigued by the call-to-action itself, because I see this a lot: the idea that the way to resist the CBDC is to keep using cash. This is not only wrong-headed, it’s self-defeating.

Facial Recognition Technology Jailed a Man for Days. His Lawsuit Joins Others From Black Plaintiffs

AP News reported:

Randal Quran Reid was driving to his mother’s home the day after Thanksgiving last year when police pulled him over and arrested him on the side of a busy Georgia interstate.

He was wanted for crimes in Louisiana, they told him, before taking him to jail. Reid, who prefers to be identified as Quran, would spend the next several days locked up, trying to figure out how he could be a suspect in a state he says he had never visited.

A lawsuit filed this month blames the misuse of facial recognition technology by a sheriff’s detective in Jefferson Parish, Louisiana, for his ordeal.

“I was confused and I was angry because I didn’t know what was going on,” Quran told The Associated Press. “They couldn’t give me any information outside of, ‘You’ve got to wait for Louisiana to come take you,’ and there was no timeline on that.”

Quran, 29, is among at least five Black plaintiffs who have filed lawsuits against law enforcement in recent years, saying they were misidentified by facial recognition technology and then wrongly arrested. Three of those lawsuits, including one by a woman who was eight months pregnant and accused of a carjacking, are against Detroit police.

Your Boss’s Spyware Could Train AI to Replace You

Wired reported:

You’ve probably heard the story: A young buck comes into a new job full of confidence, and the weathered older worker has to show them the ropes — only to find out they’ll be unemployed once the new employee is up to speed. This has been happening among humans for a long time — but it may soon start happening between humans and artificial intelligence (AI).

Countless headlines over the years have warned that automation isn’t just coming for blue-collar jobs, but that AI would threaten scores of white-collar jobs as well. AI tools are becoming capable of automating tasks and sometimes entire jobs in the corporate world, especially when those jobs are repetitive and rely on processing data. This could affect everyone from workers at banks and insurance companies to paralegals and beyond.

Carl Frey, an economist at Oxford University, coauthored a landmark study in 2013 that claimed AI could threaten nearly 50% of U.S. jobs in the coming decades. Frey says that he doesn’t think new AI tools like ChatGPT are going to automate jobs in this way because they still require human involvement and are often unreliable.

Still, many of the underlying factors that were outlined in that paper remain pertinent today. Considering the rapid pace at which AI is advancing, it’s hard to predict how it could soon be utilized and what it will be capable of.

Experts Disagree Over Threat Posed but Artificial Intelligence Cannot Be Ignored

The Guardian reported:

For some AI experts, a watershed moment in artificial intelligence (AI) development is not far away. And the global artificial intelligence safety summit, to be held at Bletchley Park in Buckinghamshire in November, therefore cannot come soon enough.

Ian Hogarth, the chair of the U.K. taskforce charged with scrutinising the safety of cutting-edge AI, raised concerns before he took the job this year about artificial general intelligence (AGI), or “God-like” AI.

Definitions of AGI vary but broadly it refers to an AI system that can perform a task at a human, or above human, level — and could evade our control.

Max Tegmark, the scientist behind a headline-grabbing letter this year calling for a pause in large AI experiments, told the Guardian that tech professionals in California believe AGI is close.

“A lot of people here think that we’re going to get to God-like artificial general intelligence in maybe three years. Some think maybe two years.”

He added: “Some think it’s going to take a longer time and won’t happen until 2030” — which doesn’t seem very far away either.