
October 17, 2023

Big Brother News Watch

AI Could Usher in a New Age of Bioweapons, RAND Report Warns + More

The Defender’s Big Brother NewsWatch brings you the latest headlines related to governments’ abuse of power, including attacks on democracy, civil liberties and use of mass surveillance. The views expressed in the excerpts from other news sources do not necessarily reflect the views of The Defender.


AI Could Usher in a New Age of Bioweapons, RAND Report Warns

Gizmodo reported:

The days of Terminator and The Matrix could be closer to reality as the artificial intelligence wave continues to crash. A U.S. think tank released a report arguing that the AI that guides the likes of ChatGPT and those dystopian influencers from Meta could be used to create a new bioweapon.

The report comes from the RAND Corporation, a California-based research institute and think tank. The report’s authors argue that AI could not necessarily provide instructions for how to create a bioweapon, but that it could bridge the gaps in knowledge that have prevented such weapons from being created successfully in the past.

Further, the report says that because AI is quickly outpacing the slow tread of government oversight, the resulting gap in regulation could become the opening in which a terrorist group strikes with a bioweapon.

The researchers did not disclose which large language models they used in the report. In one test scenario, an LLM reportedly discussed how Yersinia pestis, the bacterium that causes plague, could be obtained and distributed, as well as the variables that could lead to a specific death toll. The AI also covered topics such as budgeting for a bioweapon and identifying potential agents of spread, along with other, vaguer, success factors.

More Than Half of U.S. Teens Use Social Media for Almost Five Hours per Day, Survey Finds

Gizmodo reported:

A new Gallup survey found more than half of teenagers in the U.S. spend an average of 4.8 hours on social media each day. The responses came from 1,591 people ages 13 to 19, and the survey’s findings show that as teens got older, they stayed on social media even longer.

The poll looked at social media usage across YouTube, TikTok, Instagram, Facebook, X — formerly called Twitter — and WhatsApp. Gallup found that out of all platforms, teens spend minimal time on WhatsApp, X, and Facebook in favor of YouTube (1.9 hours), TikTok (1.5 hours), and Instagram (0.9 hours).

Gallup questions whether social media addiction is a contributing factor to the number of hours teens spend online every day, saying, “Studies have pointed out how technology companies manipulate users into spending more time on the apps through their designs.” The report references a 2022 article published in the journal American Economic Review that says 31% of young adults are affected by the way social media companies design their platforms, which reportedly creates “self-control problems” and excessive screen time.

“The overuse of social media can actually rewire a young child or teen’s brain to constantly seek out immediate gratification, leading to obsessive, compulsive, and addictive behaviors,” said Dr. Nancy Deangelis, the director of behavioral health at Jefferson Health.

AI Chatbots Can Guess Your Personal Information From What You Type

Wired reported:

The way you talk can reveal a lot about you — especially if you’re talking to a chatbot. New research reveals that chatbots like ChatGPT can infer a lot of sensitive information about the people they chat with, even if the conversation is utterly mundane.

The phenomenon appears to stem from the way the models’ algorithms are trained with broad swathes of web content, a key part of what makes them work, likely making it hard to prevent. “It’s not even clear how you fix this problem,” says Martin Vechev, a computer science professor at ETH Zurich in Switzerland who led the research. “This is very, very problematic.”

Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users — including their race, location, occupation, and more — from conversations that appear innocuous.

Vechev says that scammers could use chatbots’ ability to infer sensitive information about a person to harvest data from unsuspecting users. He adds that the same underlying capability could portend a new era of advertising, in which companies use information gathered from chatbots to build detailed profiles of users.

Big Tech’s Favorite Legal Shield Takes a Hit

The Hollywood Reporter reported:

A Los Angeles judge has declined to dismiss a series of blockbuster lawsuits against Meta, TikTok, Snap and Google that argue their platforms are intentionally designed to addict teenagers and fuel mental health disorders, increasing the likelihood that the companies will have to face the product liability claims at trial or settle them for billions of dollars.

In the first order advancing litigation raising a novel public nuisance theory from hundreds of government officials and parents of minors, Los Angeles Superior Court Judge Carolyn Kuhl on Friday found that the companies can’t wield Section 230 — Big Tech’s favorite legal shield — to escape some claims in the case. She nodded to “the fact that the design features of the platforms — and not the specific content viewed” by users caused their injuries.

Thousands of plaintiffs across the country have sued social media companies, arguing their platforms are essentially defective products that lead to eating disorders, anxiety and suicide, among other mental health injuries. The lawsuits could lead to multibillion-dollar payouts; similar public nuisance suits brought by government officials against opioid and tobacco manufacturers have resulted in massive settlements.

By steering clear of claims centering on the specific content that the companies host, the plaintiffs are trying to sidestep potential immunity under Section 230, which has historically afforded tech firms significant protection from liability for content published by third parties.

Clearview AI and the End of Privacy, With Author Kashmir Hill

The Verge reported:

Today, I’m talking to Kashmir Hill, a New York Times reporter whose new book, Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It, chronicles the story of Clearview AI, a company that’s built some of the most sophisticated facial recognition and search technology that’s ever existed. As Kashmir reports, you simply plug a photo of someone into Clearview’s app, and it will find every photo of that person that’s ever been posted on the internet. It’s breathtaking and scary.

Kashmir is a terrific reporter. At The Verge, we have been jealous of her work across Forbes, Gizmodo and now the Times for years. She has long been focused on covering privacy on the internet, which she is the first to describe as the dystopia beat, because the amount of tracking that occurs all over our networks every day is almost impossible to fully understand or reckon with.

But people get it when the systems start tracking faces — when that last bit of anonymity goes away. And it’s remarkable that Big Tech companies like Google and Facebook have had the ability to track faces like this for years, but they haven’t really done anything with it. It seems like that’s a line that’s too hard for a lot of people to cross.

But not everyone. Your Face Belongs to Us is the story of Clearview AI, a secretive startup that, until January 2020, was virtually unknown to the public, despite selling this state-of-the-art facial recognition system to cops and corporations.

The Digital Town Square Doesn’t Exist Yet

The Atlantic reported:

Many people have put forth theories about why, exactly, the internet is bad. The arguments go something like this: Social platforms encourage cruelty, snap reactions, and the spreading of disinformation, and they allow for all of this to take place without accountability, instantaneously and at scale.

To understand what we ought to build, we must first consider how social media went sideways. In the early days of Facebook and Twitter, we called them “social networks.” But when you look at how these sites are run now, their primary goal has not been social connection for some time.

Once these platforms introduced advertising, their primary purpose shifted to keeping people engaged with content for as long as possible so they could be served as many ads as possible. Now powerful AI algorithms deliver personally tailored content and ads most likely to keep people consuming and clicking, leading to these platforms becoming highly addictive.

U.K. Lockdowns Were a Policy ‘Failure,’ Health Expert Tells COVID Inquiry

The Guardian reported:

Nationwide lockdowns in the U.K. during the pandemic were a “failure” of public health policy as they were not considered a last resort, an epidemiology expert has said.

Giving evidence at the COVID-19 public inquiry on Monday, Prof. Mark Woolhouse of the University of Edinburgh — a member of the Scientific Pandemic Influenza Group on Modelling, Operational sub-group (SPI-M-O) — said the group failed to adequately assess the negative consequences of a nationwide lockdown.

“The harms of the social distancing measures — particularly lockdown, the economic harms, the educational harms, the harms to access to healthcare, the harms to societal wellbeing … just the way we all function … mental health — were not included in any of the work that SPI-M-O did and, as far as I could tell, no one else was doing it either,” Woolhouse told the inquiry.

“The question of how to avoid lockdown was never asked of us and I find that extraordinary.”

Woolhouse, who specializes in infectious diseases and epidemiology, also criticized the phrase “going early, going hard,” used by the U.K.’s then-chief scientific adviser, Sir Patrick Vallance, in regard to the rapid implementation of a strict lockdown. He claimed that in the circumstances of the coronavirus pandemic the approach would not have been effective, as completely eradicating the virus was not an option at that time.

Canadian Nurse Faces Disciplinary Hearing for Social Media Posts Criticizing COVID Mandates

Reclaim the Net reported:

In a striking case that brings free speech to the forefront, Saskatchewan nurse Leah McInnes has found herself on the brink of a disciplinary hearing. She expressed her reservations about COVID vaccines and mandates on social media platforms, opening up a debate on her right to voice opinions on such critical issues.

Her statements landed her in hot water with the College of Registered Nurses of Saskatchewan (CRNS), which is accusing her of engaging in professional misconduct.

The alleged misconduct, as claimed by CRNS, is rooted in her involvement in protests against vaccine mandates and vaccine passports during the COVID-19 pandemic. McInnes is presently involved in a four-day tribunal hearing in Regina, which started yesterday.

The Justice Centre for Constitutional Freedoms (JCCF), arguing in her defense, has asserted that she has the right to express her views on vaccine mandates, vaccine passports, and related issues like freedom of choice and medical privacy.

China’s Baidu Unveils New Ernie AI Version to Rival GPT-4

Reuters reported:

Chinese technology giant Baidu (9888.HK) on Tuesday unveiled the newest version of its generative artificial intelligence (AI) model, Ernie 4.0, saying its capabilities were on par with those of ChatGPT maker OpenAI’s pioneering GPT-4 model.

CEO Robin Li introduced Ernie 4.0 at an event in Beijing, focusing on what he described as the model’s memory capabilities and showing it writing a martial arts novel in real time. He also showed Ernie 4.0 creating advertising posters and videos. Analysts, however, were unimpressed: Ernie 4.0’s launch lacked major highlights versus the previous version, said Lu Yanxia, an analyst at industry consultancy IDC.

Baidu, the owner of China’s largest internet search engine, is at the forefront of AI models in China amid a global craze over the technology sparked by the introduction of ChatGPT last year.

China now has at least 130 large language models (LLMs), representing 40% of the global total and behind only the United States’ 50%, data from brokerage CLSA showed.
