The Defender Children’s Health Defense News and Views
April 6, 2023

Big Brother News Watch

Why PepsiCo Is Sweet on Artificial Intelligence + More

The Defender’s Big Brother NewsWatch brings you the latest headlines related to governments’ abuse of power, including attacks on democracy, civil liberties and use of mass surveillance. The views expressed in the excerpts from other news sources do not necessarily reflect the views of The Defender.

Why PepsiCo Is Sweet on Artificial Intelligence

Axios reported:

If your local grocery or corner mart is keeping Diet Pepsi, Gatorade or Fritos in stock, you may be able to thank artificial intelligence.

Driving the news: PepsiCo, the multinational maker of name-brand soda, chips and sports drinks, may not be a technology company, but it has gone all-in on AI in the past few years, spending “hundreds of millions” of dollars to do so, Athina Kanioura, the company’s chief strategy and transformation officer, told Axios.

Why it matters: PepsiCo is one example of a major corporation embracing AI fully in daily processes, as other companies in non-tech industries begin to grapple with advancements like generative AI.

The big picture: PepsiCo, one of the largest food and beverage companies in the world, believes AI can help with improved efficiency, lowered costs and better response to customer demand.

Between the lines: Kanioura said she’s been talking to lawmakers on Capitol Hill interested in AI policy who told her they are “extremely impressed by the level of maturity” of AI deployment at PepsiCo “which they haven’t seen from any other company” beyond tech.

Why We’re Scared of AI and Not Scared Enough of Bio Risks

Vox reported:

When does America underreact, and when does it overreact? The U.S. is still conducting research into making deadlier and more contagious diseases, even while there’s a legitimate concern that work like that may have even caused COVID. And despite the enormous human and economic toll of the coronavirus, Congress has done little to fund the preparedness work that could blunt the effects of the next pandemic.

I’ve been thinking about all this as AI, and the possibility that sufficiently powerful systems will kill us all, suddenly emerged onto center stage. An open letter signed by major figures in machine learning research, as well as by leading tech figures like Elon Musk, called for a six-month pause on building models more powerful than OpenAI’s new GPT-4.

In Time magazine, AI safety absolutist Eliezer Yudkowsky argued the letter didn’t go far enough and that we need a lasting, enforced international moratorium that treats AI as more dangerous than nuclear weapons.

I’ve argued for years that sufficiently powerful AI systems might end civilization as we know it. In a sense, it’s gratifying to see that position given the mainstream hearing and open discussion that I think it deserves. But it’s also mystifying. Research that seeks to make pathogens more powerful might also end civilization as we know it! Yet our response to that possibility has largely been a big collective shrug.

Arkansas House Passes Bill Requiring Social Media Platforms to Verify Users’ Ages and Seek Parental Consent for Minors

CNN Business reported:

The Arkansas House of Representatives passed a bill on Wednesday that would require social media companies to verify their users’ ages and confirm that minors have permission from a parent or guardian before opening an account.

The bill, dubbed the Social Media Safety Act, was passed by an overwhelming vote of 82-10, according to a tweet from the House account, and adds to the swell of efforts by state and federal lawmakers to regulate social media platforms and protect children online.

The recent legislative push from state and federal lawmakers comes amid growing anxieties from many parents struggling to navigate the potential harms of social platforms, including concerns over how they may be introducing young users to harmful content, aggravating mental health issues and creating new venues for online bullying and harassment.

If the Arkansas bill is signed into law, social media companies would be required to use third-party vendors to verify Arkansas residents’ ages — regardless of whether or not they are minors. For users younger than 18, the platform must obtain the consent of their parent or guardian in order to open an account for them.

Don’t Tell Anything to a Chatbot You Want to Keep Private

CNN Business reported:

As the tech sector races to develop and deploy a crop of powerful new AI chatbots, their widespread adoption has ignited a new set of data privacy concerns among some companies, regulators and industry watchers.

Some companies, including JPMorgan Chase (JPM), have clamped down on employees’ use of ChatGPT, the viral AI chatbot that first kicked off Big Tech’s AI arms race, due to compliance concerns related to employees’ use of third-party software.

It only added to mounting privacy worries when OpenAI, the company behind ChatGPT, disclosed it had to take the tool offline temporarily on March 20 to fix a bug that allowed some users to see the subject lines from other users’ chat history.

The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post.

“The privacy considerations with something like ChatGPT cannot be overstated,” Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN. “It’s like a black box.”

Google CEO Says an AI Chatbot Is Coming to Search … Eventually

Gizmodo reported:

Following in the footsteps of competitor Microsoft — which added the same tech behind ChatGPT to Bing earlier this year — Google is on its own path to include a chatbot in its own search engine. When? Well, CEO Sundar Pichai was delightfully vague on the timeline.

In an interview with The Wall Street Journal earlier this week, Pichai spilled the beans on Google’s intention to bring a chatbot to Google search but remained tight-lipped on details surrounding when we might expect to see the tech.

Pichai also told the Journal that the company is working on several different AI-based search products — including one that allows users to ask follow-up questions after punching in their query — that could help Google move away from the link-based search that it popularized.

The move comes as Google competitor Microsoft has been pouring billions of dollars into a deal with ChatGPT’s OpenAI.

Microsoft seized the moment before the AI chatbot wave even began to pick up momentum, showing interest in OpenAI as early as 2020. Now Microsoft is shoving the AI into Bing search as fast as it can, with mixed results, while also scrapping its ethical AI team.

AI Desperately Needs Global Oversight

Wired reported:

Every time you post a photo, respond on social media, make a website or possibly even send an email, your data is scraped, stored and used to train generative AI technology that can create text, audio, video and images with just a few words.

This has real consequences: OpenAI researchers studying the labor market impact of their language models estimated that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of large language models (LLMs) like ChatGPT, while around 19% of workers may see at least half of their tasks impacted.

When a company builds its technology on a public resource — the internet — it’s sensible to say that that technology should be available and open to all. But critics have noted that GPT-4 lacked any clear information or specifications that would enable anyone outside the organization to replicate, test or verify any aspect of the model.

Some of these companies have received vast sums of funding from other major corporations to create commercial products. For some in the AI community, this is a dangerous sign that these companies are going to seek profits above public benefit.

Mass. Coalition Presses to Keep Masks on in Some Healthcare Settings

Telegram & Gazette reported:

As the state’s public health emergency inches closer to its expiration next month, a group of public health advocates, healthcare workers and patients is pressing state officials and the healthcare industry to maintain the masking requirement in place for hospitals, clinics, physician and dentist offices, nursing homes, and for home healthcare services.

Gov. Maura Healey announced last month that she will end the state’s public health emergency — which in 2021 effectively took the place of an earlier COVID-19 state of emergency — on May 11, the same day a federal public health emergency ends. That will end six state public health emergency orders, one of which requires Bay Staters to wear masks in some healthcare and congregate care settings.

The Massachusetts Coalition for Health Equity is collecting signatures (more than 740 so far) on an open letter that urges the Department of Public Health, local boards of health, and healthcare institutions to keep masking requirements in place for all healthcare settings and to provide free masks, ideally N95s, to everyone in those settings.

Chinese Officials Flock to Twitter to Defend TikTok

The New York Times reported:

When members of Congress grilled TikTok’s chief executive last month on Capitol Hill, the app’s supporters sprang to its defense online. The lawmakers were “old, tech-illiterate,” one said. “Out of touch, paranoid and self-righteous,” said another. The hourslong hearing “destroyed the illusion that the U.S. leads in cyber era,” read another post.

These particular barbs did not come from TikTok’s users — 150 million and counting in the United States — but from representatives of China’s government.

In an information campaign primarily run on Twitter, Chinese officials and state media organizations widely mocked the United States in the days before and after the hearing, accusing lawmakers of hypocrisy and even xenophobia for targeting the popular app, according to a report released on Thursday by the Alliance for Securing Democracy, a nonpartisan initiative from the German Marshall Fund.

China’s information push, however, showed just how deeply invested Beijing was in the company’s fate. Just hours before the testimony of TikTok’s chief executive, Shou Chew, last month, China’s Commerce Ministry said it opposed a sale of TikTok in a direct rebuke of the Biden administration, which is pushing a sale.
