Joe Rogan Calls Out Key Issue With Social Media — ‘It’s Disheartening’
Joe Rogan has expressed concern over social media being used as a tool to spread misinformation on health issues. Speaking on The Joe Rogan Experience with guest Max Lugavere, a filmmaker and health and science journalist, the powerhouse podcaster discussed the dangers of toxic “forever” chemicals found in everyday consumer products.
PFAS, which stands for per- and polyfluoroalkyl substances, are a class of chemicals found in a range of everyday products, from toilet paper to food packaging, cosmetics and dental floss. Nicknamed forever chemicals, these compounds break down very slowly and persist in the surrounding environment.
The widespread nature of these chemicals is concerning as numerous studies have found associations between PFAS exposure and increased blood cholesterol and blood pressure, reduced immunity, reproductive issues and a higher risk of certain cancers, the U.S. Agency for Toxic Substances and Disease Registry recently reported.
Lugavere said of the issue: “You talk about this stuff today on social media and you’re accused of fear-mongering, of being alarmist.” Rogan questioned whether “trolls from pharmaceutical companies” were responsible for dismissing such statements online, adding that such practices are “something that I guarantee you corporations use. If nations use it — and we know they do — and we know that there are troll farms in Russia, we know this is a real thing, why wouldn’t corporations use that too?”
Tennessee Woman Fired for Refusing Employer’s COVID Vaccine Mandate Wins Almost $700K
A federal jury has determined that a woman who was fired for refusing to get a COVID-19 vaccine mandated by her employer, BlueCross BlueShield of Tennessee (BCBST), is owed almost $700,000. The jury found that Tanja Benton “proved by a preponderance of the evidence” that her refusal to get the shot “was based on a sincerely held religious belief.” Benton worked at BCBST from 2005 through November 2022, primarily as a biostatistical research scientist.
Her federal lawsuit said regular contact with people was not part of Benton’s job: she had a portfolio of 10 to 12 clients each year, with whom she interacted only infrequently, and sometimes not in person. It also pointed out that Benton never came into contact with any patients as part of her job.
As it did for many others, the pandemic changed Benton’s job. She says she worked from home for the next year and a half without any complaints. Benton submitted a request for a religious exemption to BCBST’s vaccine mandate, but the company denied it, saying she could not continue in her role as a biostatistical research scientist.
Benton appealed, saying she did not interact with people during her workday. A company representative responded that “there are no exceptions” for anyone with Benton’s job title and suggested she apply for a different position. BCBST ultimately fired Benton, and she filed a federal lawsuit.
As part of its verdict, the federal jury awarded Benton $177,240 in back pay, $10,000 in compensatory damages, and $500,000 in punitive damages, for a total of $687,240.
U.S. Supreme Court Sidesteps Dispute on State Laws Regulating Social Media
The U.S. Supreme Court on Monday threw out a pair of judicial decisions involving challenges to Republican-backed laws in Florida and Texas designed to restrict the power of social media companies to curb content that the platforms deem objectionable.
The justices directed lower appeals courts to reconsider their decisions regarding these 2021 laws authorizing the states to regulate the content-moderation practices of large social media platforms. Tech industry trade groups challenged the two laws under the U.S. Constitution’s First Amendment limits on the government’s ability to restrict speech.
At issue was whether the First Amendment protects the editorial discretion of social media platforms and prohibits governments from forcing companies to publish content against their will. The companies have said that without such discretion — including the ability to block or remove content or users, prioritize certain posts over others or include additional context — their websites would be overrun with spam, bullying, extremism and hate speech.
Supreme Court Won’t Hear Tech Liability Challenge From Teen Groomed on Snapchat
The Supreme Court will not consider a challenge, brought by a teenager who was allegedly groomed by a teacher on Snapchat, to the scope of a federal law immunizing tech companies from liability for their users’ content.
The teen sought to hold Snap Inc. liable for having “negligently designed an environment rife with sexual predators and then lured children in.” His lawyers also claimed Snap “knew or should have known,” given its internal technology, that the teen was being groomed.
His petition, filed with the high court in March, sought to put Section 230 of the Communications Decency Act to a new test. The law says internet service providers cannot be held liable as the “publisher” or “speaker” of content on their platforms.
Instead of holding Snap liable as a publisher or speaker, the teen asked the high court to consider whether Section 230 immunizes internet service providers from any lawsuit regarding their own misconduct just because third-party content is also involved.
OnlyFans Vows It’s a Safe Space. Predators Are Exploiting Kids There.
OnlyFans makes reassuring promises to the public: It’s strictly adults-only, with sophisticated measures to monitor every user, vet all content and swiftly remove and report any child sexual abuse material. “We know the age and identity of everyone on our platform,” said CEO Keily Blair in a speech last year. “No children allowed, nobody under 18 on the platform.”
Reuters documented 30 complaints in U.S. police and court records that child sexual abuse material appeared on the site between December 2019 and June 2024. The case files examined by the news organization cited more than 200 explicit videos and images of kids, including some depicting adults having oral sex with toddlers. In one case, multiple videos of a minor remained on OnlyFans for more than a year, according to a child exploitation investigator who found them while assisting Reuters in its reporting and alerted authorities in June.
OnlyFans didn’t respond to most of Reuters’ questions about the cases in this story, including how child abuse material was able to evade its monitoring and whether it has kept its revenue from accounts involving minors. None of the cases involved criminal charges against the website or its parent company, Fenix International. Reuters found no evidence that OnlyFans has been sued or held criminally liable for child sexual abuse content, according to a search of U.S. and international legal databases.
Federal free-speech protections have largely immunized social media platforms from liability for abusive content posted by their users. But as concerns mount about online harms — particularly involving children — Congress is seeking to toughen federal laws to hold the platforms accountable.
Federal Judge Halts Mississippi Law Requiring Age Verification for Websites
A federal judge on Monday blocked a Mississippi law that would require users of websites and other digital services to verify their age.
The preliminary injunction by U.S. District Judge Sul Ozerden came the same day the law was set to take effect. A tech industry group sued Mississippi on June 7, arguing the law would unconstitutionally limit access to online speech for minors and adults. The U.S. Supreme Court has held that any law dealing with speech “is subject to strict scrutiny regardless of the government’s benign motive,” Ozerden wrote.
The suit challenging the law was filed by NetChoice, whose members include Google, which owns YouTube; Snap Inc., the parent company of Snapchat; and Meta, the parent company of Facebook and Instagram.
Chris Marchese, director of the NetChoice Litigation Center, said in a statement Monday that the Mississippi law should be struck down permanently because “mandating age and identity verification for digital services will undermine privacy and stifle the free exchange of ideas.”
Cold Turkey for Child Smartphone Addicts
Mental health concerns drove New York City’s decision this week to ban schoolchildren from using cellphones in the city’s public schools.
David Banks, chancellor of the nation’s largest public school system, which serves more than 900,000 kids, said the city acted on the advice of doctors concerned about smartphone addiction and other harms linked to the social media apps kids access on their phones.
“Our kids are fully addicted to these phones,” Banks said in an interview on NY1. “We’ve got to do something about it.”
Why it matters: The ban in New York comes amid a nationwide movement to curb phone use in schools and growing worry about how smartphones affect children.
Overseas, France has long banned smartphones for kids in middle school and younger, and the U.K. is considering a ban.
New ADHD Diagnoses Doubled During COVID, Study Suggests
New diagnoses of attention-deficit hyperactivity disorder (ADHD) in Finland doubled during the COVID-19 pandemic, with the largest increase in females aged 13 to 30 years, University of Helsinki researchers report in JAMA Network Open.
“Pandemic lockdown imposed a sudden increase to attention and executive behavioral demands, coupled with a lack of daily structures and reduced possibilities for physical exercise,” the researchers said. “These challenges in living conditions may have surfaced ADHD symptoms in individuals previously coping sufficiently in their daily lives. Further studies are needed to explain the psychological, societal, and biological mechanisms underlying these observations.”
Meta’s ‘Pay or Consent’ Model Fails EU Competition Rules, Commission Finds
An investigation conducted by the European Commission has found that Meta’s “pay or consent” offer to Facebook and Instagram users in Europe does not comply with the bloc’s Digital Markets Act (DMA), according to preliminary findings reported by the regulator on Monday.
The Commission wrote in a press release that the binary choice Meta offers “forces users to consent to the combination of their personal data and fails to provide them a less personalized but equivalent version of Meta’s social networks.”
More significantly, Meta could finally be forced to abandon a business model that demands users agree to surveillance advertising as the entry “price” for using its social networks.
The regulator’s case against Meta contends the adtech giant is failing to provide people with a free and fair choice to deny tracking.