Big Brother News Watch
Stop Mourning the Murthy Case, Start Fighting the Censorship-Industrial Complex + More
Stop Mourning the Murthy Case, Start Fighting the Censorship-Industrial Complex
When the government and Big Tech collude to censor speech online, Americans should not be at the mercy of lengthy and uncertain litigation to vindicate their rights.
After government officials like former White House advisers Rob Flaherty and Andy Slavitt repeatedly harangued platforms such as Facebook to censor Americans who contested the government’s narrative on COVID-19 vaccines, Missouri and Louisiana sued. They claimed that the practice violates the First Amendment.
Following years of litigation, the Supreme Court threw cold water on their efforts, ruling in Murthy v. Missouri that the states and the individual plaintiffs lacked standing to sue the government for its actions.
A complicating factor in this litigation and other congressional investigations is that the government often disguised its censorship requests by coordinating with ostensibly “private” civil society groups to pressure tech companies to remove or shadow-ban targeted content.
Courts matter for adjudicating and protecting rights. But the entrenched relationships between Big Tech and Big Government require new policies to meaningfully check their power. It’s time for Congress and the next administration to strike at the root of the censorship-industrial complex and protect ordinary Americans.
Healthcare Companies Are Sending Your Data to Big Tech
California-based health system Kaiser Permanente recently alerted millions of people that their private information was inappropriately shared with tech giants, angering patients who weren’t aware of the practice.
A Bloomberg News analysis showed the same kinds of online trackers remain on the websites of the nation’s largest healthcare companies, often unknown to their millions of patients.
Tour De France Reintroduces Mask Mandate Amid COVID Concerns
The Tour de France has reintroduced protective measures against COVID-19, with race organization staff, media and guests now required to wear masks wherever they come into contact with riders and team staff at the race.
Race organizer ASO announced the protocol on Sunday morning following a number of COVID-19 cases in the peloton in recent days. Tom Pidcock (Ineos Grenadiers), Juan Ayuso (UAE Team Emirates) and Michael Mørkøv (Astana-Qazaqstan) are among the riders to have abandoned the race after contracting the virus, while Geraint Thomas (Ineos Grenadiers) remains in the Tour despite testing positive for COVID-19.
County Increases Testing, UCSD Health Workers Return to Masking as COVID Surges
The San Diego Union-Tribune reported:
UC San Diego Health became the first major local medical provider to reinstate masking requirements for employees as the county added free coronavirus testing Thursday, responding to a summer COVID-19 surge that started building steam in June.
Updates to local tracking information arrived Thursday with the amount of the virus detected in local wastewater and the percentage of positive tests performed by healthcare organizations on the rise. According to the county’s update, 15.3% of test results reported on July 6 were positive, nearly double the 8.1% positive reported on June 8.
UC San Diego Health, looking at its own numbers, took additional action, moving up a tier in its existing COVID response plan and reinstating requirements for its employees, whether they work in one of its three hospitals or in an outpatient setting, to start wearing masks when working face-to-face with patients.
Masks are not required for non-hospital employees nor for workers not delivering care. The change will be in effect for two weeks, at which point a decision will be made, based on several different measures of viral activity, about whether to increase or decrease infection-prevention measures.
Momentum Grows for Cell Phone Bans in Schools
Cellphone bans in schools are surging across the country as educators and state lawmakers look to tackle learning loss and reduce distractions, but there are significant divisions within the movement.
New York City, Los Angeles and the state of Virginia have all moved to forbid student phones from classrooms in recent weeks, despite some parental backlash against the measures.
New York City Public Schools is exploring policies for getting rid of phones, and the Los Angeles school board approved a policy to restrict the devices, although the details of how America’s two biggest cities will get the job done have not been finalized.
Some are advocating for students not to have their phones at all throughout the school day.
AT&T Says Data From 109 Million U.S. Customer Accounts Illegally Downloaded
AT&T said on Friday it suffered a massive hacking incident in which data from about 109 million customer accounts, containing records of calls and texts from 2022, was illegally downloaded in April.
The U.S. telecom company said the FBI is investigating and at least one person has been arrested after AT&T call logs were copied from its workspace on a third-party cloud platform, in a significant breach of consumer communication records.
AT&T said the compromised data includes files containing records of calls and texts of nearly all of its cellular customers, as well as landline customers who interacted with those cellular numbers, between May and October 2022. The data does not contain the content of calls or texts, or personal information such as Social Security numbers.
Think Tank Pushes International Alliance to Censor ‘Fake News’
The Japan Chair at the Center for Strategic and International Studies (CSIS) has come out with a report calling for the U.S. and Japan to team up on “combating disinformation.”
Christopher B. Johnstone also wants the two countries to employ several censorship techniques, from removing content deemed “false narratives” (conventional censorship) to a considerably more dystopian approach known as “prebunking.”
Prebunking means suppressing narratives by branding them as “misinformation” before they spread widely, eroding the very perception of their trustworthiness. Proponents have likened the tactic to introducing “mental antibodies” into a population, one example of the outlandish language used in the past to justify it.
Preparing Schools for the H5N1 Bird Flu They’re Likely to Face + More
Preparing Schools for the H5N1 Bird Flu They’re Likely to Face
As COVID-19 swept across the United States, schools were among the most highly affected public spaces. With H5N1 avian influenza potentially poised to jump to humans, schools need to begin preparing for that scenario now, before a sustained transmission event occurs.
The response to COVID-19, which first appeared in the U.S. in early 2020, has been scrutinized by numerous case studies, after-action reports, and congressional fact-finding hearings. Despite the federal government investing billions of dollars to improve public health infrastructure and efforts to cut red tape through the new White House Office of Pandemic Preparedness and Response Policy, significant challenges remain. While these efforts suggest that the U.S. should be better prepared for the next pandemic, recent warnings from experts give cause for concern.
Robert Redfield, the former director of the Centers for Disease Control and Prevention, recently predicted that avian flu will cause a pandemic. Seth Berkley, the former CEO of GAVI, the Vaccine Alliance, derided the shocking ineptitude of the U.S. response to the avian flu outbreak among dairy cattle.
In school settings, testing, contact tracing, masks, and isolation cannot be counted on to control the spread of an avian flu that has adapted to efficiently infect humans. Before COVID-19, these nonpharmaceutical interventions (NPIs) were a cornerstone of pandemic response strategy. While such interventions can work for short periods of time in small settings, lack of consistent use and variability in operation make them unreliable over longer periods. It is also clear that views towards masks and other NPIs are influenced by political preferences, which further contribute to differing patterns of behavior and personal use.
Responsibility for public health measures in the United States today emerges from a widely fragmented patchwork of incomplete administrative policies and political authorities that compete with fundamental ideals of free speech, individualism, and personal liberty. This reality, compounded by the fog of uncertainty in the early days of any viral outbreak, when nearly everything about an emerging infectious disease is up in the air, suggests a high likelihood of repeating the disjointed approach to COVID-19, with some jurisdictions opting to close schools to in-person instruction, others moving to hybrid learning, and still others making no changes and remaining open.
Musk Announces X to Sue ‘Perpetrators and Collaborators’ Behind Advertising Censorship Cartel
Elon Musk announced on Thursday that social media platform X will sue ‘perpetrators and collaborators’ who have colluded to control online speech, as revealed on Wednesday by an interim staff report released by the House Judiciary Committee.
“Having seen the evidence unearthed today by Congress, X has no choice but to file suit against the perpetrators and collaborators in the advertising boycott racket,” Musk wrote on his platform, adding, “Hopefully, some states will consider criminal prosecution.”
The House report details a coordinated effort by the World Federation of Advertisers (WFA) and its Global Alliance for Responsible Media (GARM) initiative to demonetize and suppress disfavored content across the internet.
The Committee report details multiple instances of GARM’s coordinated efforts to influence and censor online content. Perhaps the most notable example is the recommendation for a boycott of Twitter following Elon Musk’s acquisition. GARM members, including Danish energy company Ørsted, were advised to pull their advertising from Twitter, a move that significantly impacted Twitter’s revenue. Internal emails show GARM’s satisfaction with the result, with GARM leader Rob Rakowitz boasting about the impact on Twitter’s financials.
‘Alarming Mental Health Crisis’: Virginia GOP Governor Calls for Phone-Free Schools in Executive Order
Virginia’s Republican governor called for “phone-free” schools in an executive order on Tuesday, citing the mental health and learning crisis among school-age children.
Governor Glenn Youngkin’s order directs the education department to issue guidance on establishing “cell phone-free education policies and procedures” for public schools.
The order cites the “alarming mental health crisis” among teenagers that is “driven in part by extensive social media usage and widespread cell phone possession among children.”
Human Rights Groups Raise Alarm Over UN Cybercrime Convention
As the date for finalizing the UN Cybercrime Convention approaches, human rights organizations are warning that it threatens freedom of expression and normalizes domestic surveillance.
After two and a half years of negotiations and seven negotiating sessions, the UN General Assembly is set to either adopt or reject the draft on August 9. The aim is to create an over-arching legal framework for nations to cooperate on preventing and investigating cybercrime and prosecuting cybercriminals.
However, rights organizations are concerned that the document hampers freedom of expression, enables state surveillance and could threaten the work of journalists and security researchers.
The Aftermath of the Supreme Court’s NetChoice Ruling
The NetChoice decision states that tech platforms can exercise their First Amendment rights through their content moderation decisions and how they choose to display content on their services — a strong statement with clear ramifications for any laws that attempt to regulate platforms’ algorithms in the name of kids’ online safety, and even for a pending lawsuit seeking to block a law that could ban TikTok from the U.S.
“When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices,” Justice Elena Kagan wrote in the majority opinion, referring to Facebook’s News Feed and YouTube’s homepage. “And because that is true, they receive First Amendment protection.”
NetChoice isn’t a radical upheaval of existing First Amendment law, but until last week, there was no Supreme Court opinion that applied that existing framework to social media platforms.
The justices didn’t rule on the merits of the cases, concluding, instead, that the lower courts hadn’t completed the necessary analysis for the kind of First Amendment challenge that had been brought. But the decision still provides significant guidance to the lower courts on how to apply First Amendment precedent to social media and content moderation.
The decision is a revealing look at how the majority of justices view the First Amendment rights of social media companies — something that’s at issue in everything from kids’ online safety bills to the TikTok “ban.”
Musk Says Next Neuralink Brain Implant Expected Soon, Despite Issues With the First Patient
Elon Musk said Wednesday that his brain tech startup Neuralink hopes to implant its system in a second human patient within “the next week or so.” Executives also said the company is making changes to address the hardware problems it encountered with its first participant.
In a livestream with Neuralink executives on Wednesday, Musk said the company is hoping to implant its device in the “high single digits” of patients this year. It is not clear when or where those procedures will take place.
In January, Neuralink implanted its brain-computer interface (BCI) in its first human patient, 29-year-old Noland Arbaugh, at the Barrow Neurological Institute in Phoenix, as part of a clinical study approved by the FDA.
Neuralink also plans to insert some threads deeper into the brain tissue and track how much movement occurs, according to the company livestream. Dr. Matthew MacDougall, head of neurosurgery at Neuralink, said it will insert threads “at a variety of depths” now that it knows retraction is a possibility.
Mall of America’s New AI-Fueled Facial Recognition Tech Sparks Pushback
Privacy advocates and lawmakers from both sides of the aisle are raising concerns about the Mall of America’s use of AI-fueled facial recognition technology.
Why it matters: The expansive shopping and entertainment center says the tool will help keep customers and staff safe, but critics argue it presents privacy and civil rights concerns.
How it works: The system, made by a company called Corsight, scans the mall’s security video feeds and looks for matches to a “person of interest” (POI) photo database.
The intrigue: State Sens. Omar Fateh (DFL-Minneapolis) and Eric Lucero (R-Saint Michael) — legislators who are polar opposites on most political issues — issued a joint news release slamming MOA’s decision. They called it a “direct assault on privacy” that creates a clear “potential for racial profiling, harassment, and false arrests.”
New Privacy Tools Promise Protection From Prying Eyes
In a world increasingly dominated by data, privacy has become both a precious commodity and a pressing concern. Enter differential privacy to protect individuals’ data in an era where information, like water, is vital but potentially destructive if not properly contained.
But what exactly is differential privacy, and how does it measure up to the promises it makes?
The mechanism behind differential privacy involves adding a certain amount of “noise” to the data. This noise is designed to be statistically negligible for the overall dataset but significant enough to obscure individual data points. This way, even if an adversary gets their hands on the data, they cannot extract specific information about any individual.
Using differential privacy allows organizations to collect and analyze data without exposing the details of the original individual entries. Imagine you have a dataset containing the ages of all employees in a company. Differential privacy allows you to determine the average age without revealing any specific employee’s age.
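To make the mechanism concrete, here is a minimal Python sketch of the Laplace approach that underlies many differential privacy deployments: the true average age is computed, then released only after calibrated random noise is added. The epsilon value, age bounds and sample figures below are illustrative assumptions, not details of any particular vendor’s product.

```python
import numpy as np

def dp_mean_age(ages, epsilon=1.0, lo=18, hi=90):
    """Release a differentially private estimate of the mean age (illustrative sketch)."""
    ages = np.clip(np.asarray(ages, dtype=float), lo, hi)  # bound each record's influence
    true_mean = ages.mean()
    # Replacing one person's age can shift the mean by at most (hi - lo) / n.
    sensitivity = (hi - lo) / len(ages)
    # Laplace noise calibrated to that sensitivity and the privacy budget epsilon.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

ages = [25, 31, 40, 29, 52, 47, 38, 33, 45, 58]
print(f"Noisy average age: {dp_mean_age(ages, epsilon=0.5):.1f}")
```

A smaller epsilon buys stronger privacy at the cost of a noisier, less accurate released figure — the central trade-off any real deployment has to manage.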
The FDA Just Quietly Gutted Protections for Human Subjects in Research + More
The FDA Just Quietly Gutted Protections for Human Subjects in Research
Late last year, the U.S. Food and Drug Administration (FDA) quietly introduced a regulation that may be one of the most important shifts in how nonprofit and for-profit U.S. institutions, both at home and abroad, conduct future medical and public health research. It represents an erosion of personal medical choice and threatens to undermine the public’s trust in scientific investigations in biomedicine.
A bedrock of ethical research design is the universal requirement of informed consent for any medical procedure, treatment, or intervention. Researchers must give prospective participants the information they need, without pressure or coercion, to decide whether the risks are worth the potential benefit of an intervention. Until recently, the FDA’s exceptions were reserved for people who are incapacitated or for urgent, life-threatening emergencies.
These protections have generally been strengthened since the Nuremberg Trials and were formally adopted across the U.S. government through the institutional review board (IRB) system. An IRB is a committee of specialists and administrators at each institution that oversees research design and assures the protection of research subjects.
At its core, the new FDA rule change allows any IRB to broadly assume the FDA’s own exemption power, dubiously conferred under the 21st Century Cures Act of 2016, and to waive informed consent requirements based on “minimal risk.” Built on vague guidelines, it effectively gives thousands of IRB committees the unilateral ability to determine that researchers need not obtain true informed consent from research participants.
AI Trains on Kids’ Photos Even When Parents Use Strict Privacy Settings
Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators — even when platforms prohibit scraping and families use strict privacy settings.
Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia’s states and territories, including indigenous children who may be particularly vulnerable to harms.
These photos are linked in the dataset “without the knowledge or consent of the children or their families.” They span the entirety of childhood, making it possible for AI image generators to generate realistic deep fakes of real Australian children, Han’s report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online.
That puts children in danger of privacy and safety risks, Han said, and some parents thinking they’ve protected their kids’ privacy online may not realize that these risks exist.
Dark Side of ‘the Next AI Trade’: Seizing Private Property for Transmission Lines
There’s a dark side to ‘The Next AI Trade’ — at least for some landowners. Powering up America and upgrading power grids for artificial intelligence data centers, onshoring trends, and the electrification of the economy will require thousands of miles of new transmission lines nationwide. Existing lines will be upgraded, but new lines will also be needed, resulting in the seizure of private property via eminent domain.
According to Fox 45 Baltimore, the Maryland Piedmont Reliability Project (MPRP) is a new plan to build a 70-mile 500,000-volt transmission line across three counties: Frederick, Baltimore, and Carroll. The line will connect a substation in southern Frederick County and supply the area with additional load capacity to handle surging power demand from AI data centers.
MPRP’s website explains that the new transmission lines will require the acquisition of private property through eminent domain, or government-mandated seizure, to complete the construction.
It’s becoming clear that the dark side of powering up America for AI data centers will be land grabs by the government through eminent domain.
Businesses Are Harvesting Our Biometric Data. The Public Needs Assurances on Security
Biometrics are unique physical or behavioral traits and are part of our everyday lives. Among these, facial recognition is the most common.
Facial recognition technology stems from a branch of AI called computer vision and is akin to giving sight to computers. The technology scans images or videos from devices including CCTV cameras and picks out faces.
The system typically identifies and maps 68 specific points known as facial landmarks. These create a digital fingerprint of your face, enabling the system to recognise you in real time.
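As a rough illustration of that landmarking step, the sketch below uses the open-source dlib library, whose widely distributed pretrained model maps the same 68 points. The model file name, the image path and the simple point-comparison at the end are assumptions for illustration, not a description of any specific commercial system.

```python
import dlib
import numpy as np

# Standard open-source pipeline: detect a face, then map 68 landmark points on it.
detector = dlib.get_frontal_face_detector()
# Pretrained model file, downloadable from dlib.net (assumed to be present locally).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_landmarks(image_path):
    """Return a (68, 2) array of landmark coordinates for the first face found, or None."""
    img = dlib.load_rgb_image(image_path)
    faces = detector(img)
    if not faces:
        return None
    shape = predictor(img, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()])

def landmark_distance(a, b):
    """Crude similarity check: smaller distances between normalized landmark sets
    suggest the same face. Real systems compare richer face embeddings instead."""
    a = (a - a.mean(axis=0)) / a.std()
    b = (b - b.mean(axis=0)) / b.std()
    return float(np.linalg.norm(a - b))
```

Comparing such landmark sets (or, in practice, deeper learned embeddings) between a live camera frame and stored photos is what enables the kind of real-time recognition described above.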
From supermarkets to car parks and railway stations, CCTV cameras are everywhere, silently doing their job. But what exactly is their job now? Businesses may justify collecting biometric data, but with power comes responsibility and the use of facial recognition raises significant transparency, ethical and privacy concerns.
Students Scoff at a School Cellphone Ban. Until They Really Begin to Think About It
The Board of Education’s 5-2 decision to ban cellphones by January 2025 aims to change the behavior of a generation of students and will be one of the most consequential and closely watched shifts in schooling since students were forced to go to class online — many by phone — more than four years ago at the onset of the pandemic.
Details, such as how the rule will be enforced and where the phones will be stored during the school day, will be worked out in the coming months. But the goals are clear.
School leaders say they want to combat classroom distractions that are impeding learning and to reduce the dangers of social media addiction. At this point, the leaders say, a strict phone ban is the only way to get students to talk to one another and their teachers and come to value face-to-face conversations over digital connections.
Los Angeles will join a growing movement in K-12 education to ban phones and has won the support of Gov. Gavin Newsom, who has endorsed a bill that would make such a rule go statewide. Last year, Florida passed a policy to bar student cellphones from all K-12 classrooms. A similar law will go into effect in Indiana next year. In Ohio, new legislation will force schools to devise policies to “minimize students’ use” of cellphones. New York City, the nation’s largest school district, is poised to introduce a student cellphone ban this month that’s similar to the one in Los Angeles, after dropping a prior ban in 2015.
EU Commission Urges Digital ID, E-Health Records, and Touts ‘Anti-Disinformation’ Efforts in Digital Decade Report
Earlier this week, the EU Commission (EC) published its second report on what it calls “the state of the digital decade,” urging member countries to step up the push to increase access to, and incentivize the use of, digital ID and electronic health records.
At the same time, the bloc is satisfied with how the crackdown on “disinformation,” “online harms,” and the like is progressing.
While the report is generally upbeat on the uptake of digital ID (eID schemes) and the use of e-Health records, its authors point out that there are “still significant differences among countries” in terms of eID adoption.
To remedy member countries’ shortfalls on these issues, the report recommends that they push for increased access to eID and e-Health records in order to meet the objectives set for 2030.
Antony Blinken Reveals Government’s AI Plan to Censor Free Speech
U.S. Secretary of State Antony Blinken admitted last week that the State Department is preparing to use artificial intelligence to “combat disinformation,” amid a massive government-wide AI rollout that will involve the cooperation of Big Tech and other private-sector partners.
At a speaking engagement streamed last week with the State Department’s chief data and AI officer, Matthew Graviss, Blinken gushed about the “extraordinary potential” and “extraordinary benefit” AI holds for our society, and “how AI could be used to accelerate the Sustainable Development Goals which are, for the most part, stalled.”
He was referring to the United Nations Agenda 2030 Sustainable Development goals, which represent a globalist blueprint for a one-world totalitarian system.
New Cyberattack Targets iPhone Apple IDs. Here’s How to Protect Your Data.
A new cyberattack is targeting iPhone users, with criminals attempting to obtain individuals’ Apple IDs in a “phishing” campaign, security software company Symantec said in an alert Monday.
Cyber criminals are sending text messages to iPhone users in the U.S. that appear to be from Apple, but are in fact an attempt at stealing victims’ personal credentials.
Joe Rogan Calls Out Key Issue With Social Media — ‘It’s Disheartening’ + More
Joe Rogan Calls Out Key Issue With Social Media — ‘It’s Disheartening’
Joe Rogan has expressed concern over social media being used as a tool to spread misinformation on health issues. Speaking on The Joe Rogan Experience, the powerhouse podcaster spoke with guest Max Lugavere, a filmmaker and health and science journalist, about the dangers of toxic “forever” chemicals being found in products used by consumers.
PFAS, which stands for per- and polyfluorinated alkyl substances, are a class of chemicals that can be found in a range of everyday products, from toilet paper to food packaging, cosmetics and dental floss. Nicknamed forever chemicals, these compounds break down very slowly over time and stick around in their surrounding environment.
The widespread nature of these chemicals is concerning as numerous studies have found associations between PFAS exposure and increased blood cholesterol and blood pressure, reduced immunity, reproductive issues and a higher risk of certain cancers, the U.S. Agency for Toxic Substances and Disease Registry recently reported.
Lugavere said of the issue: “You talk about this stuff today on social media and you’re accused of fear-mongering, of being alarmist.” Rogan questioned whether “trolls from pharmaceutical companies” were responsible for dismissing such statements online, adding that such practices are “something that I guarantee you corporations use. If nations use it — and we know they do — and we know that there are troll farms in Russia, we know this is a real thing. Why wouldn’t corporations use that too?”
Tennessee Woman Fired for Refusing Employer’s COVID Vaccine Mandate Wins Almost $700K
A federal jury has determined that a woman who was fired for refusing to get a COVID-19 vaccine mandated by her employer, BlueCross BlueShield of Tennessee, is due an award worth almost $700,000. The jury found that Tanja Benton “proved by a preponderance of the evidence” that her refusal to get the shot “was based on a sincerely-held religious belief.” Benton worked at BCBST from 2005 through November of 2022, primarily as a biostatistical research scientist.
Her federal lawsuit said it was not part of Benton’s job to regularly come into contact with people, noting that she had a portfolio of 10 to 12 clients each year with whom she interacted only infrequently, and sometimes not in person. It also pointed out that Benton never came into contact with any patients as part of her job.
As it did for many others, the pandemic changed Benton’s job. She says she worked from home for the next year and a half without any complaints. Benton submitted a request for a religious exemption to BCBST’s vaccine mandate, but BCBST denied her request, saying she could not continue her job as a biostatistical research scientist.
Benton appealed, saying she did not interact with people during the course of her workday, and a company representative responded that “there are no exceptions” for anyone who has Benton’s job title, and suggested she apply for a different job. BCBST ultimately fired Benton, and she filed a federal lawsuit.
As part of its verdict, the federal jury awarded Benton $177,240 in back pay, $10,000 in compensatory damages, and $500,000 in punitive damages, for a total of $687,240.
U.S. Supreme Court Sidesteps Dispute on State Laws Regulating Social Media
The U.S. Supreme Court on Monday threw out a pair of judicial decisions involving challenges to Republican-backed laws in Florida and Texas designed to restrict the power of social media companies to curb content that the platforms deem objectionable.
The justices directed lower appeals courts to reconsider their decisions regarding these 2021 laws authorizing the states to regulate the content-moderation practices of large social media platforms. Tech industry trade groups challenged the two laws under the U.S. Constitution’s First Amendment limits on the government’s ability to restrict speech.
At issue was whether the First Amendment protects the editorial discretion of social media platforms and prohibits governments from forcing companies to publish content against their will. The companies have said that without such discretion — including the ability to block or remove content or users, prioritize certain posts over others or include additional context — their websites would be overrun with spam, bullying, extremism and hate speech.
Supreme Court Won’t Hear Tech Liability Challenge From Teen Groomed on Snapchat
The Supreme Court will not consider a challenge, brought by a teenager who was allegedly groomed by a teacher on Snapchat, to the scope of a federal law immunizing tech companies from liability for their users’ content.
The teen sought to hold Snap Inc. liable for having “negligently designed an environment rife with sexual predators and then lured children in.” His lawyers also claimed Snap “knew or should have known,” given its internal technology, that the teen was being groomed.
His petition, filed with the high court in March, sought to put Section 230 of the Communications Decency Act to a new test. The law says internet service providers cannot be held liable as the “publisher” or “speaker” of content on their platforms.
Instead of holding Snap liable as a publisher or speaker, the teen asked the high court to consider whether Section 230 immunizes internet service providers from any lawsuit regarding their own misconduct just because third-party content is also involved.
OnlyFans Vows It’s a Safe Space. Predators Are Exploiting Kids There.
OnlyFans makes reassuring promises to the public: It’s strictly adults-only, with sophisticated measures to monitor every user, vet all content and swiftly remove and report any child sexual abuse material. “We know the age and identity of everyone on our platform,” said CEO Keily Blair in a speech last year. “No children allowed, nobody under 18 on the platform.”
Reuters documented 30 complaints in U.S. police and court records that child sexual abuse material appeared on the site between December 2019 and June 2024. The case files examined by the news organization cited more than 200 explicit videos and images of kids, including some depicting adults having oral sex with toddlers. In one case, multiple videos of a minor remained on OnlyFans for more than a year, according to a child exploitation investigator who found them while assisting Reuters in its reporting and alerted authorities in June.
OnlyFans didn’t respond to most of Reuters’ questions about the cases in this story, including how child abuse material was able to evade its monitoring and whether it has kept its revenue from accounts involving minors. None of the cases involved criminal charges against the website or its parent company, Fenix International. Reuters found no evidence that OnlyFans has been sued or held criminally liable for child sexual abuse content, according to a search of U.S. and international legal databases.
Federal free-speech protections have largely immunized social media platforms from liability for abusive content posted by their users. But as concerns mount about online harms — particularly involving children — Congress is seeking to toughen federal laws to hold the platforms accountable.
Federal Judge Halts Mississippi Law Requiring Age Verification for Websites
A federal judge on Monday blocked a Mississippi law that would require users of websites and other digital services to verify their age.
The preliminary injunction by U.S. District Judge Sul Ozerden came the same day the law was set to take effect. A tech industry group sued Mississippi on June 7, arguing the law would unconstitutionally limit access to online speech for minors and adults. The U.S. Supreme Court has held that any law dealing with speech “is subject to strict scrutiny regardless of the government’s benign motive,” Ozerden wrote.
The suit challenging the law was filed by NetChoice, whose members include Google, which owns YouTube; Snap Inc., the parent company of Snapchat; and Meta, the parent company of Facebook and Instagram.
Chris Marchese, director of the NetChoice Litigation Center, said in a statement Monday that the Mississippi law should be struck down permanently because “mandating age and identity verification for digital services will undermine privacy and stifle the free exchange of ideas.”
Cold Turkey for Child Smartphone Addicts
Mental health concerns drove New York City’s decision this week to ban city schoolchildren from using cell phones.
David Banks, chancellor of the nation’s largest public school system serving more than 900,000 kids, said doctors concerned about smartphone addiction and other harms connected to social media apps kids access on their phones had advised the city to make the move.
“Our kids are fully addicted to these phones,” Banks said in an interview on NY1. “We’ve got to do something about it.”
Why it matters: The ban in New York comes amid a nationwide movement to curb phone use in schools and growing worry about how smartphones affect children.
Overseas, France has long banned smartphones for kids in middle school and younger, and the U.K. is considering a ban.
New ADHD Diagnoses Doubled During COVID, Study Suggests
New diagnoses of attention-deficit hyperactivity disorder (ADHD) in Finland doubled during the COVID-19 pandemic, with the largest increase in females aged 13 to 30 years, University of Helsinki researchers report in JAMA Network Open.
“Pandemic lockdown imposed a sudden increase to attention and executive behavioral demands, coupled with a lack of daily structures and reduced possibilities for physical exercise,” the researchers said. “These challenges in living conditions may have surfaced ADHD symptoms in individuals previously coping sufficiently in their daily lives. Further studies are needed to explain the psychological, societal, and biological mechanisms underlying these observations.”
Meta’s ‘Pay or Consent’ Model Fails EU Competition Rules, Commission Finds
An investigation conducted by the European Commission has found that Meta’s “pay or consent” offer to Facebook and Instagram users in Europe does not comply with the bloc’s Digital Markets Act (DMA), according to preliminary findings reported by the regulator on Monday.
The Commission wrote in a press release that the binary choice Meta offers “forces users to consent to the combination of their personal data and fails to provide them a less personalized but equivalent version of Meta’s social networks.”
More saliently, Meta could finally be forced to abandon a business model that demands users agree to surveillance advertising as the entry “price” for using its social networks.
The regulator’s case against Meta contends the adtech giant is failing to provide people with a free and fair choice to deny tracking.