Meta Remains a Dangerous Place for Children, Recent Lawsuits Claim
“With this lawsuit, New Mexico joins 33 other states that have also sued CEO Mark Zuckerberg, Meta and its wholly-owned subsidiaries for allegedly failing to protect children from sexual abuse, online solicitation of minors and human trafficking by sexual predators,” explained Susan Schreiner, technology analyst at C4 Trends.
While the safety and well-being of children on the platforms should have been the highest priority, that has been far from the case, the suits contend. Instead, the numerous lawsuits argue that the company deliberately designed its platforms to hook and addict children and teens, and knowingly shipped products and features through its apps that put young users at risk.
This has been business as usual for many of the social media platforms, added Titania Jordan, chief parent officer at Bark Technologies, and co-author of Parenting in a Tech World: A Handbook for Raising Kids in the Digital Age.
Jordan explained that even after the testimony to Congress by whistleblower Frances Haugen — who worked as a product manager at Facebook — about the dangerous nature of the platform and how kids were being targeted with unhealthy and unsafe content, very little was done to address the threat.
“In the U.S., the ‘Protecting Kids on Social Media Act’ is meeting headwinds with objections from privacy advocates who perceive that such laws have less to do with protecting kids than creating digital authoritarian surveillance,” said Schreiner.
Apple Now Requires a Judge’s Consent to Hand Over Push Notification Data
Apple (AAPL.O) has said it now requires a judge’s order to hand over information about its customers’ push notifications to law enforcement, putting the iPhone maker’s policy in line with rival Google and raising the hurdle officials must clear to get app data about users.
The new policy was not formally announced but appeared sometime over the past few days on Apple’s publicly available law enforcement guidelines. It follows the revelation from Oregon Senator Ron Wyden that officials were requesting such data from Apple as well as from Google, the unit of Alphabet (GOOGL.O) that makes the operating system for Android phones.
Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. These are the audible “dings” or visual indicators users get when they receive an email or their sports team wins a game. What users often do not realize is that almost all such notifications travel over Google and Apple’s servers.
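The reason Apple and Google end up holding this data is structural: an app developer's server does not contact the phone directly, but hands each notification payload to the platform's delivery service (Apple's APNs, or Google's Firebase Cloud Messaging for Android). As a rough illustration, here is a minimal sketch of building a request body for Google's FCM HTTP v1 API — the project ID and device token are placeholders, and real use requires OAuth 2.0 server credentials:

```python
import json

# FCM HTTP v1 endpoint template; "example-project" below is a placeholder.
FCM_ENDPOINT = "https://fcm.googleapis.com/v1/projects/{project_id}/messages:send"

def build_fcm_message(device_token: str, title: str, body: str) -> dict:
    """Build the JSON body an app server POSTs to the FCM endpoint.

    The payload is addressed to a device token that only the platform
    provider can resolve, which is why the provider necessarily sees
    which app is pushing what, to whom, and when.
    """
    return {
        "message": {
            "token": device_token,
            "notification": {"title": title, "body": body},
        }
    }

if __name__ == "__main__":
    msg = build_fcm_message("example-device-token", "Score update", "Your team won!")
    url = FCM_ENDPOINT.format(project_id="example-project")
    # In real use, this would be sent with an OAuth bearer token, e.g.:
    # requests.post(url, json=msg, headers={"Authorization": "Bearer <token>"})
    print(json.dumps(msg, indent=2))
```

Because every such request transits the provider's servers, metadata about notification traffic accumulates there — which is precisely the data law enforcement had been requesting.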
In a letter first disclosed by Reuters last week, Wyden said the practice gave the two companies unique insight into traffic flowing from those apps to users, putting them “in a unique position to facilitate government surveillance of how users are using particular apps.”
Ready to Rumble: Lawsuits Against Censorship-Industrial Complex Heat Up After Musk Kicks Open the Floodgates
It took the richest man in the world to begin dismantling the censorship-industrial complex: a tightly connected network of government agencies, think tanks, private media platforms, and activist organizations whose goal is to censor, control, and bankrupt free-speech platforms under the guise of battling ‘hate speech’ and ‘misinformation’ that run counter to prevailing establishment narratives.
One of these entities, the Center for Countering Digital Hate, is a dark money organization run by an alleged former British intelligence operative.
We know all this because just over a year ago, X (formerly Twitter) owner Elon Musk disseminated the “Twitter Files” to a small group of independent journalists, from which we learned that the Biden administration collaborated with Twitter to censor the Hunter Biden laptop story, ban Donald Trump, and that the FBI essentially had its entire arm up Twitter’s ass to shape and control narratives. We also learned about the aforementioned relationships within the censorship-industrial complex.
In August, Musk kicked off what has since grown into several lawsuits against anti-free-speech advocates, filing suit against the Center for Countering Digital Hate, which X has accused of “actively working to assert false and misleading claims encouraging advertisers to pause investment on the platform.”
And so, as the lawsuits against the censorship complex begin to fly, one can’t help but feel that the tide may be turning — or at least, said censors will think twice before spouting defamatory claims about platforms that allow divergent opinions.
Major Report Declines to Recommend Banning Social Media for Youth
A committee of experts declined to recommend banning social media for youth 18 and younger in a report issued Wednesday.
Convened by the nonprofit National Academies of Sciences, Engineering, and Medicine, the committee reviewed numerous studies on the topic of youth health and social media use and found that most of the research shows only an association between a range of online behaviors and different physical and mental health outcomes.
In the absence of research demonstrating a compelling cause-and-effect relationship, the committee recommended further research as well as the creation of strong industry standards for social media platform design, transparency, and data use.
Earlier this year, Congress heard testimony from parents who supported bipartisan legislation banning social media for youth younger than 13, in addition to a requirement that older minors receive permission from their parents before opening a social media account. There have been separate attempts in the U.S. to ban TikTok, typically for cybersecurity reasons.
Patients Worry About How Doctors May Be Using AI: Survey
The vast majority of American patients are wary of how their doctor may use generative AI to help treat them, according to a new Wolters Kluwer Health survey.
Why it matters: The technology is still in limited use in physician offices — mostly to help with administrative tasks — but one day may help doctors make diagnoses or develop care plans.
By the numbers: Roughly 4 in 5 respondents say they have concerns about their provider using generative AI to make diagnoses or set treatment plans, with the vast majority saying that’s because they don’t know where the information the technology uses comes from or why it should be trusted.
Threads Is Getting Its Own Fact-Checkers to Combat Misinformation
Meta plans to add dedicated fact-checking to Threads, addressing misinformation on the app itself rather than indirectly through ratings carried over from its other platforms.
Though the owner of Facebook and Instagram uses third-party fact-checking teams to debunk misinformation and disinformation on these sites (whether it’s wholly successful is another thing), Meta’s answer to Twitter/X doesn’t have its own standalone fact-checking team.
“We currently match fact-check ratings from Facebook or Instagram to Threads, but our goal is for fact-checking partners to have the ability to review and rate misinformation on the app,” Instagram head Adam Mosseri wrote. “More to come soon.”
A key factor here is Threads’ connection to news. Though Threads is making moves toward making trending topics more intuitively collected, Meta doesn’t really push the platform as a news and current affairs-forward space, with Mosseri writing in July, “Politics and hard news are inevitably going to show up on Threads — they have on Instagram as well to some extent — but we’re not going to do anything to encourage those verticals.”
Notably, certain words have been blocked from Threads’ search, with The Washington Post reporting that words like “coronavirus,” “vaccines,” “vaccination,” “sex,” “porn,” “nude,” and “gore” are intentionally blocked. Threads still doesn’t have its own community guidelines; instead, the company says Threads is “specifically part of Instagram, so the Instagram Terms of Use and the Instagram Community Guidelines” apply to Threads too.
Hackers Had Access to Patient Information for Months in New York Hospital Cyberattack, Officials Say
A group of New York hospitals and health care centers was targeted in a cyberattack that allowed hackers to access patients’ private information for two months, officials said this week. The attack targeted three separate facilities in the Hudson Valley — HealthAlliance Hospital, Margaretville Hospital and Mountainside Residential Care Center — which all operate under the same parent company within the hospital conglomerate Westchester Medical Center Health Network.
HealthAlliance, Inc., the corporate parent of the three facilities, said Monday that it “began mailing notification letters to patients whose information may have been involved in a data security incident.” The security issue was acknowledged publicly in October by the broader Westchester health network, but few details were released about the nature or the extent of the breach as an investigation got underway. Now, officials say the probe involving the New York State Department of Health, local authorities in the Hudson Valley, the FBI and a third-party cybersecurity firm determined that hackers were able to access the parent company’s information technology network from Aug. 18 to Oct. 13.
“While in our IT network, the unauthorized party accessed and acquired files that contain patient information,” HealthAlliance said in a statement. “The information involved varied by patient, but may have included names, addresses, dates of birth, Social Security numbers, diagnoses, lab results, medications, and other treatment information, health insurance information, provider names, dates of treatment, and/or financial information.”