The New Era of Social Media Looks as Bad for Privacy as the Last One
When Elon Musk took over Twitter in October 2022, experts warned that his proposed changes — including less content moderation and a subscription-based verification system — would lead to an exodus of users and advertisers. A year later, those predictions have largely been borne out. Advertising revenue on the platform has declined 55% since Musk’s takeover, and the number of daily active users fell from 140 million to 121 million over the same period, according to third-party analyses.
As users moved to other online spaces, the past year could have marked a moment for other social platforms to change the way they collect and protect user data. “Unfortunately, it just feels like no matter what their interest or cultural tone is from the outset of founding their company, it’s just not enough to move an entire field further from a maximalist, voracious approach to our data,” says Jenna Ruddock, policy counsel at Free Press, a nonprofit media watchdog organization, and a lead author of a new report examining Bluesky, Mastodon, and Meta’s Threads, all of which have jockeyed to fill the void left by Twitter, which is now named X.
Companies like Google, X, and Meta collect vast amounts of user data, in part to better understand and improve their platforms but largely to sell targeted advertising. But the collection of sensitive information about users’ race, ethnicity, sexuality, or other identifiers can put people at risk.
Even for users who want to opt out of ravenous data collection, privacy policies remain complicated and vague, and many users have neither the time nor the grasp of legalese to parse them. At best, says Nora Benavidez, director of digital justice and civil rights at Free Press, users can figure out what data won’t be collected, “but either way, the onus is really on the users to sift through policies, trying to make sense of what’s really happening with their data,” she says. “I worry these corporate practices and policies are nefarious enough and befuddling enough that people really don’t understand the stakes.”
Trust Us With AI, Say the Big Tech Titans. That’s What the Banks Said Before the 2008 Crisis
When the great and the good of Silicon Valley pitched up in Buckinghamshire for Rishi Sunak’s AI safety summit, they came with a simple message: trust us with this new technology. Don’t be tempted to stifle innovation by heavy-handed rules and restrictions. Self-regulation works just fine.
To which the simple response should be: remember 2008, when light-touch supervision allowed banks to indulge in an orgy of speculation that took the global financial system to the brink of collapse.
In the years leading up to the crisis, banks had developed products that were both lucrative and — as it turned out — highly toxic. The drive for profits trumped prudence. Only in retrospect were the dangers of letting the banks mark their own homework recognized. Financial regulation was subsequently tightened, but only after a deep recession from which the global economy has never fully recovered.
Sunak should learn from that experience. The focus at the Bletchley Park summit has been on the existential threat posed by AI: the risk that, left unchecked, the machines could lead to human extinction. That’s a worthy discussion point, especially given the rapid advances in creating super-intelligent machines. Elon Musk might be right when he says AI poses a “civilizational risk.”
But, as the TUC and others pointed out earlier this week, the focus on the longer-term challenges should not come at the expense of responding to a number of more immediate issues. These include the likely impact of AI on jobs, the increasing market dominance of big tech, and the use of AI to spread disinformation.
YouTube Limits Harmful Repetitive Content for Teens
In a move designed to prevent teenagers from repeatedly watching potentially harmful videos on YouTube, the streaming platform announced Thursday that it will limit repeated recommendations to U.S. teens of videos featuring certain themes.
Currently, YouTube is limiting repetitive exposure to videos that compare physical features and favor some types over others, idealize specific fitness levels or body weights, or depict social aggression in the form of non-contact fights and intimidation. While these videos don’t violate the platform’s policies, repeated viewings could be harmful to some young people. YouTube already prohibits videos of fights between minors.
James Beser, director of product management for YouTube Kids and Youth, said that the company’s youth and family advisory committee, which comprises independent experts in child development and digital learning and media, helped YouTube identify categories of content that “may be innocuous as a single video, but could be problematic for some teens if viewed in repetition.”
The new policy comes amid heavy scrutiny and criticism of the way social media platforms can influence youth mental health and well-being.
Conservative Nebraska Lawmakers Push Study to Question Pandemic-Era Mask, Vaccine Requirements
It didn’t take long for conservative Nebraska lawmakers to get to the point of a committee hearing held Wednesday to examine the effectiveness of public health safety policies from the height of the COVID-19 pandemic.
Following a brief introduction, Nebraska Nurses Association President Linda Hardy testified for several minutes about the toll the pandemic has taken on the state’s nursing ranks. The number of nurses dropped by nearly 2,600 from the end of 2019 to the end of 2022, said Hardy, a registered nurse for more than 40 years. She pointed to a study by the Nebraska Center for Nursing that showed nurses were worried about low pay, overscheduling, understaffing and fear of catching or infecting family with the potentially deadly virus.
“How many nurses quit because they were forced into vaccination?” asked Sen. Brian Hardin, a business consultant from Gering. When Hardy said she hadn’t heard of nurses leaving the profession over vaccination requirements, Hardin shot back. “Really?” he asked. “Because I talked to some nurses in my district who retired exactly because of that.”
Questions about masks, mandatory shutdowns and the effectiveness of COVID vaccines came up time and again during the hearing. Those invited to testify included members of Nebraska medical organizations and government emergency response agencies.
U.S. Hospital Groups Sue Biden Administration to Block Ban on Web Trackers
The biggest U.S. hospital lobbying group on Thursday sued the Biden administration over new guidance barring hospitals and other medical providers from using trackers to monitor users on their websites.
The American Hospital Association (AHA), along with the Texas Hospital Association and two nonprofit Texas health systems, filed a lawsuit against the U.S. Department of Health and Human Services (HHS) in federal court in Fort Worth, Texas. The lawsuit accuses the agency of overstepping its authority when it issued the guidance in December.
The guidance warns healthcare providers that allowing a third-party technology company like Google or Meta to collect and analyze internet protocol (IP) addresses and other information from visitors to their public websites or apps could be a violation of the Health Insurance Portability and Accountability Act (HIPAA). Federal law bans the public disclosure of individuals’ private health information to protect them against discrimination, stigma or other negative consequences.
Court records show several hospitals have been hit with proposed class actions that cite the guidance, accusing them of mishandling personal health information through the use of these trackers.
British PM Rishi Sunak Secures ‘Landmark’ Deal on AI Testing
British Prime Minister Rishi Sunak on Thursday said that under a new agreement, “like-minded governments” would be able to test leading tech companies’ AI models before they are released.
Closing out the two-day artificial intelligence summit in Bletchley Park on Thursday, Sunak announced the agreement signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, Korea, Singapore, the U.S. and the U.K. to test leading companies’ AI models.
“Until now the only people testing the safety of new AI models have been the very companies developing it. That must change,” said Sunak to a room full of journalists.
Sunak said the companies — Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI and OpenAI — had agreed to “deepen” the access already given to his Frontier AI Taskforce, the forerunner of the new institute. That access is currently voluntary, though under its executive order the U.S. government has imposed binding requirements on companies to hand over certain safety information.