BMJ Report Warns the Gates Foundation’s Foray Into ‘AI for Global Health’ Will Produce Far More Harm Than Good
The Gates Foundation’s “AI initiative” is getting scrutinized, and criticized, from a variety of points of view. And now a trio of academics has offered their take on the controversial push into using AI to supposedly advance “global health.”
What seems to have prompted this particular reaction — authored by researchers from the University of Vermont, Oxford University, and the University of Cape Town — was an announcement in early August.
The Gates Foundation announced at that time that it was launching a new $5 million scheme to bankroll 48 projects tasked with deploying AI large language models (LLMs) “in low-income and middle-income countries to improve the livelihood and well-being of communities globally.”
Every time the Foundation presents itself as the “benefactor” of “low or middle-income countries” (that is, poorer nations with little recourse to protect themselves from many things, including Bill Gates’ apparent “savior” complex), observers critical of the organization and its founder’s “experiments” are left feeling somewhat, if not deeply, ill at ease. And it has been many times now.
You wouldn’t necessarily expect scientists to cut this deep, but here they are: “At the end of the day, the hard, sharp edges of capital, command and control are in the hands of a very few entities and individuals, notably including the conflictingly interested Microsoft corporation itself, which has invested more than U.S. $10 billion in OpenAI.”
Meta Failed to Act to Protect Teens, Second Whistleblower Testifies
A second Meta whistleblower testified before a Senate subcommittee on Tuesday, this time describing his fruitless efforts to flag the extent of harmful effects its platforms could have on teens to top leadership at the company.
Arturo Bejar, a Facebook engineering director from 2009 to 2015 who later worked as a consultant at Instagram from 2019 to 2021, testified before the Senate Judiciary Subcommittee on Privacy, Technology and Law that top Meta officials did not do enough to stem the harm its youngest users experienced on the platforms.
Lawmakers on both sides of the aisle blamed tech lobbying for Congress’ failure to pass laws protecting kids online. Despite broad support within Senate committees of bills that aim to protect kids on the internet, they have ultimately sat dormant, waiting for a vote on the Senate floor or for action in the House.
Bejar’s appearance shows the frustration among lawmakers who believe large tech companies operate with largely unchecked power. Meta leadership was aware of prevalent harms to its youngest users but declined to take adequate action to address it, Bejar told lawmakers on Tuesday.
‘Secret Reports’ Reveal How Government Worked to ‘Censor Americans’ Prior to 2020 Election, Jim Jordan Says
Officials at the Department of Homeland Security (DHS) assisted in the creation of a “disinformation” group at Stanford University that worked to “censor” the speech of Americans prior to the 2020 presidential election, according to a number of communications outlined in a report by the House Judiciary Committee.
Detailed in the House panel’s 103-page staff interim report, the emails and internal communications showed how the group, identified as the Election Integrity Partnership (EIP), worked with DHS’ Cybersecurity and Infrastructure Security Agency (CISA) to alert, suppress and remove certain online speech in coordination with big tech companies.
One such email — sent July 31, 2020, by a top director at the Atlantic Council’s Digital Forensic Research Lab, an EIP partner — described the CISA’s role in the censorship effort.
According to the report, which Judiciary Committee Chair Jim Jordan, R-Ohio, highlighted in a post to X, the communications showed how “the federal government and universities pressured social media companies to censor true information, jokes, and political opinions.”
Lawmakers Say FBI Can Keep Its Prized Surveillance Tool, but It’ll Need a Warrant
A rare bipartisan coalition of lawmakers has teamed up to propose major privacy reforms that could fundamentally rein in the U.S. government’s most powerful domestic surveillance tools.
If passed, the newly proposed Government Surveillance Reform Act would force law enforcement agencies to obtain a warrant before searching data collected under Section 702 of the Foreign Intelligence Surveillance Act (FISA).
Critics say the current lack of a warrant requirement for accessing the 702 database serves as an unconstitutional end-run around Americans’ Fourth Amendment protections. The proposed legislation comes toward the tail end of a tense, year-long battle over the future of the highly controversial surveillance authority, which is set to expire at the end of this year.
23andMe Data Theft Prompts DNA Testing Companies to Switch On 2FA by Default
DNA testing and genealogy companies are stepping up user account security by mandating the use of two-factor authentication, following the theft of millions of user records from DNA genetic testing giant 23andMe.
Ancestry, MyHeritage, and 23andMe have begun notifying customers that their accounts will use two-factor authentication (2FA) by default, a security feature in which users must enter an additional verification code sent to a device they own to confirm that the person logging in is the true account holder.
Ancestry, MyHeritage and 23andMe account for more than 100 million users.
The move to require 2FA by default comes after 23andMe said in October that it was investigating a hacker’s claimed theft of millions of account records, including those of one million users of Ashkenazi Jewish descent and 100,000 Chinese users.
What Is Bone Smashing? The Dangerous TikTok Beauty Trend Surgeons Are Warning Against
The latest TikTok beauty trend encourages young people to strike themselves in the face with a blunt object to cause fractures, in the hope of achieving a sharper jawline or more attractive features.
Plastic and reconstructive surgeon Dr. Ben Schultz of LifeBridge Health said the increasingly controversial trend rests on the false belief that bones grow stronger when they heal. “When I heard about it, I was like ‘this is the craziest thing ever since the Tide pods,’” he said.
Instead, he says the practice could lead to serious injury or irreversible damage. A wave of videos demonstrating how to do it has amassed more than 250 million views on TikTok.
Big Tech to Face Tougher Rules on Targeted Political Ads in EU
Big Tech firms will face new European Union rules requiring them to clearly label political advertising on their platforms, disclosing who paid for it, how much was spent, and which elections are being targeted, ahead of important votes in the bloc next year.
The new political advertising rules, which were agreed upon by EU countries and European Parliament lawmakers late on Monday, will force social media groups such as Alphabet’s Google (GOOGL.O), and Meta Platforms to be more transparent and accountable.
Violations of the new EU rules can be punished with fines of up to 6% of an ad provider’s annual turnover.