Generative AI Is the Newest Tool in the Dictator’s Handbook
A new Freedom House report shared with Gizmodo found that political leaders in at least 16 countries deployed deepfakes over the past year to “sow doubt, smear opponents, or influence public debate.” Though a handful of those examples occurred in less developed countries in Sub-Saharan Africa and Southwest Asia, at least two originated in the United States.
“AI can serve as an amplifier of digital repression, making censorship, surveillance, and the creation and spread of disinformation easier, faster, cheaper, and more effective,” Freedom House noted in its “Freedom on the Net” report.
The report details numerous troubling ways that advancing AI tools are being used to amplify political repression around the globe. Governments in at least 22 of the 70 countries analyzed in the report had legal frameworks mandating that social media companies deploy AI to hunt down and remove disfavored political, social, and religious speech.
Those frameworks go beyond the standard content moderation policies at major tech platforms. In these countries, Freedom House argues, the laws in place compel companies to remove political, social, or religious content that “should be protected under free expression standards within international human rights laws.” Aside from making censorship more efficient, the use of AI to remove political content also gives states cover to conceal their role in it.
Federal Appeals Court Extends Limits on Biden Administration Communications With Social Media Companies to Top U.S. Cybersecurity Agency
A federal appeals court has expanded the scope of a ruling that limits the Biden administration’s communications with social media companies, saying it now also applies to a top U.S. cybersecurity agency.
The ruling last month from the conservative 5th U.S. Circuit Court of Appeals severely limits the ability of the White House, the surgeon general, the Centers for Disease Control and Prevention, and the FBI to communicate with social media companies about content related to COVID-19 and elections that the government views as misinformation.
The preliminary injunction had been on pause, and a recent procedural snafu over a request from the plaintiffs to broaden its scope led the court on Tuesday to withdraw its earlier opinion and issue a new one that now includes the U.S. Cybersecurity and Infrastructure Security Agency (CISA). That agency is charged with protecting non-military networks from hacking and other homeland security threats.
As in last month’s ruling, in which the appeals court said the federal government had “likely violated the First Amendment” when it leaned on platforms to moderate some content, the new ruling finds that CISA likely violated the Constitution as well.
The Founder Who Sold the Startup That Would Become Amazon’s Alexa Called Big Tech ‘Apex Predators’ and Says That’s Why Everyone Is Scared of AI
So many people are afraid of AI not because of the technology itself but because of who is developing it: Big Tech. “Do you know why you’re all afraid?” AI pioneer Igor Jablokov asked the audience at Fortune’s CEO Initiative in Washington, D.C., during a discussion about responsible development of AI. “It’s because Big Tech are apex predators.”
These companies, Jablokov elaborated in a phone call with Fortune, are using their positions of strength, whether financial or political (through lobbying), to further cement their positions in the marketplace and keep new entrants out. The fear among some of these companies, he says, is that AI will upend the industry and demote some of them to the next batch of yesterday’s tech companies, like AOL, Motorola, or Yahoo.
All of this concentrated power ends up harming consumers too. “Eventually there’s only one source of technology,” he told Fortune. “There’s no control over how many ads they see, product quality ends up faltering. It’s all the monopolistic things, and monopolies are unhealthy because there’s no competition, which drives prices up and your choices down.”
On Tuesday, the day of Jablokov’s comments, the Federal Trade Commission released a report outlining the many anxieties consumers felt about AI. “The bottom line?” the report reads. “Consumers are voicing concerns about harms related to AI—and their concerns span the technology’s lifecycle, from how it’s built to how it’s applied by third parties in the real world.”
Lawsuit: Man Claims He Was Improperly Arrested Because of Misuse of Facial Recognition Technology
A Black man was wrongfully arrested and held for nearly a week in jail because of the alleged misuse of facial recognition technology, according to a civil lawsuit filed against the arresting police officers.
Randal Quran Reid, 29, was driving to his mother’s home outside of Atlanta the day after Thanksgiving when police pulled him over, according to Reid.
Officers of the Jefferson Parish Sheriff’s Office used facial recognition technology to identify Reid as a suspect who was wanted for using stolen credit cards to buy approximately $15,000 worth of designer purses in Jefferson and East Baton Rouge Parishes, according to the complaint filed by Reid.
“[The facial recognition technology] spit out three names: Quran plus two individuals,” Gary Andrews, Reid’s lawyer and senior attorney at The Cochran Firm in Atlanta, told ABC News. “It is our belief that the detective in this case took those names … and just sought arrest warrants without doing any other investigation, without doing anything else to determine whether or not Quran was actually the individual that was in the store video.”
Amazon Allegedly Used Secret Algorithm to Raise Prices on Consumers, FTC Lawsuit Reveals
Amazon, the behemoth online retailer, used a secret algorithm called “Project Nessie” to test how much it could raise prices in a way that competitors would follow, according to a lawsuit filed by the Federal Trade Commission.
The algorithm tracked whether Amazon’s power in e-commerce would lead competitors to move their prices in step; in instances where competitors didn’t follow, it would return Amazon’s prices back down, according to The Wall Street Journal.
The algorithm, which is no longer in use, brought the company $1 billion in revenue, sources told the Journal.
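As described, Project Nessie amounts to a price-probing feedback loop: raise a price, observe whether rivals’ repricing systems follow, and revert if they don’t. Below is a minimal sketch of that loop in Python; every name, threshold, and data structure is hypothetical, invented purely for illustration, and none of it reflects Amazon’s actual code.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical sketch of the pricing loop the FTC complaint describes:
# raise a price, check whether competitors follow, revert if they don't.

@dataclass
class Listing:
    name: str
    price: float

def probe_price(listing: Listing, rival_prices: list,
                raise_pct: float = 0.05, tolerance: float = 0.02) -> float:
    """Raise the price by raise_pct, then keep the increase only if the
    median rival price has risen to within `tolerance` of the new price."""
    old_price = listing.price
    listing.price = round(old_price * (1 + raise_pct), 2)

    # In practice this check would happen after a delay, once rivals'
    # automated repricers have had time to react to the new price.
    if median(rival_prices) < listing.price * (1 - tolerance):
        listing.price = old_price  # rivals didn't follow: back down
    return listing.price

# Toy example: rivals matched the increase, so the higher price sticks.
item = Listing("designer handbag", price=100.00)
print(probe_price(item, rival_prices=[104.50, 105.00, 106.00]))  # 105.0
```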
What Happened When Toxic Social Media Came for My Daughter
Eating disorders predate the internet, and our culture is saturated with body shaming and unhelpful images. But as one specialist explained to me, because of social media, the degree and depth of warped information that increasingly young children are consuming these days is unparalleled.
The Center for Countering Digital Hate conducted research showing that when its test accounts on TikTok paused briefly on and “liked” content about mental health or body image, they were fed content about suicide within 2.6 minutes and content about eating disorders within 8 minutes. The Tech Transparency Project’s “Thinstagram” research found that Instagram’s algorithm amplifies and recommends images of dangerously thin women and the accounts of “thinfluencers” and anorexia “coaches.”
Worse, the platforms are aware of this and profit from it, as the whistleblower Frances Haugen, formerly of Facebook, helped expose. The company’s own research shows that its platforms harm children.
We Know How to Regulate New Drugs and Medical Devices — but We’re About to Let Healthcare AI Run Amok
There’s a great deal of buzz around artificial intelligence and its potential to transform industries, and healthcare ranks high on that list. Applied properly, AI could dramatically improve patient outcomes by enabling earlier detection and diagnosis of cancer, accelerating the discovery of more effective targeted therapies, predicting disease progression, and creating personalized treatment plans.
Alongside this exciting potential lies an inconvenient truth: The data used to train medical AI models reflects built-in biases and inequities that have long plagued the U.S. health system and often lacks critical information from underrepresented communities. Left unchecked, these biases will magnify inequities and lead to lives lost due to socioeconomic status, race, ethnicity, religion, gender, disability, or sexual orientation.
The consequences of flawed algorithms can be deadly. A recent study examined an AI-based tool meant to promote early detection of sepsis, an illness that kills about 270,000 people in the U.S. each year. The tool, deployed in more than 170 hospitals and health systems, failed to identify 67% of the patients who developed sepsis, while generating false sepsis alerts for thousands of others. The source of the flawed detection, researchers found, was that the tool was being used in new geographies, with patient demographics different from those it had been trained on. The conclusion: AI tools do not perform the same across geographies and demographics, because patient lifestyles, incidence of disease, and access to diagnostics and treatments vary.
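That takeaway, that a single aggregate metric can hide failures at new sites, points to a basic safeguard: evaluate the model’s sensitivity separately for each site or demographic group. Here is a minimal sketch of that check, with hypothetical column names and toy data standing in for real patient records; it illustrates the general technique, not the study’s actual methodology.

```python
import pandas as pd

# Sketch of a per-group evaluation: among true sepsis cases, what
# fraction did the model flag, broken out by site? All column names
# and values here are hypothetical.

def sensitivity_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Sensitivity (recall) per group; aggregate numbers can mask gaps."""
    cases = df[df["has_sepsis"] == 1]  # restrict to true sepsis cases
    return cases.groupby(group_col)["model_flagged"].mean()

records = pd.DataFrame({
    "hospital":      ["A", "A", "A", "B", "B", "B"],
    "has_sepsis":    [1,   1,   1,   1,   1,   1],
    "model_flagged": [1,   1,   0,   0,   0,   1],
})
print(sensitivity_by_group(records, "hospital"))
# A (resembling the training data): 0.67; B (a new site): 0.33
```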
Particularly worrisome is the fact that AI-powered chatbots may rely on large language models (LLMs) trained on data that has not been screened for accuracy. False information, bad advice to patients, and harmful medical outcomes can result.
AI Chatbots Are Learning to Spout Authoritarian Propaganda
When OpenAI, Meta, Google, and Anthropic made their chatbots available around the world last year, millions of people initially used them to evade government censorship. For the 70% of the world’s internet users who live in places where the state has blocked major social media platforms, independent news sites, or content about human rights and the LGBTQ community, these bots provided access to unfiltered information that can shape a person’s view of their identity, community, and government.
This has not been lost on the world’s authoritarian regimes, which are rapidly figuring out how to use chatbots as a new frontier for online censorship.
The hope that chatbots can help people evade online censorship echoes early promises that social media platforms would help people circumvent state-controlled offline media. Though few governments were able to clamp down on social media at first, some quickly adapted by blocking platforms, mandating that they filter out critical speech, or propping up state-aligned alternatives.
We can expect more of the same as chatbots become increasingly ubiquitous. People will need to be clear-eyed about how these emerging tools can be harnessed to reinforce censorship, and they will need to work together to find an effective response if they hope to turn the tide against declining internet freedom.
Children Were Failed by Pandemic Policies, COVID Inquiry Told
Children were disproportionately affected by pandemic policies, with their voices going unheard and no one in government made responsible for ensuring their legal rights were met, the COVID inquiry has heard.
Questions about how lockdown policies affected young people “weren’t even asked”, said the barrister Jennifer Twite, giving evidence on behalf of Save the Children U.K., Just for Kids Law and the Children’s Rights Alliance.
Children were at the back of the queue when the government made its biggest decisions about lockdown and reopening the economy, said Twite.
Prioritization of venues meant that pubs, restaurants and sports clubs were allowed to reopen before schools, nurseries and other places for children’s activities.