By Brett Wilkins
Amid the rapid development and deployment of artificial intelligence (AI) systems, a pair of Democratic U.S. lawmakers on Wednesday led more than a dozen of their colleagues in urging President Joe Biden to issue an executive order making the White House’s “AI Bill of Rights” official federal policy.
Sen. Ed Markey (D-Mass.) and Congressional Progressive Caucus Chair Pramila Jayapal (D-Wash.) spearheaded a letter to Biden asserting that “the federal government’s commitment to the AI Bill of Rights would show that fundamental rights will not take a back seat in the AI era.”
“By turning the AI Bill of Rights from a nonbinding statement of principles into federal policy, your administration would send a clear message to both private actors and federal regulators: AI systems must be developed with guardrails,” the letter states.
“Doing so would also strengthen your administration’s efforts to advance racial equity and support underserved communities, building on important work from previous executive orders.”
When AI systems create new risks and exacerbate existing biases, we need guardrails to keep AI developers in check. That’s why @RepJayapal and I are urging @POTUS to turn the AI Bill of Rights from a non-binding statement of principles to federal policy with an executive order.
— Ed Markey (@SenMarkey) October 11, 2023
The lawmakers asserted that implementing the AI Bill of Rights is “a crucial step in developing an ethical framework for the federal government’s role” in AI.
They stressed that five principles — “safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback” — must be the core of the policy.
The letter further argues that “implementing these principles will not only protect communities harmed by these technologies, it will also help inform ongoing policy conversations in Congress and show clear leadership on the global stage.”
In July, the White House secured voluntary risk management commitments from seven leading AI companies, a move praised by campaigners and experts — even as they stressed the need for further action from Congress and federal regulators.
Earlier this year, Markey and Rep. Doris Matsui (D-Calif.) reintroduced the Algorithmic Justice and Online Platform Transparency Act, which would prohibit Big Tech from using black-box algorithms that drive discrimination and inequality.
As AI advances and becomes more frequently used, we need more regulation and oversight of how this technology is employed.
— Rep. Pramila Jayapal (@RepJayapal) October 11, 2023
In March, Jayapal, Markey and Sen. Jeff Merkley (D-Ore.) led the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act, which would bar the government from using facial recognition and other biometric technologies that, the lawmakers said, “pose significant privacy and civil liberties issues and disproportionately harm marginalized communities.”
Wednesday’s letter came as the consumer advocacy group Public Citizen urged the Federal Election Commission to affirm that so-called “deepfakes” in U.S. political campaign communications are illegal under existing law proscribing fraudulent misrepresentation.
The lawmakers’ call also comes just weeks after Public Citizen warned that Big Tech is creating and deploying AI systems “that deceptively mimic human behavior to aggressively sell their products and services, dispense dubious medical and mental health advice, and trap people in psychologically dependent, potentially toxic relationships with machines.”
Originally published by Common Dreams.
Brett Wilkins is a staff writer for Common Dreams.