The Federal Trade Commission (FTC) on Thursday launched an investigation into OpenAI, the developer of the popular ChatGPT artificial intelligence (AI) platform, citing concerns about privacy violations, data collection practices and publishing false information about individuals.
The FTC investigation seeks to determine whether OpenAI violated consumer protection laws.
The FTC informed OpenAI about the impending investigation in a 20-page letter to the company earlier this week, in what The New York Times described as “the most potent regulatory threat to date to OpenAI’s business in the United States.”
The letter asks OpenAI to answer a series of questions about its business and security practices and provide numerous documents and other internal company details. The company was given 14 days to respond.
Sam Altman, CEO of OpenAI, tweeted his disappointment at the news, but said his company would work with the agency.
it is very disappointing to see the FTC’s request start with a leak and does not help build trust.
that said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. of course we will work with the FTC.
— Sam Altman (@sama) July 13, 2023
According to the Washington Post, “If the FTC finds that a company violates consumer protection laws, it can levy fines or put a business under a consent decree, which can dictate how the company handles data.”
Eleanor Fox, LL.B., a law professor emeritus at New York University and antitrust expert, told The Defender the FTC’s action is a positive step:
“The FTC’s opening an investigation into OpenAI seems to be an important move and just what the FTC should be doing. Most agree it should be regulated — including OpenAI itself.
“The consumer and competition aspects are very related. The first cut here is logically consumer protection.”
However, other legal observers and Big Tech analysts said the investigation may take many months and that it remains far from certain OpenAI will ultimately face any sanctions.
W. Scott McCollough, an Austin-based internet and telecommunications lawyer, told The Defender:
“I don’t trust the FTC to do a whole lot of good. They usually just end up blessing intrusive or harmful business models by tinkering around the edges or requiring disclosure of the harmful practice when the entire enterprise should be eliminated.
“That is how the whole surveillance capitalism thing got started a while back. And now it is too late.”
Technology expert Michael Rectenwald, Ph.D., author of “Google Archipelago: The Digital Gulag and the Simulation of Freedom,” raised concerns about political and ideological bias in any proposed regulation of the tech industry:
“AI poses dangers to the public mostly because it is created by monopolies or near-monopolies. But government regulation is not the answer. If leftist regulators and legislators have their way, AI will curate reality using ‘woke’ criteria and will make reality otherwise disappear.
“The answer is greater competition. Competition would force AI producers to safeguard privacy, protect individuals from automated libel, and represent reality rather than leftist fantasy fiction.”
The FTC’s letter came the same week the agency’s chairperson, Lina Khan, faced questions from the Republican-led House Judiciary Committee over several failed antitrust and regulatory actions against Big Tech firms.
Allegations of ‘deceptive practices,’ ‘reputational harm’ to individuals
The FTC investigation comes less than four months after the Center for AI and Digital Policy (CAIDP), a nonprofit advocacy group, filed a complaint with the FTC urging an investigation of OpenAI.
CAIDP’s complaint stated OpenAI failed to meet the FTC guidelines that AI should be “transparent, explainable, fair, and empirically sound while fostering accountability,” and said that OpenAI itself acknowledged a number of potential dangers, including “disinformation and influence operations” and “proliferation of conventional and unconventional weapons.”
CAIDP asked the FTC to block OpenAI from releasing new commercial editions of its ChatGPT platform “until guardrails are established” to address issues concerning bias, disinformation and security. The complaint drew on statements by “AI experts” calling for a “pause on large language models” used in AI.
CAIDP escalated its complaint on July 7, accusing OpenAI of “unfair and deceptive practices” and citing “mounting concerns over the ethical implications and the regulatory needs” of AI products such as ChatGPT.
According to the Post, in addition to the CAIDP complaint, there have been other allegations of harm against OpenAI and ChatGPT.
The Post reported that Georgia radio talk show host Mark Walters sued OpenAI for defamation, alleging the chatbot falsely claimed he had defrauded and embezzled funds from the Second Amendment Foundation.
In another instance, ChatGPT said a lawyer “made sexually suggestive comments and attempted to touch a student on a class trip to Alaska, citing an article that it said had appeared in The Washington Post.”
However, “no such article existed, the class trip never happened and the lawyer said he was never accused of harassing a student,” according to the Post.
Accordingly, the FTC asked OpenAI to provide details about all complaints it received about its products making “false, misleading, disparaging or harmful” statements about people that may have resulted in “reputational harm.”
Security issues probed
The FTC will also examine whether OpenAI “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers.”
The FTC is seeking records relating to a March data breach in which a bug in the ChatGPT system exposed certain users’ chat histories, personally identifiable data and financial information.
At the time, OpenAI claimed only an “extremely low” number of users were affected.
Do ChatGPT users understand the limits of AI technology?
The FTC is requesting OpenAI turn over any research, surveys or test results assessing how well consumers understand “the accuracy or reliability of outputs” generated by its AI tools, and about the data OpenAI uses to train its products.
The Post notes that products like ChatGPT “mimic humanlike speech by ingesting text, mostly scraped from Wikipedia, Scribd and other sites across the open web,” but that these models also have a “tendency to ‘hallucinate’” by “making up answers when the models don’t know the answer to a question.”
McCollough told The Defender the data-scraping practices behind such tools pose a threat to the privacy of all online users:
“Tech companies often assert the right to scrape everything on the internet, even personal or copyrighted material. So someone who thinks they have posted something private or not generally published may get it appropriated.”
Lastly, the FTC is asking for details about how OpenAI advertises its products, how it assesses the safety and reliability of new products and how often it has delayed a new product due to safety concerns.
According to the Times, “The investigation could force OpenAI to reveal its methods around building ChatGPT and the data sources it uses to build its AI systems,” adding that “While OpenAI had long been fairly open about such information, it more recently has said little about where the data for its AI systems come from.”
Unclear whether OpenAI will face sanctions
Despite numerous allegations against OpenAI, the outcome of this investigation is far from certain. According to the Times, “The FTC’s investigation into OpenAI can take many months, and it is unclear if it will lead to any action from the agency. Such investigations are private and often include depositions of top corporate executives.”
Megan Gray, a former staff member of the FTC’s Bureau of Consumer Protection, told the Times, “The FTC doesn’t have the staff with technical expertise to evaluate the responses they will get and to see how OpenAI may try to shade the truth.”
Industry analysts questioned the FTC’s authority to take action against OpenAI. Adam Kovacevich, founder and CEO of the tech industry coalition Chamber of Progress, told the Post the agency may lack the authority to “police defamation or the contents of ChatGPT’s results.”
Nevertheless, “The FTC investigation poses the first major U.S. regulatory threat to OpenAI, one of the highest-profile AI companies, and signals that the technology may increasingly come under scrutiny as people, businesses and governments use more AI-powered products,” stated the Times.
Recent concerns over AI technology
In June, Sen. Chuck Schumer (D-N.Y.) called for “comprehensive legislation” to create “a new foundation for AI policy,” stating:
“Lower-skilled tasks will keep falling victim to automation at a faster and faster rate — displacing millions of low-income workers, many from communities of color. Trucking, manufacturing, energy production could be next. And rest assured, those with college educations and advanced degrees won’t be safe either.”
Altman himself said the AI industry needs to be regulated. In his testimony before Congress in May, he said, “I think if this technology goes wrong, it can go quite wrong. We want to work with the government to prevent that from happening.”
At the same hearing, Altman said that it would be important for “government to figure out how we want to mitigate” potential job losses that may result from the broad deployment of AI technology.
Regulators in other countries have already taken action against AI companies
Some countries are already taking steps to regulate AI.
In March, Italy temporarily stopped ChatGPT’s operations due to data privacy concerns.
In June, Google was forced to postpone the launch of its AI tool Bard in Ireland due to similar concerns.
The EU, Brazil, China, Japan, Canada, India and Switzerland are also developing regulations for the AI industry, according to law firm Taylor Wessing.
Such measures by other countries’ regulators may have prompted the FTC to take action. According to the Times, “The FTC is acting on A.I. with notable speed, opening an investigation less than a year after OpenAI introduced ChatGPT.”
In April, Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said the agency was prepared to be “nimble” in responding to emerging threats posed by AI, adding that while the FTC “welcomes innovation … being innovative is not a license to be reckless.”
“We are prepared to use all our tools, including enforcement, to challenge harmful practices in this area,” Levine said.
Previously, in a February blog post, the FTC warned AI companies that “Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter.”
FTC Chair Khan, who has earned a reputation as a Big Tech skeptic, addressed AI during a Biden administration press conference in April, during which she said, “There is no AI exemption to the laws on the books,” according to the Post.
In 2017, Khan published a legal paper, “Amazon’s Antitrust Paradox,” in the Yale Law Journal, characterizing Amazon as a modern monopolist whose market power required “addressing.” In the same paper, she argued for restoring “traditional antitrust and competition policy” in the regulation of Big Tech.
In a May 3 guest essay published in the Times, Khan said that although AI tools “are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market,” adding that “While the technology is moving swiftly, we can already see several risks.”
Khan questioned over FTC failures
At Thursday’s House Judiciary Committee hearing, Khan repeated calls for increased scrutiny of the AI industry.
“ChatGPT and some of these other services are being fed a huge trove of data. There are no checks on what type of data is being inserted into these companies,” she said at the hearing, adding that there had been reports of people’s “sensitive information” appearing in results delivered by the AI tool.
In recent years, the FTC has taken several enforcement actions against Big Tech firms for deceptive consumer practices.
In 2019, the agency approved a $5 billion settlement with Facebook following a privacy probe; in 2022, it fined Twitter $150 million for deceptive data collection practices; and in May, it fined Amazon $25 million over child privacy concerns.
In February, California-based online pharmacy GoodRx agreed to pay $1.5 million in civil penalties following an FTC investigation that found the company shared users’ health data with firms such as Facebook and Google.
However, at Thursday’s hearing, Khan was grilled over her agency’s failures, including a federal judge’s rejection of the FTC’s attempt to block Microsoft’s $70 billion acquisition of Activision. Earlier in the year, the FTC suffered another court defeat when a judge denied its bid to block Meta’s acquisition of virtual reality app company Within.
Rep. Jim Jordan (R-Ohio), chairman of the House Judiciary Committee, said “the FTC has not fully complied with a single request for documents from this committee, and because of her mismanagement, not even her own staff is impressed with Chairman Khan’s leadership,” citing internal FTC surveys showing growing staff dissatisfaction.
And Rep. Kevin Kiley (R-Calif.) asked Khan during Thursday’s hearing, “You are now 0 for 4 in merger trials. Why are you losing so much?”
Unanswered questions about risks AI poses to the social fabric
What appeared to be missing from both the FTC’s rhetoric and from the questioning Khan received at the hearing, however, were the broader risks AI may pose to the social fabric.
According to analyst and activist Charles Hugh Smith, “AI eliminates jobs which are not replaced by a massive wave of new jobs, a process known as technological job displacement.”
Smith wrote he is “skeptical of the claims that tens of millions of jobs will be lost due to LLM [large language model] or machine-learning AI” and that such AI tools may “boost the productivity of skilled human workers rather than entirely replace” them.
However, responding to claims that new technologies may even create many more jobs than they eliminate, Smith wrote, “The evidence is actually not quite so clear that this new job creation is predictable.”
As a result, “We may find that AI delivers the worst of both worlds: It slashes profits as everyone loads up on the higher costs of AI but without any enduring competitive advantage that would support higher prices and profits, and it displaces wide swaths of human labor that are not replaced with new sectors generating tens of millions of new jobs.”
“There is a feedback loop to job losses that aren’t replaced,” Smith added, citing economist John Maynard Keynes.
“When people lose their earned income and depend on unemployment or possibly Universal Basic Income (UBI), their income is typically lower and they’re no longer able to spend and consume as much as when they had a job. The entire economy shrinks.”