US regulators are probing artificial intelligence company OpenAI over the risks to consumers from ChatGPT generating false information.
The Federal Trade Commission (FTC) sent a letter to the Microsoft-backed business requesting information on how it addresses risks to people’s reputations.
The inquiry is a sign of the rising regulatory scrutiny of the technology.
OpenAI chief executive Sam Altman says the company will work with the FTC.
ChatGPT generates convincing human-like responses to user queries within seconds, instead of the series of links produced by a traditional internet search. It and similar AI products are expected to dramatically change the way people find information online.
Tech rivals are racing to offer their own versions of the technology, even as it generates fierce debate, including over the data it uses, the accuracy of the responses and whether the company violated authors’ rights as it was training the technology.
The FTC’s letter asks what steps OpenAI has taken to address its products’ potential to “generate statements about real individuals that are false, misleading, disparaging or harmful”.
The FTC is also looking at OpenAI’s approach to data privacy and how it obtains data to train and inform the AI.
Mr Altman said OpenAI had spent years on safety research and months making ChatGPT “safer and more aligned before releasing it”.
“We protect user privacy and design our systems to learn about the world, not private individuals,” he said on Twitter.
In another tweet he said that it was important to the firm that its “technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”
Mr Altman appeared at a congressional hearing earlier this year, where he admitted that the technology could be a source of errors.
He called for regulations to be created for the emerging industry and recommended that a new agency be formed to oversee AI safety. He added that he expected the technology to have a significant impact, including on jobs, as its uses become clear.
“I think if this technology goes wrong, it can go quite wrong… we want to be vocal about that,” Mr Altman said at the time. “We want to work with the government to prevent that from happening.”
The investigation by the FTC was first reported by the Washington Post, which published a copy of the letter. OpenAI did not respond to a BBC request for comment.
The FTC also declined to comment. The consumer watchdog has taken a high-profile role policing the tech giants under its current chair, Lina Khan.
Ms Khan rose to prominence as a Yale law student, when she criticised America’s record on anti-monopoly enforcement, with a particular focus on Amazon.
Appointed by President Joe Biden, she is a controversial figure, with critics arguing that she is pushing the FTC beyond the boundaries of its authority.
Some of her most high-profile challenges to tech firms’ activities – including a push to block the merger of Microsoft with gaming giant Activision Blizzard – have faced setbacks in the courts.
During a five-hour hearing in Congress on Thursday, she faced tough criticism from Republicans over her leadership of the agency.
She did not mention the FTC’s investigation into OpenAI, which is at a preliminary stage. But she said she had concerns about the product’s output.
“We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else,” Ms Khan said.
“We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we are concerned about,” she added.
The FTC probe is not the company’s first challenge over such issues. Italy banned ChatGPT in April, citing privacy concerns. The service was restored after it added a tool to verify users’ ages and provided more information about its privacy policy.