Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
The Federal Trade Commission (FTC) received a new complaint today from the Center for AI and Digital Policy (CAIDP), which calls for an investigation of OpenAI and its product GPT-4. The complaint argues that the FTC has declared that the use of AI should be “transparent, explainable, fair, and empirically sound while fostering accountability,” but claims that OpenAI’s GPT-4 “satisfies none of these requirements” and is “biased, deceptive, and a risk to privacy and public safety.”
CAIDP is a Washington, DC-based independent, nonprofit research organization that “assesses national AI policies and practices, trains AI policy leaders, and promotes democratic values for AI.” It is headed by president and founder Marc Rotenberg and senior research director Merve Hickok.
“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4,” said Rotenberg in a press release about the complaint. “We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”
The complaint comes a day after an open letter calling for a six-month ‘pause’ on developing large-scale AI models beyond GPT-4 highlighted the intense debate around risks vs. hype as the pace of AI development accelerates.
FTC has made recent public statements about generative AI
The complaint also comes 10 days after the FTC published a business blog post called “Chatbots, deepfakes, and voice clones: AI deception for sale,” authored by Michael Atleson, an attorney in the FTC’s division of advertising practices. The blog post said that the FTC Act’s “prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose.” Companies should consider whether they should even be making or selling the AI tool and whether they are effectively mitigating the risks.
“If you decide to make or offer a product like that, take all reasonable precautions before it hits the market,” says the blog post. “The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury.”
In a separate post from February, “Keep your AI claims in check,” Atleson wrote that the FTC may be “wondering” if a company advertising an AI product is aware of the risks. “You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market. If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.”
FTC attorney said agency will always apply “bedrock” advertising law principles
In an interview with VentureBeat last week unrelated to the CAIDP complaint and focused solely on advertising law, Atleson said that the basic message of both of his recent AI-focused blog posts is that no matter how new or different the product or service is, the FTC will always apply the “bedrock” advertising law principles in the FTC Act: “You can’t misrepresent or exaggerate what your product can do or what it is, and you can’t sell things that are going to cause consumers substantial harm.”
“It doesn’t matter whether it’s AI or whether it turns out we’re all living in a multiverse,” he said. “Guess what? That prohibition of false advertising still applies to every single instance.”
He added that admittedly, AI technology development is happening quickly. “We’re certainly right in the middle of a corporate rush to get a certain type of AI product to market, different types of generative AI tools,” he said. The FTC has focused on AI for a while now, he added, but the difference is that AI is more in the public eye, “especially with these new generative AI tools to which consumers have direct access.”
Federal AI regulation may come from FTC
With the growth of AI and the speed of its development, legal experts say that FTC rulemaking about AI could be coming in 2023. A December 2022 article by law firm Alston & Bird argued that federal AI regulation may emerge from the FTC, even though AI-focused bills introduced in Congress have not yet gained significant support.
“In recent years, the FTC issued two publications foreshadowing increased focus on AI regulation,” the article said, noting that the FTC had developed AI expertise in enforcing a variety of statutes, such as the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the FTC Act.