A tech ethics group has asked the Federal Trade Commission (FTC) to investigate OpenAI for potentially violating consumer protection rules. The Center for AI and Digital Policy (CAIDP) claims that OpenAI’s AI text generation tools are “biased, deceptive, and a risk to public safety.” Oh boy, things are heating up in the AI world!
CAIDP filed its complaint shortly after an open letter, signed by prominent AI researchers, OpenAI co-founder Elon Musk, and CAIDP president Marc Rotenberg, called for a pause on large generative AI experiments. The complaint asks regulators to slow the development of these AI models and to impose stricter government oversight.
The main concern revolves around OpenAI’s GPT-4 generative text model, announced in mid-March. CAIDP points to a range of threats, such as GPT-4 producing malicious code and highly tailored propaganda. Additionally, biased training data could result in baked-in stereotypes or unfair race and gender preferences in areas like hiring. And as if that weren’t enough, the group also highlights significant privacy failures in OpenAI’s product interface, like a recent bug that exposed users’ ChatGPT conversation histories and possibly payment details to other users.
Despite OpenAI acknowledging the potential risks, CAIDP argues that GPT-4 crosses a line of consumer harm that warrants regulatory action. The group seeks to hold OpenAI accountable for violating Section 5 of the FTC Act, which prohibits unfair and deceptive trade practices. “OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks,” the complaint states, referring to potential bias and harmful behavior. CAIDP also frames AI hallucinations, instances where generative models confidently fabricate non-existent facts, as a form of deception. “ChatGPT will promote deceptive commercial statements and advertising,” it cautions, potentially bringing the tool under the FTC’s watchful eye.
In its complaint, CAIDP asks the FTC to pump the brakes on any further commercial deployment of GPT models and to mandate independent assessments before future rollouts. It also suggests the creation of a publicly accessible reporting tool, akin to the one consumers use to file fraud complaints. Finally, CAIDP calls for formal rulemaking to establish FTC regulations for generative AI systems, building on the agency’s ongoing, albeit informal, research and evaluation of AI tools.
Interestingly, the FTC has shown an appetite for regulating AI tools. In recent years, it has warned that biased AI systems could prompt enforcement action. At a joint event this week with the Department of Justice, FTC Chair Lina Khan said the agency would be on the lookout for large incumbent tech companies attempting to stifle competition. Even so, investigating OpenAI, a major player in the generative AI arena, would mark a substantial escalation in the FTC’s efforts.
The CAIDP complaint represents a critical turning point in the conversation about AI’s role in society and the potential risks it poses.
Should the FTC decide to investigate OpenAI, the outcome could have far-reaching consequences for the AI industry. Establishing stricter guidelines and regulations may help ensure the safe and ethical deployment of AI systems, while also protecting users from potential harm.
Will the regulatory body step in and put the brakes on OpenAI’s GPT models? Or will the AI juggernaut continue to forge ahead with its groundbreaking text generation tools? Only time will tell, but one thing is for sure: this AI saga has us all on the edge of our seats.
No matter the outcome of the CAIDP complaint, it is undeniable that the conversation it has sparked is vital for the future of AI. Open dialogue, transparency, and collaboration between researchers, regulators, and the public will be key to navigating the complex and ever-changing world of AI.