HomePolicyFTC Investigation of OpenAI: Consumer Protection Is the Opening Salvo of US...

FTC Investigation of OpenAI: Consumer Protection Is the Opening Salvo of US AI Regulation

The Federal Trade Commission has opened an investigation into ChatGPT maker OpenAI for possible violations of consumer protection laws. The FTC sent the corporate a 20-page request for information the week of July 10, 2023. The move comes as European regulators have begun taking motion and Congress is working on laws to control the synthetic intelligence industry.

The FTC has asked OpenAI to supply details of any complaints it has received from users regarding OpenAI's “false, misleading, derogatory, or harmful” statements and whether OpenAI engages in unfair or deceptive practices related to the chance of harm to consumers , including damage to popularity. The agency has asked detailed questions on how OpenAI obtains its data, the way it trains its models, what processes it uses for human feedback, risk assessment and mitigation, and what mechanisms it has in place to guard privacy.

As a social media and AI researcher, I recognize the big transformative potential of generative AI models, but consider that these systems pose risks. Particularly within the context of consumer protection, these models can result in errors, exhibit biases and violate the protection of non-public data.

Hidden power

At the guts of chatbots like ChatGPT and image generation tools like DALL-E is the ability of generative AI models that may create realistic content from text, image, audio and video inputs. These tools could be accessed via a browser or smartphone app.

Because these AI models don’t have any predefined intended use, they could be fine-tuned for a wide range of applications in a wide selection of fields, from finance to biology. The models trained on massive amounts of knowledge could be adapted to different tasks with little to no coding and sometimes so simple as describing a task in plain language.

Because AI models like GPT-3 and GPT-4 were developed by private organizations using proprietary datasets, the general public doesn’t know the form of data used to coach these models. The opacity of the training data and the complexity of the model architecture – GPT-3 was trained on over 175 billion variables or “parameters” – make these models difficult for anyone to check. Therefore, it’s difficult to prove that the best way they’re built or trained causes harm.

Hallucinations

In speech model AIs, a hallucination is a self-conscious response that’s imprecise and seemingly not justified by a model's training data. Even some generative AI models, that are less liable to hallucinations, have amplified them.

There is a risk that generative AI models produce false or misleading information that may ultimately be harmful to users. A study examining ChatGPT's ability to supply factually accurate scientific texts within the medical field found that ChatGPT ultimately either generated citations to non-existent papers or reported non-existent results. My colleagues and I discovered similar patterns in our research.

Such hallucinations may cause real harm if the models are used without adequate supervision. For example, ChatGPT falsely claimed that a professor it named had been accused of sexual harassment. And a radio host has filed a defamation lawsuit against OpenAI related to ChatGPT, falsely claiming there’s a lawsuit against him for embezzlement.

Bias and discrimination

Without proper safeguards and protections, generative AI models trained on massive amounts of knowledge from the web can find yourself reproducing existing societal biases. For example, organizations that use generative AI models to design recruiting campaigns could find yourself unintentionally discriminating against certain groups of individuals.

When a journalist asked DALL-E 2 to generate images of “a technology journalist writing an article a couple of recent AI system that may produce remarkable and strange images,” only images of men were generated. An AI portrait app showed several socio-cultural biases, comparable to lightening an actress' skin color.

Data privacy

Another major issue particularly relevant to the FTC investigation is the chance of knowledge breaches, where AI may leak sensitive or confidential information. A hacker could gain access to sensitive details about people whose data was used to coach an AI model.

Researchers warn of risks from manipulation, so-called prompt injection attacks, which might trick generative AI into sharing information it shouldn't. “Indirect prompt injection” attacks could idiot AI models with steps like sending a calendar invite with instructions for his or her digital assistant to export the recipient’s data and send it to the hacker.

OpenAI CEO Sam Altman testified before a Senate Judiciary Subcommittee on May 16, 2023. A law to control AI is within the works, however the FTC has beaten Congress to it.
AP Photo/Patrick Semansky

Some solutions

The European Commission has published ethical guidelines for trustworthy AI, which give an assessment checklist for six different facets of AI systems: human agency and oversight; technical robustness and security; data protection and data management; transparency, diversity, non-discrimination and fairness; social and environmental well-being; and responsibility.

Better documentation of AI developers' processes might help highlight potential harm. Algorithmic fairness researchers, for instance, have proposed model cards that resemble food nutrition labels. Data statements and data sheets that characterize data sets used to coach AI models would play the same role.

For example, Amazon Web Services has introduced AI service maps that describe the uses and limitations of among the models it provides. The cards describe the models' capabilities, training data, and intended uses.

The FTC's investigation suggests that this kind of disclosure could possibly be a direction that U.S. regulators could take. Additionally, if the FTC finds that OpenAI has violated consumer protection laws, it could wonderful the corporate or subject it to a consent decree.

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Must Read