HomePolicyThe Biden administration's executive order addresses AI risks, but a scarcity of...

The Biden administration's executive order addresses AI risks, but a scarcity of privacy laws limits its reach

The comprehensive, even far-reaching, artificial intelligence guidelines unveiled by the White House in an executive order on October 30, 2023 exhibit that the U.S. government is attempting to deal with the risks posed by AI.

As a researcher in information systems and responsible AI, I consider the Executive Order represents a very important step in constructing responsible and trustworthy AI.

However, the order is barely one step and leaves the query of comprehensive data protection laws open. Without such laws, individuals are at greater risk of AI systems revealing sensitive or confidential information.

Understanding AI risks

Technology is often evaluated on performance, cost and quality, but often not on equity, fairness and transparency. In response, responsible AI researchers and practitioners are advocating the next:

The National Institute of Standards and Technology (NIST) released a comprehensive AI risk management framework in January 2023 that goals to deal with a lot of these issues. The framework serves as the premise for much of the Biden administration's executive order. The executive order also authorizes the Department of Commerce, NIST's home within the federal government, to play a key role in implementing the proposed guidelines.

AI ethics researchers have long warned that more rigorous scrutiny of AI systems is required to avoid creating the looks of control without real accountability. As it stands, a recent study corporate public disclosures found that claims about AI ethics practices are outpacing actual AI ethics initiatives. The executive order could help by identifying ways to implement accountability.

Another essential initiative outlined in the manager order is to search for vulnerabilities in very large, general-purpose AI models trained on massive amounts of knowledge, akin to the models that support OpenAI's ChatGPT or DALL-E. The order requires firms developing large AI systems with the potential to harm national security, public health or the economy to conduct red teaming and report the outcomes to the federal government. Red teaming involves using manual or automated methods to force an AI model to supply harmful results – for instance, making offensive or dangerous statements, akin to advice on selling drugs.

Reporting to the federal government is very important because a recent study found that almost all firms producing these large-scale AI systems lack transparency.

There can be a risk that the general public will probably be misled by AI-generated content. To address this issue, the manager order directs the Department of Commerce to develop guidelines for labeling AI-generated content. Federal agencies must use AI watermarking — a technology that marks content as AI-generated to scale back fraud and misinformation — even though it shouldn’t be required for the private sector.

The executive order also recognizes that AI systems may pose an unacceptable risk of harm to civil and human rights and the well-being of people: “Artificial intelligence systems, used irresponsibly, have reproduced and reinforced existing inequities, creating recent forms of harmful ones Discrimination caused and the Internet exacerbates and physical harm.”

The U.S. government is taking steps to deal with the risks posed by AI.

What the Executive Order doesn’t do

A key challenge for AI regulation is the shortage of comprehensive federal laws on data protection and privacy. The executive order simply calls on Congress to enact privacy laws but doesn’t provide a legal framework. It stays to be seen how the courts will interpret the Executive Order's guidelines in light of existing consumer protection and data rights laws.

Without strong privacy laws within the U.S. as in other countries, the manager order could have minimal impact on getting AI firms to enhance privacy protections. In general, it’s difficult to measure the impact that decision-making AI systems have on privacy and freedoms.

It can be value noting that algorithmic transparency shouldn’t be a panacea. For example, the European Union’s General Data Protection Regulation requires “meaningful information in regards to the logic” of automated decisions. This suggests a right to a proof of the standards algorithms use of their decision-making. The mandate treats the technique of algorithmic decision-making as a sort of recipe book. That is, it assumes that if people understand how algorithmic decision-making works, they will understand how the system affects them. But knowing how an AI system works doesn't necessarily inform you why it made a specific decision.

As algorithmic decision-making becomes more prevalent, the White House Executive Order and International Summit on AI Security make it clear that lawmakers are starting to know the importance of AI regulation, even within the absence of comprehensive laws.

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Must Read