The thirtieth of October, 2023, marks an advancement in AI governance as President Joe Biden has announced a comprehensive executive order setting robust rules and guidelines for AI.
The order is anticipated to usher in a brand new era of transparency and standardization for AI firms, highlighting the need of clear content labeling and watermarking practices.
“To realize the promise of AI and avoid the chance, we’d like to control this technology, there’s no way around it,” Biden spoke on the signing event on the White House.
At the event, Biden also spoke of AI deep fakes, announcing to humor among the many audience, “I’ve watched certainly one of me…I said, the hell did I say that?”
The order focuses on heightened transparency from AI developers and establishes a series of recent standards, particularly for labeling AI-generated content.
The White House goals to boost “AI safety and security” through the order. It features a surprising requirement for developers to share safety test results for brand spanking new AI models with the US government if the technology could potentially threaten national security.
This involves the Defense Production Act, typically reserved for national emergencies.
For those not aware of US lawmaking, executive orders are usually not laws – they don’t create latest laws or change existing laws. Instead, they instruct Congress on policy priorities.
The order, published here, does contain quite a few deadlines specifying various actions that must be taken to usher forth AI laws.
Executive orders should be based on constitutional or statutory authority and can’t be used to bypass Congress or create laws unilaterally.
As a result, many have highlighted that the manager order lacks enforcement mechanisms. It doesn’t carry the burden of congressional laws on AI.
“The Congress is deeply polarized and even dysfunctional to the extent that it is extremely unlikely to supply any meaningful AI laws within the near future,” observes Anu Bradford, a law professor at Columbia University.
The order comes days before the UK is about to the milestone AI Safety Summit, which can see politicians, researchers, tech executives, and members of civil society convene at Bletchley Park. Vice President Kamala Harris is attending. Notably, China can be represented on the summit, too.
“We intend that the actions we’re taking domestically will function a model for international motion,” Harris said on the White House event.
This statement draws attention to criticisms that the order could potentially undermine open international collaboration ahead of the AI Safety Summit.
The US was a slow starter in AI regulation, and the character of executive orders means it stays one. It’s perhaps brazen to suggest other countries should follow its trajectory.
Harris continued that the US would “apply existing international rules and norms with a purpose to advertise global order and stability, and where needed to construct support for added rules and norms which meet this moment.”
What the White House has to say
The order introduces stringent standards for AI, safeguarding Americans’ privacy, fostering equity and civil rights, protecting consumers and staff, fueling innovation and competition, and reinforcing American leadership in AI.
It complements voluntary commitments from 15 leading firms to advertise the protected and trustworthy development of AI.
One of essentially the most notable elements of the order is that the President has stated developers of ‘powerful’ AI systems might want to share safety test results and demanding information with the US government, aiming to make sure these systems are protected and trustworthy before public release.
The National Institute of Standards and Technology (NIST) will lead ‘red teaming’ efforts to check and analyze AI model safety.
Red team is the technique of probing and stress testing AI model functionality and security.
Regarding privacy, the President’s call for bipartisan data privacy laws reflects an understanding of the situation’s urgency.
However, as noted, the effectiveness of those measures will ultimately rely upon the swift and decisive motion of Congress, which has been historically slow in legislating on tech-related issues.
Additionally, the manager order takes a powerful stance on advancing equity and combating algorithmic discrimination, with directives to make sure fairness in housing, criminal justice, and federal advantages programs.
Again, while these are positive steps, the success of those initiatives will hinge on rigorous enforcement and continuous oversight.
The order addresses eight key areas.
Here’s how President Biden’s landmark Executive Order on AI will ensure America leads the way in which in this era of technological change while keeping Americans protected. pic.twitter.com/SvBPxiZk3M
1. New standards for AI safety and security
- Developers of potent AI systems must share safety test results and crucial information with the U.S. government.
- The development of standards, tools, and tests to make sure AI systems’ safety and trustworthiness, led by the NIST.
- Protection against AI’s potential in engineering hazardous biological materials by establishing robust standards for biological synthesis screening.
- Establishing protocols to safeguard Americans from AI-enabled fraud and deception, including standards for detecting AI-generated content and authenticating official communications.
- Launching a sophisticated cybersecurity program to leverage AI in securing software and networks.
2. Protecting Americans’ privacy
- Advocating for federal support in the event and use of privacy-preserving techniques in AI.
- Strengthening research in privacy-preserving technologies.
- Enhancing federal agencies’ guidelines to make sure privacy in the gathering and use of knowledge, especially personally identifiable information.
3. Advancing equity and civil rights
- Providing guidance to mitigate AI’s potential to exacerbate discrimination in housing, justice, and employment.
- Promoting fairness across the criminal justice system through developing best practices in AI application.
4. Standing up for consumers, patients, and students
- Encouraging responsible AI use in healthcare for the event of inexpensive, life-saving medications and ensuring safety in AI-involved healthcare practices.
- Facilitating AI’s transformative role in education, supporting educators in deploying AI-enhanced learning tools.
5. Supporting staff
- Developing principles and best practices to balance the advantages and harms of AI within the workplace.
- Conducting comprehensive studies on AI’s impact on the labor market and fortifying federal support for staff facing labor disruptions as a result of AI.
6. Promoting innovation and competition
- Catalyzing AI research nationwide and ensuring a competitive AI ecosystem.
- Streamlining immigration processes for highly expert individuals in critical AI sectors.
7. Advancing American leadership abroad
- Strengthening international collaborations and frameworks in AI.
- Promoting protected and responsible AI development and deployment worldwide.
8. Ensuring responsible and effective government use of AI
- Providing clear guidance for federal agencies on AI use, procurement, and deployment.
- Enhancing AI talent acquisition across the federal government and providing AI training to federal employees.
The Biden-Harris Administration is attempting to strike a balance between retaining and enhancing the US’ world-leading AI industry while stunting obvious risks.
Deep fakes and misinformation are at the highest of most individuals’s minds, seeing as we now have tangible evidence that they might influence election votes.
With the US general election next 12 months, it’s perhaps unsurprising that the order increases pressure to watermark and highlight AI-generated content so users can easily determine real from fake.
Technically speaking, nonetheless, there are no robust solutions for achieving this in practice.
Industry reactions
Industry reactions – naturally – are mixed. Many praise the rapid progress toward signing the order, whereas others highlight that laws and knowledge about enforcement motion are lacking.
Again, the order indicates that the White House seeks Congress to act on AI policy.
The only exception here is the Defense Production Act, which has been invoked to force AI firms to notify the federal government when developing models that interact with national security.
The official wording is an AI model that poses a “serious risk to national security, national economic security or national public health and safety.”
The AI Executive Order is a bit ridiculous and pretty hard to implement.
Here are the problems –
1. Any foundation model that poses a serious risk to national security – How do you identify if something is a “serious risk to national security!”?
If that is about…
Some highlighted that, in comparison with the EU AI Act, there’s no guidance on training data transparency, over which multiple AI developers are facing lawsuits.
Absent from latest exec order:
AI firms must reveal their training set.
To develop protected AI, we’d like to know what the model is trained on.
Why aren’t the AI safety orgs advocating for this?https://t.co/yjr21bNIK4
Adnan Masood, Chief AI Architect at UST, applauded the initiative, stating, “The order underscores a much-needed shift in global attention toward regulating AI, especially after the generative AI boom now we have all witnessed this 12 months.”
Avivah Litan, a Vice President at Gartner Research, noted that while the principles start off strong, there are still areas where the mandates fall short. She questioned the definition of ‘strongest’ AI systems, the applying to open source AI Models, and enforcing content authentication standards across social media platforms.
Bradley Tusk, CEO at Tusk Ventures, said AI developers aren’t prone to share proprietary data with the federal government, stating, “Without an actual enforcement mechanism, which the manager order doesn’t appear to have, the concept is great but adherence can be quite limited.”
Randy Lariar, AI security leader at Optiv, said, “I worry that many open-source models, that are derived from the large foundational models, may be just as dangerous without the burden of red teaming — but this can be a start.”
Ashley Leonard, chief executive officer of Syxsense, added that it should be very interesting to see how the order is implemented. “It takes real resources — budget, time, and staff — for even essentially the most advanced firms to maintain up with vulnerabilities and bug fixes,” said Leonard.
Max Tegmark, a professor on the Massachusetts Institute of Technology and the president of the Future of Life Institute, highlighted that the order needs to come back equipped with a plan for creating and enforcing laws, stating, “Policymakers, including those in Congress, must look out for his or her residents by enacting laws with teeth that tackle threats and safeguard progress.”
Jaysen Gillespie, Head of Analytics and Data Science at RTB House, viewed the manager order positively, stating that AI regulation is a subject where a bipartisan approach is actually possible.
Alejandro Mayorkas from Homeland Security said, “The unprecedented speed of AI’s development and adoption presents significant risks we must quickly mitigate, together with opportunities to advance and improve our work on behalf of the American people…It directs DHS to administer AI in critical infrastructure and cyberspace, promote the adoption of AI safety standards globally, reduce the chance of AI’s use to create weapons of mass destruction, combat AI-related mental property theft, and ensure our immigration system attracts talent to develop responsible AI within the United States.”
Casey Ellis, founder and CTO of Bugcrowd, said, “The directive mandates developers to share safety test results with the U.S. government, ensuring AI systems are extensively vetted before public release. It also highlights the importance of AI in bolstering cybersecurity, particularly in detecting AI-enabled fraud and enhancing software and network security. The order also champions the event of standards, tools, and tests for AI’s safety and security.”
A needed step, but challenges ahead
President Biden’s executive order on AI attempts to be comprehensive, but Congress must follow up its urgency with legislative motion. That is way from guaranteed.
While it establishes stringent standards and emphasizes transparency, accountability, and the prevention of discriminatory practices, the true effectiveness of this initiative will rely upon its implementation and the flexibility to balance regulatory oversight with fostering innovation.
With the AI Safety Summit imminent, conversations surrounding AI’s risks and how you can mitigate them are escalating.
Comprehensively controlling AI still relies on the speed and efficiency of lawmaking and the flexibility to implement it.