President Donald Trump signed an executive order on December 11, 2025, that aims to override state laws on artificial intelligence that the federal government views as barriers to AI innovation.
The number of state laws regulating AI is growing, particularly in response to the rise of generative AI systems like ChatGPT that produce text and images. Thirty-eight states passed laws regulating AI in some form in 2025. They range from prohibiting stalking via AI-powered robots to barring AI systems that can manipulate people's behavior.
The executive order states that it is the policy of the United States to establish a "minimally burdensome" national framework for AI. The order calls on the U.S. attorney general to create an AI litigation task force to challenge state AI laws that are inconsistent with this policy. It also directs the secretary of commerce to identify "onerous" state AI laws that conflict with the policy and to withhold funding under the Broadband Equity, Access, and Deployment Program from states with such laws. The executive order exempts state AI laws related to child safety.
Executive orders are instructions to federal agencies on implementing existing laws. The AI executive order directs federal departments and agencies to take actions that the administration claims are within their statutory authority.
Big tech companies have lobbied the federal government to override state AI regulations. The companies have argued that the burden of complying with multiple state regulations hinders innovation.
Proponents of the state laws tend to portray them as attempts to balance public safety with economic benefits. Prominent examples are laws in California, Colorado, Texas and Utah. Here are some of the key state laws regulating AI that could be targeted under the executive order:
Algorithmic discrimination
Colorado's Consumer Protections for Artificial Intelligence act is the first comprehensive state law in the U.S. aimed at regulating AI systems used in employment, housing, credit, education and healthcare decisions. But enforcement of the law has been delayed while state lawmakers consider its implications.
The focus of the Colorado AI Act is predictive artificial intelligence systems that make decisions, not newer generative AI like ChatGPT that creates content.
The Colorado law aims to protect people from algorithmic discrimination. It requires organizations using these "high-risk systems" to conduct impact assessments of the technology, inform consumers when predictive AI is used in consequential decisions about them, and publicly disclose the types of systems they use and how they plan to address the risks of algorithmic discrimination.
A similar Illinois law, set to take effect on January 1, 2026, amends the Illinois Human Rights Act to make it a civil rights violation for employers to use AI tools that result in discrimination.
At the "frontier"
California's Transparency in Frontier Artificial Intelligence Act sets guidelines for developing the highest-performing AI models. These models, called foundation or frontier models, are AI models trained on extremely large and diverse data sets that can be adapted to a wide range of tasks without additional training. They include the models underlying OpenAI's ChatGPT and Google's Gemini chatbots.
California's law applies only to the world's largest AI models – those that cost at least $100 million and require at least 10^26 – or 100,000,000,000,000,000,000,000,000 – floating-point operations of computing power to train. Floating-point operations are the arithmetic operations that enable computers to calculate with very large numbers.
Chart: Robi Rahman, David Owen and Josh You (2024), "Tracking Large-Scale AI Models," published online at epoch.ai, CC BY
Machine learning models can produce unreliable, unpredictable and inexplicable results. This poses challenges for regulating the technology.
Their internal workings are invisible to users and often even to their creators, which is why they are called black boxes. The Foundation Model Transparency Index shows that these large models can be quite opaque.
The risks from such large AI models include malicious use, malfunctions and systemic risks. These models could pose potentially catastrophic dangers to society. For example, someone could use an AI model to create a weapon that causes mass casualties, or direct one to stage a cyberattack that causes billions of dollars in damages.
California's law requires developers of frontier AI models to describe how they incorporate national and international standards and industry-consensus best practices. They must also provide a summary of any catastrophic risk assessments. The law also directs the state's Office of Emergency Services to establish a mechanism that allows anyone to report a critical safety incident and to confidentially submit summaries of any assessments of the potential for catastrophic risk.
Disclosure and Liability
Texas has enacted the Texas Responsible AI Governance Act, which restricts the development and deployment of AI systems for purposes such as manipulating behavior. The safe harbor provisions – liability protections – in the Texas AI law are intended to give companies an incentive to document compliance with responsible AI governance frameworks such as the NIST AI Risk Management Framework.
What is novel about the Texas law is that it creates a "sandbox" – an isolated environment in which software can be safely tested – so developers can test the behavior of an AI system.
The Utah Artificial Intelligence Policy Act establishes disclosure requirements for organizations that use generative AI tools with their customers. Such laws ensure that a company using generative AI tools bears ultimate responsibility for resulting consumer liabilities and harms and cannot shift blame to the AI. The law is the first in the country to mandate consumer protection and require companies to prominently disclose when a consumer is interacting with a generative AI system.
Other moves
States are also taking other legal and policy steps to protect their residents from the potential harms of AI.
Florida's Republican Gov. Ron DeSantis said he opposes federal efforts to override state AI regulations. He has also proposed a Florida AI Bill of Rights to address "obvious dangers" of the technology.
The attorneys general of 38 states, along with those of the District of Columbia, Puerto Rico, American Samoa and the U.S. Virgin Islands, have urged AI companies – including Anthropic, Apple, Google, Meta, Microsoft, OpenAI, Perplexity AI and xAI – to address sycophantic and delusional outputs of generative AI systems. These are outputs that can lead users to place excessive trust in the AI systems or even become delusional.
It's not clear what effect the executive order will have. Observers have said it is unlawful because only Congress can supersede state laws. The final provision of the order directs federal officials to propose relevant legislation.

