Updated: 07-05-2026
Reading time: 6-7 minutes

AI legislation in the US: A 2026 overview

Software Improvement Group

Summary

The United States has introduced several key legislative measures to regulate AI, but the complexity of federalism still makes a unified AI policy difficult.

Since Donald Trump began his second term in 2025, federal policy has emphasized “innovation-first” approaches while states continue to pass enforceable AI rules that are now taking effect in 2026.

At the same time, a high-profile battle over federal preemption of state AI laws is creating new uncertainty for compliance planning. Anthropic’s recent announcement of its Mythos model, with its advanced cybersecurity capabilities, has triggered an unexpected shift in the Trump administration’s stance on AI oversight.

This article serves as a general and up-to-date overview of current AI legislation in the US, so that business leaders in America and beyond can better prepare for compliance.

A complete and current overview of US legislative AI measures and principles

The legislative measures and principles listed below provide a general overview. Timelines vary from measure to measure, but the list reflects the current state of play.

What’s changed (May 2026 updates)

  • State-level AI legislation
    According to legislative tracker MultiState, as of March 2026, lawmakers in 45 states had already introduced 1,561 AI-related bills, surpassing the total volume from all of 2024. For the most current view of activity, NCSL maintains a searchable Artificial Intelligence Legislation Database, which tracks all introduced AI bills (enacted and pending) from 2025 onwards.
  • EO 14365: Ensuring a National Policy Framework for Artificial Intelligence
    Signed on December 11, 2025, EO 14365 directs federal agencies to promote a “minimally burdensome” national AI framework and establishes an AI Litigation Task Force to challenge state AI laws deemed inconsistent with federal policy.
  • Federal vs. state preemption
    Multiple attempts to impose a federal moratorium on state AI laws have failed in Congress, most recently a 99-1 Senate vote to strip a 10-year freeze from the “One Big Beautiful Bill Act.” The Trump administration continues to push for federal preemption through other means, but state laws remain in effect and enforceable.
  • Frontier AI model oversight
    The Trump administration is reportedly considering pre-release evaluation requirements for frontier AI models, driven by national security concerns around Anthropic’s Mythos model. At the same time, CAISI, the renamed US AI Safety Institute, has announced evaluation partnerships with Google, Microsoft, and xAI. This marks a notable shift for an administration that came into office explicitly opposing AI oversight.

What still stands

  • National Artificial Intelligence Initiative Act of 2020 (NAII)
    Signed during President Trump’s first term, the National Artificial Intelligence Initiative Act of 2020 promotes and funds artificial intelligence innovation initiatives throughout key federal agencies.
  • Executive order: Removing barriers to American Leadership in Artificial Intelligence
    The executive order titled “Removing Barriers to American Leadership in Artificial Intelligence”, signed by President Trump on January 23rd, 2025, aims to sustain and enhance America’s global AI dominance by promoting human flourishing, economic competitiveness, and national security. 
  • Winning the Race: AMERICA’S AI ACTION PLAN
    Winning the Race: AMERICA’S AI ACTION PLAN introduces a set of policy recommendations that aim to secure U.S. global AI dominance, framing AI leadership as essential for economic growth, national security, and a new era of innovation.  
  • AI Training Act
    The AI Training Act (S.2551) was enacted as Public Law 117-207 in 2022. It requires the Office of Management and Budget (OMB) to establish or otherwise provide an AI training program for the federal acquisition workforce, supporting federal readiness as agencies procure and deploy AI-enabled systems.

Introduction: When AI technology meets legislation

Artificial Intelligence was once the stuff of science fiction, but today it has become one of the fastest-adopted business technologies in history. In 2026, it is increasingly treated as mission-critical software rather than “just another tool.”

AI has become a boardroom priority. According to BCG’s AI Radar 2026, 65% of CEOs say accelerating AI is one of their top three priorities for 2026. This surge in adoption highlights AI’s profound impact, comparable to the revolutionary advent of electricity. McKinsey reports that 88% of organizations now use AI technology in at least one business function, and 64% say that AI is enabling their innovation.

However, great opportunities come with significant challenges and uncertainties.

While the potential benefits of AI for business, society, healthcare, transport, and culture are significant, these advantages come alongside real risks, including security breaches, misinformation, and flawed decision-making processes. As SIG’s AI Boardroom Gap report notes, regulatory pressure is rising while rules are fragmented. As a result, organizations need strategies that are proactive, flexible, and geopolitically aware.

While the EU leads in AI regulation, other countries are also developing their own frameworks. 

According to recent AI regulatory trackers, governments and regulators worldwide are moving quickly to keep legal frameworks from becoming obsolete, but approaches vary widely. Emerging regimes range from comprehensive, cross-sector AI laws to sector- or use-case-specific rules, alongside non-binding principles, guidelines, and standards.

The OECD’s AI Policy Observatory underscores the scale of this activity, hosting a repository of 1,000+ AI policies across 70+ jurisdictions, providing an up-to-date global view of AI governance efforts.

Stanford HAI’s 2026 AI Index found that 47 countries now have active AI-specific legislation, though only a fraction have established enforcement mechanisms.

American AI legislation and the complexity of federalism

The closest thing to a unified federal framework is the National Artificial Intelligence Initiative Act of 2020 (NAII), introduced during President Trump’s first term, together with the policy recommendations in the recently published Winning the Race: AMERICA’S AI ACTION PLAN.

In the United States, federalism still makes it challenging to implement a single, unified AI policy. There is still no overarching “AI Act” at the federal level.

At the same time, state laws began to take effect in 2026 while the federal government signaled increased willingness to contest or preempt certain state approaches. That tension, as described in the section below, has become one of the defining features of the 2026 AI compliance landscape.

American AI legislation and the effect of Trump’s presidency

Since President Donald Trump began his second term in January 2025, federal AI policy has undergone significant shifts. 

While policy developments continue to evolve, the Trump administration’s focus on technological leadership and reduced regulatory oversight marked a significant departure from the Biden era. That said, 2026 has already seen one notable reversal.

What happened to Biden’s Executive Order and the AI Bill of Rights?

During the 2024 campaign, at the Republican National Convention in July 2024, Trump stated:

“We will repeal Joe Biden’s dangerous Executive Order that hinders AI innovation and imposes radical left-wing ideas on the development of this technology. In its place, Republicans support AI development rooted in free speech and human flourishing.”

As promised, mere days after Trump took office in January 2025, these measures were revoked.

On January 23, 2025, President Trump signed a new Executive Order, titled “Removing Barriers to American Leadership in Artificial Intelligence.” This policy focuses on revoking directives perceived as restrictive to AI innovation, paving the way for “unbiased and agenda-free” development of AI systems.

In April 2025, the White House (via the Office of Management and Budget) instructed federal agencies to appoint Chief AI Officers, develop strategies to expand the government’s use of AI, and adopt minimum risk-management practices for “high-impact” AI uses, alongside agency generative AI policies. The guidance also rescinded Biden-era directives focused on safeguards and certain procurement restrictions, while emphasizing faster, more interoperable AI acquisition and prioritizing “American-made” AI, with privacy protections still cited as a requirement.

On July 23, 2025, the White House released “Winning the Race: America’s AI Action Plan”, a document that identifies over 90 Federal policy actions across three pillars: Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security. On the same day, Biden’s Executive Order 14141 was revoked.

A day later, on July 24, 2025, President Trump signed a trio of executive orders that he vowed would turn the United States into an “AI export powerhouse”, including one targeting what the White House described as “woke” artificial intelligence models. He also said that his predecessor, Joe Biden, had “established toxic diversity, equity and inclusion ideology as a guiding principle of American AI development”.

On December 11, 2025, President Trump issued EO 14365, which promotes a “minimally burdensome” national AI policy framework and directs agencies to evaluate and, in some cases, challenge state AI laws.

It is worth noting that not all Biden administration AI efforts were rolled back. Executive Order 14144, Biden’s cybersecurity EO, remains in effect, though it was amended via EO 14306 on June 6, 2025.

May 2026: An unexpected shift on AI oversight

In a development that few would have predicted at the start of Trump’s second term, the administration is now reportedly considering pre-release evaluation requirements for frontier AI models. The catalyst was Anthropic’s Mythos AI model, which demonstrated the ability to identify and exploit cybersecurity vulnerabilities at speed, raising immediate national security concerns.

White House National Economic Council Director Kevin Hassett said on Fox Business that the administration is studying a possible executive order to create a “clear road map” for how advanced AI systems should be evaluated before release, comparing the process to FDA drug approval.

At the same time, CAISI, the Trump administration’s renamed version of the Biden-era US AI Safety Institute, announced evaluation partnerships with Google, Microsoft, and xAI, covering both pre-deployment and post-deployment assessment of frontier models. The agency says it has now completed more than 40 such evaluations.

Speaking to Fortune, Rob van der Veer, Chief AI Officer at Software Improvement Group and founder of OWASP AI Exchange, offered an important qualification on what model vetting can actually deliver: “AI model vetting can motivate model makers to invest more in resilience, and it can help expose obvious weaknesses. But AI models will remain fragile, no matter how much we test them… design the system as if the model can still fail. Because it can.”

The shift is notable given how explicitly the administration came into office opposing AI oversight. As AI analyst Rumman Chowdhury characterized it in the same Fortune piece, this amounts to a 180-degree reversal for an administration that had “very explicitly been anti-any sort of regulation.”

The federal vs. state battle: preemption

Alongside the Trump administration’s own policy moves, a parallel struggle has been playing out between federal and state authority over AI regulation. This is now the central storyline for any organization trying to plan compliance across US jurisdictions.

Through 2025, the administration made several attempts to impose a federal moratorium on state AI laws. The most prominent was a proposed 10-year freeze included in the “One Big Beautiful Bill Act.” It was stripped before passage, with the Senate voting 99-1 to remove it. A similar attempt through the National Defense Authorization Act also failed.

On March 20, 2026, the White House released a National Policy Framework for Artificial Intelligence, urging Congress to replace the state-law patchwork with a uniform federal approach. The framework is non-binding and creates no immediate compliance obligations. State AI laws remain operative unless and until Congress actually legislates.

For organizations, the practical implication is straightforward: existing state laws must be complied with now. The federal preemption debate is worth monitoring, but it is not a reason to delay compliance planning.

State-level AI legislation

State-level AI legislation continues to accelerate. According to legislative tracker MultiState, as of March 2026, lawmakers in 45 states had already introduced 1,561 AI-related bills, surpassing the total volume from all of 2024. What materially changes in 2026 is enforceability: multiple “compliance-grade” state laws now have effective dates this year, increasing the need for cross-state governance, system inventories, and documented evidence of control.

Let’s look at a few key examples.

The Colorado AI Act

Colorado enacted SB24-205 in 2024. The law uses a risk-based approach focused on “high-risk” AI systems and imposes a duty of reasonable care on both developers and deployers to protect consumers from algorithmic discrimination.

The original February 1, 2026 effective date was delayed to June 30, 2026 after a special legislative session failed to reach a compromise on amendments. Governor Polis signed SB 25B-004 on August 28, 2025 to implement that delay.

There is now a further complication: in April 2026, a Colorado Magistrate Judge ordered the state Attorney General not to enforce the law until its implementing rulemaking is finalized, a process that had not yet formally begun at the time of writing.

In practical terms, enforcement of the Colorado AI Act is effectively on hold pending that rulemaking, even as the June 30 effective date approaches.

California’s AI laws

California remains fragmented, but 2026 makes that fragmentation more operational. Multiple AI laws have 2026 effective dates, including SB 53 (Transparency in Frontier Artificial Intelligence Act) and AB 2013 (training data transparency for generative AI systems, effective January 2026). In practical terms, California compliance in 2026 is less about one statute and more about coordinating overlapping transparency, reporting, and disclosure duties across multiple AI use cases.

Texas Responsible Artificial Intelligence Governance Act

Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA; C.S.H.B. 149) took effect January 1, 2026. It is worth noting that the enacted version was substantially scaled back from the original proposal, which had been modeled closely on the EU AI Act. Rather than imposing broad risk assessments and impact assessment requirements on private companies, the final law focuses on prohibiting specific harmful AI practices, with liability based on intent rather than risk. Prohibited uses include developing AI to incite self-harm or criminal activity, social scoring by government entities, and generating child sexual abuse material. Enforcement rests with the Texas Attorney General, with civil penalties ranging from $10,000 to $200,000 per violation, and a 60-day cure period applies before penalties can be sought.

US AI legislation and its implications for business

Although US AI legislation remains piecemeal, 2026 is a pivot year because multiple state laws are now in effect or approaching enforceability, meaning “tracking bills” is no longer sufficient. Organizations need evidence of control: a clear inventory of AI systems, defined ownership, clarity on where systems run, and documented compliance across jurisdictions.
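The “evidence of control” described above can be sketched as a simple inventory structure. The following is a hypothetical illustration only, not a prescribed schema or a SIG product; all names, fields, and evidence links are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical cross-jurisdiction AI system inventory."""
    name: str
    owner: str                  # defined ownership: accountable person or team
    hosting_region: str         # clarity on where the system runs
    jurisdictions: list = field(default_factory=list)  # laws the system falls under
    evidence: dict = field(default_factory=dict)       # law -> reference to documented compliance

    def missing_evidence(self):
        """Return laws the system is subject to but has no documented evidence for."""
        return [law for law in self.jurisdictions if not self.evidence.get(law)]

# Example inventory with one system; laws and document references are illustrative.
inventory = [
    AISystemRecord(
        name="resume-screener",
        owner="HR Engineering",
        hosting_region="us-east",
        jurisdictions=["Colorado AI Act", "TRAIGA"],
        evidence={"TRAIGA": "internal TRAIGA assessment, 2026-01"},
    ),
]

# Surface compliance gaps across the inventory.
for record in inventory:
    gaps = record.missing_evidence()
    if gaps:
        print(f"{record.name}: missing evidence for {', '.join(gaps)}")
```

Even a lightweight record like this makes gaps visible: the example system has documented TRAIGA evidence but nothing yet for the Colorado AI Act, which is exactly the kind of finding a cross-state governance review should produce.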

The federal preemption debate adds another layer of uncertainty. The Trump administration is actively pushing to replace state-level regulation with a lighter-touch federal framework, but Congress has twice rejected moratorium proposals and no federal statute has been enacted. Organizations should proceed on the basis that state laws apply now, while staying alert to any legislative movement.

In its recent global survey, EY found that a majority of C-suite leaders see non-compliance with AI regulations as the most common AI-related risk. Moreover, the penalties for AI non-compliance in the US can be significant.

Regulatory pressure is rising while rules remain fragmented. Boards need strategies that are proactive, flexible, and geopolitically aware. Software Improvement Group’s AI Maturity Guide 2026 offers 20 practical steps for board members, CTOs, CISOs, and GRC leaders to move from AI ambition to genuine AI control, across governance, risk, development, and security.

AI-related penalties and fines in the US

At present, while there is no comprehensive federal Act governing AI use and risk mitigation, a range of existing laws can still result in severe financial penalties.

Conclusion

AI legislation in the US differs significantly from that in other parts of the world. The focus has primarily been on innovation, government AI use, and reinforcing “traditional American values.” But 2026 marks a shift: state laws are now live and enforceable, a federal preemption battle is underway, and even the Trump administration, which came into office explicitly opposing AI oversight, is now moving toward pre-release evaluation requirements for the most capable frontier models.

For organizations operating within and with the US, navigating the myriad of AI legislation and proposals remains challenging. The direction of travel at the federal level is still uncertain. State laws, however, are not waiting.

In addition, global operations add complexity. An American organization that markets, deploys, or has AI outputs used in the EU will have to meet EU AI Act obligations regardless of where it is headquartered.

Are you ready for the complexities of US AI legislation? Our AI Maturity Guide 2026, authored by Rob van der Veer, simplifies compliance with evolving regulations. With 20 steps covering governance, security, and IT, it helps organizations minimize risk and harness AI’s full potential while staying ahead of regulatory changes.

Frequently asked questions

What is the National Artificial Intelligence Initiative Act of 2020 (NAII)?

The National Artificial Intelligence Initiative Act of 2020, introduced under the Trump administration, was one of the first major national efforts specifically targeting artificial intelligence. However, its primary focus is less on regulating AI and more on fostering research and development in the field. The Act aims to solidify the United States’ position as a global leader in AI innovation. 

What is the purpose of the NAII Act?

The primary purpose of the 2020 Act is to guide AI research, development, and evaluation at various federal science agencies, to drive American R&D into AI technology, and to champion AI use in government. The Trump-era Act advocated for “a more hands-off, free market–oriented political philosophy and the perceived geopolitical imperative of ‘leading’ in AI.”

What is the impact of the NAII act?

The American AI Initiative’s central impact on business in the US has been the coordination of AI activities across different federal agencies. Below we list the main agencies affected and their directions, with emphasis on those affecting business across the country. 

The National Science and Technology Council is to establish an Interagency Committee to coordinate federal programs and activities in support of the initiative. 

The Department of Energy (DOE) is to establish the National Artificial Intelligence Advisory Committee to advise the President and the Initiative Office on matters related to the initiative. 

The DOE must also carry out an artificial intelligence research and development program to: 

  1. Advance artificial intelligence tools, systems, capabilities, and workforce needs; and
  2. Improve the reliability of artificial intelligence methods and solutions relevant to DOE’s mission. 

The National Science Foundation (NSF) is to enter a contract with the National Research Council of the National Academies of Sciences, Engineering, and Medicine to conduct a study of the current and future impact of artificial intelligence on the workforce of the United States. 

The National Institute of Standards and Technology is to develop voluntary standards for artificial intelligence systems, among other things. 

Crucially, the goal of these standards is not—as is the case in the EU—to make AI technology safer, more secure, and more trustworthy, but instead “to advance US AI leadership.”

The NSF is also ordered to fund research and education activities in artificial intelligence systems and related fields. 

And finally, the National Artificial Intelligence Initiative Act of 2020 is to provide regulatory guidance on AI which “reflects American values.”

What is the Executive Order: Removing barriers to American Leadership in Artificial Intelligence?

The executive order titled “Removing Barriers to American Leadership in Artificial Intelligence”, signed by President Trump on January 23, 2025, aims to sustain and enhance America’s global AI dominance by promoting human flourishing, economic competitiveness, and national security.

What is the purpose of the Executive Order?

This order will revoke certain AI policies and directives that act as barriers to American AI innovation, essentially clearing a path for the United States to act decisively to retain global leadership in artificial intelligence. In order to maintain this leadership, AI systems that are developed must be “free from ideological bias or engineered social agendas.” 

Key aspects of the Executive order

  1. Developing an Artificial Intelligence Action Plan
    Within 180 days of this order, key advisors on science, technology, AI, crypto, national security, economic policy, and domestic policy, along with relevant government officials, must create and submit a plan to the President to carry out the policy in section 2 of this order.
  2. Implementation of order revocation
    Key officials, including the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), must immediately review all actions taken under the revoked Executive Order 14110 on AI. They will identify any actions that conflict with the new policy and work with relevant agencies to suspend, revise, or rescind them as necessary. If changes cannot be made immediately, exemptions will be applied until finalized.

What is AMERICA’S AI ACTION PLAN?

America’s AI Action Plan (released July 2025) remains a key roadmap for federal “innovation-first” actions. In 2026, it is best treated as a directional policy signal (procurement expectations, infrastructure priorities, export posture): even where it is not itself enforceable law, it influences how agencies act.

What is the impact of AMERICA’s AI ACTION PLAN?

The document identifies over 90 Federal policy actions across three pillars – Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security.  

What are key aspects of AMERICA’s AI ACTION PLAN?

  1. Exporting American AI
    The Commerce and State Departments will partner with industry to deliver secure, full-stack AI export packages – including hardware, models, software, applications, and standards – to America’s friends and allies around the world.

  2. Promoting rapid buildout of data centers
    Expediting and modernizing permits for data centers and semiconductor fabs, as well as creating new national initiatives to increase high-demand occupations like electricians and HVAC technicians.

  3. Enabling innovation and adoption
    Removing Federal regulations that hinder AI development and deployment, and seeking private sector input on rules to remove.

  4. Upholding free speech in frontier models
    Updating Federal procurement guidelines to ensure that the government only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias. 

  5. Ensuring a National Policy Framework for Artificial Intelligence
    This goal was later advanced through EO 14365 (December 2025), which promotes a “minimally burdensome” national AI policy framework and aims to reduce inconsistencies across state AI laws. The order directs the Attorney General to establish an AI Litigation Task Force to challenge state AI laws deemed inconsistent with the EO’s policy, and directs Commerce to publish an evaluation of existing state AI laws.
