38 States Have Now Passed AI Laws — The Regulatory Patchwork Every Builder Must Navigate

Nebraska, Maryland, and Maine passed new AI bills this week. The federal government is racing to preempt the chaos with a single national framework. Here is what every team deploying AI in production needs to understand right now.

38 · states with AI legislation
3 · new AI bills passed this week
Mar 18 · Trump America AI Act draft introduced
2026 · Colorado AI Act takes effect

If you are building or deploying AI products in the United States, you are now operating in a legal environment that did not exist two years ago — and that is fragmenting faster than any single team can track. This week, Nebraska, Maryland, and Maine each passed new AI-related legislation, bringing the total number of states with enacted AI laws to 38. That number crossed 30 just six months ago. The pace is accelerating, not slowing.

At the federal level, two parallel efforts are underway. The White House released a National Policy Framework for AI on March 20, 2026 — a set of non-binding recommendations that signal intent without imposing legal obligation. And on March 18, Senator Marsha Blackburn introduced a draft of the “Trump America AI Act,” which attempts to do something more durable: codify President Trump’s December 2025 Executive Order on AI into a single national legislative framework that would preempt conflicting state laws.

The goal of federal preemption is to reduce this patchwork to a single compliance surface. But until that bill passes — and it has not yet — you are operating under 38 overlapping frameworks, with more arriving weekly.

The 5-Second Version

01 · What Passed This Week

Three states moved AI legislation across the finish line in the same week, a signal of how normalized state-level AI lawmaking has become. Here is what each bill does, in plain terms.

Nebraska — Chatbot Disclosure Bill

Requires any AI-powered chatbot deployed by a business to affirmatively disclose that the user is interacting with an AI system, not a human. Applies to customer service, sales, and support interactions. No exceptions for “obvious” AI contexts.

Impact: If you deploy a chatbot in Nebraska, it must identify itself.
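
In practice, affirmative disclosure is a deployment-gating decision, not a legal footnote. A minimal sketch of one way to wire it in at session start follows; the state set, message text, and function name are illustrative assumptions, not language from the Nebraska bill:

```python
# Minimal sketch: gate an affirmative AI disclosure on the user's state.
# The state set, message text, and function name are illustrative
# assumptions, not language from the Nebraska bill.

DISCLOSURE_STATES = {"NE"}  # states requiring affirmative chatbot disclosure
DISCLOSURE = "You are chatting with an AI assistant, not a human."

def open_session(user_state: str, greeting: str) -> list[str]:
    """Return the opening messages for a new chat session."""
    messages = []
    if user_state in DISCLOSURE_STATES:
        messages.append(DISCLOSURE)
    messages.append(greeting)
    return messages

print(open_session("NE", "Hi! How can I help?"))
```

The design point: because the requirement has no carve-out for "obvious" AI contexts, the disclosure belongs in the session-opening path itself, where it cannot be skipped by a UI variant.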

Maryland — AI Pricing Discrimination Bill

Bans the use of AI systems to charge consumers different prices based on protected characteristics including race, gender, and national origin. Targets algorithmic pricing models that produce discriminatory outputs even without discriminatory intent.

Impact: Dynamic pricing models serving Maryland users need an equity audit.
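
A first-pass equity audit can start as simply as comparing mean quoted prices across a protected attribute, as in the sketch below. The data, field names, and any review threshold are made up for illustration; a real audit needs statistical and legal review:

```python
# Illustrative first-pass disparity check for a dynamic pricing model:
# compare mean quoted prices across groups. The data and field names are
# made up; a real equity audit needs statistical and legal review.
from collections import defaultdict

def price_disparity(records, group_key="group", price_key="price"):
    """Return (max/min ratio of per-group mean prices, per-group means)."""
    by_group = defaultdict(list)
    for r in records:
        by_group[r[group_key]].append(r[price_key])
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(means.values()) / min(means.values()), means

records = [
    {"group": "A", "price": 100.0},
    {"group": "A", "price": 104.0},
    {"group": "B", "price": 118.0},
    {"group": "B", "price": 122.0},
]
ratio, means = price_disparity(records)
print(f"disparity ratio: {ratio:.2f}")  # ratios well above 1.0 warrant review
```

Note that Maryland's bill targets discriminatory outputs regardless of intent, which is exactly why an outcome-level check like this, rather than an inspection of model features, is the right starting point.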

Maine — AI Therapy Ban

Prohibits AI systems from providing mental health therapy, counseling, or crisis intervention to users unless a licensed human mental health professional is directly supervising the session. Effectively bans autonomous AI therapists in Maine.

Impact: Mental health AI apps must restructure their supervision model for Maine users.

The Running Total

These three join 35 other states that have already enacted some form of AI legislation. Coverage ranges from narrow disclosure requirements (like Nebraska) to broad algorithmic accountability frameworks (like Colorado’s). No two states are identical.

Impact: If you serve users in multiple states, you have multiple compliance obligations.
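
One way teams make multi-state obligations concrete is a per-state obligation map whose union is the combined compliance surface. The mapping below is illustrative shorthand drawn from the bills described in this piece, not a legal inventory:

```python
# Sketch of multi-state compliance planning: map each state to its
# obligations and take the union across your user base. The mapping is
# illustrative shorthand, not a legal inventory.

STATE_OBLIGATIONS = {
    "NE": {"chatbot_disclosure"},
    "MD": {"pricing_equity_audit"},
    "ME": {"licensed_supervision_for_therapy"},
    "CO": {"impact_assessment", "human_review", "ai_use_notice"},
}

def combined_obligations(user_states):
    """Union of obligations across every state you serve."""
    obligations = set()
    for state in user_states:
        obligations |= STATE_OBLIGATIONS.get(state, set())
    return obligations

print(sorted(combined_obligations(["NE", "CO"])))
```

The union, not the intersection, is what matters: serving users in two states means meeting both states' requirements, which is the mechanical reason a 38-state patchwork compounds rather than averages out.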

02 · The Federal Preemption Play

The fragmentation problem is not lost on Washington. Two distinct federal efforts are underway right now, and understanding the difference matters for how you plan your compliance roadmap.

The White House National Policy Framework for AI, released March 20, 2026, is a policy document — not a law. It establishes the administration’s vision for AI governance: prioritizing American competitiveness, avoiding “overly prescriptive” regulation, and encouraging industry self-governance. It is directionally important but does not preempt anything. No state attorney general is bound by it.

The Trump America AI Act, introduced as a draft by Senator Marsha Blackburn on March 18, is the legislative vehicle. It attempts to translate the December 2025 Executive Order — which directed federal agencies to promote AI innovation while managing national security risk — into statute. The bill’s most significant provision for builders is its federal preemption clause, which would establish a single national AI compliance standard and override conflicting state laws.

The political math for the bill is uncertain. State AGs and privacy advocates are pushing back on the preemption provisions, arguing that states have a legitimate interest in protecting their residents. Tech industry groups are pushing hard for a single national standard, arguing that 38-state compliance is economically irrational. This debate is not resolved. It may not be resolved in 2026.

03 · The Laws Already In Effect

While the new bills get attention, the bigger compliance risk for most teams right now sits in the laws that are already on the books and taking effect.

Colorado AI Act takes effect later in 2026. It is the most comprehensive state-level AI accountability law in the country, modeled loosely on the EU AI Act. It requires developers and deployers of “high-risk AI systems” — those that make or substantially inform decisions in employment, education, credit, healthcare, and housing — to perform impact assessments, maintain documentation, allow human review of adverse decisions, and notify consumers when AI is used. Colorado is not a small market, and this law has teeth.

California CCPA amendments now explicitly extend to automated decision-making. Any company that uses AI to profile California consumers, make significant decisions about them, or train models on their data must comply with expanded disclosure, opt-out, and access rights. Given that California represents roughly 14% of US GDP, “we don’t serve California” is not a practical opt-out for most businesses.

38 · states with enacted AI laws
14% · US GDP represented by California alone
2026 · Colorado AI Act effective year

04 · The EU AI Act: Still Applies to You

US companies have a tendency to treat EU regulation as someone else’s problem. On AI, that is a mistake. The EU AI Act applies to any organization whose AI systems affect people in the European Union — regardless of where the company is headquartered or where the servers sit.

The practical implication: if you have EU users and your AI system touches employment, credit scoring, healthcare, critical infrastructure, law enforcement, or educational assessment, you are operating a “high-risk AI system” under EU law. That triggers a significant compliance stack: conformity assessments, technical documentation, registration in the EU database, post-market monitoring, and incident reporting obligations.
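
As a rough triage of that trigger, the sketch below flags systems for high-risk compliance review when they touch one of the sensitive domains named above and affect EU users. This is a planning heuristic under assumed domain labels, not the Act's legal test:

```python
# Rough triage: flag a system for high-risk compliance review when it
# operates in a sensitive domain AND affects EU users. A planning
# heuristic, not the EU AI Act's legal classification test.

HIGH_RISK_DOMAINS = {
    "employment", "credit_scoring", "healthcare",
    "critical_infrastructure", "law_enforcement", "educational_assessment",
}

def needs_high_risk_review(domain: str, has_eu_users: bool) -> bool:
    """True if the system likely needs the high-risk compliance stack."""
    return has_eu_users and domain in HIGH_RISK_DOMAINS

print(needs_high_risk_review("credit_scoring", has_eu_users=True))   # True
print(needs_high_risk_review("credit_scoring", has_eu_users=False))  # False
```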

The industry is currently requesting extensions on generative AI labeling and transparency requirements — specifically on the obligations around synthetic content watermarking and training data disclosure. Those extension requests are being reviewed. But the law is in effect. Waiting for the extension to be granted before starting compliance work is a real business risk.

05 · What This Means If You Are Building AI Products

The practical reality is that regulatory literacy is now a core skill for anyone deploying AI in production — not just for lawyers, and not something you can outsource entirely to your legal team. The people building the systems need to understand what the laws actually require, because those requirements shape architecture decisions, data pipelines, user interfaces, and deployment models.

Three sectors face the highest immediate exposure:

Healthcare AI sits under the heaviest stack: Maine’s therapy ban, HIPAA, state health privacy laws, Colorado’s high-risk AI requirements if decisions affect care, and EU AI Act classification as high-risk. If you are building in digital health, you need legal review before deployment, not after your first user complaint.

HR and hiring AI is regulated in Illinois, New York City, Colorado, and several other jurisdictions. Automated resume screening, interview scoring, and promotion recommendation tools all face specific disclosure, audit, and opt-out requirements. Illinois’ AI Video Interview Act has been in effect since 2020. This is not new, but enforcement is tightening.

Financial AI — including credit scoring, loan underwriting, insurance pricing, and investment recommendations — is subject to fair lending laws, FTC guidance on algorithmic decision-making, Maryland’s new pricing discrimination bill, and CFPB enforcement actions that have been accelerating since 2024.

The common thread across all three: if your AI system makes or informs a decision that affects a person’s access to something important — credit, healthcare, employment, housing — you are in regulated territory in most US states, and the number of states covering that territory grows every month.

The Verdict
The 38-state patchwork is not a temporary inconvenience waiting to be resolved by federal preemption. It is the operating environment right now. If you are deploying AI systems that affect real people in production, understanding the regulatory landscape is no longer optional — it is a core part of the job. Federal preemption may come. Until it does, assume the most protective state law in your user base applies to you.

This is exactly why we built AI regulation and compliance into the Precision AI Academy curriculum. Understanding what you can deploy, where, and under what constraints is not a legal sidebar — it is a technical and product decision that starts on day one of building. Two days in person with practitioners who navigate this environment is the fastest way to build that fluency.

Learn to Build AI That Ships — and Stays Compliant

The 2-day in-person Precision AI Academy bootcamp. 5 cities. $1,490. 40 seats max. Thursday-Friday cohorts, June-October 2026.

Reserve Your Seat

Published By

Bo Peng — Precision AI Academy

Practitioner-focused AI education · 2-day in-person bootcamp in 5 U.S. cities

Precision AI Academy publishes deep-dives on applied AI for working professionals. Founded by Bo Peng (Kaggle Top 200), federal AI practitioner and instructor. The in-person bootcamp runs in Denver, NYC, Dallas, LA, and Chicago.
