The U.S. has entered a new chapter in AI regulation. With Governor Gavin Newsom’s signing of SB 53, California became the first state to pass a law specifically targeting frontier AI models. This landmark legislation requires large AI developers to publish safety frameworks, disclose risk assessments, and report critical incidents to the state. It also extends whistleblower protections and empowers regulators with meaningful enforcement tools.
California’s move reflects a growing trend: in the absence of a unified federal framework, states are stepping in to set their own rules of the road for AI.
A Rising Wave of State AI Laws
California’s SB 53 is not alone. Across the U.S., states are experimenting with their own approaches to governing AI:
- Utah’s SB 149, the Artificial Intelligence Policy Act, sets disclosure requirements for AI in consumer interactions, making clear when people are engaging with machines instead of humans.
- Colorado’s AI Act introduces a duty of reasonable care for developers and deployers of high-risk AI systems to protect consumers from algorithmic discrimination, along with new transparency requirements.
- Montana’s HB 178 “AI Limitations for State & Local Government” curbs certain uses of AI in government decision-making, reflecting concerns over fairness and accountability.
- New York’s RAISE Act (Responsible AI Safety and Education Act) requires large frontier model developers to publish safety protocols and disclose serious incidents, making it the closest counterpart to California’s SB 53.
- Arkansas’s AI ownership law addresses intellectual property head-on, clarifying who owns AI-generated content and the rights that attach to it.
The details differ, but the message is clear: AI governance is happening now, and it is fragmented. Companies operating across states will soon face overlapping, and sometimes conflicting, obligations.
The Global Context
These state efforts come alongside sweeping global frameworks. The EU AI Act categorizes AI systems by risk, imposing stringent obligations on high-risk systems and on general-purpose models deemed to pose systemic risk, while the UK’s principles-based approach emphasizes safety and accountability without binding rules, at least for now. Meanwhile, countries across Asia are rolling out their own governance models.
The result is a patchwork at both the national and international levels. For enterprises deploying GenAI, this means compliance can no longer be treated as an afterthought. It must be integrated into security and governance from day one.
How Lasso Helps Enterprises Stay Ahead
At Lasso, we help enterprises navigate this shifting landscape with confidence. Our platform enables organizations to:
- Protect sensitive data from model misuse, exfiltration, and unauthorized access.
- Enforce governance frameworks that align with emerging laws like California’s SB 53, Colorado’s AI Act, and the EU AI Act.
- Stay compliant across jurisdictions, ensuring that AI deployments meet evolving state, federal, and global requirements (the sketch below shows the underlying idea).
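To make “compliance from day one” concrete, here is a minimal sketch of the underlying idea: encode overlapping obligations as data, then check each deployment against every jurisdiction it touches. Everything in this example (the Obligation class, POLICY_MAP, outstanding_obligations) is a hypothetical illustration, not Lasso’s product API, and the obligations listed are paraphrases of the laws discussed above.

```python
# Illustrative sketch only: hypothetical names, not a real product API.
# Shows how jurisdiction-specific obligations might be tracked as data
# and evaluated before a GenAI deployment goes live.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Obligation:
    name: str    # e.g. "publish safety framework"
    source: str  # the law imposing it, e.g. "CA SB 53"

# Each jurisdiction maps to a paraphrased subset of the laws above.
POLICY_MAP: dict[str, list[Obligation]] = {
    "US-CA": [
        Obligation("publish safety framework", "CA SB 53"),
        Obligation("report critical incidents", "CA SB 53"),
    ],
    "US-CO": [
        Obligation("mitigate algorithmic discrimination", "CO AI Act"),
    ],
    "US-UT": [
        Obligation("disclose AI use to consumers", "UT SB 149"),
    ],
    "EU": [
        Obligation("classify system by risk tier", "EU AI Act"),
    ],
}

@dataclass
class Deployment:
    model: str
    regions: list[str]
    satisfied: set[str] = field(default_factory=set)  # obligations already met

def outstanding_obligations(dep: Deployment) -> list[Obligation]:
    """Return every obligation the deployment has not yet satisfied,
    deduplicated across all regions it operates in."""
    seen: set[str] = set()
    gaps: list[Obligation] = []
    for region in dep.regions:
        for ob in POLICY_MAP.get(region, []):
            if ob.name not in dep.satisfied and ob.name not in seen:
                seen.add(ob.name)
                gaps.append(ob)
    return gaps

if __name__ == "__main__":
    dep = Deployment(
        model="support-chatbot-v2",
        regions=["US-CA", "US-CO", "EU"],
        satisfied={"publish safety framework"},
    )
    for gap in outstanding_obligations(dep):
        print(f"UNMET: {gap.name} (required by {gap.source})")
```

A real governance platform tracks far more than this sketch (effective dates, risk tiers, evidence of compliance), but the shape is the same: obligations become data, and gaps become actionable findings rather than surprises.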
AI regulation is moving from principles to practice, and SB 53 is a clear signal that the era of enforceable guardrails has begun. For enterprises, the challenge is not just to comply with today’s rules, but to anticipate tomorrow’s. Lasso makes that possible, helping organizations innovate safely, responsibly, and globally.