In early 2025, California put into effect more new AI laws than any other state in the country. The effort marked a bold step in shaping how artificial intelligence is deployed, policed, and commercialized, but it also reopened familiar fractures between lawmakers eager to set limits and tech leaders warning of unintended consequences. What unfolds next will test California's ability to lead as both a driver of innovation and a regulator of its impact.
Silicon Valley vs. Sacramento
California is a state of contradictions, where libertarian technologists share sidewalks with progressive regulators. It’s home to OpenAI, Google DeepMind, Meta, and Anthropic, all racing to train ever-larger language models with implications still poorly understood.
But it's also the launchpad for a much broader range of digital ventures, from streaming platforms and gig-economy apps to new sites launching this year in the U.S. online casino market. While these sectors vary in scale and influence, they all operate under California's growing web of tech regulations: laws targeting deepfakes, AI disclosures, and protections for workers and consumers.
So, who gets to shape the future: the engineers or the ethicists?
SB 1047 tried to draw that line. It would have required developers of advanced AI systems to conduct safety evaluations, disclose training data, and permit third-party audits. Supporters saw it as a blueprint for national policy. Critics warned it could stifle smaller innovators while focusing too narrowly on large models. Governor Newsom’s veto cited a lack of empirical justification and concerns about curbing growth in California’s flagship industry.
The Ghosts of Hollywood
In 2023, the Writers Guild of America and SAG-AFTRA went on strike, driven in part by fears that studios would replace their members with synthetic versions of themselves. What worried them most wasn't just job loss; it was the prospect that voices, faces, and performances could be copied by generative AI and reused indefinitely, without control or consent.
Lawmakers responded with AB 2602, passed in 2024 and in effect since early 2025, which requires explicit permission before an actor's image or voice can be digitally replicated. It's a late but significant recognition that technological progress, without clear limits, can erode the very idea of personal ownership.
Global Eyes on Sacramento
Europe has taken notice. In a quiet but symbolic move, the European Union opened an office in San Francisco to coordinate with U.S. stakeholders on digital regulation. Brussels, long a leader in data protection and tech policy, now views California not just as a collaborator, but as a possible competitor in setting global AI standards.
That diplomatic gesture reflects a larger truth: when Washington stalls, Sacramento steps in.
Whether that model can hold is less clear. California's AI sector is anything but uniform. What works for OpenAI may not work for a 12-person startup building speech tools, and laws that treat all systems as large-scale and high-risk overlook the quieter, but no less consequential, uses of AI in hiring decisions, insurance claims, and classrooms.
What’s Next?
The challenge of regulating AI is that the more specific the rules become, the less adaptable they are to what comes next.
Consider AB 2013, which will require developers to publish summaries of AI training datasets starting in 2026. Or SB 1120, which sets standards for how healthcare algorithms are used to approve claims. Or the latest amendments to the CCPA, which now extend to neural data and AI-generated personal information.
Each statute responds to a clear concern. But together, they've created a legal landscape that's difficult even for experts to navigate. The Attorney General's office recently issued formal advisories to help businesses understand how these laws apply to the technologies they're already using.
Meanwhile, California’s courts are asking a different question: Can AI improve the justice system itself? A judicial task force is now studying how generative AI could help modernize court administration, without compromising legal ethics or due process.
Between Risk and Reward
California’s position is precarious. As both a global tech engine and the largest AI labor market in the U.S., the state can’t afford to stall innovation or let it go unchecked. Its regulatory path reflects a broader tension: between speed and safety, commercial gains and public interest, ambition and restraint. The direction won’t be set by lawmakers or labs alone. It will be shaped in lawsuits, labor actions, policy briefs, and the everyday consequences of automated decisions.
