We Have a Problem: Why Your Voice Is Missing in the AI Policy Debate
Last week, I headed to the Princeton University campus for talks on AI policy, and I've got great news for grad students and post-docs: if you want to make an impact in the field of AI, you don't need to be a math whiz; you can help direct the evolution of policy and regulation in the industry today!
This is more than an academic exercise. The new AI Lab at Princeton has joined the conversation with its AI Policy Fellows program, promoting hand-in-hand cooperation between state governments and AI leaders to direct the evolution of AI policy in the US.
And that direction is badly needed. As the 20-odd visitors heard last Thursday, AI policy, at both the state and federal levels, is very much in its infancy. The reality is that the patchwork of evolving frameworks for regulation, verification, and safety is immature. This is where the disconnect becomes dangerous.
The Great Policy Disconnect: Federal Speed vs. State Action
The current US administration has made it a priority to advance all areas of AI except for regulatory policy.
This is a deliberate strategy. The administration's "America's AI Action Plan," released in July 2025, focuses on deregulation, accelerating data center construction, and promoting "global AI dominance." The plan even revoked a previous, more regulation-focused executive order from the Biden-Harris administration. The federal message is clear: accelerate innovation and infrastructure, and "Build, Baby, Build!"
Instead of top-down laws, the federal approach has favored voluntary guidelines, like the NIST AI Risk Management Framework (AI RMF). This framework is a "gold standard" playbook for organizations to manage risk, but it isn't a law. This hands-off federal stance creates a vacuum, and states (especially California) are rushing to fill it.
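To make the "playbook" idea concrete, here is a minimal sketch of how an organization might track its coverage of the framework. The four core functions (Govern, Map, Measure, Manage) do come from NIST AI RMF 1.0, but the example questions, the checklist structure, and the `report_gaps` helper are my own illustration, not an official NIST artifact.

```python
# Minimal sketch of tracking NIST AI RMF coverage for one AI system.
# The four core functions (Govern, Map, Measure, Manage) come from
# NIST AI RMF 1.0; the example questions below are illustrative only.

RMF_CHECKLIST = {
    "Govern": [
        "Is there a named owner accountable for this AI system?",
        "Are acceptable-use and escalation policies documented?",
    ],
    "Map": [
        "Are the system's intended uses and users documented?",
        "Have foreseeable misuse scenarios been identified?",
    ],
    "Measure": [
        "Are accuracy, bias, and robustness metrics defined and tracked?",
        "Is there a process for red-teaming or adversarial testing?",
    ],
    "Manage": [
        "Is there a plan to respond to identified risks?",
        "Are incidents logged and fed back into the risk register?",
    ],
}

def report_gaps(answers: dict[str, dict[str, bool]]) -> None:
    """Print any checklist items not yet marked as addressed."""
    for function, questions in RMF_CHECKLIST.items():
        done = answers.get(function, {})
        for q in questions:
            if not done.get(q, False):
                print(f"[{function}] open item: {q}")

if __name__ == "__main__":
    # Example: a team that has only begun on governance.
    report_gaps({"Govern": {RMF_CHECKLIST["Govern"][0]: True}})
```

Because the framework is voluntary, nothing compels a company to close those gaps; that's precisely the vacuum the states are stepping into.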
California's AI Rule-Making Frenzy
Over two hours of lectures and discussions, it became very clear that the current forms, questionnaires, and safety evaluations are more about collecting information than enabling informed decisions.
For example, the California state forms for assessing AI safety that I reviewed blatantly contradicted themselves, asked questions without defining key terms, and were open-ended enough for bad actors to slip through unnoticed.
While those forms were flawed, the landscape in California has radically shifted in just the last few months.
In late 2024, Governor Newsom signed what's been called the most comprehensive AI legislative package in the nation. This isn't a single policy; it's a web of new rules.
- For Engineers and Employers: The most significant change is the new set of regulations on Automated Decisionmaking Technology (ADMT). Starting in 2026, companies using AI for employment decisions (like hiring, promotion, or firing) must conduct risk assessments, provide pre-use notices to employees, and offer a way to opt out (see the sketch after this list).
- For Society: The state passed a slew of bills cracking down on deepfakes, protecting individuals' digital likenesses, and requiring AI content watermarking.
- The "Innovation" Brake: In a move that highlights the deep conflict at the heart of this, Governor Newsom also vetoed several more restrictive bills, including one nicknamed the "No Robo Bosses Act," citing concerns that it would "stifle innovation."
This is the very definition of a "patchwork," and untangling it will require significant investment from thought leaders, the public, industry, and academia to build a structure that implements AI solutions safely.
A Call for Experts (That Means You, Engineer)
As an engineer, I have often passed over policy and societal issues, preferring to think that my engineering background wasn’t useful in that sphere.
However, this new, complex reality is exactly why we can't sit out any longer. We need thinkers, doers, and educators to help direct the conversation and make sure we, as a society, are making progress rather than piling up restrictions for the sake of regulation or revenue. We need people who understand the technology to be in the room where the rules are being written.
Together, we can build a future where AI is used to advance humanity, but it will take both sides of the aisle working in concert. And we need to motivate today's students to get involved in these discussions.
So, I'll end with the questions that sparked our discussion at Princeton:
How do you envision AI policy maturing? And where has your company put AI security or evaluation frameworks into practice?
