The balance Krishna identifies extends well beyond federal policy. It runs downward into a state-by-state patchwork of legislation now reshaping how American companies build and deploy AI, and upward into a global contest where technological competitiveness underwrites both economic prominence and national security. No clear path forward has emerged at any level. In our conversations with CEOs and political leaders, that lack of clarity is the common refrain.
All this unfolds against a sharper international backdrop. The EU is implementing the AI Act, and China is deploying frontier capability under state direction, while the line between commercial AI and national-security capability is collapsing—raising the cost of incoherent U.S. policy.
Too often, the debate has been framed as a binary choice between sweeping regulation and unrestricted operation, as though there were no middle ground, and with too little attention given to how proposals might conflict with existing law. Both sides talk past each other because neither has a clear test for which specific regulation is actually necessary: aimed at which actor, addressing which gap, and at what cost to whom.
Yet these narrower successors still impose compliance burdens beyond those of existing civil rights and consumer protection law. Across statehouses, the same pattern recurs: well-intentioned legislation that, read carefully, replicates existing protections while adding substantial new compliance costs.
At the federal level, three live propositions each fail on different grounds. Broad state preemption, whether through presidential executive authority or the failed congressional moratorium, trades real protection against demonstrable harms, such as deepfake-generated child sexual abuse material (CSAM), AI-driven election fraud, and automated hiring discrimination, for the illusion of federal uniformity. Mandatory frontier-model approval, as currently floated, is poorly targeted and creates an incumbent moat that locks in the largest developers, though a better-designed version is conceivable. Capability-specific oversight of frontier models that can autonomously generate cyber exploits or chemical, biological, radiological, and nuclear (CBRN)-relevant content is the one area where federal action is genuinely needed, and it is precisely where the conversation is not focused.
International approaches sharpen the contrast. The EU AI Act applies a tiered, risk-based regime with prescriptive compliance requirements scaled to system risk. China pairs state-directed deployment with detailed sectoral rules—algorithmic recommendation, generative AI, and deep synthesis—under national security review. Singapore and the UK have positioned themselves as governance hubs through voluntary frameworks, model sandboxes, and active industry partnerships. Each is a different bet on the same underlying tradeoff between innovation pace, harm reduction, and national security. The U.S. is currently betting without clearly identifying which bet it has placed.
The common failure is the lack of a structured method for determining whether a proposed rule addresses a genuine gap. A three-stage test supplies that method.
Stage 1: The Target Specificity Question
Before evaluating any tradeoffs, a single test should be applied: if “AI” were replaced with “technology” or “software” in the bill text, would existing law already address the harm?
The rule, then, is simple: when existing law adequately addresses the harm, the appropriate instrument is interpretive guidance from the relevant agency, which provides clarity without the compliance costs new legislation imposes. Many state AI bills do not survive this stage, and the first test is the single most efficient discipline a statehouse can adopt.
Stage 2: Four Dimensions of Cost-Benefit Analysis
When existing law does not adequately address the harm, the question becomes whether the proposed rule’s benefits exceed its costs. Every AI policy choice sits along a single axis: more regulation generally delivers stronger protections but reduces economic competitiveness, while less regulation, beyond a baseline of basic protections, preserves competitiveness but accepts greater downside risk. The framework’s purpose is not to resolve this tradeoff in the abstract but to make it explicit for each specific proposal.
Four dimensions warrant consideration: harm reduction, national security and critical-infrastructure resilience, innovation environment, and competitive concentration. The first two yield clear benefits when well targeted, with cost caveats that must still be weighed. The second two entail genuine tradeoffs.
Harm reduction is the strongest test case. The question is whether the harm is demonstrable, measurable, and unaddressed by existing law. AI-generated child sexual abuse material, election deepfakes, and discriminatory automated hiring decisions pass cleanly. Algorithmic harm framed in the abstract does not. A targeted state law addressing a specific harm produces measurable protection at a reasonable cost. A 50-state patchwork addressing the same harm multiplies compliance costs without proportional improvement.
Laying out all four dimensions along the regulation-competitiveness axis forces the debate to consider tradeoffs that current legislative drafting frequently ignores. A bill that scores well on harm reduction can still fail on innovation environment or competitive concentration.
Stage 3: Four Design Tests
Finally, any policy that survives the threshold and tradeoff stages should be evaluated against four design tests: targeting, counterfactual durability, adaptation, and enforceability.
Targeting measures whether the rule is aimed at the actor with the actual capability to mitigate the harm. A rule holding a deployer responsible for harm that only a developer can prevent, or the reverse, is regulatory theater. The EU AI Act’s tiered targeting at the system level is one model, classifying by risk category and assigning specific obligations across the entire value chain from developer to deployer. California SB 53’s developer-focused obligations sit at the other end, placing almost all responsibility on those who built the system. Texas’s TRAIGA imposes liability on whichever actor demonstrates harmful intent.
Cutting across all four tests is the jurisdictional overlay. Frontier-model oversight, critical-infrastructure cybersecurity standards, and much of workforce policy require federal action or multistate compacts. Deepfakes, child sexual abuse material, election fraud, automated hiring discrimination, and procurement transparency more cleanly belong to the states.
Applied honestly, the framework produces sharper verdicts than the current debate allows.
California’s SB 53 partially clears the threshold test. Catastrophic-risk reporting from large frontier developers addresses a gap that California authorities do not fully reach, though several adjacent provisions duplicate existing authority. Gains in transparency and adoption durability are offset by the regulatory cliff at the $500 million revenue and 10²⁶ FLOP thresholds, which invites compute decisions driven by threshold avoidance rather than safety. The bill’s most consequential weakness is that it places its obligations on developers even though the catastrophic harms it contemplates arise primarily during deployment. The CalCompute consortium is its strongest provision, a positive-sum intervention that addresses competitive concentration head-on.
New York’s RAISE Act operates on a similar theory, with stricter provisions, including 72-hour incident reporting (versus California’s 15 days) and a new state oversight office with rulemaking authority. Chapter amendments narrowed the scope considerably, giving the RAISE Act a cleaner threshold case than SB 53, but the cost analysis turns almost entirely on how the oversight body exercises rulemaking authority, a structural risk the bill does not constrain. The same targeting problem as SB 53 remains.
The affirmative model that emerges from applying the framework is defined by a pattern rather than by a single bill. Interpretive guidance from attorneys general and relevant agencies comes first, as Attorney General Tong’s Connecticut advisory and Attorney General Campbell’s earlier Massachusetts advisory demonstrate, doing the threshold work that a substantial share of state AI legislation otherwise duplicates. Narrow legislation follows only where the advisory leaves real gaps and where the gap is genuinely state-level in character—deepfake CSAM, AI-generated election content, automated decision disclosure in benefits administration, and companion-chatbot protections for minors. Sandboxes carry the higher-risk uses on the TRAIGA model. The pattern is replicable across states without locking any one of them into a regime whose enforcement and interpretation will not be testable for years.
The stakes extend beyond domestic compliance. The same decisions position the United States against EU regulators applying the AI Act, Chinese capability development unfolding under state direction, and frontier models whose safety and security implications are now national-security questions in their own right.
The legislative volume is high, but a shared test for distinguishing good policy from bad has been absent from the debate. The framework offered here will not, on its own, resolve any specific dispute. Its purpose is to ensure that the questions before state legislators, members of Congress, and federal agencies are the right questions, asked in the right order, before another five hundred bills are introduced and a patchwork is hardened in place that no one designed and few defend.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.