
Regulatory adherence: confine deployment to licensed facilities and verified operators; prohibit capabilities whose primary purpose is unlawful.
Close the loop at real-world chokepoints
AI-enabled systems become real when they're connected to users, money, infrastructure, and institutions, and that is where regulators should focus enforcement: at the points of distribution (app stores and enterprise marketplaces), capability access (cloud and AI platforms), monetization (payment systems and ad networks), and risk transfer (insurers and contract counterparties).
For high-risk uses, we need to require identity binding for operators, capability gating aligned to the risk tier, and tamper-evident logging for audits and post-incident review, paired with privacy protections. Deployers should have to substantiate their claims with evidence, maintain incident-response plans, report material faults, and provide human fallback. When AI use leads to damage, firms should have to show their work and face liability for harms.
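To make the logging requirement concrete, here is a minimal sketch of what a tamper-evident audit log could look like: each entry commits to the one before it, so any retroactive edit breaks the chain and is detectable at verification time. The class and field names are illustrative assumptions, not a prescribed standard; a real deployment would add cryptographic signing, secure storage, and privacy-preserving redaction.

```python
# Minimal sketch of a tamper-evident audit log using a hash chain.
# All names are illustrative; this is not a reference implementation.
import hashlib
import json
import time


class AuditLog:
    """Append-only log in which each entry commits to the previous one,
    so any after-the-fact modification is detectable."""

    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64  # genesis value

    def append(self, operator_id: str, action: str, risk_tier: str) -> dict:
        entry = {
            "ts": time.time(),
            "operator_id": operator_id,   # identity binding
            "action": action,
            "risk_tier": risk_tier,       # context for capability gating
            "prev_hash": self.prev_hash,
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self.prev_hash = digest
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The point of the design is that an auditor only needs the log itself to detect tampering; no trust in the operator's database is required.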
This approach creates market dynamics that accelerate compliance. If crucial business operations such as procurement, access to cloud services, and insurance depend on proving that you’re following the rules, AI model developers will build to specifications buyers can check. That raises the safety floor for all industry players, startups included, without handing an advantage to a few large, licensed incumbents.
The EU approach: How this aligns, where it differs
This framework aligns with the EU AI Act in two important ways. First, it centers risk at the point of impact: the Act's "high-risk" categories include employment, education, access to essential services, and critical infrastructure, with lifecycle obligations and complaint rights. Second, it recognizes that broadly capable systems (GPAI) need special treatment, without pretending publication control is a safety strategy. My proposal for the U.S. differs in three key ways:
First, the U.S. must design for constitutional durability. Courts have treated source code as protected speech, and a regime that requires permission to publish weights or to train a class of models starts to resemble prior restraint. A use-based regime, with rules governing what AI operators can do in sensitive settings and under what conditions, fits U.S. First Amendment doctrine more naturally than speaker-based licensing schemes do.
Second, the EU can rely on platforms adapting to the precautionary rules it writes for its unified single market. The U.S. should accept that models will exist globally, both open and closed, and focus on where AI becomes actionable: app stores, enterprise platforms, cloud providers, enterprise identity layers, payment rails, insurers, and regulated-sector gatekeepers (hospitals, utilities, banks). Those are enforceable points where identity, logging, capability gating, and post-incident accountability can be required without pretending we can "contain" software. They also map onto the many specialized U.S. agencies, none of which may be able to write rules broad enough on its own to reach the whole AI ecosystem. The U.S. should therefore regulate AI service chokepoints more explicitly than Europe does, to accommodate the different shape of its government and public administration.
Third, the U.S. should add an explicit “dual-use hazard” tier. The EU AI Act is primarily a fundamental-rights and product-safety regime. The U.S. also has a national-security reality: certain capabilities are dangerous because they scale harm (biosecurity, cyber offense, mass fraud). A coherent U.S. framework should name that category and regulate it directly, rather than trying to fit it into generic “frontier model” licensing.
China’s approach: What to reuse, what to avoid
China has built a layered regime for public-facing AI. The “deep synthesis” rules (effective January 10, 2023) require conspicuous labeling of synthetic media and place duties on providers and platforms. The Interim Measures for Generative AI (effective August 15, 2023) add registration and governance obligations for services offered to the public. Enforcement leverages platform control and algorithm filing systems.
The United States should not copy China’s state-directed control of AI viewpoints or information management; it is incompatible with U.S. values and would not survive U.S. constitutional scrutiny. The licensing of model publication is brittle in practice and, in the United States, likely an unconstitutional form of censorship.
But we can borrow two practical ideas from China. First, we should ensure trustworthy provenance and traceability for synthetic media, through mandatory labeling and forensic provenance tools. These give legitimate creators and platforms a reliable way to prove origin and integrity. When authenticity can be checked quickly and at scale, attackers lose the advantage of cheap copies and deepfakes, and defenders regain time to detect, triage, and respond. Second, we should require operators of public-facing, high-risk services to file their methods and risk controls with regulators, as we do for other safety-critical activities. Such filings should come with the due-process and transparency safeguards appropriate to liberal democracies, along with clear responsibility for safety measures, data protection, and incident handling, especially for systems designed to manipulate emotions or build dependency, a category that already includes gaming, role-playing, and associated applications.
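As a sketch of what machine-checkable provenance could look like, the example below binds a content hash to origin metadata, including an explicit "synthetic" label, so a platform can re-verify both authenticity and integrity. The function names are illustrative assumptions; the shared-key HMAC is a simplification to keep the example self-contained, whereas production systems built on standards such as C2PA use asymmetric signatures and certificate chains.

```python
# Minimal sketch of machine-checkable provenance for synthetic media.
# A creator attaches a manifest binding the content hash to origin metadata;
# a platform re-hashes the content and checks the claim.
# HMAC with a shared key is used only to keep this self-contained;
# real systems use asymmetric signatures and certificate chains.
import hashlib
import hmac
import json


def make_manifest(content: bytes, creator: str, tool: str, key: bytes) -> dict:
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "generator": tool,     # e.g. the model or editing tool used
        "synthetic": True,     # the conspicuous label regulators require
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return claim


def verify_manifest(content: bytes, manifest: dict, key: bytes) -> bool:
    """Returns True only if the content is unmodified and the claim is authentic."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claim["content_sha256"] == hashlib.sha256(content).hexdigest()
    )
```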
A pragmatic approach
We cannot meaningfully regulate the development of AI in a world where model artifacts are copied in near real time and research flows fluidly across borders. But we can keep unvetted systems out of hospitals, payment systems, and critical infrastructure by regulating uses, not models; enforcing at chokepoints; and applying obligations that scale with risk.
Done right, this approach harmonizes with the EU's outcome-oriented framework, channels U.S. federal and state innovation into a coherent baseline, and reuses China's useful distribution-level controls while rejecting speech-restrictive licensing. We can write rules that protect people while still promoting robust AI innovation.