Don’t Regulate AI Models. Regulate AI Use

By Simon Osuji
February 2, 2026
in Artificial Intelligence



  • Hazardous dual-use functions (e.g., tools to fabricate biometric voiceprints to defeat authentication).
  • Regulatory adherence: confine to licensed facilities and verified operators; prohibit capabilities whose primary purpose is unlawful.

Close the loop at real-world chokepoints

AI-enabled systems become real when they’re connected to users, money, infrastructure, and institutions, and that is where regulators should focus enforcement: at the points of distribution (app stores and enterprise marketplaces), capability access (cloud and AI platforms), monetization (payment systems and ad networks), and risk transfer (insurers and contract counterparties).
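To make this concrete, here is a minimal sketch in Python of how a distribution chokepoint such as an app store could gate a listing on attested controls that scale with the risk tier of the use. The tier names and control labels are hypothetical illustrations, not drawn from any statute or from this article:

from dataclasses import dataclass, field

# Hypothetical tiers and the controls each requires; labels are illustrative.
REQUIRED_CONTROLS = {
    "minimal": set(),
    "high_risk": {"operator_identity_verified", "capability_gating",
                  "tamper_evident_logging", "human_fallback"},
    "dual_use_hazard": {"operator_identity_verified", "capability_gating",
                        "tamper_evident_logging", "human_fallback",
                        "licensed_facility"},
}

@dataclass
class DeploymentRequest:
    app_name: str
    risk_tier: str                      # assigned by the use, not the model
    attested_controls: set = field(default_factory=set)

def chokepoint_decision(req):
    """Return (allow, missing) for a chokepoint such as an app store review."""
    missing = REQUIRED_CONTROLS[req.risk_tier] - req.attested_controls
    return (not missing, missing)

req = DeploymentRequest("triage-assistant", "high_risk",
                        {"operator_identity_verified", "capability_gating"})
allow, missing = chokepoint_decision(req)
print(allow, sorted(missing))  # False ['human_fallback', 'tamper_evident_logging']

The same decision function could back a cloud provider’s capability-access check or an insurer’s underwriting questionnaire; the point is that the required controls follow the use, not the model.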

For high-risk uses, we need to require identity binding for operators, capability gating aligned to the risk tier, and tamper-evident logging for audits and post-incident review, paired with privacy protections. We need to demand evidence for deployer claims and require deployers to maintain incident-response plans, report material faults, and provide human fallback. When AI use leads to damage, firms should have to show their work and face liability for harms.
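Tamper-evident logging, in particular, has a simple and well-understood construction: chain each audit record to a hash of the record before it, so any retroactive edit breaks verification. A minimal illustrative sketch in Python (not a production audit system; the field names are hypothetical):

import hashlib, json, time

def append_entry(log, event):
    """Append an audit record chained to the hash of the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"ts": time.time(), "event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log):
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("ts", "event", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"operator": "op-123", "action": "high_risk_inference"})
append_entry(log, {"operator": "op-123", "action": "human_fallback_invoked"})
print(verify_chain(log))           # True
log[0]["event"]["action"] = "x"    # tamper with history
print(verify_chain(log))           # False

Because each hash commits to the entire history before it, an auditor who holds only the latest hash can detect tampering anywhere in the log.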

This approach creates market dynamics that accelerate compliance. If crucial business operations such as procurement, access to cloud services, and insurance depend on proving that you’re following the rules, AI model developers will build to specifications buyers can check. That raises the safety floor for all industry players, startups included, without handing an advantage to a few large, licensed incumbents.

The EU approach: How this aligns, where it differs

This framework aligns with the EU AI Act in two important ways. First, it centers risk at the point of impact: the Act’s “high-risk” categories include employment, education, access to essential services, and critical infrastructure, with lifecycle obligations and complaint rights. Second, it recognizes that broadly capable general-purpose AI (GPAI) systems need special treatment, without pretending publication control is a safety strategy. My proposal for the U.S. differs in three key ways:

First, the U.S. must design for constitutional durability. Courts have treated source code as protected speech, and a regime that requires permission to publish weights or train a class of models starts to resemble prior restraint. A use-based regime of rules governing what AI operators can do in sensitive settings, and under what conditions, fits more naturally within U.S. First Amendment doctrine than speaker-based licensing schemes.

Second, the EU can rely on platforms adapting to the precautionary rules it writes for its unified single market. The U.S. should accept that models will exist globally, both open and closed, and focus on where AI becomes actionable: app stores, enterprise platforms, cloud providers, enterprise identity layers, payment rails, insurers, and regulated-sector gatekeepers (hospitals, utilities, banks). Those are enforceable points where identity, logging, capability gating, and post-incident accountability can be required without pretending we can “contain” software. They also span the many specialized U.S. agencies, none of which may be able to write higher-level rules broad enough to cover the whole AI ecosystem. Instead, the U.S. should regulate AI service chokepoints more explicitly than Europe does, to accommodate the different shape of its government and public administration.

Third, the U.S. should add an explicit “dual-use hazard” tier. The EU AI Act is primarily a fundamental-rights and product-safety regime. The U.S. also has a national-security reality: certain capabilities are dangerous because they scale harm (biosecurity, cyber offense, mass fraud). A coherent U.S. framework should name that category and regulate it directly, rather than trying to fit it into generic “frontier model” licensing.

China’s approach: What to reuse, what to avoid

China has built a layered regime for public-facing AI. The “deep synthesis” rules (effective January 10, 2023) require conspicuous labeling of synthetic media and place duties on providers and platforms. The Interim Measures for Generative AI (effective August 15, 2023) add registration and governance obligations for services offered to the public. Enforcement leverages platform control and algorithm filing systems.

The United States should not copy China’s state-directed control of AI viewpoints or information management; it is incompatible with U.S. values and would not survive U.S. constitutional scrutiny. The licensing of model publication is brittle in practice and, in the United States, likely an unconstitutional form of censorship.

But we can borrow two practical ideas from China. First, we should ensure trustworthy provenance and traceability for synthetic media, through mandatory labeling and provenance-forensics tools. These give legitimate creators and platforms a reliable way to prove origin and integrity. When authenticity can be checked quickly at scale, attackers lose the advantage of cheap copies and deepfakes, and defenders regain time to detect, triage, and respond. Second, we should require operators of public-facing, high-risk services to file their methods and risk controls with regulators, as we do for other safety-critical projects. This should include due-process and transparency safeguards appropriate to liberal democracies, along with clear responsibility for safety measures, data protection, and incident handling, especially for systems designed to manipulate emotions or build dependency, which already include gaming, role-playing, and associated applications.
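The verification step itself is mechanically cheap; the hard problems are key management and adoption, which standards such as C2PA’s Content Credentials tackle with public-key signatures. The Python sketch below illustrates the principle, with an HMAC standing in for a real signature scheme (the key and names are hypothetical):

import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # illustration only; real schemes use public-key signatures

def make_manifest(media_bytes, creator):
    """Bind a creator claim to the exact bytes of a media asset."""
    claim = {"creator": creator, "sha256": hashlib.sha256(media_bytes).hexdigest()}
    tag = hmac.new(SIGNING_KEY, json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**claim, "tag": tag}

def verify_manifest(media_bytes, manifest):
    """Cheap, scalable check: does the asset match its signed provenance claim?"""
    claim = {"creator": manifest["creator"], "sha256": manifest["sha256"]}
    expected = hmac.new(SIGNING_KEY, json.dumps(claim, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["tag"])
            and hashlib.sha256(media_bytes).hexdigest() == manifest["sha256"])

asset = b"...synthetic image bytes..."
m = make_manifest(asset, "studio-xyz")
print(verify_manifest(asset, m))         # True
print(verify_manifest(asset + b"!", m))  # False: the asset was altered

With public-key signatures in place of the shared key, anyone can run the check while only the creator can produce the tag, which is what makes authenticity verification workable at platform scale.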

A pragmatic approach

We cannot meaningfully regulate the development of AI in a world where artifacts can be copied in near real time and research flows fluidly across borders. But we can keep unvetted systems out of hospitals, payment systems, and critical infrastructure by regulating uses, not models; enforcing at chokepoints; and applying obligations that scale with risk.

Done right, this approach harmonizes with the EU’s outcome-oriented framework, channels U.S. federal and state innovation into a coherent baseline, and reuses China’s useful distribution-level controls while rejecting speech-restrictive licensing. We can write rules that protect people while still promoting robust AI innovation.


