Don’t Regulate AI Models. Regulate AI Use

Hazardous dual-use functions (for example, tools to fabricate biometric voiceprints to defeat authentication).
Regulatory adherence: confine to licensed facilities and verified operators; prohibit capabilities whose primary purpose is unlawful.

Close the loop at real-world choke points

AI-enabled systems become real when they’re connected to users, money, infrastructure, and institutions, and that’s where regulators should focus enforcement: at the points of distribution (app stores and enterprise marketplaces), capability access (cloud and AI platforms), monetization (payment systems and ad networks), and risk transfer (insurers and contract counterparties).

For high-risk uses, we need to require identity binding for operators, capability gating aligned to the risk tier, and tamper-evident logging for audits and post-incident review, paired with privacy protections. Deployers should have to substantiate their claims with evidence, maintain incident-response plans, report material faults, and provide a human fallback. When AI use causes harm, firms should have to show their work and face liability for those harms.
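To make "tamper-evident logging" concrete, here is a minimal, purely illustrative sketch (not from the article; field names such as operator_id and capability are hypothetical) of a hash-chained audit log in Python. Each record commits to the hash of the one before it, so an auditor can detect after-the-fact edits during post-incident review.

```python
import hashlib
import json
import time

def append_entry(log, operator_id, capability, payload):
    """Append a hash-chained audit record; editing any earlier record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": time.time(),
        "operator_id": operator_id,   # identity binding: who invoked the capability
        "capability": capability,     # capability gating: which tiered function was used
        "payload": payload,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash in order; any mismatch flags tampering for auditors."""
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True
```

In practice a deployer would anchor such a chain in append-only storage or sign it, but even this bare version shows why tamper evidence is auditable: a regulator only needs to rerun the hashes.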

This approach creates market dynamics that accelerate compliance. If crucial business operations such as procurement, access to cloud services, and insurance depend on proving that you’re following the rules, AI model developers will build to specifications buyers can check. That raises the safety floor for all industry players, startups included, without handing an advantage to a few large, licensed incumbents.

The E.U. approach: How this aligns, where it differs

This framework aligns with the E.U. AI Act in two important ways. First, it centers risk at the point of impact: The act’s “high-risk” categories include employment, education, access to essential services, and critical infrastructure, with life-cycle obligations and complaint rights. Second, it recognizes special treatment for broadly capable general-purpose AI (GPAI) systems without pretending that publication control is a safety strategy. My proposal for the United States differs in three key ways:

First, the U.S. must design for constitutional durability. Courts have treated source code as protected speech, and a regime that requires permission to publish weights or train a class of models starts to resemble prior restraint. A use-based regime of rules governing what AI operators can do in sensitive settings, and under what conditions, fits more naturally within U.S. First Amendment doctrine than speaker-based licensing schemes do.

Second, the E.U. can rely on platforms adapting to the precautionary rules it writes for its unified single market. The U.S. should accept that models will exist globally, both open and closed, and focus on where AI becomes actionable: app stores, enterprise platforms, cloud providers, enterprise identity layers, payment rails, insurers, and regulated-sector gatekeepers (hospitals, utilities, banks). Those are enforceable points where identity, logging, capability gating, and…

