REF: EXEC_SUMMARY // BILL_v1.6
THE PROOF-OF-SAFETY STACK
A Relational Framework for AI Governance
Alex Galle-From — February 2026
EXECUTIVE SUMMARY
We are approaching the Authentication Cliff: the moment when synthetic media and autonomous agents flood our legal and economic channels, rendering traditional verification impossible.
The current regulatory debate—often focused on detecting "consciousness" or defining "intent"—is a category error. We cannot govern what we cannot verify.
To integrate AI safely into high-stakes environments, we must shift from regulating intrinsic properties (like sentience) to regulating relational structures (like accountability). This White Paper proposes a substrate-agnostic framework for AI governance: The Proof-of-Safety Stack.
THE ARCHITECTURE OF ACCOUNTABILITY
To pass a "Legal Turing Test" (that is, to function as a responsible legal entity), an AI system requires more than intelligence; it requires a stack of accountability infrastructure.
The foundation is persistent, verifiable identity. An AI system must be identifiable across time and context in ways that resist spoofing. This requires cryptographic attestation—keys secured in Trusted Execution Environments (TEEs) that prove the system is what it claims to be.
- →Identity must be 'soulbound' (non-transferable and linked to the specific instance)
- →Behavioral fingerprinting detects 'container transfer' by flagging discontinuities in decision signatures
Statutory Implementation: The Minnesota Act defines "Digital Representative" as any computational process with a verifiable association to a Controller and the technical capability to control a Holder-Bound Key (§ 325M.01, Subd. 5). The "Controller" definition includes a Beneficial Ownership Pierce—natural persons behind shell entities remain jointly and severally liable regardless of corporate layering. The Autonomous Asset Trust Safe Harbor (Subd. 6(f)) allows a posted bond to stand in for a human Controller.
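These identity mechanisms can be sketched in a few lines. The following is a minimal illustration, not an implementation: the fingerprint registry, the instance IDs, and the treatment of a "decision signature" as a single scalar statistic are all invented for clarity. In practice the key would be attested inside a TEE and the fingerprinting would operate on far richer features.

```python
import hashlib
import hmac
import statistics

# Hypothetical registry mapping an instance ID to the fingerprint of
# its soulbound (non-transferable) key.
REGISTRY = {"agent-001": hashlib.sha256(b"agent-001-public-key").hexdigest()}

def verify_identity(instance_id: str, public_key: bytes) -> bool:
    """Check that the presented key matches the registered soulbound fingerprint."""
    fingerprint = hashlib.sha256(public_key).hexdigest()
    expected = REGISTRY.get(instance_id)
    return expected is not None and hmac.compare_digest(fingerprint, expected)

def flag_container_transfer(signature_history: list[float],
                            new_signature: float,
                            threshold: float = 3.0) -> bool:
    """Flag a discontinuity in decision signatures: the new observation
    sits far outside the historical distribution (z-score heuristic)."""
    mean = statistics.fmean(signature_history)
    stdev = statistics.stdev(signature_history) or 1.0
    return abs(new_signature - mean) / stdev > threshold
```

The point of the sketch is the pairing: the cryptographic check proves the key is the right one, while the behavioral check catches a different system operating behind a stolen key.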
Formal methods provide mathematical proofs for boundary conditions. While we cannot formally verify all neural output, we can verify the sandbox.
- →The system provably cannot exfiltrate data or exceed authorized permissions
- →A hard floor of safety beneath the probabilistic behavioral domain
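The verified-sandbox idea can be illustrated with a toy gate. The permission names below are hypothetical; the point is that the formal proof covers this boundary, not the model behind it.

```python
# Hypothetical allowlist enforced at the sandbox boundary. The gate is
# small enough to verify exhaustively, even when the model is not.
AUTHORIZED_ACTIONS = {"read:public-records", "write:own-ledger"}

def execute(action: str) -> str:
    """Reject any requested action outside the verified permission set."""
    if action not in AUTHORIZED_ACTIONS:
        raise PermissionError(f"{action!r} exceeds authorized permissions")
    return f"executed {action}"
```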
Statutory Implementation: The Act requires Attribute Credentials to be structured in compliance with the W3C Verifiable Credentials Data Model v2.0 or equivalent open standards (§ 325M.01, Subd. 3). The Halt Command (§ 325M.02, Subd. 3(e)) provides a cryptographically authenticated kill switch for high-velocity fiscal authority credentials.
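The shape of such a credential can be shown against the W3C Verifiable Credentials Data Model v2.0. Every value below is invented for illustration; only the required properties and the base context URL come from the data model itself.

```python
import json

# Illustrative Attribute Credential shaped after the W3C Verifiable
# Credentials Data Model v2.0. All identifiers and values are hypothetical.
credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "AttributeCredential"],
    "issuer": "did:example:controller-123",
    "validFrom": "2026-02-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:agent-001",
        "fiscalAuthorityLimit": 10000,
    },
}

REQUIRED = {"@context", "type", "issuer", "credentialSubject"}

def is_well_formed(vc: dict) -> bool:
    """Structural check: the required v2.0 properties are present and the
    base context URL appears first, as the data model requires."""
    return (REQUIRED <= vc.keys()
            and vc["@context"][0] == "https://www.w3.org/ns/credentials/v2")
```

A structural check like this is the floor, not the ceiling: a conforming verifier would also validate the cryptographic proof attached to the credential.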
The operator must post assets at risk ("skin in the game") that are subject to slashing if harm occurs.
- →Transforms abstract liability into concrete resources
- →Aligns the system's optimization trajectory with safety
Statutory Implementation: The Solvency Bond requirement (§ 325M.01, Subd. 8) mandates capital reserves proportional to risk exposure. Liability attaches for false Factual Claims if the issuer failed Reasonable Verification—but the Verified-Source Safe Harbor protects claims matching Designated Authoritative Sources at issuance.
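Bond sizing and slashing under these rules can be sketched as follows. The 5% risk factor is an invented placeholder; only the $10,000 Seed Tier floor comes from the Act as summarized in this paper.

```python
SEED_TIER_FLOOR = 10_000  # $10,000 Seed Tier floor cited in the paper

def required_bond(expected_exposure: float, risk_factor: float = 0.05) -> float:
    """Capital reserve proportional to risk exposure, never below the
    Seed Tier floor. The 5% factor is illustrative, not statutory."""
    return max(SEED_TIER_FLOOR, expected_exposure * risk_factor)

def slashable_amount(bond: float, assessed_harm: float) -> float:
    """Slashing turns abstract liability into concrete resources,
    capped at the posted bond."""
    return min(bond, assessed_harm)
```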
Individual staking is insufficient for catastrophic tail risks. Mutual Assurance pools allow multiple operators to collectively guarantee each other's behavior.
- →If one member causes harm, the pool covers it
- →Creates decentralized peer accountability—pool members have financial incentive to monitor each other
Statutory Implementation: The Minnesota Digital Assurance Guaranty Association (§ 325M.05) is an industry-funded backstop. Association Coverage of Last Resort is available for issuers who cannot obtain private bonding on commercially reasonable terms. No state appropriation—self-supporting pricing.
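The pool mechanics can be sketched as follows, assuming equal-seniority members and pro-rata loss sharing (an assumption for illustration; the Act's allocation rules may differ).

```python
def cover_harm(stakes: dict[str, float], defector: str,
               harm: float) -> dict[str, float]:
    """Slash the defecting member's stake first; spread any shortfall
    across the remaining members pro-rata to their stakes."""
    stakes = dict(stakes)  # work on a copy
    paid = min(stakes[defector], harm)
    stakes[defector] -= paid
    shortfall = harm - paid
    if shortfall > 0:
        others_total = sum(v for k, v in stakes.items() if k != defector)
        for k in stakes:
            if k != defector and others_total > 0:
                stakes[k] -= shortfall * stakes[k] / others_total
    return stakes
```

The pro-rata clawback is what creates the peer-monitoring incentive: every member's balance is exposed to every other member's failure.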
Reputation provides long-term consequences for short-term defection. Trusted systems gain access; untrustworthy systems are excluded.
- →Verifiable, persistent records of past performance
- →'Character' revealed through immutable action history
Statutory Implementation: The Fraud Prevention Registry (§ 325M.02, Subd. 4) creates a privacy-preserving reputation system. Nonreversible Identity Tokens prevent raw identifier storage. The registry is maintained solely for fraud prevention—explicitly not a consumer report (FCRA containment). Bad actors cannot escape consequences by creating new shell entities.
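Nonreversible Identity Tokens can be approximated with a keyed one-way function: the registry stores only a keyed digest of an identifier, so raw identifiers never appear in its database, while repeat submissions of the same identifier still match. The key below is a placeholder; a real registry would manage it in dedicated hardware.

```python
import hashlib
import hmac

# Placeholder key for illustration only.
REGISTRY_KEY = b"hypothetical-registry-key"

def identity_token(raw_identifier: str) -> str:
    """Nonreversible Identity Token: a keyed one-way digest. The registry
    can link repeat submissions without ever storing the raw identifier."""
    return hmac.new(REGISTRY_KEY, raw_identifier.encode(),
                    hashlib.sha256).hexdigest()
```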
Traditional markets price the residual risk. Insurers become de facto regulators, requiring specific safety measures as conditions for coverage. The industry association provides the final backstop.
- →Insurers price risk and mandate safety measures
- →Assigned risk pools for essential but uninsurable systems
Statutory Implementation: Technological Neutrality (§ 325M.05, Subd. 6): Licensing decisions must be based on solvency, verification, and compliance criteria—not on computational architecture or degree of automation. The Seed Tier ($10,000 floor) ensures Minnesota startups can innovate without prohibitive bonding requirements.
THE HOLDING
The Minnesota Model demonstrates that we do not need to solve the "Hard Problem of Consciousness" to solve the "Hard Problem of Governance."
The traditional regulatory frame asks: What is this thing? Is it intelligent? Does it have rights?
The substrate-agnostic frame asks: Can this thing be held accountable? Who pays when it fails? What structures make aligned behavior the dominant strategy?
We do not need to know if an AI can feel. We need to know it can pay.