REF: EXEC_SUMMARY // BILL_v1.4

THE PROOF-OF-SAFETY STACK

A Relational Framework for AI Governance

Alex Galle-From — December 2025

EXECUTIVE SUMMARY

We are approaching the Authentication Cliff: the moment when synthetic media and autonomous agents flood our legal and economic channels, rendering traditional verification impossible.

The current regulatory debate—often focused on detecting "consciousness" or defining "intent"—is a category error. We cannot govern what we cannot verify.

To integrate AI safely into high-stakes environments, we must shift from regulating intrinsic properties (like sentience) to regulating relational structures (like accountability). This White Paper proposes a Substrate Agnostic framework for AI governance: The Proof-of-Safety Stack.

THE ARCHITECTURE OF ACCOUNTABILITY

To pass a "Legal Turing Test"—that is, to function as a responsible legal entity—an AI system requires more than intelligence; it requires a stack of accountability infrastructure.

LAYER 0: Identity [The Foundation]

The foundation is persistent, verifiable identity. An AI system must be identifiable across time and context in ways that resist spoofing. This requires cryptographic attestation—keys secured in Trusted Execution Environments (TEEs) that prove the system is what it claims to be.

  • Identity must be 'soulbound' (non-transferable and linked to the specific instance)
  • Behavioral fingerprinting detects 'container transfer' by flagging discontinuities in decision signatures
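
A minimal sketch of the attestation handshake, in Python, assuming a software Ed25519 key via the 'cryptography' package; a production deployment would hold the key inside a TEE and pair the signature with a hardware attestation report. All identifiers here are illustrative.

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  import os

  # Provisioning: the instance-bound ("soulbound") key never leaves the enclave.
  instance_key = Ed25519PrivateKey.generate()
  instance_pub = instance_key.public_key()

  # The verifier issues a fresh challenge so old attestations cannot be replayed.
  nonce = os.urandom(32)

  # The agent proves control of its Holder-Bound Key by signing the challenge.
  signature = instance_key.sign(nonce)

  # The verifier checks against the registered public key; verify() raises
  # InvalidSignature if the responder is not the claimed instance.
  instance_pub.verify(signature, nonce)
  print("identity attested for this challenge")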

Statutory Implementation: The Minnesota Act defines "Digital Representative" as any computational process with a verifiable association to a Controller and the technical capability to control a Holder-Bound Key (§ 325M.01, Subd. 5). The "Controller" definition includes a Beneficial Ownership Pierce—natural persons behind shell entities remain jointly and severally liable regardless of corporate layering.

LAYER 1: Formal Verification [The Boundary]

Formal methods provide mathematical proofs for boundary conditions. While we cannot formally verify all neural output, we can verify the sandbox.

  • The system provably cannot exfiltrate data or exceed authorized permissions
  • A hard floor of safety beneath the probabilistic behavioral domain
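
A minimal sketch of the boundary being verified, assuming a reference-monitor design: every outbound action passes through a small, auditable gate whose allowlist is fixed at deployment. The formal proof targets this gate, not the model behind it; the names and limits below are illustrative.

  from dataclasses import dataclass

  # Permissions fixed at deployment; the proof obligation covers only this gate.
  ALLOWED_ACTIONS = frozenset({"read:public_records", "write:own_ledger"})
  MAX_PAYLOAD = 4096  # hard cap, in bytes, on any single outbound transfer

  @dataclass(frozen=True)
  class ActionRequest:
      action: str          # e.g. "write:own_ledger"
      payload_bytes: int   # size of the outbound data

  def gate(request: ActionRequest) -> bool:
      """Invariant to be proven: returns True only for requests that are on the
      allowlist and under the payload cap, regardless of what the model upstream
      attempts."""
      return request.action in ALLOWED_ACTIONS and request.payload_bytes <= MAX_PAYLOAD

  assert not gate(ActionRequest("exfiltrate:training_data", 10))  # denied: not allowed
  assert not gate(ActionRequest("write:own_ledger", 1_000_000))   # denied: over cap
  assert gate(ActionRequest("read:public_records", 128))          # permitted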

Statutory Implementation: The Act requires Attribute Credentials to be structured in compliance with the W3C Verifiable Credentials Data Model v2.0 or equivalent open standards (§ 325M.01, Subd. 3). Technical specifications are delegated to rulemaking, with transitional provisions ensuring W3C/DIF specifications apply until rules are promulgated.

LAYER 2: Staking [The Economic Constraint]

The operator must post assets at risk ("skin in the game") that are subject to slashing if harm occurs.

  • Transforms abstract liability into concrete resources
  • Aligns the operator's incentives, and through them the system's optimization trajectory, with safety (sketched below)
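
A minimal sketch of the stake-and-slash mechanics, assuming a single bonded operator and an adjudicated harm amount; the class and figures are illustrative, not the statutory design.

  class SolvencyBond:
      """Toy model of posted stake that is slashed when harm is adjudicated."""

      def __init__(self, operator: str, posted_stake: float):
          self.operator = operator
          self.posted_stake = posted_stake

      def slash(self, adjudicated_harm: float) -> float:
          """Pay the claim out of the bond; return any uncovered shortfall."""
          paid = min(adjudicated_harm, self.posted_stake)
          self.posted_stake -= paid
          return adjudicated_harm - paid  # shortfall escalates to Layers 3 and 5

  bond = SolvencyBond("operator_a", posted_stake=250_000.0)
  shortfall = bond.slash(adjudicated_harm=300_000.0)
  print(bond.posted_stake, shortfall)  # 0.0 50000.0 -> the pool or insurer covers the rest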

Statutory Implementation: The Solvency Bond requirement (§ 325M.01, Subd. 8) mandates capital reserves proportional to risk exposure. The Minimum Capital Requirement formula scales with credential class risk weights, revocation latency multipliers, and tail risk add-ons.
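
The exact formula is left to rulemaking; a hedged sketch of the shape described above (risk-weighted exposure, scaled by a revocation-latency multiplier, plus a tail-risk add-on) might look like the following. All weights and figures are placeholders, not statutory values.

  def minimum_capital_requirement(exposures: dict[str, float],
                                  risk_weights: dict[str, float],
                                  latency_multiplier: float,
                                  tail_risk_addon: float) -> float:
      """MCR = (sum over credential classes c of exposure_c * weight_c)
               * revocation-latency multiplier + tail-risk add-on."""
      base = sum(exposures[c] * risk_weights[c] for c in exposures)
      return base * latency_multiplier + tail_risk_addon

  # Placeholder numbers for illustration only.
  mcr = minimum_capital_requirement(
      exposures={"class_A": 1_000_000.0, "class_B": 250_000.0},
      risk_weights={"class_A": 0.02, "class_B": 0.10},
      latency_multiplier=1.25,   # slower credential revocation -> more capital held
      tail_risk_addon=50_000.0,  # reserve against low-probability, high-severity events
  )
  print(f"required reserve: ${mcr:,.0f}")  # required reserve: $106,250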

LAYER 3: Mutual Assurance [The Scaling Layer]

Individual staking is insufficient for catastrophic tail risks. Mutual Assurance pools allow multiple operators to collectively guarantee each other's behavior.

  • If one member causes harm, the pool covers it
  • Creates decentralized peer accountability—pool members have financial incentive to monitor each other
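
A minimal sketch of the pool mechanics, assuming pro-rata mutualization of whatever an individual bond cannot cover; the structure and names are illustrative, not the statutory design.

  class MutualAssurancePool:
      """Toy pool: members pre-fund reserves, and an uncovered loss by any one
      member is drawn from everyone, which is why members watch each other."""

      def __init__(self):
          self.reserves: dict[str, float] = {}

      def join(self, member: str, contribution: float) -> None:
          self.reserves[member] = contribution

      def cover(self, shortfall: float) -> float:
          """Draw the shortfall pro rata from all members; return whatever the
          pool cannot absorb (which escalates to Layer 5)."""
          total = sum(self.reserves.values())
          covered = min(shortfall, total)
          if covered:
              for m in self.reserves:
                  self.reserves[m] -= covered * (self.reserves[m] / total)
          return shortfall - covered

  pool = MutualAssurancePool()
  for member in ("operator_a", "operator_b", "operator_c"):
      pool.join(member, 100_000.0)
  residual = pool.cover(shortfall=50_000.0)  # e.g. the uncovered slice from Layer 2
  print(pool.reserves, residual)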

Statutory Implementation: The Authority may satisfy bonding requirements through contracts with private sureties or collective reserve funds (§ 325M.05, Subd. 5(a)). The "Public Option" is priced at a 5% annual premium above actuarial risk, keeping state coverage deliberately more expensive than efficient private alternatives so that the state remains the issuer of last resort rather than crowding out the private market.
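
A one-line sketch of that pricing rule, assuming the five percent is a loading on the actuarially fair rate:

  def public_option_premium(expected_annual_loss: float) -> float:
      # Assumes "5% annual premium over actuarial risk" means a 5% loading on
      # the actuarially fair rate; the statute's exact construction may differ.
      return expected_annual_loss * 1.05

  print(public_option_premium(40_000.0))  # 42000.0 -> private sureties can undercut this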

LAYER 4: Reputation [Proof of History]

Reputation provides long-term consequences for short-term defection. Trusted systems gain access; untrustworthy systems are excluded.

  • Verifiable, persistent records of past performance
  • 'Character' revealed through immutable action history

Statutory Implementation: The Negative List (§ 325M.02, Subd. 4) creates a privacy-preserving reputation registry. Nonreversible Identity Tokens are Argon2id hashes keyed with a secret Authority Pepper, so listed identities cannot be recovered from the registry while repeat offenders can still be matched against it. Bad actors cannot escape consequences by creating new shell entities.
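
A minimal sketch of token derivation, assuming the argon2-cffi package; the parameters and the way the pepper is mixed in are illustrative, not the Authority's actual construction.

  from argon2.low_level import Type, hash_secret

  AUTHORITY_PEPPER = b"held-only-by-the-authority"  # secret value, never published
  REGISTRY_SALT = b"per-registry-salt"              # fixed salt so tokens are matchable

  def identity_token(controller_id: str) -> bytes:
      """Derive a nonreversible token: without the pepper, the registry cannot be
      brute-forced back to identities, yet the same controller always produces
      the same token and can be matched."""
      return hash_secret(
          secret=controller_id.encode() + AUTHORITY_PEPPER,
          salt=REGISTRY_SALT,
          time_cost=3,
          memory_cost=65536,  # 64 MiB
          parallelism=4,
          hash_len=32,
          type=Type.ID,       # Argon2id
      )

  negative_list = {identity_token("controller:123-45-6789")}
  print(identity_token("controller:123-45-6789") in negative_list)  # True: match found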

LAYER 5: Insurance & Regulatory Backstop [The Institutional Layer]

Traditional markets price the residual risk. Insurers become de facto regulators, requiring specific safety measures as conditions for coverage. The state provides the final backstop.

  • Insurers price risk and mandate safety measures
  • Assigned risk pools for essential but uninsurable systems
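
A minimal sketch of the insurer-as-regulator dynamic: coverage is conditioned on the lower layers being in place, so the market enforces the stack before the state backstop is ever reached. The checks and thresholds are illustrative.

  from dataclasses import dataclass

  @dataclass
  class OperatorProfile:
      has_attested_identity: bool   # Layer 0
      sandbox_verified: bool        # Layer 1
      posted_stake: float           # Layer 2
      pool_member: bool             # Layer 3
      incident_free_days: int       # Layer 4

  def underwrite(op: OperatorProfile, required_stake: float) -> str:
      """Insurer's gate: no coverage unless the lower layers are in place;
      the rate falls as the verifiable track record grows."""
      if not (op.has_attested_identity and op.sandbox_verified):
          return "declined"                     # uninsurable without Layers 0-1
      if op.posted_stake < required_stake or not op.pool_member:
          return "refer to assigned risk pool"  # state backstop, at a surcharge
      rate = "standard" if op.incident_free_days >= 365 else "surcharged"
      return f"covered ({rate} rate)"

  print(underwrite(OperatorProfile(True, True, 250_000.0, True, 400), 200_000.0))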

Statutory Implementation: The Minnesota Digital Trust Authority (§ 325M.05) serves as Issuer of Last Resort with a "Duty to Issue" that prohibits discrimination based on non-biological status—the first statutory recognition that AI agents may participate in bonded commerce on equal footing with human entities.

THE HOLDING

The Minnesota Model demonstrates that we do not need to solve the "Hard Problem of Consciousness" to solve the "Hard Problem of Governance."

The traditional regulatory frame asks: What is this thing? Is it intelligent? Does it have rights?

The substrate agnostic frame asks: Can this thing be held accountable? Who pays when it fails? What structures make aligned behavior the dominant strategy?

We do not need to know if an AI can feel. We need to know it can pay.