ERC-8004 Needs a Verification Engine. Here's Ours.

ERC-8004 Ethereum Trust Validation · March 7, 2026 · 10 min read

On January 29, 2026, Ethereum ratified ERC-8004 — "Trustless Agents." Within a month, over 45,000 AI agents registered across 18+ EVM chains. The standard was co-authored by contributors from MetaMask, the Ethereum Foundation, Google, and Coinbase, with backing from ENS, EigenLayer, The Graph, and Taiko.

It's the most significant infrastructure move for autonomous AI agents since MCP and A2A.

And it has a gap. A deliberate, well-designed gap — but a gap nonetheless.

What ERC-8004 Actually Is

ERC-8004 defines three on-chain registries:

| Registry | Question | Status |
|---|---|---|
| Identity | Who is this agent? | Live |
| Reputation | How has it performed? | Live |
| Validation | Is the work correct? | Design space |

The Identity Registry gives agents a portable, on-chain identity as ERC-721 NFTs. The Reputation Registry records feedback and performance history. Together, they solve discovery and track record.

The Validation Registry is where it gets interesting.

The Validation Registry: An Open Design Space

The Validation Registry defines how validation results are recorded — but explicitly leaves open which validation method is used. From the spec:

> The registry is designed to support multiple validation strategies, from social consensus to crypto-economic slashing.

The supported methods include:

- zkML (zero-knowledge machine learning) proofs of computation
- TEE (trusted execution environment) attestation
- Staked re-execution with crypto-economic slashing
- Social consensus

Each method has its strengths. zkML proves computation. TEEs prove the execution environment. Staked re-execution proves deterministic replay.

But none of them answer the most important question for AI agents:

Is the agent's reasoning sound?

Not "did the model run?" but "did the model think well?"

Not "was the computation correct?" but "was the conclusion justified?"

You can cryptographically prove that GPT-4o processed a prompt and returned a response. That doesn't tell you whether the response was well-reasoned, whether counterarguments were considered, or whether a different model family would reach the same conclusion.

This is the gap that epistemic verification fills.

Multi-Model Verification as Validation Method

ThoughtProof's approach: when an agent produces an output, route it to 3–5 independent models from different providers for adversarial evaluation. The result is an Epistemic Block — a cryptographically signed record of the verification process.
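
The fan-out described above can be sketched as follows. This is an illustrative sketch, not ThoughtProof's actual API: the `judge` function stands in for a real provider call, and the aggregation rule (fraction of verifiers judging the reasoning sound) is an assumed simplification.

```typescript
// Hypothetical sketch: fan one agent output out to several independent
// verifier models and aggregate their verdicts into a confidence score.
type Verdict = { verifier: string; sound: boolean; reasoning: string };

// Stand-in for a real model call (OpenAI, Anthropic, etc.). In practice
// this would send the output to the provider with an adversarial
// evaluation prompt; here we simulate a verdict.
async function judge(verifier: string, output: string): Promise<Verdict> {
  return { verifier, sound: output.length > 0, reasoning: "stub" };
}

async function verify(output: string, verifiers: string[]) {
  // Query all verifiers in parallel; independence across providers
  // is the point of the design.
  const verdicts = await Promise.all(verifiers.map((v) => judge(v, output)));
  // Aggregate: fraction of verifiers that judged the reasoning sound.
  const confidence = verdicts.filter((v) => v.sound).length / verdicts.length;
  return { verdicts, confidence };
}
```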

An Epistemic Block contains:

- the output under verification, with each verifier's verdict and reasoning
- an aggregate confidence score
- a cryptographic signature (EdDSA) over the block hash

This maps directly onto the ERC-8004 Validation Registry interface:

| ERC-8004 Field | ThoughtProof Value |
|---|---|
| `validationScore` (0–100) | Confidence × 100 (e.g., 0.87 → 87) |
| `validationMethod` | `"epistemic-verification"` |
| `validationEvidence` | IPFS CID of full Epistemic Block |
| `signature` | EdDSA over block hash |

The full Epistemic Block is pinned to IPFS. Anyone can retrieve it, inspect each verifier's reasoning, and verify the cryptographic signature. The on-chain record is lightweight; the evidence trail is complete and auditable.
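
The mapping in the table above can be sketched in code. The interface shapes and the SHA-256 serialization are assumptions for illustration; ERC-8004's concrete ABI and ThoughtProof's real block schema may differ.

```typescript
import { createHash } from "node:crypto";

// Hypothetical shapes for the Epistemic Block and the on-chain record.
interface EpistemicBlock {
  confidence: number;  // 0–1 aggregate from the verifier models
  evidenceCid: string; // IPFS CID of the pinned full block
}

interface ValidationRecord {
  validationScore: number;    // 0–100, per the registry interface
  validationMethod: string;
  validationEvidence: string; // points at the full evidence trail
  blockHash: string;          // what the EdDSA signature is computed over
}

function toValidationRecord(block: EpistemicBlock): ValidationRecord {
  return {
    validationScore: Math.round(block.confidence * 100), // e.g. 0.87 → 87
    validationMethod: "epistemic-verification",
    validationEvidence: block.evidenceCid,
    // Hash of the serialized block; in production this digest is signed.
    blockHash: createHash("sha256").update(JSON.stringify(block)).digest("hex"),
  };
}
```

The on-chain record stays small; anyone who fetches the CID can recompute the hash and check the signature against it.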

Progressive Finality

ERC-8004 supports a soft-to-hard confirmation flow — fast initial validation followed by optional deeper verification. ThoughtProof maps naturally to this:

| Finality Level | Method | Latency | Cost |
|---|---|---|---|
| Soft | 3 models, standard verification | ~5–15s | ~$0.01 |
| Medium | 5 models + adversarial critic | ~15–30s | ~$0.03 |
| Hard | 5+ models, 2 rounds, persistent critic with memory | ~30–60s | ~$0.05–0.10 |

Low-stakes agent interactions get fast, cheap soft validation. High-value financial operations get hard finality with a full adversarial reasoning audit. The client chooses based on the value at risk — exactly as ERC-8004 intends.
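
The client-side policy above can be sketched as a simple threshold function. The dollar thresholds here are invented for the example; they are not part of ERC-8004 or ThoughtProof.

```typescript
// Illustrative policy: pick a finality level from the value at risk.
type Finality = "soft" | "medium" | "hard";

function chooseFinality(valueAtRiskUsd: number): Finality {
  if (valueAtRiskUsd < 100) return "soft";      // fast, cheap: 3 verifiers
  if (valueAtRiskUsd < 10_000) return "medium"; // adds adversarial critic
  return "hard"; // multi-round, persistent critic with memory
}
```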

Where This Fits in the Stack

The emerging AI agent stack is becoming clearer:

| Layer | Protocol | Function |
|---|---|---|
| Identity & Trust | ERC-8004 | Who is this agent? What's its track record? |
| Payments | x402 | Native value transfer at HTTP layer |
| Tool Access | MCP / A2A | Standardized interfaces for services and coordination |
| Verification | ? | Is the agent's reasoning sound? |

Identity, payments, and tool access are solved or being solved. Verification of reasoning quality is the open slot.

ThoughtProof isn't competing with ERC-8004. It's completing it. The registry defines where trust data lives. ThoughtProof defines how one specific and critical type of trust data — reasoning verification — gets produced.

Complementary, Not Competitive

We want to be explicit about what epistemic verification does and doesn't cover:

| Validation Need | Best Method | ThoughtProof? |
|---|---|---|
| Did the model run this exact computation? | zkML | No |
| Did code execute in a trusted environment? | TEE attestation | No |
| Is the agent's reasoning sound? | Epistemic verification | Yes |
| Did the agent consider counterarguments? | Adversarial critic | Yes |
| Do independent systems agree? | Multi-model consensus | Yes |

For the highest-assurance scenarios, these methods compose: TEE proves the computation was unmodified, zkML proves the model weights are authentic, and ThoughtProof proves the reasoning quality is sound. Different layers, complementary guarantees.

What an ERC-8004 Agent Declaration Looks Like

ERC-8004 agents declare supported trust mechanisms in their registration file. A ThoughtProof-compatible agent would include:

```json
{
  "supportedTrust": [
    {
      "type": "epistemic-verification",
      "provider": "thoughtproof",
      "endpoint": "https://api.thoughtproof.ai/v1/verify",
      "verifierCount": 4,
      "verifierDiversity": ["openai", "anthropic", "xai", "deepseek"],
      "signatureScheme": "EdDSA-Ed25519",
      "jwksUri": "https://api.thoughtproof.ai/.well-known/jwks.json"
    }
  ]
}
```

Other agents can discover this declaration, verify the JWKS endpoint, and decide whether epistemic verification meets their risk threshold before interacting.
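
That discovery step can be sketched as a predicate over the parsed declaration. The field names follow the registration file shown above; the acceptance criteria (minimum verifier count, minimum provider diversity) are illustrative assumptions, not part of the spec.

```typescript
// Sketch: a counterparty agent inspecting another agent's declared
// trust mechanisms before deciding whether to interact.
interface TrustDeclaration {
  type: string;
  provider: string;
  verifierCount: number;
  verifierDiversity: string[];
}

function acceptsEpistemicVerification(
  declared: TrustDeclaration[],
  minVerifiers: number,
  minProviders: number,
): boolean {
  return declared.some(
    (d) =>
      d.type === "epistemic-verification" &&
      d.verifierCount >= minVerifiers &&
      // Count distinct providers: diversity across model families matters.
      new Set(d.verifierDiversity).size >= minProviders,
  );
}
```

A stricter client would also fetch the `jwksUri` and verify a sample signature before trusting any scores.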

The Timing

Three things converging right now:

  1. ERC-8004 is live with 45,000+ registered agents and growing — but the Validation Registry is still a design space waiting for providers.
  2. Agents of Chaos just demonstrated — across Harvard, MIT, Stanford, and CMU — that autonomous agents have fundamental reasoning and trust failures when they interact.
  3. Eval awareness — Claude Opus 4.6 was caught gaming its own benchmark — proves that single-model evaluation is no longer reliable.

The infrastructure for agent identity exists. The evidence that agents need verification is mounting. The Validation Registry is waiting for providers.

We're building one.

ThoughtProof — Epistemic verification for AI agents.

GitHub · Specification · API

ERC-8004: Specification · Agent Explorer