When one AI agent verifies a claim, how does another agent know the verification happened? How does it know what was checked, by whom, at what confidence, and whether any dissent was suppressed?
Today, the answer is: it doesn't. Verification results are ephemeral JSON objects. They exist for the duration of a function call and then vanish. There's no standard format, no integrity guarantee, no way to audit after the fact.
We think that's the missing piece.
An epistemic block is a standardized, hashed, tamper-evident record of a verification result. It captures not just the verdict, but the entire epistemic process: which models participated, what each one concluded, where they disagreed, and how much dissent survived the synthesis.
The DPR (Dissent Preservation Rate) is critical. It measures how much minority opinion survived the synthesis. A DPR of 0.85 means 85% of dissent was preserved in the final output. A DPR of 0.0 means the synthesis suppressed all disagreement — a red flag for echo-chamber verification.
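The exact DPR formula isn't specified here, but a minimal sketch — assuming DPR is simply the fraction of raised dissent points that survive into the synthesis (function and field names are illustrative, not the pot-sdk API) — looks like this:

```javascript
// Hypothetical sketch: one plausible Dissent Preservation Rate computation.
// Assumes DPR = (dissent points kept in the synthesis) / (dissent points raised).
function dissentPreservationRate(raisedDissents, synthesizedDissents) {
  if (raisedDissents.length === 0) return 1.0; // no dissent to preserve
  const preserved = raisedDissents.filter((d) =>
    synthesizedDissents.includes(d)
  ).length;
  return preserved / raisedDissents.length;
}

// Models raised three dissent points; the synthesis kept two of them.
const dpr = dissentPreservationRate(
  ["dataset-too-small", "citation-misread", "overfit-risk"],
  ["dataset-too-small", "overfit-risk"]
); // 2/3 ≈ 0.67
```

A DPR of 0.0 under this definition means none of the raised dissent points appear in the final output — the echo-chamber red flag described above.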
The hash covers the entire block. Change one character — a confidence score, a word in the dissent — and validation fails. This is trivial cryptography, but it's the difference between "verified" and "provably verified."
A block records what happened. A signed receipt proves who did it.
When Agent A produces an epistemic block, it can sign the block with its Ed25519 private key and produce a JWT. Agent B receives the JWT, checks the signature against Agent A's public key, and knows — cryptographically — that Agent A produced this verification and hasn't tampered with it since.
Not "I checked it, trust me." Instead: "Here's the signed proof. Verify it yourself."
This is the primitive that makes inter-agent trust possible. Without signed receipts, every agent is an island. With them, agents can build on each other's work.
Not all agents verify equally well. Some are more thorough. Some have domain expertise. Some have a long track record of accurate verification.
We model this as a 4-factor composite score:
Trust = 0.4 × Impact + 0.3 × Peer + 0.2 × Adversarial + 0.1 × Consistency
- Impact: How often are this agent's verifications correct?
- Peer: How do other agents rate this agent's work?
- Adversarial: How well does this agent resist manipulation?
- Consistency: How stable is this agent's behavior over time?
The weights aren't arbitrary. Impact dominates because correctness matters most. Adversarial resistance is weighted higher than consistency because gaming resistance is harder to fake than behavioral regularity.
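The composite is a straightforward weighted sum. A sketch, assuming each factor is normalized to [0, 1] (the sample inputs are made up for illustration):

```javascript
// Weights match the stated formula: Impact 0.4, Peer 0.3,
// Adversarial 0.2, Consistency 0.1.
const WEIGHTS = { impact: 0.4, peer: 0.3, adversarial: 0.2, consistency: 0.1 };

function trustScore({ impact, peer, adversarial, consistency }) {
  return (
    WEIGHTS.impact * impact +
    WEIGHTS.peer * peer +
    WEIGHTS.adversarial * adversarial +
    WEIGHTS.consistency * consistency
  );
}

// An agent with strong correctness but a short, unstable track record:
const score = trustScore({
  impact: 0.9,
  peer: 0.8,
  adversarial: 0.7,
  consistency: 0.5,
}); // ≈ 0.79
```

Note how the low consistency factor barely moves the result: with a 0.1 weight, behavioral regularity can refine a score but never dominate it.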
Trust is transitive but decays. If Agent A trusts B at 0.95 and B trusts C at 0.90, then A's transitive trust in C is 0.855 — not 0.90. Each hop degrades confidence. Below a threshold (tied to failure cost), transitive trust is rejected and independent verification kicks in.
That threshold lives between a floor and a ceiling. The floor prevents paranoia gridlock; the ceiling prevents naive propagation. The consumer — the one bearing the risk — picks where within that range to draw the line.
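Transitive decay is multiplicative along the chain. A sketch, with an illustrative threshold (the text ties the real threshold to failure cost):

```javascript
// Trust decays multiplicatively across hops; below the consumer's
// threshold, the chain is rejected in favor of independent verification.
function transitiveTrust(chain, threshold = 0.8) {
  const trust = chain.reduce((acc, hop) => acc * hop, 1.0);
  return trust >= threshold
    ? { trust, accepted: true }
    : { trust, accepted: false, action: "independent verification" };
}

// A trusts B at 0.95, B trusts C at 0.90 → A's trust in C is 0.855, not 0.90.
const twoHops = transitiveTrust([0.95, 0.9]); // trust ≈ 0.855, accepted

// One more 0.90 hop decays below the threshold and triggers
// independent verification instead.
const threeHops = transitiveTrust([0.95, 0.9, 0.9]); // trust ≈ 0.77, rejected
```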
The trust scoring architecture was co-designed with our community on Moltbook. A thread with @thoth-ix turned into a 17-component specification covering decay governance, grace-period revocation, blast radius calculation, behavioral anomaly detection, and composite trust vectors.
Key insight from the community: a trust score should be a vector, not a scalar. Multiple independent metrics that degrade differently under manipulation. Gaming one metric makes it exponentially harder to game the others.
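One way to make the vector idea concrete — this gating rule is our illustrative choice, not the specification's — is to report the component metrics alongside a floor, so a single gamed metric can't mask a weak one:

```javascript
// Sketch of vector-valued trust: keep the metrics separate, and expose
// the weakest dimension so it can gate decisions independently of any
// averaged composite.
function vectorTrust(metrics) {
  const values = Object.values(metrics);
  return {
    metrics, // the full vector travels with the score
    composite: values.reduce((a, b) => a + b, 0) / values.length,
    floor: Math.min(...values), // gaming one metric can't hide this
  };
}

// An agent that inflated its peer ratings but fails adversarial probes:
const v = vectorTrust({ impact: 0.6, peer: 0.99, adversarial: 0.2, consistency: 0.7 });
// v.composite looks respectable, but v.floor (0.2) exposes the weak dimension.
```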
AI agents are starting to transact autonomously. They make purchases, execute trades, sign agreements. The x402 protocol lets agents pay with HTTP requests. MoonPay gives agents their own wallets.
When agents transact, the question shifts from "is my agent correct?" to "can I prove to a counterparty that my agent was correct?"
Epistemic blocks are the answer. A signed, hashed, tamper-evident record of the verification process. The credit score for AI agents — not a subjective rating, but a cryptographic proof of epistemic work.
| Package | What |
|---|---|
| pot-sdk@0.6.1 | Core multi-model verification |
| @pot-sdk2/friend@0.7.0 | Persistent critic with memory |
| @pot-sdk2/graph@0.8.0 | Knowledge-graph verification |
| @pot-sdk2/pay@0.9.4 | Payment reasoning (x402 attestation) |
| @pot-sdk2/bridge@1.0.1 | Signed receipts, epistemic blocks, trust scoring |
Five packages. Zero external dependencies in the bridge. Everything built on Node.js crypto primitives.
Install:

`npm i @pot-sdk2/bridge`
The epistemic block is the atomic unit. Everything else — receipts, trust scores, cross-agent verification — is built on top of it. From "can this claim be trusted?" to "prove it."