Proof that the claimed model ran and that the delivered answer was not tampered with
CommitLLM returns a compact receipt binding the checkpoint, deployment manifest, prompt, sampling randomness, decode policy, and the path that produced the delivered answer. A verifier checks on CPU that the delivered answer matches the committed execution trace, not a post-hoc rewrite.
- Checkpoint + quantization bound
- Decode policy replayed
- Delivered answer bound
- Challenge tok 67 · L7/23/41/58
- Routine audit 1.3 ms/tok
- Verifier CPU only
If a provider rewrites, cleans up, or post-processes the answer after decoding, the receipt no longer verifies.
The receipt binds the model, decode path, and delivered answer
CommitLLM does more than attest that some checkpoint ran. It binds the request, deployment choices, sampling policy, and answer path that determine the delivered answer.
Which system actually ran
Checkpoint identity, quantization, adapter state, and runtime configuration.
How the answer was produced
Sampler, temperature, penalties, grammar constraints, and stop rules.
What answer was actually delivered
Detokenization, cleanup, whitespace handling, and formatting on the committed path.
| Receipt field | What it binds |
|---|---|
| input_spec_hash | Tokenizer, chat template, BOS/EOS, truncation, padding, and system prompt rules |
| request_binding | Prompt tokens, transcript state, retained audit window, and token count |
| model_spec_hash | Checkpoint identity, quantization, LoRA/adapter state, RoPE config, and RMSNorm ε |
| decode_spec_hash | Sampler, temperature, top-k/p, penalties, logit bias, grammar, and stop rules |
| sampling_randomness | The randomness actually used for non-greedy decoding, committed before replay of the output policy |
| output_spec_hash | Detokenization, cleanup, whitespace normalization, and output formatting |
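The binding in the table above can be sketched as nested hashing: each field is hashed on its own, and a final digest covers the whole set, so changing any committed input changes the receipt. This is a minimal illustration, not the actual receipt layout; the function and field-encoding choices (JSON with sorted keys, SHA-256) are assumptions.

```python
import hashlib
import json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_receipt(input_spec, request, model_spec, decode_spec, randomness, output_spec):
    # Hash each component independently so a verifier can check one
    # field without needing the others (hypothetical layout).
    fields = {
        "input_spec_hash": h(json.dumps(input_spec, sort_keys=True).encode()),
        "request_binding": h(json.dumps(request, sort_keys=True).encode()),
        "model_spec_hash": h(json.dumps(model_spec, sort_keys=True).encode()),
        "decode_spec_hash": h(json.dumps(decode_spec, sort_keys=True).encode()),
        "sampling_randomness": h(randomness),
        "output_spec_hash": h(json.dumps(output_spec, sort_keys=True).encode()),
    }
    # The top-level digest binds all fields together: altering any
    # committed input (e.g. temperature) yields a different receipt.
    fields["receipt_digest"] = h(json.dumps(fields, sort_keys=True).encode())
    return fields
```

Under this sketch, a provider that rewrites the answer after decoding would need a new `output_spec_hash`, which changes `receipt_digest` and breaks verification.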
Serve normally. Open on challenge. Verify on CPU.
CommitLLM stays close to the kept serving path. Expensive work appears only when a verifier asks to inspect a committed region.
Serve and commit
The provider stays on the normal GPU path and returns the delivered answer plus a compact receipt binding the model, request, randomness, and output policy.
Open on challenge
The verifier challenges token positions and layers after commitment. Routine audit samples the retained window; deep audit opens more of it.
Verify the delivered answer
CPU-side checks tie the delivered answer back to model identity, decode policy, and the committed execution trace.
Measured on the current prototype
Current figures on the corrected replay path for Qwen2.5-7B-W8A8 and Llama-3.1-8B-W8A8. Routine audit keeps verifier work CPU-only.
- Verifier cost · Llama 70B
- Online tracing overhead
- Attention corridor · Qwen2.5-7B-W8A8
- Attention corridor · Llama-3.1-8B-W8A8
What is exact, approximate, and statistical
Decode is fail-closed. The remaining boundary lives inside native attention and unopened retained state during routine audit.
Routine audit stays cheap. Deep audit upgrades coverage.
CommitLLM uses the same receipt in both modes. Routine audit keeps steady-state verification light; deep audit opens the full retained window and upgrades prefix provenance to exact verification.
Low-friction spot checks
Designed for normal operation when you want frequent verification without opening the full trace every time.
- Freivalds-based checks on large linear layers
- Canonical replay for supported nonlinear subcomputations
- Sampled prefix and KV provenance with statistical coverage
- Bounded approximate attention replay on CPU
Escalate when the stakes are higher
Use the same commitment, but require a larger opening. This removes the routine-audit statistical gap on the retained prefix window.
- Full-prefix and KV openings across the retained audit window
- Exact prefix provenance instead of sampled coverage
- The same algebraic, replay, and decode checks as routine audit
- Higher bandwidth and storage cost, not a different serving path
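The statistical gap that deep audit removes is easy to quantify. If a provider tampers with some fraction of the retained window and routine audit samples positions uniformly, detection probability grows with the sample count but never reaches 1; a full opening checks every position. A minimal sketch, assuming uniform independent sampling (the function name and model are illustrative, not the protocol's actual sampling scheme):

```python
def detection_probability(tampered_fraction: float, samples: int) -> float:
    # Probability that at least one of `samples` uniformly drawn
    # positions lands inside the tampered fraction of the window.
    return 1.0 - (1.0 - tampered_fraction) ** samples
```

For example, with 5% of positions tampered and 64 sampled positions, detection probability is above 0.96; deep audit's full-window opening makes it exactly 1 for any nonzero tampering.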
Built to sit beside real serving stacks
CommitLLM is not a replacement inference engine. The provider keeps the normal GPU path and produces request-scoped evidence alongside it.
Continuous batching and paged attention
Many user requests can share the same GPU microbatch. CommitLLM still produces per-request receipts and per-request audits.
Tensor parallelism and fused kernels
The tracing layer follows the existing execution path instead of replacing production kernels with proof-friendly substitutes.
Quantized serving
Quantization metadata is receipt-bound, and the kept path is measured on production-style W8A8 checkpoints.
Cross-request cache reuse and shortcut decoding
Cross-request prefix caching, speculative decoding, and other semantics-changing shortcuts need more protocol work. Unsupported paths should fail closed.
Execution integrity for every deployment
Model identity is only part of the story. CommitLLM matters wherever decode policy and the delivered answer need to be auditable.
Enterprise procurement
Paying for Llama 70B? Get proof the provider served that checkpoint and did not rewrite the delivered answer after decoding.
Regulated deployments
Banks, hospitals, and legal teams need an auditable chain from prompt and model version to decode policy and the delivered answer.
Decentralized compute
Networks like Gensyn, Ritual, or Bittensor cannot rely on “trust the node.” CommitLLM provides the missing verification layer.
Agent systems
When an agent takes action, model provenance and the untampered delivered answer become governance and liability questions.
Between fingerprints and full proving
Heuristics are too weak, full proofs are still too expensive, and deployments need a middle ground that binds the delivered answer.
Fingerprinting
Statistical heuristics can provide evidence, but they do not exactly verify a specific response or rule out post-hoc rewriting by a determined provider.
Commit-and-audit
Commitment-bound end-to-end. Large linear layers are checked algebraically, supported nonlinear pieces are replayed canonically, and the delivered answer is tied to the committed path.
Full proofs
Strong proof objects are attractive, but prover costs remain too high for production LLM serving at realistic scale today.
Setup once. Commit every response. Verify on challenge.
For readers who want the mechanics rather than the product pitch. The verifier holds a secret key derived from public weights, the provider commits inline during serving, and expensive openings happen only on challenge.
Build the verifier key
From a public checkpoint, the verifier computes a Merkle root over weights, secret Freivalds vectors for eight matrix families (Wq, Wk, Wv, Wo, Wgate, Wup, Wdown, LM_head), and the model configuration needed for canonical replay.
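The Merkle root over weights can be sketched as follows: hash each weight chunk, then fold pairwise until a single root remains. This is a generic illustration of the data structure, assuming SHA-256 and odd-leaf duplication; the actual chunking and hash choices live in the reference implementation.

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    # Hash each weight chunk into a leaf digest.
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the odd leaf out
        # Fold adjacent pairs into the next level.
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Because the root commits to every chunk, the provider can later open individual weight regions (e.g. an embedding row) with a logarithmic-size proof against the same root the verifier derived from the public checkpoint.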
Serve normally, return a receipt
The provider runs inference on the normal GPU path with a tracing sidecar that captures retained state. It returns the response plus a compact receipt binding the execution trace, KV state, deployment manifest, prompt, sampling randomness, output policy, and token count.
Challenge specific positions and layers
The verifier challenges token positions and layers after the commitment. The provider opens the requested region. Routine audit samples prefix state; deep audit opens everything in the retained window.
CPU-only checks
Embedding Merkle proof. Freivalds on shell matmuls. Exact INT8 bridge recomputation. KV provenance. Attention replay against committed post-attention output. Final-token tail from captured residual. LM-head binding. Decode and output policy replay.
Verify huge matrix multiplies cheaply
The provider claims z = W @ x for a public weight matrix W.
Recomputing the full product is expensive. Freivalds’ algorithm gives a much cheaper check: the verifier precomputes v = rᵀW with a secret random vector r, then checks whether v · x = rᵀ · z in the finite field F_p with p = 2³² − 5. If z ≠ W @ x, the check fails with probability at least 1 − 1/p.
This is information-theoretically sound. Transformers are mostly matrix multiplication;
once those multiplies are cheap to audit, the verifier can check model identity
without rerunning the full model.
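The check can be sketched in a few lines of Python. The function names here are illustrative (the reference implementation is in Rust), but the arithmetic is the standard Freivalds construction over F_p with p = 2³² − 5:

```python
import random

P = 2**32 - 5  # prime modulus for the field check

def matvec_mod(W, x):
    # Honest provider's claimed product z = W @ x, reduced mod P.
    return [sum(wij * xj for wij, xj in zip(row, x)) % P for row in W]

def precompute_v(W, r):
    # v = rᵀW, computed once per weight matrix by the verifier
    # using a secret random vector r.
    cols = len(W[0])
    return [sum(r[i] * W[i][j] for i in range(len(W))) % P for j in range(cols)]

def freivalds_check(v, r, x, z):
    # Accept iff v · x == rᵀ · z (mod P); the online cost is just
    # two dot products, independent of the matrix's inner dimension.
    lhs = sum(vj * xj for vj, xj in zip(v, x)) % P
    rhs = sum(ri * zi for ri, zi in zip(r, z)) % P
    return lhs == rhs
```

A wrong z passes only if rᵀ(z − Wx) happens to vanish mod P, which occurs with probability 1/p over the choice of r, so a single check already gives roughly 32 bits of soundness.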
Start with the paper, then inspect the implementation
The homepage gives the high-level story. The paper, code, and formalization carry the full technical detail.
Paper (PDF)
Protocol, guarantee boundary, threat model, and measurements in full detail.
Open paper
GitHub repository
Reference implementation in Rust, Python, and Lean. Some internal crate paths still use the legacy verilm-* prefix while the rename completes.
Lean 4 formalization
The formalization tracks the proof obligations behind the protocol and makes the security claims explicit.
Open Lean directory
Benchmark and repro scripts
Scripts for reproducing measurements on the current prototype, including the corrected replay path and corridor checks.
Open scripts