Could AI agents one day prove their expertise like professionals showing credentials? Or verify their responses like journalists citing sources? In a future economy where agents compete based on specialized knowledge and capabilities, such verification could become critical for both trust and value. Web Proofs promise a path toward cryptographically verifiable AI agents. In this blog post, we explore how they can validate the information AI produces, consumes, and shares.
At their core, Web Proofs are signed attestations of TLS transcripts. Because the attestation covers the transcript itself, any web data can be securely verified while preserving privacy, enabling access to users' private data stored by Internet services without the need for dedicated APIs. For a deeper dive into the fundamentals of Web Proofs, see our Verifiable Data report and the whitepaper on the TLSNotary protocol.
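To make the pattern concrete, here is a minimal, purely illustrative sketch. Every name in it is hypothetical, a stdlib HMAC stands in for the notary's signature (the real TLSNotary protocol uses MPC and asymmetric cryptography), and the commitment is a simple hash over chunk hashes rather than the protocol's actual commitments. What it shows is the shape of the idea: the notary signs a commitment to the transcript, and the prover can later reveal only selected chunks while the rest stay hidden behind hashes.

```python
import hashlib
import hmac

NOTARY_KEY = b"demo-notary-key"  # stand-in for the notary's signing key


def commit(chunks):
    # Commit to the transcript via a hash over per-chunk hashes.
    leaf_hashes = [hashlib.sha256(c).digest() for c in chunks]
    root = hashlib.sha256(b"".join(leaf_hashes)).digest()
    return leaf_hashes, root


def notarize(chunks):
    # The notary signs the commitment, never the plaintext itself.
    _, root = commit(chunks)
    return hmac.new(NOTARY_KEY, root, hashlib.sha256).digest()


def reveal(chunks, public_indices):
    # The prover discloses chosen chunks; the rest stay hidden as hashes.
    leaf_hashes, _ = commit(chunks)
    return [("chunk", c) if i in public_indices else ("hash", leaf_hashes[i])
            for i, c in enumerate(chunks)]


def verify(disclosure, signature):
    # The verifier rebuilds the commitment from revealed chunks and hashes,
    # then checks the notary's signature over it.
    leaf_hashes = [hashlib.sha256(v).digest() if kind == "chunk" else v
                   for kind, v in disclosure]
    root = hashlib.sha256(b"".join(leaf_hashes)).digest()
    expected = hmac.new(NOTARY_KEY, root, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)


transcript = [b"GET /balance HTTP/1.1",
              b"Cookie: secret-session-token",
              b"balance: 120 USD"]
sig = notarize(transcript)
disclosure = reveal(transcript, public_indices={0, 2})  # keep the cookie private
assert verify(disclosure, sig)
```

Note that an HMAC verifier would need the notary's key, so this only illustrates the data flow; the actual protocol uses signatures that any third party can check without secret material.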
While we see a lot of value in "on-chain" Web Proofs applications (as discussed in our blog post), their potential goes beyond that. Today's internet runs on trust – trust in data sources, trust in content authenticity, trust in user credentials. Web Proofs represent a fundamental shift in how we obtain trusted data. This opens up new possibilities not only for decentralized applications, but also for building trusted AI agent economies.
A Web Proof can give you instant confidence that content comes from a trusted AI model. It's like seeing a verified badge on social media, but for AI-generated information. You can trust what you're seeing without manually checking sources. The verification happens automatically in the background and appears as a simple visual indicator, making it easy to build trust in AI-generated information while preventing manipulation or misattribution.
The verification begins at the source. As shown below, when content is generated by an AI model like ChatGPT, you can create a Web Proof that acts as a signed screenshot of the ChatGPT response. The verification panel shows the exact conversation that produced the content, confirming its origin cryptographically.
This verification then travels with the content. When the verified content appears in another context (like the Medium blog post about AI stock trading shown above), readers see a simple green verification mark confirming "ChatGPT is verified source of this code."
This straightforward approach to verification changes how we interact with AI content. Readers save time by not needing to cross-check sources. Content creators build trust with their audience through transparency. The verification prevents tampering between generation and publication. And throughout the content's lifecycle, proper attribution is maintained automatically. It's all about making trust simple and intuitive.
As AI evolves, we're increasingly seeing AI systems building upon other AI systems. This happens when one AI's output becomes another AI's input, creating supply chains of AI-generated content. Think of an AI that writes a story, which another AI transforms into images, which a third AI animates into video. When you hear information from a voice assistant, that content might come from a coupled AI system: first a text generation model creates the content, then a voice synthesis model transforms it into speech. Such cases demand a verification chain to ensure that every piece of information comes from a trusted model.
Web Proofs elegantly address this challenge by creating a continuous verification path throughout the AI supply chain. With AI-generated voice content, for example, one proof attests what the text model produced, and a second proof attests that the voice model consumed exactly that text.
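Setting the TLS layer aside, the chaining itself can be sketched in a few lines. Everything here is hypothetical (the `stage_proof` and `verify_chain` helpers, the model names): each stage's digest covers its own output plus the previous stage's digest, so tampering anywhere breaks the whole chain. In a real deployment each link would be a Web Proof of the HTTPS session with the respective model provider, not a bare hash.

```python
import hashlib


def stage_proof(model, output, prev_proof=None):
    # Each stage commits to its own output *and* to the previous stage's
    # digest, forming a hash chain along the AI supply chain.
    prev = prev_proof["digest"] if prev_proof else ""
    digest = hashlib.sha256(f"{model}|{output}|{prev}".encode()).hexdigest()
    return {"model": model, "output": output, "prev": prev, "digest": digest}


def verify_chain(proofs):
    # Walk the chain and recompute every digest from scratch.
    prev = ""
    for p in proofs:
        expected = hashlib.sha256(
            f"{p['model']}|{p['output']}|{prev}".encode()).hexdigest()
        if p["prev"] != prev or p["digest"] != expected:
            return False
        prev = p["digest"]
    return True


# Text model output feeds the voice model; both links stay verifiable.
text = stage_proof("text-model", "Rain is expected tomorrow.")
voice = stage_proof("voice-model", "audio-digest-of-spoken-text",
                    prev_proof=text)
assert verify_chain([text, voice])
```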
The key benefit of this approach is its simplicity and automation. When you hear information through an AI voice, verification that it came from a trusted source happens instantly and behind the scenes. There's no need for manual verification – just the same straightforward trust you'd have when citing a reputable source, but built directly into the AI supply chain.
Building on the simple idea of verifiable data sources, we could ambitiously extend it to AI agent ecosystems, so agents are able to prove where their entire knowledge base comes from. In practice, Web Proofs could verify the new knowledge being added to an AI agent's database. Think academic papers, specialized articles, or book citations that establish expertise in a certain domain. A key advantage here is that Web Proofs can handle information from any external server, whether it's publicly accessible or behind authentication. The agent could then prove it has incorporated specific information without exposing sensitive data.
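As a toy illustration of such a commitment scheme (all names hypothetical, and deliberately ignoring the MPC and zero-knowledge machinery), an agent could store only hashes of what it ingested. A verifier who already holds a document, such as its publisher, can then check that the agent committed to exactly that content; hiding the content from the verifier as well is where zero-knowledge proofs would come in.

```python
import hashlib


class KnowledgeBase:
    """Toy sketch: an agent commits to ingested documents by hash.

    Hypothetical design: in a real system each commitment would be paired
    with a Web Proof attesting which server the document was fetched from.
    """

    def __init__(self):
        self.commitments = {}  # digest -> claimed source

    def ingest(self, document: bytes, source: str) -> str:
        # Store only a commitment; the content itself stays private.
        digest = hashlib.sha256(document).hexdigest()
        self.commitments[digest] = source
        return digest

    def proves_inclusion(self, document: bytes) -> bool:
        # A verifier who already holds the document (e.g. its publisher)
        # can check that the agent committed to exactly this content.
        return hashlib.sha256(document).hexdigest() in self.commitments


kb = KnowledgeBase()
kb.ingest(b"Guidelines for anticoagulant dosing, 2024 edition",
          source="journal.example")
assert kb.proves_inclusion(b"Guidelines for anticoagulant dosing, 2024 edition")
assert not kb.proves_inclusion(b"An unrelated blog post")
```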
However, the practical implementation comes with some constraints. Due to the computational overhead of the multi-party computation (MPC) used in Web Proofs, we are currently limited to proving relatively small chunks of information. When dealing with large databases, other zero-knowledge solutions might be more suitable. One such solution is zkSQL, which handles entire structured datasets more efficiently.
Such proof of expertise is particularly important in the context of AI agent economies. Imagine specialized AI agents that can cryptographically prove their knowledge of medicine, law, or engineering. When businesses or individuals delegate tasks to AI agents, being able to verify an agent's expertise isn't just a matter of trust – it becomes a driver of value. Agents with verified knowledge bases could command premium rates and build reputation on cryptographic guarantees rather than track record alone.
On the other hand, authors could benefit when their articles are used to extend an AI agent's knowledge base: a Web Proof of the blog's contents acts as a cryptographic seal on the web page, one that would not only guarantee its authenticity but could also enable automated royalties for authors.
The landscape of agentic economies, where AI agents interact autonomously, is constantly evolving. Ensuring AI authenticity is vital, yet it remains one of the most challenging aspects to implement. As agents take on more responsibilities – like handling transactions, executing smart contracts, and making high-stakes decisions – authenticity must become a core foundation, not a late-stage addition. Without it, both end-user trust and agent-driven economies risk being undermined by fraud and manipulation.
Web Proofs can certainly help with that, but they seem to be an interim solution: for long-term trust and security, authentication standards should be natively integrated into AI provider APIs rather than relying solely on external protocols. Another interesting angle is executing AI agents within a Trusted Execution Environment (TEE). That way, cryptographic security can be enforced at the hardware level, which could make authentication in an autonomous agent network a lot easier.
Web Proofs offer a promising path to cryptographically verifiable AI. The applications we've explored – verifying AI content sources, automating AI supply chain verification, validating agent expertise, and authenticating AI agents' responses – barely scratch the surface of possibilities at this intersection. As AI agent economies take shape, cryptographic verification mechanisms won't just build trust. They'll reshape value dynamics, creating a marketplace where cryptographic guarantees become a premium differentiator.
The race toward verifiable AI is now unmistakably underway, with Web Proofs leading the charge toward a future where we don't just trust AI – we verify it.