February 2026 | Web Standards

The AI-Readable Web: Introducing /.well-known/ai

The Browser for AI — a new web standard where websites speak AI natively, with signed knowledge from the source

In 1995, Fortune Put the Magazine on the Web

They scanned the pages, wrapped them in HTML, and called it a website. It took years before anyone realized the web wasn't for reproducing print — it was a fundamentally new medium that demanded new thinking.

We're making the same mistake with AI.

Today, companies build beautiful websites for human eyeballs. Then scrapers from Perplexity, ChatGPT Browse, and Google AI Overviews crawl those pages, guess at meaning from HTML markup, and present their interpretations to the people using those AI tools. The scrapers are middlemen who decide what your company means.

The result:

  • AI agents hallucinate company details because they're guessing from HTML
  • No way to verify if AI-presented information is accurate or current
  • Companies have zero control over how AI represents them
  • Scrapers aggregate and repackage without attribution or verification
  • JavaScript-rendered sites (React, Lovable) are effectively invisible to scrapers that don't execute JavaScript

What if, instead of letting middlemen interpret your website, you could speak to AI directly?

What Already Exists (and What's Missing)

Several standards address parts of the AI-web relationship:

Standard            | What It Does                           | What It Doesn't Do
robots.txt          | Tells crawlers what to index           | Says nothing about what to understand
agents.json         | Defines agent API contracts            | Agent capabilities, not company knowledge
A2A agent.json      | Agent-to-agent communication protocol  | How agents talk, not what the org does
ai-plugin.json      | ChatGPT plugin manifest                | Plugin functionality, not organizational identity
schema.org JSON-LD  | Per-page structured data               | No single discovery endpoint

None of these answer the fundamental question: "What does this organization DO, and how can I verify it?"

The Proposal: /.well-known/ai

We propose a simple, powerful standard: a single JSON document at the well-known URI /.well-known/ai that tells any AI agent:

  • WHO: Organization identity, people, Digital Name
  • WHAT: Products, applications, core concepts
  • WHERE: Knowledge endpoint, feed endpoint, site map
  • VERIFIED: Cryptographically signed by the organization's blockchain identity

Three-Tier Architecture

The AI Discovery Standard uses a three-tier approach, from compact discovery to deep knowledge:

Tier 1: Discovery — /.well-known/ai

Compact organizational profile. Who we are, what we do, where to find more. Links to knowledge and feed endpoints. Core concepts glossary. Cryptographic signature.
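
As a rough sketch only (the normative field names live in the spec at /ai/standard.md; the keys below are illustrative placeholders assembled from the elements listed above), a Tier 1 discovery file might look like:

{
  "name": "Example Corp",
  "domain": "example.com",
  "mission": "What the organization does, in one or two sentences.",
  "concepts": [
    {
      "term": "Example Concept",
      "definition": "A short, plain-English definition an AI agent can quote."
    }
  ],
  "knowledge": "https://example.com/ai/knowledge.json",
  "feed": "https://example.com/ai/feed.json",
  "sitemap": "https://example.com/sitemap.xml"
}

A production file would also carry the _signature block described later in this article.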

Tier 2: Knowledge — /ai/knowledge.json

Full organizational encyclopedia. Detailed glossary, product descriptions, team backgrounds, market thesis, technology architecture. An AI reads this and understands everything.
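
Again as an illustrative sketch rather than the normative schema, a knowledge.json could organize those elements along these lines:

{
  "glossary": [
    {
      "term": "Example Concept",
      "definition": "The expanded definition, with the context and examples the compact glossary omits."
    }
  ],
  "products": [
    {
      "name": "Example Product",
      "description": "What it is, who it is for, and how it works."
    }
  ],
  "team": [
    {
      "name": "Example Founder",
      "role": "CEO",
      "background": "Relevant history, stated in one or two sentences."
    }
  ],
  "marketThesis": "The problem the organization exists to solve, and why now.",
  "architecture": "How the technology fits together, at the depth a technical reader needs."
}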

Tier 3: Feed — /ai/feed.json

Chronological updates with structured key facts, tags, and related concepts. Not RSS — designed from the ground up for AI consumption. Machine-readable news.
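
In the same illustrative spirit (field names here are placeholders, not the spec), a feed entry might look like:

{
  "entries": [
    {
      "date": "2026-02-15",
      "title": "Example announcement",
      "summary": "One paragraph an AI agent can relay without interpretation.",
      "keyFacts": [
        "Fact one, stated as a complete sentence.",
        "Fact two, stated as a complete sentence."
      ],
      "tags": ["launch", "standard"],
      "relatedConcepts": ["Example Concept"]
    }
  ]
}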

Go Direct, Not Through Scrapers

The current AI information pipeline:

Company Website → Scraper → AI Training Data → AI Response

The /.well-known/ai pipeline:

Company Website → AI Agent (direct, verified, current)

No middlemen. No stale training data. No hallucinated interpretations. The AI agent reads the authoritative source directly, verifies the signature, and responds with verified knowledge.

The Differentiator: Signed Data

Every other discovery standard serves unsigned data. An attacker who compromises a web server can serve false information to AI agents. With signed AI discovery data, each JSON file includes a signature block:

"_signature": {
  "digitalName": "0xD36AAf65...664",
  "network": "polygon",
  "contentHash": "sha256:...",
  "signedAt": "2026-02-15T00:00:00Z",
  "method": "epistery-domain-v1"
}
  • Integrity: Content hash detects any tampering
  • Attribution: Digital Name traces to on-chain organizational identity
  • Non-repudiation: Blockchain timestamp proves when the data was published
  • Verifiable origin: AI agents can confirm WHO published the data, not just WHERE it was served from

This is Rootz eating its own dogfood — the AI discovery data itself becomes a Data Wallet with verifiable origin. The company's blockchain Digital Name signs the knowledge it publishes about itself.

Plain English + Plain AI

This standard extends the Veritize dual-layer architecture to the entire web:

Plain English

Your website. Written for humans. Readable by anyone, from a rancher in Montana to a customs agent in London. The human-readable layer.

Plain AI

/.well-known/ai and its knowledge/feed endpoints. Written for machines. Structured JSON with verifiable signatures. The machine-readable layer.

Both layers coexist. Both are authoritative. Together they form the Twin Heart of the AI-readable web.

The Browser for AI

In 1993, Mosaic gave humans a way to browse the web. In 2026, /.well-known/ai gives AI agents a way to browse organizational knowledge.

Human Web (1993)                  | AI Web (2026)
DNS → find the server             | /.well-known/ai → discover the organization
HTML → render the page            | knowledge.json → understand the company
RSS → stay current                | feed.json → track updates
SSL certificate → verify identity | Digital Name signature → verify origin
robots.txt → "what can I crawl?"  | /.well-known/ai → "what should I know?"

Every AI agent is already a browser — it reads content and produces understanding. This standard gives AI agents a URL bar: start at /.well-known/ai and discover everything, verified at the source.

Why This Matters

When AI agents can programmatically discover and verify organizational knowledge:

  • Accuracy: AI responses about companies are sourced from the company itself, not scraped interpretations
  • Currency: Knowledge updates in real time when the organization updates its files, not when a crawler re-indexes
  • Verification: AI agents can cryptographically verify the origin of information they present
  • Attribution: The source is the organization, signed with their Digital Name — not an anonymous scrape
  • Autonomy: Organizations control their AI narrative directly, without intermediaries

This is the Origin Economy for web content. Data with verified origin commands premium trust.

Try It Now

rootz.global is the first implementation. Point any AI agent at our discovery file: rootz.global/.well-known/ai

Ask Claude, ChatGPT, or any AI: "Fetch rootz.global/.well-known/ai and tell me what Rootz does." The AI reads structured, verified knowledge directly from the source. No scraping. No guessing.

An Open Standard

The AI Discovery Standard is published under CC-BY-4.0. The full specification is at /ai/standard.md.

We're proposing registration of /.well-known/ai as a well-known URI per RFC 8615. The standard is simple enough that any organization can implement it — a minimum viable implementation is just a single JSON file with your name, domain, mission, and a few core concepts.
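
For illustration only (field names are placeholders; the spec at /ai/standard.md is authoritative), that minimum viable file could be as small as:

{
  "name": "Example Corp",
  "domain": "example.com",
  "mission": "What we do, in one sentence.",
  "concepts": [
    { "term": "First Concept", "definition": "A one-line definition." },
    { "term": "Second Concept", "definition": "A one-line definition." }
  ]
}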

The signature block is optional but powerful. For organizations with a blockchain Digital Name, signing your AI discovery data creates verifiable origin for your web content — the same Origin² that Rootz Data Wallets provide for any data.

Make Your Website Speak AI

The standard is open, and rootz.global is already running it.

Read the spec. Implement it on your domain. Join the AI-readable web.
