Visit https://sethshoultes.com/.well-known/mcp.json from any HTTP client — no credentials, no headers, no session. The response:
{
"name": "Seth Shoultes — Writing & Skills",
"description": "Blog posts, brain learnings, and installable Agent Skills from sethshoultes.com.",
"protocol": "mcp",
"version": "2025-06-18",
"endpoints": { "public": "https://mcp.sethshoultes.com/" },
"auth_required": false,
"transport": "http"
}
That JSON points to a JSON-RPC 2.0 endpoint at a same-origin subdomain. POST {"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}} to the listed URL and an AI client can enumerate the site's blog posts, fetch their HTML, and discover the installable Agent Skills. No credentials. No human in the loop. This is what closing the loop looks like — what an AI-first website does, end to end, when there is nothing left in the way.
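The exchange above can be sketched in a few lines of JavaScript. The envelope follows the JSON-RPC 2.0 shape shown in the text; the helper names (`buildRpcRequest`, `parseToolNames`) are illustrative, not part of any SDK:

```javascript
// Build the JSON-RPC 2.0 envelope an agent would POST to the endpoint.
function buildRpcRequest(method, params = {}, id = 1) {
  return { jsonrpc: "2.0", id, method, params };
}

// The tools/list call from the text, serialized as an HTTP POST body.
const body = JSON.stringify(buildRpcRequest("tools/list"));

// A conforming response echoes the id and carries result.tools;
// this pulls out just the tool names an agent could then call.
function parseToolNames(responseJson) {
  const msg = JSON.parse(responseJson);
  if (msg.jsonrpc !== "2.0" || msg.error) return [];
  return (msg.result?.tools ?? []).map((t) => t.name);
}
```

An agent that gets back `["list_posts", "get_post", "list_skills"]` from a call like this has everything it needs to start querying.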
The loop and what it takes to close it
The loop has three steps. The agent visits a URL, or is given one. The agent fetches /.well-known/mcp.json to learn what's available there. The agent queries the discovered endpoint and gets back structured catalog data with no authentication. Skip any step and the loop is open. All three have to work, in sequence, for the loop to close.
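The three steps can be sketched as a single function. This is a sketch under stated assumptions: a global `fetch` (or an injected one), the manifest shape from the top of the post, and an illustrative name, `closeLoop`:

```javascript
// Close the loop for a domain: discover the endpoint, then enumerate tools.
// Assumes global fetch (Node 18+); fetchImpl is injectable for testing.
async function closeLoop(domain, fetchImpl = fetch) {
  // Step 2: fetch the discovery file at the canonical path.
  const disc = await fetchImpl(`https://${domain}/.well-known/mcp.json`);
  if (!disc.ok) return null; // 404: the loop is open; move on.
  const manifest = await disc.json();
  if (manifest.auth_required) return null; // no zero-credential path

  // Step 3: query the discovered endpoint with tools/list.
  const res = await fetchImpl(manifest.endpoints.public, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list", params: {} }),
  });
  const msg = await res.json();
  return msg.result?.tools ?? [];
}
```

Note that the function returns `null` at exactly the points where the text says the loop is open: a missing discovery file, or a manifest that demands credentials the agent cannot supply.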
Most implementations stop at step two, or between two and three. They build an MCP endpoint that requires a bearer token, wire up the token exchange, and call it done. This is the right architecture for a site owner who wants to query their own data from Claude Desktop — authenticated session, stored token, owner running queries against their own site. It is the wrong architecture for an AI agent arriving at the URL with no prior knowledge of the site, no token, and no way to ask for one.
The agent has no token. The site owner cannot share a token with every visiting agent — that is not a token, that is a public credential, which is something else entirely. Even if the agent somehow knew the URL and knew the endpoint, it could not authenticate. Under that architecture there is no zero-credential path, and the loop is open at step three.
The asymmetry
A human visitor arriving at sethshoultes.com has resources an AI client does not. Cookies from a previous visit. A browser that renders the post archive as a readable surface. The ability to click around, scroll, follow context. The capacity to make inferences from layout — this section is featured, this list is paginated, this link goes deeper. None of that is available to an agent arriving at the same URL. The agent has HTTP. The agent has headers. If the response to an unauthenticated request is HTML wrapped in chrome, JavaScript-rendered cards, or a marketing surface with no structured data, the agent has nothing it can reason from.
The asymmetry is not a minor inconvenience. It is a wall. And the wall does not announce itself. An agent that cannot authenticate against an MCP endpoint does not receive a helpful explanation of what it would need. It receives a 401. The loop is open and the agent has no way to know whether the loop was ever meant to close.
The fix is to give the agent its own layer: a structured catalog at a known path, served without authentication, in a format an LLM can parse. Not the human-facing HTML, not a behind-token admin endpoint — a public, structured surface designed for agents to find and query.
Three files
Closing the loop on sethshoultes.com required three files. Each one does one thing. All three are required.
The Cloudflare Worker at mcp.sethshoultes.com is a JSON-RPC 2.0 endpoint — single-purpose, unauthenticated, permissive CORS. It accepts the standard MCP methods: initialize, tools/list, tools/call. Cloudflare's edge handles rate-limiting and caches stable reads for five minutes (Cache-Control: public, max-age=300). The worker is the surface. Without it, the discovery file points nowhere actionable.
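A minimal sketch of the dispatch such a worker might perform. The method set, CORS posture, and cache header come from the text; the handler table and function names are assumptions, not the deployed implementation:

```javascript
// Headers every response carries: permissive CORS, five-minute edge cache.
const HEADERS = {
  "Content-Type": "application/json",
  "Access-Control-Allow-Origin": "*",
  "Cache-Control": "public, max-age=300",
};

// Dispatch one parsed JSON-RPC message against a handler table.
// Handlers for initialize, tools/list, and tools/call would live in the table.
function dispatch(msg, handlers) {
  const reply = (key, value) => ({ jsonrpc: "2.0", id: msg.id ?? null, [key]: value });
  if (msg.jsonrpc !== "2.0") return reply("error", { code: -32600, message: "Invalid Request" });
  const fn = handlers[msg.method];
  if (!fn) return reply("error", { code: -32601, message: "Method not found" });
  return reply("result", fn(msg.params ?? {}));
}
```

A worker's `fetch` handler would parse the POST body, pass it through `dispatch`, and serialize the reply with `HEADERS`; the point of the pure-function shape is that the protocol logic stays testable without the edge runtime.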
The tool implementations inside the worker — list_posts, get_post, list_skills — are the formatters that shape what the endpoint returns. This is not a minor implementation detail. The formatters are the wall between the public surface and the site's source data. list_posts returns the post catalog from /talk/posts.json: title, slug, subtitle, date, read time, og_image. Nothing internal. get_post fetches a post's HTML by slug — the same content a human reader sees at sethshoultes.com/blog/<slug>.html. list_skills queries the public building-with-ai-skills GitHub repo. A future tool that calls the wrong method, or a developer who adds a field to the wrong response, cannot leak anything that wasn't already public, because the formatters do not have access to anything that wasn't already public. The wall is in what the formatters can see, not in what they choose to return.
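The wall the paragraph describes is a projection: the formatter copies a fixed field list and nothing else. A sketch, with the field list taken from the text (the exact key names in the source JSON, like `read_time`, are assumptions here):

```javascript
// The only fields list_posts may return; anything else never leaves.
const PUBLIC_FIELDS = ["title", "slug", "subtitle", "date", "read_time", "og_image"];

// Project one raw catalog entry onto the public field list.
// A field added to the source data stays private until it is added
// here: the wall is this allowlist, not the caller's intentions.
function formatPost(raw) {
  const out = {};
  for (const field of PUBLIC_FIELDS) {
    if (field in raw) out[field] = raw[field];
  }
  return out;
}
```

The design choice is that leaking requires editing the allowlist itself, an explicit and reviewable change, rather than merely forgetting to strip a field somewhere downstream.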
The discovery file at .well-known/mcp.json is a static JSON file in the site repository, served by Jekyll. The file is the contract: protocol version, endpoint URL, transport, and the boolean auth_required: false. An agent fetching the file at the canonical path receives exactly enough information to call the endpoint and nothing more. Without this file, the worker exists but cannot be discovered by an agent that doesn't already know its URL.
The default is off
Public surfaces should not be implicit. The discovery file does not exist on a Jekyll site by default — Jekyll drops dotted directories at build time, so the .well-known/ path is silently excluded unless the operator explicitly opts in by adding it to the include array in _config.yml. The opt-in is one line of YAML and a committed JSON file. Without both, an agent fetching /.well-known/mcp.json gets a 404 — not a JSON blob announcing "authentication required," but a 404, as if the file does not exist. Which, from the agent's perspective, it doesn't.
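The opt-in described above would look roughly like this (a sketch: the `include` key is Jekyll's, the rest of `_config.yml` is elided):

```yaml
# _config.yml — opt the dotted directory into the build output.
# Without this line, Jekyll silently drops .well-known/ at build time.
include:
  - .well-known
```

The second half of the opt-in is the committed file itself at `.well-known/mcp.json`; with both in place, the built site serves the discovery file at the canonical path.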
This is not an abundance of caution. It is the only defensible default for a surface that exposes structured catalog data with no authentication. A site owner who creates the file has made an explicit decision. A site owner who has not has made no such decision, and the static-site generator respects that. The same logic applies to the worker: if the operator has not deployed it, the endpoints.public URL in the discovery file points at nothing, and the loop fails at step three. Each step requires action. Each step is opt-in.
The discovery file closes the loop
The public endpoint alone — even a well-designed one — does not close the loop. An agent that does not know the endpoint exists cannot query it. An agent that knows the site's domain but not its MCP URL has to guess at URL patterns, which is not a reliable mechanism and is not how the MCP specification anticipates discovery working.
The /.well-known/mcp.json convention solves the discovery problem the same way /.well-known/acme-challenge solves certificate issuance or /.well-known/openid-configuration solves OpenID Connect discovery: a predictable path at a predictable location that any client can check without prior knowledge of the site's internal structure. An agent that knows a domain can check /.well-known/mcp.json. If the file is there, the agent knows the endpoint URL, the protocol version, the transport type, and whether authentication is required. If the file is not there, the agent receives a 404 and moves on.
The worker for sethshoultes.com shipped first; the discovery file followed days later. The endpoint without the file was a partial loop — useful to anyone who already knew the URL, useless to agents operating without prior knowledge of the site. The discovery file is what turns an endpoint into a discoverable service. That distinction is the difference between a loop that is mostly closed and one that is actually closed.
What the loop closing means
The loop is closed for sethshoultes.com. An AI client given that domain can fetch the discovery file, find the endpoint at mcp.sethshoultes.com, call tools/list, and enumerate the site's offerings without credentials and without a human in the loop. The blog post catalog, the post bodies, the installable Agent Skills — all of it comes back as structured JSON that an LLM can reason from directly. The pattern generalizes. Any site running on any stack — static, dynamic, WordPress, Next.js, anything that can serve a JSON file at /.well-known/ and stand up an HTTP endpoint somewhere — can ship the same three files and close the same loop.
The Mouth Was Not an Ear described a site that had listeners but no ear the client could reach: the avatar SDK had three commands for output and none for input. The pipes ran one way. This is the inversion. The site had a corpus an LLM could read once an agent knew where to look — and nothing the agent could use to find it. The public layer is the ear the site was missing. The loop is now symmetrical: the agent can find the surface, and the surface is listening.
The engineering pattern
The agent discovery loop is an engineering problem, not a marketing problem. The people who talk about "AI-first websites," "AI search," and "agentic discovery" as inevitable features of the near-term web are right that the direction of travel is clear. They are wrong to speak in future tense. The surface does not materialize on its own. The surface has to be built. And it is a specific three-file design with a default-off rule, not a trajectory or a tendency.
Anyone running a public site in 2026 who does not ship this triple — discoverable endpoint, public formatter, well-known discovery file — is building a surface that the site owner can use and visiting agents cannot find. The site is up. The site is searchable. The site is not discoverable in the AI-first sense. The agent is at the door, and the door is not on the map the agent has.
The loop closes or it doesn't. There is no middle state where it almost closes and the agent improvises the rest.
Seth Shoultes builds at garagedoorscience.com and writes here when the building produces something worth saying. The public MCP layer for sethshoultes.com runs at mcp.sethshoultes.com; the discovery file lives in the site repo.