I was writing a configuration document — the kind of technical spec you keep internally so you remember exactly how you set something up — and about halfway through, I realized I wasn't doing any work. Not in the sense that matters. I was pasting things that already existed into fields that were already asking for them.
That's when I stopped and looked at what had actually happened.
The configuration document was for a Custom GPT. ChatGPT's builder lets you create a named assistant, give it a system prompt, point it at an OpenAPI spec, and publish it to the GPT Store. The assistant calls your API, grounds its responses in what your tools return, and shows up in search results for people who already have ChatGPT open and don't know your site exists.
The three things ChatGPT's Actions importer actually needs are an operationId on every endpoint, an absolute server URL in the spec, and a privacy policy at a public URL. Then it imports your schema, lists your operations, and starts calling your API.
Here's what I was looking at when I opened my own spec to check it:
```json
{
  "openapi": "3.1.0",
  "servers": [{ "url": "https://garagedoorscience.com" }],
  ...
}
```
Absolute server. Check.
Every operation in the spec had an operationId — diagnose, routeByZip, getDoorStyles, getActivePromotions, getInspectionReferencePhotos, retrieveLabContext, costEstimate, submitInspection. Eight tools. All named. All there.
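For reference, the operationId sits at the method level of each path entry. Here's a trimmed, hypothetical fragment for diagnose — the path and summary are illustrative, not copied from the real spec:

```json
{
  "paths": {
    "/api/diagnose": {
      "post": {
        "operationId": "diagnose",
        "summary": "Diagnose a garage door symptom",
        ...
      }
    }
  }
}
```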
Privacy policy: https://garagedoorscience.com/privacy. Already live. I'd written it three weeks ago because the /developers portal needed it for the API key sign-up flow.
The only new thing the whole process required was one DNS TXT record — a domain verification token OpenAI provides when you submit to the Store. I added it to Cloudflare, waited about fifteen minutes for propagation, clicked Verify, and that was it.
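In zone-file terms, that verification is a single TXT record at the apex. This is a sketch with a placeholder token — the real value (and the exact key name) comes from the Store submission screen:

```
; hypothetical — copy the actual string OpenAI shows you
garagedoorscience.com.  300  IN  TXT  "openai-domain-verification=dv-XXXXXXXXXXXX"
```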
In One Registry, Seven Surfaces, I described the architecture this way: I defined my tools once, as plain TypeScript, and then exposed them through every surface that any kind of agent might try to reach.
The Custom GPT is the eighth surface. And I didn't touch the registry to add it.
That's the thing I want to be precise about, because it's easy to say "the pattern worked" in a way that sounds like a marketing line. What I mean specifically is this: the operationIds I added to the OpenAPI spec three weeks ago — I added them because MCP requires named operations to work correctly. The absolute server URL — I made it absolute because Scalar's interactive doc renderer breaks on relative URLs. The privacy policy — I wrote it because the /developers flow required a consent link before issuing API keys.
Every decision was made for a prior reason. None of them were made in anticipation of the Custom GPT builder. The builder showed up, asked for exactly those things, and they were there.
That's not luck. It's what happens when you build to a standard rather than to a consumer. Strictly speaking, OpenAPI 3.1 doesn't mandate either thing: operationId is optional (though it must be unique when present), and relative server URLs are technically legal. But every automated consumer needs a name for each tool and an absolute base URL to call, so following the stricter convention gives you a spec any conforming importer can read. The fact that ChatGPT's importer reads the same spec that MCP and Scalar read is the whole point of having a standard.
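Both structural requirements are easy to lint programmatically. This is a hypothetical standalone sketch, not code from the repo: it walks a spec object and flags relative server URLs and missing or duplicate operationIds.

```typescript
// Minimal OpenAPI shape for the two checks we care about (not the full 3.1 type).
type Operation = { operationId?: string };
type Spec = {
  servers?: { url: string }[];
  paths?: Record<string, Record<string, Operation>>;
};

function lintSpec(spec: Spec): string[] {
  const problems: string[] = [];

  // An Actions-style importer needs an absolute server URL to call your API.
  for (const s of spec.servers ?? []) {
    if (!/^https?:\/\//.test(s.url)) {
      problems.push(`relative server URL: ${s.url}`);
    }
  }

  // Every operation needs an operationId, and operationIds must be unique.
  const seen = new Set<string>();
  for (const [path, ops] of Object.entries(spec.paths ?? {})) {
    for (const [method, op] of Object.entries(ops)) {
      if (!op.operationId) {
        problems.push(`missing operationId: ${method.toUpperCase()} ${path}`);
      } else if (seen.has(op.operationId)) {
        problems.push(`duplicate operationId: ${op.operationId}`);
      } else {
        seen.add(op.operationId);
      }
    }
  }
  return problems;
}

const spec: Spec = {
  servers: [{ url: "https://garagedoorscience.com" }],
  paths: {
    "/api/diagnose": { post: { operationId: "diagnose" } },
    "/api/route": { get: {} }, // deliberately broken for the demo
  },
};

console.log(lintSpec(spec)); // one problem: missing operationId on GET /api/route
```

Run it against your own spec once before pasting the URL into the builder and the importer has nothing left to complain about.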
The configuration itself took about two hours, most of it writing the system prompt. That part is not automatic — it's the only genuinely new work in the whole process. The Custom GPT needs to know which tool to call first when someone describes a symptom (it's diagnose), what to do when someone asks about spring repair (refuse the walkthrough, call routeByZip), and what to say when someone asks about HVAC (not its job, get out cleanly).
But even there, I was drawing on a document that already existed. The diagnose tool's description field already specified its behavior for unsafe repairs: diyRisk: "unsafe". I'd written that for the chat interface. I just promoted it into the system prompt.
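The routing rules compress into a few lines of system prompt. This is a paraphrase of that logic, not the actual Instructions text:

```
When a user describes a symptom, call diagnose first, before answering.
If diagnose returns diyRisk: "unsafe" (torsion springs, cables), do not
give a repair walkthrough; explain the hazard and call routeByZip to
find a professional.
If the question is outside garage doors (HVAC, plumbing), say so briefly
and decline. Do not improvise an answer.
```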
The one thing I turned off deliberately was web browsing. The GPT has a retrieveLabContext tool that does grounded retrieval over the actual corpus — ten labs, thirteen articles, twenty-eight videos, the 24-point inspection checklist. Letting it go hunting the open web instead of calling that tool would mean ungrounded answers. The whole point is to route queries through what the site actually knows.
The rate setup was simple: no user tokens, public tier, 60 requests per minute, 2,000 per day. Sufficient for a conversational assistant. If the GPT ever generates meaningful traffic, I can update the Instructions to pass a gds_live_ bearer and move to the pro tier in about thirty seconds.
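The tier switch is mechanical enough to sketch. Assume, hypothetically, a middleware that keys limits off the bearer prefix — the public numbers are the ones above, the pro numbers are invented, and gds_live_ is the only name taken from the real setup. This sketches only tier selection, not the actual counting:

```typescript
type Tier = { name: string; perMinute: number; perDay: number };

const PUBLIC: Tier = { name: "public", perMinute: 60, perDay: 2000 };
const PRO: Tier = { name: "pro", perMinute: 600, perDay: 50000 }; // hypothetical limits

// Pick a tier from the Authorization header the GPT sends (if any).
function tierFor(authHeader?: string): Tier {
  const token = authHeader?.replace(/^Bearer\s+/i, "") ?? "";
  return token.startsWith("gds_live_") ? PRO : PUBLIC;
}

console.log(tierFor(undefined).name);                // "public"
console.log(tierFor("Bearer gds_live_abc123").name); // "pro"
```

Moving the GPT to the pro tier is then exactly the thirty-second change described above: add the bearer to the Instructions, and the prefix does the rest.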
There's a version of this where I'd built the first six surfaces for MCP and REST, then built a seventh for /developers, then decided to add Custom GPT support and spent a day building a custom integration. That's the version where every new AI-first consumer costs you a sprint. You build for Claude Desktop, then you build for ChatGPT Actions, then you build for whatever Gemini calls their external tool format, and each one is a distinct project because each one has a different wire protocol and a different shape of authentication and a different way of expressing schemas.
The OpenAPI spec is the abstraction that makes all of those the same project. Not because OpenAPI is magic, but because it's the common language. When LangChain imports your spec, when Postman auto-generates a collection from it, when the GPT Store importer reads it — they're all doing the same thing. They're consuming a schema that tells them what your tools do and how to call them. You write the schema once. You don't write the integration.
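"Consuming the schema" is the same small walk for every importer: read the paths, collect the operations, and you have a tool list. A sketch of that walk — the spec object here is abbreviated and hypothetical:

```typescript
type Op = { operationId?: string; summary?: string };
type Spec = { paths: Record<string, Record<string, Op>> };

// In essence, what LangChain, Postman, and the GPT Store importer all do:
// turn a schema into a list of callable tools.
function listTools(spec: Spec): string[] {
  const tools: string[] = [];
  for (const [path, methods] of Object.entries(spec.paths)) {
    for (const [method, op] of Object.entries(methods)) {
      if (op.operationId) {
        tools.push(`${op.operationId} (${method.toUpperCase()} ${path})`);
      }
    }
  }
  return tools;
}

const spec: Spec = {
  paths: {
    "/api/diagnose": { post: { operationId: "diagnose" } },
    "/api/route-by-zip": { get: { operationId: "routeByZip" } },
  },
};

console.log(listTools(spec));
// ["diagnose (POST /api/diagnose)", "routeByZip (GET /api/route-by-zip)"]
```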
The surface count is eight now. Probably nine soon — Gemini's function-calling format reads OpenAPI, Grok's tool use is headed in the same direction, and whatever arrives next quarter will almost certainly want a URL and a schema. I'm not going to keep counting in the title.
The config pack I wrote today is sitting in the repo at docs/custom-gpt/garage-door-doctor.md. It has the system prompt, the conversation starters, the exact Actions setup, and a post-publish checklist for verifying that diagnose is actually being called. Someone on the team could pull it up tomorrow and have the GPT live in under an hour.
That document exists because the work was already done. It's documentation, not implementation.
The cost of adding the eighth surface was zero lines of code and one DNS record. When the work is already done, documentation is all that's left to write. That's the difference.
Seth Shoultes builds things at garagedoorscience.com and writes about them occasionally.