When I wired up the Custom GPT a few days ago, I opened the Actions importer and watched it read my OpenAPI spec. Four seconds. It listed all eight tools — diagnose, routeByZip, getDoorStyles, getActivePromotions, getInspectionReferencePhotos, retrieveLabContext, costEstimate, submitInspection — and gave me a green checkmark.

I've been thinking about those four seconds. Not because it was fast. Because of what it was reading. Everything the import needed, I'd written for earlier reasons. The GPT showed up and asked for exactly those things.

The previous essay was about the pattern holding. This one is about what the pattern required — the specific things an agent needs from your API, and why that list looks so familiar.


OperationIds

ChatGPT's Actions importer will not import a spec without an operationId on every endpoint. Not "will display a warning." Will not import.

That sounds like a papercut. It isn't.

The operationId is the name an agent uses to reason about which tool to call. An agent deciding between post_api_v1_diagnose and post_api_v1_routeByZip is doing disambiguation work with names that carry no meaning. An agent choosing between diagnose and routeByZip is reading intent.

I'd added operationIds because MCP requires named operations to work correctly. By the time the GPT importer ran, they were already there.

Name your operations for what they do, not for where they live.
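A minimal sketch of what that looks like in the spec. The path and summary here are assumptions (the URL-derived name post_api_v1_diagnose suggests a path like this, but I'm illustrating, not quoting the real spec):

```json
{
  "paths": {
    "/api/v1/diagnose": {
      "post": {
        "operationId": "diagnose",
        "summary": "Diagnose a garage door symptom"
      }
    }
  }
}
```

Without the operationId, an importer that auto-generates names falls back to something like post_api_v1_diagnose, which is where the meaningless-name problem comes from.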


Absolute server URLs

Relative URLs break two things: Scalar's interactive doc renderer and ChatGPT's Actions importer. A stranger's agent has no idea what domain your API lives at. A relative /api/v1 path is not a locator. It's a fragment.

My spec declares a single absolute server: "url": "https://garagedoorscience.com". One line. Any agent that reads it knows where to send the request without asking.
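In context, that one line sits in the top-level servers array of the spec:

```json
{
  "servers": [
    { "url": "https://garagedoorscience.com" }
  ]
}
```

Every relative path in the spec resolves against that URL, so the spec locates itself.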

Make your spec self-locating. The agent you haven't met yet is grateful.


Tool descriptions written for when to call them

A human developer figures out, over time, when to reach for each tool. An agent reads the description field and has to decide on the spot, with no second chance to ask.

"Returns diagnostic tree results" is a description of what the tool does. It tells the agent nothing about when to use it.

"Use this when a homeowner describes a symptom in their own words — 'my door won't close' or 'it makes a grinding noise.' Returns likely issues, typical cost range, urgency, and a DIY-safety flag for each." — that's a description that routes correctly.

The description field is not documentation. It's an instruction to an agent who is deciding, right now, whether this is the right tool. Write it that way.
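Put together, the operation reads like a routing instruction. This is a sketch of the shape, not the literal spec; the description text is the one quoted above:

```json
{
  "operationId": "diagnose",
  "description": "Use this when a homeowner describes a symptom in their own words — 'my door won't close' or 'it makes a grinding noise.' Returns likely issues, typical cost range, urgency, and a DIY-safety flag for each."
}
```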


A public privacy policy URL

Not optional for the Custom GPT Store. Not optional for the /developers portal I built for issuing gds_live_ API keys. Anywhere you ask someone — human or automated — for consent before accessing your system, you need a stable URL to your privacy policy.

I'd written it three weeks earlier for the developers portal. It was already live. The GPT Store required it. It was there.

Write it once. Put it somewhere it won't move.


Clear error shapes

Agents treat HTTP status codes as semantic signals: a 5xx means retry, most 4xx codes mean surface the error to the user, and a 404 means the resource does not exist, so stop looking.

When you return 200 with {"error": "not found"} in the body, you've broken that logic. The agent sees success, takes the body at face value, and passes garbage upstream.

Standard HTTP semantics exist because every client already knows them. Don't invent new ones.
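A not-found response that an agent can act on looks like this. The body shape here is illustrative, an assumption rather than the API's actual error format; the point is that the status line carries the signal:

```
HTTP/1.1 404 Not Found
Content-Type: application/json

{ "error": "not_found", "message": "No diagnostic tree matches that symptom." }
```

The status code tells the agent what to do; the body tells the user why.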


Input schemas that exclude nonsense

{"zipOrLocation": string} — with no description — means an agent will pass "the user's garage door" as the input. I've seen this happen.

The description field on a parameter is not optional. "A 5-digit US ZIP code (e.g. '84770') or a city name (e.g. 'St. George, UT'). Do not pass a full address." — that's a constraint. It shapes the input the agent will generate.
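As a spec fragment, the constrained parameter might look like this. Whether zipOrLocation travels as a query parameter or in the request body is an assumption; the description text is the one above:

```json
{
  "name": "zipOrLocation",
  "in": "query",
  "required": true,
  "schema": { "type": "string" },
  "description": "A 5-digit US ZIP code (e.g. '84770') or a city name (e.g. 'St. George, UT'). Do not pass a full address."
}
```

The agent generates its arguments from exactly this text, so the constraint lives where the agent will read it.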


An opt-in flag for LLM-assisted paths

This is the subtle one, and I only understood it after building the widget.

The diagnose endpoint supports a useLlmFallback: true flag. When the deterministic lookup misses — when someone types "sounds like a gunshot" and no alias matches — the fallback runs a Haiku classification pass and tries again. I wrote about that decision here.

But make the fallback the default and you've created a problem: most agents calling your API already have their own LLM. They're not asking you to run a model on their behalf. If you silently fire a second one on top of theirs, you've added latency, cost, and non-determinism they didn't ask for.

Let them choose. Default to the deterministic path. Expose useLlmFallback as an explicit opt-in. Agents that want it will pass it. Agents that don't, won't.
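A request that opts in might look like this. The symptom field name is a guess for illustration; the flag is the real one:

```json
{
  "symptom": "sounds like a gunshot",
  "useLlmFallback": true
}
```

Omit the flag and the endpoint stays deterministic: same input, same output, no second model in the loop.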


A shallow surface to crawl first

/llms.txt is a plaintext index of your site's machine-readable content — something an agent can crawl in a single request before deciding whether to go deeper. /ai is a tactical TL;DR: what tools exist, what they return, what the rate limits are. /openapi.json is the schema.
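The llms.txt convention is a short markdown index: an H1, a one-line summary, then linked sections. A sketch of what such a file could contain for this site, with the entries as assumptions rather than the live file:

```
# Garage Door Science

> Deterministic garage door diagnostics and inspection API.

## API
- /openapi.json: full schema for all eight tools
- /ai: tactical TL;DR of tools, return shapes, and rate limits

## Developers
- /developers: API key signup and privacy policy
```

One request tells a crawling agent what exists and where to go deeper.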


The list above is the price of admission.

Seth Shoultes builds things at garagedoorscience.com and writes about them occasionally.