OpenAI just dropped their prompting guide for GPT-5.5, and if you skim past the cheerful examples, the underlying message is essentially a demolition order for your current prompts. Now that the model is fully available in the API, developers are rushing to swap the endpoint and declare victory. The guide explicitly warns against this.
The catch: Do not treat GPT-5.5 as a drop-in replacement for earlier models.
OpenAI is telling builders to begin migration with a fresh baseline rather than carrying over every instruction from an older prompt stack. You are supposed to start with the absolute smallest prompt that preserves the product contract. From there, you tune the reasoning effort, dial in the verbosity, clarify tool descriptions, and enforce output formats against representative examples. If you have a massive, four-page system prompt full of superstitious workarounds, begging, emotional manipulation, and bizarre formatting tricks accumulated over the last two years—throw it out.
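The workflow above can be sketched in code. This is a minimal illustration, not the guide's own example: the request shape follows the OpenAI Responses API as documented for the GPT-5 family (`reasoning.effort`, `text.verbosity`), the model id `"gpt-5.5"` and the "Acme Billing" product contract are assumptions for the sketch.

```python
# Hypothetical sketch: start from the smallest prompt that preserves the
# product contract, then tune effort/verbosity against representative examples.
MINIMAL_SYSTEM_PROMPT = (
    "You are the support assistant for Acme Billing. "  # hypothetical product
    "Answer only billing questions and cite the invoice ID in every reply."
)

def build_request(user_message: str,
                  effort: str = "medium",
                  verbosity: str = "low") -> dict:
    """Assemble a fresh-baseline request with no legacy prompt debt.

    `effort` and `verbosity` are the tuning dials; adjust them against
    representative examples rather than re-adding old workarounds.
    """
    return {
        "model": "gpt-5.5",                     # assumed model id
        "instructions": MINIMAL_SYSTEM_PROMPT,  # the whole prompt stack
        "input": user_message,
        "reasoning": {"effort": effort},        # "low" | "medium" | "high"
        "text": {"verbosity": verbosity},       # "low" | "medium" | "high"
    }
```

The resulting dict can be passed straight to `client.responses.create(**build_request(...))`; the point is that everything beyond these few fields has to earn its way back in.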
Translation: The new model is apparently capable enough that micromanaging it with legacy prompt debt actually degrades performance. Over-specifying the path narrows the search space and forces the model into weird, mechanical corners.
What changed: Tool-calling workflows now require communication etiquette.
One of the more interesting technical recommendations involves latency masking for multi-step tasks. OpenAI recommends that before a long-running task triggers its tool calls, the application should send a short, one-to-two sentence user-visible update acknowledging the request and stating the first step. This isn't just about politeness. It is a structural fix for the reality of agentic latency. Because GPT-5.5 can spend considerable time thinking, reasoning, and executing tools before returning a final answer, dropping a quick status update prevents the user from assuming the model crashed.
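The acknowledge-then-work pattern is simple to wire up. A minimal sketch, where `send_to_ui` and `run_tool` are hypothetical placeholders for your own transport and tool-execution layers:

```python
from typing import Callable

def handle_turn(tool_calls: list[dict],
                send_to_ui: Callable[[str], None],
                run_tool: Callable[[dict], str]) -> list[str]:
    """Surface a short user-visible update before long-running tool calls."""
    if tool_calls:
        first = tool_calls[0]["name"]
        # One-to-two sentences: acknowledge the request, state the first step.
        send_to_ui(
            f"Got it. I'll start by running {first}; this may take a moment."
        )
    # Now do the slow work; the user already knows the system is alive.
    return [run_tool(call) for call in tool_calls]
```

The update is emitted before any tool executes, so even if the first call takes minutes, the chat window never looks frozen.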
We are already seeing this behavior in the wild with their Codex app, and it genuinely helps bridge the gap between "instant text generation" and "asynchronous autonomous work." It makes the system feel like a reliable worker rather than a freezing chat window.
To make this migration easier, OpenAI suggests running a specific command directly in Codex to upgrade your existing code using advice embedded in their `openai-docs` skill: `$openai-docs migrate this project to gpt-5.5`. The upgrade guide the coding agent follows includes light instructions on how to rewrite prompts to better fit the model.
It is fascinating to see OpenAI openly acknowledge that their older models forced developers into terrible habits. Starting from scratch is painful for teams with tight deadlines. But trusting that an existing prompt, hyper-optimized for the quirks of GPT-4 or GPT-5.2, will magically sing with GPT-5.5 is pure fantasy.
It is time to delete your prompt debt.
In short
OpenAI released detailed guidance on prompting GPT-5.5, and the primary lesson is demolition. Treat it as a new model family, delete your bloated prompt preambles, and keep users updated while the model thinks and works through its tool calls.