Welcome to the OpenAI developer update—your source for what’s shipping, how teams are building, and practical guidance straight from our engineers. Not in our Discord yet? Join to hear directly from us and stay in touch with the OpenAI developer community.

What shipped, what changed, what to automate next

Your new command center for agentic coding
The Codex app is designed for multi-tasking with agents and connects to all your tools. Supervise parallel workstreams, review changes cleanly, and stay in control from background execution to shipped code. And for a limited time, try Codex in ChatGPT Free and ChatGPT Go.
Make Codex smarter everywhere
GPT-5.3-Codex makes Codex a more autonomous coworker across all Codex surfaces—the app, CLI, IDE extensions, and web—and is now available in the Responses API, too. We also released a research preview of GPT-5.3-Codex-Spark, a smaller, faster model built for real-time coding.

Speed up tool-heavy agent runs with WebSockets
WebSockets in the Responses API speed up agent runs by 20–40% by maintaining a persistent connection, so you send only new inputs each turn instead of the full context.

Build longer-running agents with the API
We’ve added an OpenAI-hosted container (with networking), Skills, and automatic compaction to the Responses API. Together these enable long-running workflows that execute computer tasks more reliably and work across millions of tokens.

Upgrade your Realtime voice stack
If you’re building voice agents, gpt-realtime-1.5 is the new default: stronger multilingual conversation, tighter instruction following, and more reliable tool calling.

Generate images in bulk with the Batch API
You can now use the Batch API with our gpt-image models to send async groups of requests at 50% lower cost, with access to a separate pool of higher rate limits. This applies to gpt-image-1.5, chatgpt-image-latest, gpt-image-1, and gpt-image-1-mini.

Guides, blogs, and things worth cloning

Automate the repetitive stuff: Andrew from the Codex engineering team shows how he uses Codex Automations to offload recurring tasks—what to automate, how to set it up, and where it saves real time.

Work on two things at once (without chaos): Joey from the Codex engineering team demos parallel development with Codex worktrees: delegate a feature in one worktree, keep coding in another, then review and land both PRs. Parallel workflows turn waiting time into progress.
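Circling back to the Batch API item above: batch jobs take a JSONL input file where each line is one self-contained request. Here is a minimal Python sketch of building that file for bulk image generation; the `/v1/images/generations` endpoint path, the `size` field, and the commented upload/create calls follow the documented Batch API shape, but check the current API reference before relying on them.

```python
import json

def image_batch_line(custom_id: str, prompt: str, model: str = "gpt-image-1-mini") -> str:
    """One JSONL line: a single image-generation request for the Batch API."""
    request = {
        "custom_id": custom_id,           # your own ID, echoed back in the results file
        "method": "POST",
        "url": "/v1/images/generations",  # assumed image endpoint for batches
        "body": {"model": model, "prompt": prompt, "size": "1024x1024"},
    }
    return json.dumps(request)

prompts = ["a watercolor fox", "a neon city at night"]
jsonl = "\n".join(image_batch_line(f"img-{i}", p) for i, p in enumerate(prompts))

# Upload the file and create the batch (requires the openai SDK and an API key):
# from openai import OpenAI
# client = OpenAI()
# f = client.files.create(file=open("images.jsonl", "rb"), purpose="batch")
# batch = client.batches.create(
#     input_file_id=f.id,
#     endpoint="/v1/images/generations",
#     completion_window="24h",
# )
```

The `custom_id` is what lets you match results back to prompts, since batch output lines are not guaranteed to arrive in input order.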
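On the WebSockets item above, the savings come from not re-sending context on every turn. This toy Python sketch (purely illustrative byte counts, not the actual wire protocol) compares what a stateless HTTP client sends, the full input list, against a delta-only message over a persistent connection:

```python
import json

def payload_sizes(turns):
    """Bytes sent per turn: full-context resend (HTTP) vs. delta-only (WebSocket)."""
    full, delta, context = [], [], []
    for turn in turns:
        context.append(turn)
        full.append(len(json.dumps({"input": context})))  # resend everything so far
        delta.append(len(json.dumps({"input": [turn]})))  # send only the new item
    return full, delta

full, delta = payload_sizes(
    [{"role": "user", "content": f"turn {i}"} for i in range(5)]
)
```

The full-context payload grows linearly with conversation length while the delta stays flat, which is why the speedup is largest for tool-heavy runs with many turns.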
Build agents that can run for hours: The shell tool, Skills, and automatic compaction are the core primitives for long-running agents. Get our tips on practical patterns for executing commands, reusing behaviors, and keeping context stable over time.
Build agents that behave (and remember): Two Cookbook articles break down how prompt structure shapes agent behavior and reliability, and how context engineering determines what gets retained vs. dropped.
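The "retained vs. dropped" decision above can be sketched as a trimming policy. This is a hypothetical, deliberately simple one: pin the system message and evict the oldest turns once a budget is exceeded; the word-count "tokenizer" is purely illustrative, not a real token counter.

```python
def trim_context(messages, budget=50):
    """Keep the system message pinned; drop oldest turns over a word budget."""
    system, rest = messages[0], list(messages[1:])

    def cost(msgs):
        # Crude stand-in for token counting: whitespace-separated words.
        return sum(len(m["content"].split()) for m in msgs)

    while rest and cost([system, *rest]) > budget:
        rest.pop(0)  # evict the oldest conversational turn first
    return [system, *rest]

history = [{"role": "system", "content": "You are terse."}] + [
    {"role": "user", "content": "word " * 20} for _ in range(5)
]
trimmed = trim_context(history, budget=50)
```

Real context engineering is usually smarter than oldest-first eviction (summarizing dropped turns, pinning tool results), but the shape of the decision is the same: a fixed budget plus an explicit retention policy.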
Did your agent improve... or just change?: Evals turn “feels better” into actual checks: did the skill trigger, do the right thing, and leave things clean? Running evals helps you catch regressions early.

Stop recomputing the same prompts: Prompt caching reuses shared prompt prefixes to make AI apps faster and cheaper at scale.

From side projects to professional work

Go behind the scenes with OpenAI engineers

Harness engineering: leveraging Codex in an agent-first world
Note from the author: “Our team has varied experience, and humans can’t fully mind-meld. Codex can: when we encode ‘what good looks like’ in the repo, every run applies it consistently—and the gains compound.” - Ryan Lopopolo, Member of Technical Staff at OpenAI. Keep up with his work on X.

Inside the Postgres Setup Powering 800M ChatGPT Users
Note from the author: “Before OpenAI, I learned the hard way that startups often over-engineer. Here, I’ve been impressed by how far a deliberately simple setup—a single PostgreSQL writer—can scale with the right optimizations and discipline. That inspired me to share what I’ve seen.” - Bohan Zhang, Member of Technical Staff at OpenAI. Keep up with his work on X.

What Actually Happens Inside an AI Coding Agent (We Unrolled It)
Note from the author: “Despite being on Codex CLI since the beginning, I learned a lot from putting this post together, so hopefully you will, too!” - Michael Bolin, Member of Technical Staff at OpenAI. Follow more of his journey on Threads.

Happy building and vibing,
The OpenAI Team