The news: OpenAI launched Codex, a cloud-based professional coding agent for ChatGPT.
- Codex can generate code, fix bugs, test its own output, and suggest ways to use the code.
- The web-based agent runs in its own mini-computer environment and is currently available in research preview for ChatGPT Pro, Team, and Enterprise users.
- Companies such as Cisco, Superhuman, and Kodiak are already using Codex.
How it works: The agent is accessible through ChatGPT and can be assigned tasks using natural-language prompts. Codex can also answer questions about a user’s codebase, read and edit files, and provide a log of every action it performs.
- The tool takes 1 to 30 minutes to complete a task, per OpenAI. It cannot process image inputs or be course-corrected mid-task.
- It’s powered by a coding-optimized version of OpenAI’s o3 reasoning model and is blocked from accessing the internet for safety.
Zooming out: AI agents are a hot topic, and tools that can generate and modify code for workers are a major area of focus for developers. Code creation is the fifth-most-popular application of AI, per Filtered.
- 75% of software engineers will use generative AI (genAI) code assistants by 2028, up from less than 10% in early 2023, per Gartner.
- Google’s Gemini Code Assist, Lightrun’s AI code debugger, and AnySphere’s Cursor coding assistant are all vying to help enterprises and casual users.
Why does it matter? This move could position OpenAI as a prominent AI coding provider in its own right, rather than just the infrastructure behind other services’ coding tools.
- OpenAI is a client of Cursor’s coding workflow services. Codex could let OpenAI rely exclusively on internal coding products, reducing dependence on third parties.
- OpenAI is also in talks to purchase startup Windsurf for $3 billion, per a report in The New York Times; a deal would greatly expand its coding toolbox.
Our take: Codex has the potential to become a household name for enterprises, much like ChatGPT has for consumers. It could encourage more businesses to become fully reliant on OpenAI’s catalog of models.
However, data privacy could be a risk, especially if Codex is able to read companies’ internal code and use it to train OpenAI models or improve its own output for other users.