Sharing an event we're hosting next week with Romain Huet, Head of Developer Experience at OpenAI. Feel free to join via Zoom for the event & Q&A! More below. Thanks.
On Jan 21st at 10 am PT, join the leaders of Codex and OpenAI DevEx for a behind-the-scenes look at how OpenAI's engineering teams use Codex day-to-day. We'll cover practical habits, default configurations, and enterprise best practices - plus a live demo showing realistic parallelized workflows on production-style codebases.
What we'll cover:
- How OpenAI uses Codex internally - the stack and workflows their eng teams rely on
- Live demo: parallelized task execution (implementation + tests + PR notes), handling real snags, and reviewable output hygiene
- Best practices for code review, security, repo conventions, and CI integration
- Collaboration patterns and guardrails for enterprise teams
- Where Codex fits in the broader AI coding landscape and what’s ahead
Who Should Attend: Engineering leaders, Heads of Product, and technical teams evaluating or scaling AI-assisted development workflows, especially those managing large, production-scale codebases and looking to move beyond chat-based copilots.
Bessemer's Research to Runtime series brings together early users of emerging AI engineering tools with their original creators for thoughtful discussion, demos, and insight into how to build AI systems at scale. Access past sessions at https://researchtoruntime.com/
REGISTER HERE: https://bvp.zoom.us/webinar/register/WN_bul7bYg6RcCXBuxl30Kw...
Can't wait to 10x my SPM