YouTube OpenAI
www.youtube.com/channel/UCXZCJLdBC09xxGZ6gcdrc6A
Introducing ChatGPT Pulse
Today we’re releasing a preview of ChatGPT Pulse to Pro users—a new experience where ChatGPT proactively does research to deliver personalized updates based on your chats, feedback, and connected apps. Each night ChatGPT learns what matters to you—pulling from memory, chats, and feedback—then delivers focused updates the next day. Expand updates to dive deeper, grab next steps, or save for later so you stay on track with clear, timely info. Pulse is the first step toward a more useful ChatGPT that proactively works on your behalf, and this preview lets us work with power users to learn, iterate, and improve before rolling it out more broadly.

Codex and the future of coding with AI — the OpenAI Podcast Ep. 6
What happens when AI becomes a true coding collaborator? OpenAI co-founder Greg Brockman and Codex engineering lead Thibault Sottiaux talk about the evolution of Codex—from the first glimpses of AI writing code, to today’s GPT-5 Codex agents that can work for hours on complex refactorings. They discuss building “harnesses,” the rise of agentic coding, code review breakthroughs, and how AI may transform software development in the years ahead.
Chapters
1:15 – The first sparks of AI coding with GPT-3
4:00 – Why coding became OpenAI’s deepest focus area
7:20 – What a “harness” is and why it matters for agents
11:45 – Lessons from GitHub Copilot and latency tradeoffs
16:10 – Experimenting with terminals, IDEs, and async agents
22:00 – Internal tools like 10x and Codex code review
27:45 – Why GPT-5 Codex can run for hours on complex tasks
33:15 – The rise of refactoring and enterprise use cases
38:50 – The future of agentic software engineers
45:00 – Safety, oversight, and aligning agents with human intent
51:30 – What coding (and compute) may look like in 2030
57:40 – Advice: why it’s still a great time to learn to code

Build Hour: Codex
Codex is now one agent for everywhere you code — connected by your ChatGPT account. This Build Hour is a hands-on walkthrough of how to use all its features, including the new IDE extension and code review. Dominik Kundel (Developer Experience) and Pranav Deshpande (Product Marketing) cover:
- What’s new with Codex? IDE extension, revamped Codex CLI, code review, and local-to-cloud handoffs
- How Codex works: where you can use it, and where it runs
- Live demos of pair programming with the Codex CLI and IDE extension
- Best practices for structuring your codebase and delegating tasks to the Codex cloud agent
- Live Q&A
👉 Follow along with the code repo: https://github.com/openai/build-hours
👉 Codex docs: https://developers.openai.com/codex
👉 Agents.md: https://agents.md/
👉 Codex CLI repo: https://github.com/openai/codex
👉 Sign up for upcoming live Build Hours: https://webinar.openai.com/buildhours/

Lowe’s gets answers faster with GPT-5
Lowe’s has built its AI strategy around three pillars: "how we shop", "how we sell", and "how we work". GPT-5 supports all three by holding context longer, applying reasoning across multiple inputs, and reducing the number of steps needed to reach the right answer. Hear more from Chandhu Nair, Senior Vice President of Data, AI, and Innovation at Lowe’s.

GPT-5 reshapes how teams work at Moderna
At Moderna, scientists who have never written code are using GPT-5 to generate algorithms in minutes from plain-English descriptions. In this short video, Brice Challamel, Head of AI Products & Innovation, shows where GPT-5 is already influencing his work.

Build Hour: GPT-5
GPT-5 is OpenAI’s most steerable reasoning model yet. This Build Hour walks through its new capabilities, how to use it in the Responses API, and practical prompting techniques for coding and agentic tasks. Bill Chen (Applied AI), Eric Han (Research), and Anoop Kotha (Applied AI) cover:
- GPT-5 capabilities: stronger code quality, front-end/UI generation, agentic task reliability
- New parameters: minimal reasoning, verbosity, and free-form function calling
- Live demo: building a Minecraft clone using GPT-5 in the Responses API
- Prompting best practices: avoiding conflicting instructions, meta prompting, and controlling agentic behavior
- Customer spotlight: Charlie Labs shows how they built an autonomous coding agent that works directly in GitHub and Slack workflows (https://www.charlielabs.ai/)
- Live Q&A
👉 Follow along with the code repo: https://github.com/openai/build-hours
👉 GPT-5 Docs: https://platform.openai.com/docs/models/gpt-5
👉 Prompt Optimization Cookbook: https://cookbook.openai.com/examples/gpt-5/prompt-optimization-cookbook
👉 Prompting Guide: https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide
👉 Sign up for upcoming live Build Hours: https://webinar.openai.com/buildhours
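
The new parameters called out above map directly onto Responses API arguments. Here is a minimal sketch assuming the official openai Python SDK and access to gpt-5; the prompt and the specific parameter values are illustrative, not taken from the video.
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Low-latency call: minimal reasoning effort plus terse output.
response = client.responses.create(
    model="gpt-5",
    input="Write a Python function that reverses a linked list.",
    reasoning={"effort": "minimal"},  # GPT-5 option: spend less time reasoning
    text={"verbosity": "low"},        # GPT-5 option: keep the answer short
)

print(response.output_text)
```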

Build Hour: Built-In Tools
Built-in tools let you extend models out of the box without writing custom functions. This Build Hour shows you how to use web search, file search, code interpreter, MCP, and image generation directly with the Responses API, with demos of adding these tools to real applications. Katia Gil Guzman (Developer Experience) covers:
- What are built-in tools? How do they compare to function calling?
- Available tools: web search, file search, MCP, code interpreter, computer use, image generation
- Playground demo: experimenting with tools in the Playground (https://platform.openai.com/chat)
- Live demo: building a data exploration dashboard using MCP, web search, and code interpreter
- Why use built-in tools? Minimal coding, out-of-the-box functionality, and the ability to combine tools
- Customer spotlight: Hebbia’s use of web search for finance and legal workflows (https://www.hebbia.com/)
- Live Q&A
👉 Follow along with the code repo: https://github.com/openai/build-hours
👉 Playground: https://platform.openai.com/chat
👉 Built-In Tools Guide: https://platform.openai.com/docs/guides/tools
👉 Sign up for upcoming live Build Hours: https://webinar.openai.com/buildhours
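
Built-in tools are enabled by listing them in the tools array of a Responses API call; the model decides when to invoke them. Below is a minimal sketch with hosted web search, assuming the openai Python SDK (the tool type string may differ by API version, e.g. web_search vs. web_search_preview); file search, code interpreter, image generation, and MCP are added the same way.
```python
from openai import OpenAI

client = OpenAI()

# Ask a question that needs fresh information and let the model
# decide whether to call the hosted web search tool.
response = client.responses.create(
    model="gpt-5",
    tools=[{"type": "web_search"}],
    input="What changed in the latest openai Python SDK release?",
)

print(response.output_text)
```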

Build Hour: Voice Agents
Voice agents don’t just transcribe anymore — they think, talk, and call tools in real time. This Build Hour demos speech-to-speech agents built with the Realtime API and Agents SDK that can handle conversations natively in audio, reason about context, and call tools while streaming speech back to the user. Brian Fioca and Prashant Mital (Applied AI) cover:
- Why voice agents now: APIs to the real world, expressive + accessible interactions
- Architectures: chained speech-to-text vs. end-to-end speech-to-speech models
- Live demo: building a voice-powered workspace manager + designer agent with handoffs
- Best practices: evals, guardrails, and delegation
- Live Q&A
👉 Follow along with the code repo: https://github.com/openai/build-hours
👉 Check out the voice agents guide: https://platform.openai.com/docs/guides/voice-agents
👉 Sign up for upcoming live Build Hours: https://webinar.openai.com/buildhours
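
The workspace manager + designer demo is built on the handoff pattern from the Agents SDK. Here is a minimal, text-only sketch of that pattern using the openai-agents Python package; the agent names and instructions are invented for illustration, and the voice version layers the Realtime API on top of the same agent definitions.
```python
from agents import Agent, Runner

# Specialist agent the front-line agent can hand off to.
designer = Agent(
    name="Designer",
    instructions="Produce short, concrete design briefs for workspace layouts.",
)

# Front-line agent: answers directly or hands off design questions.
workspace_manager = Agent(
    name="Workspace manager",
    instructions=(
        "Help the user manage their workspace. "
        "Hand off to the Designer for layout or design requests."
    ),
    handoffs=[designer],
)

result = Runner.run_sync(workspace_manager, "Redesign my desk setup for two monitors.")
print(result.final_output)
```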

Build Hour: Reinforcement Fine-Tuning
Reinforcement fine-tuning (RFT) lets you improve how models reason by training with graders instead of large labeled datasets. This Build Hour shows you how to set up tasks, design grading functions, and run efficient training loops with just a few hundred examples. Prashant Mital and Theophile Sautory (Applied AI) cover:
- Intro to RFT: optimization, fine-tuning options, RFT benefits
- Task setup: prompts, graders, and training and validation data
- Live demo: building and running RFT for a classification task
- RFT workflow: from dataset selection to evaluating and iterating
- Customer spotlight: Accordance uses RFT models for tax and accounting workflows (https://accordance.com/)
- Live Q&A
👉 Follow along with the code repo: https://github.com/openai/build-hours
👉 RFT Cookbook: https://cookbook.openai.com/examples/reinforcement_fine_tuning
👉 RFT Use Case Guide: https://platform.openai.com/docs/guides/rft-use-cases
👉 Sign up for upcoming live Build Hours: https://webinar.openai.com/buildhours
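
To make the task setup concrete, here is a rough sketch of the two pieces an RFT classification job needs: training examples that carry a reference answer, and a grader that scores the model's output against it. The field names and template variables below are illustrative assumptions; the RFT cookbook and use-case guide linked above document the exact schema.
```python
# One training example: the prompt plus a reference label the grader can read.
# (Illustrative field names; check the RFT docs for the required format.)
training_example = {
    "messages": [
        {"role": "user",
         "content": "Classify the sentiment of: 'The battery died within an hour.'"}
    ],
    "correct_label": "negative",
}

# A simple string-check grader: reward when the sampled answer
# exactly matches the stored label.
string_check_grader = {
    "type": "string_check",
    "name": "label_match",
    "operation": "eq",
    "input": "{{sample.output_text}}",
    "reference": "{{item.correct_label}}",
}
```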

Build Hour: Agentic Tool Calling
In 2025, agents don’t just think — they run code, call tools, and complete tasks. This Build Hour is a hands-on walkthrough of how to design agentic systems that reason and act using OpenAI’s latest APIs and SDKs. Ilan Bigio (Developer Experience) covers:
- What’s new in 2025: Responses API, Agents SDK, Hosted Tools, Codex, and more
- Chain of thought concepts: reasoning, tool calling, and long-horizon tasks
- Live demo: building an agentic task system to process a backlog of tickets
- Delegation: directional guidance for evals
- Live Q&A
👉 Follow along with the code repo: https://github.com/openai/build-hours
👉 Check out additional resources: https://developers.openai.com/
👉 Sign up for upcoming live Build Hours: https://webinar.openai.com/buildhours
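
The core loop behind an agentic ticket system is plain function calling: define a tool, let the model call it, execute the call, and feed the result back. A minimal sketch with the Responses API and the openai Python SDK follows; the get_ticket tool and its canned result are hypothetical stand-ins for a real ticketing backend.
```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical tool the agent can call while working a ticket.
tools = [{
    "type": "function",
    "name": "get_ticket",
    "description": "Fetch a support ticket by id.",
    "parameters": {
        "type": "object",
        "properties": {"ticket_id": {"type": "string"}},
        "required": ["ticket_id"],
    },
}]

response = client.responses.create(
    model="gpt-5",
    tools=tools,
    input="Summarize ticket TICKET-42 and propose a next step.",
)

# Execute any tool calls the model made and return their outputs.
tool_outputs = []
for item in response.output:
    if item.type == "function_call" and item.name == "get_ticket":
        args = json.loads(item.arguments)
        ticket = {"id": args["ticket_id"], "status": "open",
                  "summary": "Login fails on mobile after password reset."}
        tool_outputs.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": json.dumps(ticket),
        })

final = client.responses.create(
    model="gpt-5",
    tools=tools,
    previous_response_id=response.id,
    input=tool_outputs,
)
print(final.output_text)
```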

Build Hour: Image Gen
Over 130M people created images with Image Gen in its first week inside ChatGPT. Now, with Image Gen in the API, developers can build the same capabilities directly into their own apps and platforms. This Build Hour walks through gpt-image-1 in the API, with demos on streaming, editing, and masking for real-world apps. Bill Chen (Applied AI) covers:
- What’s new: text rendering, world knowledge, image inputs
- New capabilities: streaming, multi-turn editing, masking
- Best practices: picking the right API (Image vs. Responses), customizing outputs, handling latency & UX tradeoffs
- Live demo: building an AI-powered photobooth from scratch
- Customer spotlight: create AI presentations using Gamma (https://gamma.app/)
- Live Q&A
👉 Follow along with the code repo: https://github.com/openai/build-hours
👉 Check out the Image Gen Guide: https://platform.openai.com/docs/guides/image-generation
👉 Sign up for upcoming live Build Hours: https://webinar.openai.com/buildhours
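
Calling gpt-image-1 from the Images API takes only a few lines. The sketch below assumes the openai Python SDK and saves the base64-encoded result to disk; the prompt and filename are placeholders.
```python
import base64
from openai import OpenAI

client = OpenAI()

# Generate one image; gpt-image-1 returns base64-encoded image data.
result = client.images.generate(
    model="gpt-image-1",
    prompt="A retro photobooth strip of a golden retriever wearing sunglasses",
    size="1024x1024",
)

with open("photobooth.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```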

Bain accelerates client value with GPT-5
Bain & Company has embedded GPTs across its business, with dozens of proprietary applications and nearly 25,000 custom versions in use. Gene Rapoport, Partner & AI Leader in the Private Equity practice, shares how GPT-5 drives better outputs — and improved client results — by powering everything through one consistent, flexible model.

OpenAI to Z Challenge
10,000+ people joined the OpenAI to Z Challenge to explore how AI can push the archaeological frontier in the Amazon. The winner is Team Black Bean, which used deep learning on public LiDAR + satellite data to build maps that surface what's under rainforest canopies. ➡️ Black Bean’s submission: https://www.kaggle.com/competitions/openai-to-z-challenge/writeups/amazon-archeological-site-discovery-a-deep-learnin

Introducing gpt-realtime in the API
Join Brad Lightcap, Peter Bakkum, Beichen Li, Liyu Chen, Julianne Roberson, and Srini Gopalan as they introduce and demo our most advanced speech-to-speech model and new API features like MCP, SIP, image input, and more.
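
For a sense of what using gpt-realtime looks like outside the demo, here is a minimal sketch of opening a Realtime API session over WebSocket with the third-party websockets library and requesting a single response. The event shapes follow the Realtime API docs as I understand them, but treat the details (model name, headers, event fields) as assumptions to verify against the current reference.
```python
import asyncio
import json
import os

import websockets  # pip install websockets


async def main():
    url = "wss://api.openai.com/v1/realtime?model=gpt-realtime"
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

    # On older websockets versions, pass extra_headers= instead.
    async with websockets.connect(url, additional_headers=headers) as ws:
        # Configure the session, then ask the model for one reply.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"instructions": "You are a concise voice assistant."},
        }))
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"instructions": "Say hello in one sentence."},
        }))

        # Print server events until the response finishes.
        async for message in ws:
            event = json.loads(message)
            print(event["type"])
            if event["type"] == "response.done":
                break


asyncio.run(main())
```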