
Mon, May 11 at 6:00 AM
AI writes 60% of Airbnb's code 🤖
sixty percent? for real?

Brian Chesky said it on the Q1 earnings call: AI wrote 60% of the code Airbnb engineers shipped last quarter. That puts them ahead of Microsoft and Google (each around 30%) and in the same neighborhood as Spotify.

so are engineers cooked or what?

Airbnb claims one engineer can now spin up agents to do the work of 20. The bearish read is that math eventually hits the headcount slides. Customer support is already there. Airbnb's support bot now handles 40% of issues without a human, up from 33%.

How much of YOUR work do you think AI is doing right now?


does it actually work for travel though?

Chesky says no, not yet. The chatbot UI is wrong for travel: too much text, no sliders, hard to compare options, and most trips are planned by groups. AI is rewriting Airbnb from the inside, but the customer-facing AI isn't there yet.

4x more context into every prompt. Zero extra effort.

You think faster than you type. Which means every typed prompt leaves out the constraints, examples, and edge cases that would have made the output actually useful.

Wispr Flow turns your voice into paste-ready text inside any AI tool. Speak naturally — include "um"s, tangents, half-finished thoughts — and Flow cleans everything up. You get detailed, structured prompts without touching a keyboard.

89% of messages sent with zero edits. Used by teams at OpenAI, Vercel, and Clay. Free on Mac, Windows, and iPhone.

OpenAI drops three new voice models 🎙️
a real-time translator?

Yep, and two more. OpenAI launched three voice models in its API Thursday. GPT-Realtime-2 handles messy multi-step conversations. GPT-Realtime-Translate does live translation across 70 input and 13 output languages. GPT-Realtime-Whisper handles live transcription.

who's actually gonna use this?

Customer service is the obvious target, but OpenAI is pitching it broader: education, media, creator platforms. Zillow saw call success rates jump 26 points in early testing. Voice is finally moving past call-and-response into something that can listen, reason, and act mid-conversation.

weekly scoop 🍦
🗣️ weekly challenge: build your own real-time voice agent
what's the move?

OpenAI just dropped three voice models built for real conversations. This week, you're building one for yourself. A pocket polyglot, a meeting transcriber, or a voice tutor (your choice).

Here's what to do:

🎯 Step 1: Pick your agent's job. Three options:

  1. A live translator for conversations in a language you don't speak.

  2. A meeting buddy that transcribes Zooms and pulls action items.

  3. A voice tutor that quizzes you on what you're studying.

Pick whichever would actually make your week easier.

🔑 Step 2: Get into the OpenAI Realtime API. Grab an API key at platform.openai.com and open the Realtime API docs. Use GPT-Realtime-Translate for the translator, GPT-Realtime-Whisper for the meeting buddy, and GPT-Realtime-2 for the tutor.
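If you want to sanity-check the connection details before you start prompting, the Realtime API speaks WebSocket. A minimal sketch of picking a model and building the connection params — the endpoint shape follows OpenAI's documented Realtime WebSocket pattern, but the lowercase model slugs are guesses from the announced names, so verify both against the docs:

```javascript
// Sketch: map each challenge option to a model and build connection params.
// Model slugs are assumptions based on the announced names — check the docs.
const REALTIME_URL = "wss://api.openai.com/v1/realtime";

const MODELS = {
  translator: "gpt-realtime-translate", // live translation
  meeting: "gpt-realtime-whisper",      // live transcription
  tutor: "gpt-realtime-2",              // multi-step conversation
};

function connectionParams(job, apiKey) {
  const model = MODELS[job];
  if (!model) throw new Error(`unknown job: ${job}`);
  return {
    url: `${REALTIME_URL}?model=${model}`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "OpenAI-Beta": "realtime=v1",
    },
  };
}
```

In a browser you'd hand the `url` to `new WebSocket(...)` (browser WebSockets can't set headers, so browser apps typically mint a short-lived session token server-side first — another thing the docs cover).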

🛠️ Step 3: Vibe-code the wrapper. Open Claude Code or Cursor and prompt: "Build me a simple browser app that connects to the OpenAI Realtime API using [your model] and lets me talk into my microphone." Paste the docs into context. Twenty minutes to a working voice loop.
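Whatever Claude or Cursor generates, the loop boils down to three parts: open the socket, send a session config, and handle streamed events. A sketch of the two message-shaping pieces — the event and message names follow the Realtime API's published schema, but treat them as assumptions and confirm against the docs:

```javascript
// First message after the socket opens: configure the session.
function sessionConfig(instructions) {
  return {
    type: "session.update",
    session: {
      modalities: ["audio", "text"],
      instructions, // e.g. "Translate everything I say into Spanish."
      input_audio_transcription: { model: "whisper-1" },
    },
  };
}

// Pull readable text out of streamed server events; ignore everything else.
function extractText(event) {
  switch (event.type) {
    case "response.text.delta":
    case "response.audio_transcript.delta":
      return event.delta;
    default:
      return "";
  }
}
```

In the browser, the remaining work is piping `getUserMedia` audio into the socket and appending `extractText` output to the page.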

🎤 Step 4: Stress test it. Translator: try it with a friend who speaks another language. Meeting buddy: run it during a real call and check the transcript. Tutor: see if it can actually push back when you're wrong.

📲 Step 5: Ship it to your group chat. Send the demo to the smartest person you know. The people who built one in May 2026 will sound a lot smarter about voice agents in May 2027.

If AI is writing 60% of Airbnb's code today, what's that number look like in 18 months? And which of OpenAI's three voice models do you think actually goes mainstream first?

Zoe from Overclocked 
