Mon, March 30 at 6:00 AM
Meta is trading humans for H100s 📉
I guess the "year of efficiency" is turning into the decade of layoffs...

Pretty much. Meta just axed several hundred more employees across Reality Labs, Instagram, and WhatsApp. It’s a drop in the bucket compared to their 79,000-person headcount, but the timing is loud. While pink slips go out, Zuck is opening the checkbook for AI, projecting up to $135 billion in capital expenditures this year alone.

Is it ethical for companies to lay off staff while spending billions on AI?


👀 geesh... they're cutting more jobs?

Yep. This follows a January cut of 1,000 workers. Affected Reality Labs staff were told to work from home before getting the news. Meta says it's just restructuring to meet its goals, but it's hard to ignore the shift toward a leaner, AI-first workforce.

…$600 billion on infrastructure though?

The goal is clear: build the biggest AI infrastructure on earth. They're planning to spend $600 billion on U.S. data centers by 2028. Zuckerberg says AI is already disrupting internal workflows, with engineers using agents to code faster. It's a brutal pivot: cutting recruiting to fund the quest for superintelligence.

Google's new voice model is scary good 🎙️
so it can finally hear frustration in my voice?

Yup… Google just dropped Gemini 3.1 Flash Live, and it’s a massive leap for real-time talk. It’s not just faster; it understands pitch, pace, and emotion. If you sound frustrated, the AI actually adjusts its response dynamically. It even crushed benchmarks for reasoning through interruptions.

it actually understands tone?

It’s their most natural voice model yet. It can follow a conversation thread for twice as long and filters out background noise better than ever. Verizon and Home Depot are already praising the enterprise version for its reliability.

and what's all this I'm hearing about Search Live?

You heard right. Google is also rolling out Search Live to over 200 countries. It supports 90+ languages and lets you have multimodal conversations with the web. It’s basically a supercharged researcher in your pocket.

weekly scoop 🍦
📸 weekly challenge: breathe life into your vacation photos
what's the challenge?

Video generation models have gotten insanely good at turning static images into cinematic clips. Runway, Kling, Minimax, Hailuo, Veo, Sora (while it’s still available) - take your pick. This week, we're taking a boring vacation photo and breathing life into it.

Here's what to do:

🖼️ Step 1: Pick your base image Grab a high-quality vacation photo or any image with clear visual elements. Landscapes, portraits, and cityscapes all work great.

📽️ Step 2: Choose your tool Head to Runway Gen-4.5, Kling, Minimax, or Hailuo and upload your image. Most of these models now support image-to-video natively.

🎯 Step 3: Guide the motion Use motion controls to tell the model what should move - clouds drifting, waves crashing, hair blowing in the wind. The more specific you are, the better the result.

🎬 Step 4: Refine and export Keep the motion subtle - less is almost always more with these tools. Generate a few variations, pick your favorite, and download your clip.

🚀 Step 5: Level it up Try the same source image across two different models and compare the results. You'll be surprised how different they look.

Is it ethical for companies to lay off staff while spending billions on AI? And would you trust a robot that can "feel" your frustration? We'd love to hear your thoughts!

Zoe from Overclocked 
