Mon, April 6 at 6:00 AM
Microsoft just built its own AI brain 🧠
huh? aren't they partners with OpenAI?

They are, but Microsoft just launched three foundation models built entirely in-house. The release is the first from Microsoft's new AI division, led by Mustafa Suleyman. Recent comments suggest Microsoft isn't just fully on board with AI; it's betting on AI replacing most workers within the next 18 months.

let's hope not. are they good enough to replace workers though?

Not exactly. The lineup includes MAI-Transcribe-1 for multilingual speech-to-text and MAI-Voice-1 for natural-sounding speech synthesis, which are great small models, but not state of the art. The kicker? They're exclusive to Microsoft's Foundry platform, putting them in direct competition with OpenAI.

Is Microsoft starting to pull away from OpenAI?


still no image generation though?

Actually, MAI-Image-2 rounds out the trio, focusing on photorealistic image generation. Enterprise clients like WPP are already using it to create ad content with lifelike lighting and skin tones. Microsoft is still OpenAI's biggest investor, but this launch makes one thing clear: they're building their own future.

Alibaba's new AI model is gunning for Claude 🎯
another open-source release?

Actually, no. Qwen3.6-Plus is proprietary, which is a major strategic shift. Alibaba is moving away from its open-source roots to chase enterprise revenue, and this model is built for it.

what makes it different?

A million-token context window, repo-level coding skills, and multimodal reasoning that lets it work across text, images, and data in a single workflow. Early benchmarks show it approaching Claude's performance in STEM reasoning, complex math, and long-context extraction.
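To picture what "text, images, and data in a single workflow" means in practice, here's a minimal sketch of a multimodal request as it might look through an OpenAI-compatible chat API. The model id `qwen3.6-plus` and the example URLs are assumptions for illustration, not confirmed details from Alibaba.

```python
# Hypothetical sketch: bundling text, an image, and raw data into one
# multimodal chat request. Model id and URLs are illustrative assumptions.

def build_multimodal_request(model: str, question: str,
                             image_url: str, doc_text: str) -> dict:
    """Pack a question, an image reference, and reference data into a
    single OpenAI-style chat payload with mixed content parts."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": f"Reference data:\n{doc_text}"},
                ],
            }
        ],
    }

payload = build_multimodal_request(
    model="qwen3.6-plus",  # assumed model id
    question="Summarize the chart and cross-check it against the data below.",
    image_url="https://example.com/q3-revenue-chart.png",  # placeholder
    doc_text="Q3 revenue: $1.2B; Q2 revenue: $0.9B",
)
print(payload["model"], len(payload["messages"][0]["content"]))
```

The point of the single-payload shape is that the model sees the chart, the numbers, and the question in one context rather than across separate tool calls.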

so where is Alibaba actually putting this model to work?

Alibaba is integrating the model into its Wukong enterprise platform, targeting the booming market for agentic AI. This is part of a five-year plan to hit $100 billion in AI-driven revenue. The model's ability to handle cross-modal analysis in a unified workflow makes it a serious contender for businesses looking to scale autonomous operations.

weekly scoop 🍦
📸 weekly challenge: turn Gemini Live into your personal tour guide
what's the challenge?

This week, we're using Gemini Live to get real-time historical context as you explore a landmark, museum, or historic neighborhood.

Here's what to do:

📱 Step 1: Open the Gemini app. Launch Gemini on your Android device and make sure you have the latest updates.

🎙️ Step 2: Activate Live mode. Long-press your power button and tap "Live" to start a real-time conversation with the assistant.

📸 Step 3: Turn on your eyes. Tap the camera icon within the Live interface to let Gemini see what's in front of you.

🏛️ Step 4: Start exploring. Visit a historical site, museum, or old neighborhood in your town and point your camera at buildings, plaques, or artifacts (be sure to tell Gemini where you are first). Ask "What's the history behind this?" or "When was this built?"

🧠 Step 5: Go deeper. Ask follow-ups as you walk, like "What did this look like 100 years ago?" Gemini keeps the conversation going so each stop builds on the last.

Is Microsoft's move the beginning of the end for the OpenAI alliance? And have Chinese firms finally bridged the gap with U.S. companies?

Zoe from Overclocked 
