Welcome to this week's edition of Overclocked!

This week, Meta reportedly cooks up Mango, a new image and video model aimed at the attention economy. Then, the music industry makes a rare move toward licensed generative tools through a UMG and Splice collaboration. Six quick scoops follow, plus a practical drill. Let’s dive in ⬇️

In today’s newsletter ↓
🪄 Meta’s next visual codename targets your feed
🎛️ UMG tests licensed AI music tools
🪙 Amazon weighs an OpenAI chip cash boost
🔐 Google Cloud nears a $10B security pact
🎧 Weekly Challenge: NotebookLM turns your files into audio briefs

🥭 Meta Builds ‘Mango’ AI Image & Video Generator

Meta just handed the AI world the kind of detail people actually click: an internal roadmap with two codenames that sound like smoothie flavors and a very obvious target.

According to a report on Meta’s image and video model plans, the company is developing an image-and-video-focused model codenamed Mango alongside a next-generation text model called Avocado, with internal timelines pointing to a 2026 release window.

The bet is simple. Visual creation drives attention, and attention is the currency of every major platform. Text can feel interchangeable. A model that can generate short clips, product shots, animated explainers, and template ads from a few prompts changes what creators can ship in a day.

🔒 Open Source Is Getting Repriced

The strategic twist is what Mango implies about Meta’s product direction. Meta built goodwill by pushing open releases, but recent reporting suggests the next generation could tighten distribution and lean into monetization.

A briefing on Meta’s 2026 AI roadmap frames Mango and Avocado as a step toward a more controlled rollout. If that is the move, it is a real shift away from the Llama era and toward a walled garden that sells access, inserts guardrails, and keeps the best outputs inside Meta apps.

🌀 Whoever Owns the Loop Wins

This matters for everyone outside Meta because the growth lever is moving. The winner in 2026 is not the model that writes the best paragraph. It is the model that makes the best clip in ten seconds, then lets you remix it, caption it, and publish it without leaving the app.

Reporting that Meta has internally discussed whether to charge for a future Avocado model underlines the same point. Whoever owns that loop owns the habit, and whoever owns the habit owns the marketplace.

🚨 Power Scales Faster Than Trust

The catch is obvious. As soon as visuals get good, abuse gets easy. Reporting on AI-powered face swapping driving romance scams shows how much faster misuse scales than the trust and safety systems meant to contain it. And we can’t forget the deepfake issues Sora 2 ran into just a few months ago.

If Meta wants Mango to win, it has to ship power with real controls, clear provenance, and fast takedown tooling, because the first viral misuse story will travel farther than any product demo.

🎚️ UMG Teams With Splice

In a new partnership announcement between Universal Music Group and Splice, UMG said it will work with the sample platform to explore AI-powered music creation tools that keep artists involved.

The pitch is not “press a button, get a hit.” It is closer to “bring your own sound” into tools that can generate variations, virtual instruments, and usable building blocks without the guilt of scraping, because the training inputs and sample-pack rights are the whole point.

🎙️ Licensing Beats Guesswork

Splice is a giant sample marketplace, and that detail matters. Most AI music fights happen at the top of the funnel: models trained on everything, then outputs that sound like someone else. Splice lives inside the workflow, where people chop samples, layer drums, and build tracks piece by piece. That is why industry coverage framed the deal as artist-led tooling, not a replacement engine.

👀 What Creators Should Watch For

UMG has also been moving fast on licensing. Labels have sued, settled, and negotiated, and the direction is clear: if AI is going to learn from music, it will happen through contracts and paid access, as with UMG’s recent partnership with Udio.

Reporting on how major labels are reshaping AI music licensing strategies shows why this partnership fits that trajectory: it gives UMG a way to say it is protecting artists while still capturing the upside of generative tools.

Watch for specifics like opt in terms, revenue splits, and whether contributors can revoke usage. Those details decide if this becomes progress or just nicer packaging.

🚫 The Backlash Has Not Left

It also lands while creators are increasingly furious about voice clones and fake tracks, a backlash visible in musician pushback against AI clones and impersonation spam. That skepticism is not going away, and it will shape which ethical AI products survive.

For everyday creators, the practical move is simple. Track which tools publish licensing terms, keep project notes on sample sources, and avoid models that cannot explain where training audio came from.

The Weekly Scoop 🍦

🎯 Weekly Challenge: Learn How to Maximize NotebookLM

Challenge: Turn NotebookLM into a personal briefing engine using its newest source-grounded and audio features.

Here’s what to do:

📂 Step 1: Add real material. Create a new notebook and upload three things you already trust: a long article, a PDF, and a rough doc or notes file. No web search yet.

🧠 Step 2: Force grounded answers. Ask NotebookLM five questions you actually care about, and require answers that cite only the uploaded sources. If it guesses or generalizes, re-ask until every claim points back to your files.

🎧 Step 3: Generate an Audio Overview. Use the Audio Overview feature to turn the notebook into a spoken briefing. Listen once at normal speed, once at 1.25x. Note what stuck and what felt fuzzy.

🗺️ Step 4: Stress-test understanding. Ask for a structured summary with sections, open questions, and contradictions across sources. This reveals what the model understands versus what it is smoothing over.

🏁 Step 5: Ship a decision. Write a five-sentence takeaway based only on what NotebookLM could support with citations. If you cannot make a decision from that, your inputs were weak.

Win condition: you trust the output more than a generic chatbot response because you can trace every claim back to your own material.
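
If you want to make that win condition concrete, here is a minimal sketch of the kind of traceability check you can run on your takeaway before shipping it. This is not a NotebookLM feature, just a plain Python checklist with hypothetical file names and claims: list each takeaway sentence next to the uploaded file(s) it leans on, then flag anything with no real source behind it.

```python
# Hypothetical traceability check for the Step 5 takeaway.
# Not part of NotebookLM: you copy each takeaway sentence and the
# uploaded file(s) it cites into CLAIMS yourself, then run the script.

UPLOADED_SOURCES = {"long_article.pdf", "quarterly_report.pdf", "rough_notes.txt"}

CLAIMS = {
    "Churn rose fastest in the mid-tier plan.": ["quarterly_report.pdf"],
    "The rough notes contradict the article on pricing.": ["rough_notes.txt", "long_article.pdf"],
    "Competitors are all moving to usage-based billing.": [],  # no citation, should get flagged
}


def untraceable(claims: dict[str, list[str]], sources: set[str]) -> list[str]:
    """Return claims that cite nothing, or cite a file that was never uploaded."""
    return [
        claim
        for claim, cited in claims.items()
        if not cited or any(c not in sources for c in cited)
    ]


if __name__ == "__main__":
    for claim in untraceable(CLAIMS, UPLOADED_SOURCES):
        print(f"Cut or re-source this claim: {claim}")
```

Anything the script flags either gets cut from the takeaway or sent back through Step 2 until it has a citation you can point to.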

From new AI image and video models to AI music taking over the entertainment airwaves, the world of AI never sleeps. Hit reply and let us know your thoughts.

Zoe from Overclocked
