WhatsApp's AI Mistakenly Shares Another User's Phone Number
Welcome to this week's edition of Overclocked!
In this issue, a slip-up in WhatsApp’s new AI assistant sparks privacy alarms, UK universities grapple with mass academic cheating, and we round up AI headlines from around the world. Let’s dive in ⬇️
In today’s newsletter ↓
🪐 AI helper in WhatsApp leaks a private phone number
🎓 UK universities uncover a surge of AI-driven cheating
🖍️ Adaptive tech could transform reading for dyslexic kids
⚖️ EU moves to shield personal likenesses from deepfakes
🎯 Use AI tools to build a one-week crash course on any passion
📱 WhatsApp AI Helper Leaks a User’s Number
Meta’s beta WhatsApp AI Helper promised smart replies and quick info, but a recent incident exposed a glaring privacy hole. A UK user asked the bot for a rail company’s customer-service helpline and received a private individual’s mobile number in the response. Moments later the bot revealed the surname and city tied to that number, violating WhatsApp’s own end-to-end encryption ethos.
WhatsApp AI helper mistakenly shares user’s number. Chatbot tries to change subject after serving up unrelated user’s mobile to man asking for rail firm helpline theguardian.com/technology/202…
— Brendan Tierney (@brendantierney)
1:41 PM • Jun 20, 2025
🔍 How the Leak Happened
Early investigation points to the bot’s retrieval model accidentally surfacing cached group-chat metadata. Hallucinated URLs are a known quirk of the helper, but this time it returned real personal information, which raises the stakes considerably. Meta’s engineers blamed “overshared vector embeddings” and pushed a server-side patch within 12 hours.
🛡️ Privacy Fallout
Regulatory heat: Unsolicited disclosure of personal data violates GDPR’s purpose-limitation principle. Early reports suggest a regulatory inquiry is underway.
User trust: Digital rights advocates warn that AI assistants layered atop encrypted apps create new exposure vectors even if message content stays private.
Policy tweaks: Meta temporarily disabled external web lookup for the bot and added a filter that masks phone-like strings unless the user explicitly requests contact details.
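Meta hasn’t published how its new filter works, but the idea of masking phone-like strings unless the user asks for contact details can be sketched in a few lines. The regex and function below are an illustrative heuristic, not Meta’s actual implementation:

```python
import re

# Rough pattern for phone-like strings (an illustrative heuristic,
# not Meta's unpublished production filter).
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_phone_numbers(text: str, user_requested_contact: bool = False) -> str:
    """Redact phone-like strings unless the user explicitly asked for contact details."""
    if user_requested_contact:
        return text
    return PHONE_RE.sub("[number redacted]", text)

print(mask_phone_numbers("Call support on +44 20 7946 0958 for help."))
# → Call support on [number redacted] for help.
```

A real filter would need to handle formats the regex misses (spelled-out digits, numbers split across messages), which is exactly why dynamic red-team testing matters.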
🌐 Bigger Picture
WhatsApp’s 2 billion users make it an ideal AI sandbox, but errors spread at scale. Unlike conventional bugs, generative mistakes are unpredictable. One hallucinated phone number today could morph into location data tomorrow. The mishap underscores why privacy reviews for AI features need dynamic red-team testing, not just static code audits.
🗝️ Takeaway
Meta’s swift patch limits the immediate damage, yet the episode shows how even “encrypted” platforms can leak via AI overlays. Expect regulators to demand pre-deployment risk reports and real-time monitoring for any chat-based assistant that might surface personal data. That may look like a reasonable solution, especially in the absence of anything better.
However, increased government oversight raises its own data-privacy concerns, as a recent ruling in an OpenAI case showed.
🎓 UK Universities Face AI Cheating Surge
A survey of 71 UK universities revealed over 20,000 plagiarism cases tied to ChatGPT-style tools in the 2024-25 academic year. One unnamed Russell Group institution recorded 40% of all academic-misconduct findings as “AI-generated submissions.”
📈 How Students Got Caught
Plagiarism-detection vendor Turnitin rolled out “AI writing” flags in January. Staff noticed identical phrasing, odd citation formats, and U.S. spellings in essays from UK students. Cross-checking drafts against Turnitin’s classifier flagged some paragraphs as 99% AI-written. A follow-up analysis by University World News confirmed similar spikes across Europe.
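Turnitin’s classifier is proprietary, but one of the manual tells staff reportedly relied on, U.S. spellings in UK coursework, is easy to check for. Here’s a toy sketch of that single signal (the word list is a small illustrative sample, nothing like a full detector):

```python
# Toy heuristic inspired by one signal staff reportedly noticed: U.S.
# spellings in essays from UK students. An illustrative sketch only,
# not Turnitin's proprietary classifier.
US_TO_UK = {
    "color": "colour",
    "analyze": "analyse",
    "behavior": "behaviour",
    "center": "centre",
    "organize": "organise",
}

def flag_us_spellings(essay: str) -> list[str]:
    """Return the U.S. spellings found in the essay text, sorted."""
    words = (w.strip(".,;:!?()\"'") for w in essay.lower().split())
    return sorted({w for w in words if w in US_TO_UK})

print(flag_us_spellings("The color of the center was hard to analyze."))
# → ['analyze', 'center', 'color']
```

On its own a signal like this proves nothing, which is why universities pair automated flags with draft comparisons and viva-style follow-ups.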
Surprised?…
Thousands of UK 🇬🇧 university students caught cheating using AI | Higher Education | The Guardian theguardian.com/education/2025… via @Nitin_Author #GenAI #AIEthics #aieducation
— Glen Gilmore (@GlenGilmore)
9:00 AM • Jun 18, 2025
🏫 Universities Respond
Policy revamps: Several campuses now require viva-style mini-orals after written work.
Tech shifts: Exam boards explore locked-down browsers and in-person assessments for core courses.
Education over punishment: Some lecturers assign “AI plus critique” tasks, asking students to analyze ChatGPT’s errors rather than submit its raw output.
🔮 Future of Assessment
Experts argue total bans are unrealistic; instead, curricula must emphasize source evaluation and prompt design. UK faculty are piloting workshops on ethical AI use, teaching students to cite AI-generated text like any other secondary source.
Bottom line: Generative AI isn’t leaving the classroom. Institutions that blend detection tools with pedagogical reform will curb misconduct without stifling innovation.
The Weekly Scoop 🍦
💡 Weekly Challenge: Learn Anything With AI
Everyone wants to learn something new, but until recently, it took a lot more effort, time, and money. However, with recent advancements in AI, you can now make your learning experience much more personal and effective. Here’s how:
Challenge: Pick a topic you’ve always wanted to master: Python basics, music theory, or nutrition science. Then work through the steps below.
☝️ Step 1: Choose your tool combo
ChatGPT 4o for Socratic Q&A
Gemini 2.5 Pro for interactive code or math demos
Claude Sonnet for concise reading summaries
NotebookLM to upload PDFs and ask targeted questions
Khanmigo (Khan Academy’s AI) for structured lessons
✍️ Step 2: Draft a three-step learning plan with your AI of choice.
📚 Step 3: Ask it to create daily micro-tasks and quizzes.
⏰ Step 4: Spend 20 minutes each day this week completing the tasks.
🖐️ Step 5: End on day 7 with a self-quiz generated by the AI—aim for 80% or better.
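If you like to see the scaffolding before you prompt, steps 2–4 boil down to a simple schedule. This minimal sketch lays out the week in plain Python; the topic and micro-tasks are placeholders you would generate with your AI of choice:

```python
from datetime import date, timedelta

def build_week_plan(topic: str, micro_tasks: list[str], start: date) -> list[str]:
    """Spread micro-tasks over days 1-6, ending with a day-7 self-quiz."""
    plan = []
    for day in range(1, 7):  # days 1-6: one 20-minute task each
        task = micro_tasks[(day - 1) % len(micro_tasks)]
        plan.append(f"Day {day} ({start + timedelta(days=day - 1)}): {task} (20 min)")
    plan.append(f"Day 7 ({start + timedelta(days=6)}): self-quiz on {topic} (target: 80%+)")
    return plan

# Hypothetical micro-tasks for a Python-basics week:
tasks = ["variables & types", "control flow", "functions",
         "lists & dicts", "file I/O", "debugging"]
for line in build_week_plan("Python basics", tasks, date(2025, 6, 23)):
    print(line)
```

Swap in whatever daily tasks your assistant generates; the point is the fixed 20-minutes-a-day cadence with a quiz at the end.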
In one week, you’ll see how tailored prompts turn any interest into a guided course. Ready, set, learn!

That's all for this week's Overclocked! Privacy slips, academic upheavals, and endless AI potential—stay alert out there. Hit reply with your learning wins or news tips, and we’ll see you next Monday!
Zoe from Overclocked