This week, we're spotlighting 6 of the 700+ ways AI could go wrong, drawn from a sobering new database from MIT that catalogs the darker side of artificial intelligence. Plus, dive into Hollywood's AI revolution, where technology is set to redefine creativity and production, and catch the latest on Elon Musk's controversial AI ventures. All this and more in today’s issue!
In today’s newsletter:
🎬 Hollywood’s AI Revolution
🚀 Elon in Hot Water (again)
👃 The Scent of Bitcoin
🗳️ Nancy Pelosi Criticizes AI Legislation
700+ ways AI could go horribly wrong (we picked out a few ⬇)
Imagine a world where AI, designed to make our lives easier, inadvertently becomes a source of significant risk. From perpetuating biases to enabling cyberattacks, the potential pitfalls of AI are vast and varied. This isn't just speculation: MIT researchers have recently compiled a comprehensive repository detailing more than 700 ways AI could go wrong. Below, we’ll take a look at just 6 of them, but you can check out the full database here.

Errors, copyright issues, and cyber attacks are just a few of over 700 AI risks identified by MIT.
Bias: AI systems, even when designed with good intentions, can unintentionally perpetuate biases. These biases can manifest in critical areas such as hiring and criminal justice, leading to unfair discrimination. The risk is that AI systems reinforce existing societal inequalities by reflecting the biases present in their training data.
Toxicity: AI-generated content can include harmful or toxic language that contributes to social harm, exacerbating issues such as online harassment, hate speech, and the spread of harmful rhetoric. The challenge is ensuring AI systems consistently generate content that is positive, inclusive, and accurate.
Privacy Leakage: AI systems risk unintentionally leaking sensitive information, with serious consequences for user privacy. Whether through data breaches or inadvertent data sharing, such leaks can cause significant personal or financial harm, underscoring the need for robust data protection measures.
Factuality Errors: AI-generated content, while often convincing, can sometimes be factually incorrect. This presents a significant risk as it can lead to the spread of misinformation. The challenge is ensuring that AI systems are reliable and accurate, especially when providing information that people rely on for decision-making.
Copyright Violations: AI systems have the potential to infringe on intellectual property rights by generating content that closely mimics or replicates copyrighted material. This raises legal and ethical questions about the ownership of AI-generated works and the protection of original content creators.
Cyber Attacks: AI can be weaponized to launch sophisticated cyberattacks, enabling malicious actors to exploit vulnerabilities in digital infrastructure more effectively. This threat highlights the need for advanced security measures to protect against AI-driven cyber threats.
BREAKING: Bitcoin is … smelly? 🥸
Teleport, an innovative AI startup founded by former Google researchers, is pushing the boundaries of digital experiences by introducing the concept of "digital scents." One of their most intriguing creations? The scent of Bitcoin. Using AI to digitize smells, Teleport has crafted a unique aroma that captures the essence of this volatile digital currency—think a blend of metallic, earthy, and electronic notes. Sounds gross.

Image courtesy of Osmo.ai
This technology isn't just a novelty; it could revolutionize how we engage with the digital world. By adding a sensory dimension to virtual experiences, digital scents could make interactions with cryptocurrencies and other digital assets more immersive. Imagine being able to "sniff out" market trends or enjoy a more engaging online shopping experience. Teleport’s work is at the forefront of this new frontier, offering a tantalizing glimpse into the future of multi-sensory digital interactions.
Bests and Busts
Here's a look at this week's AI highlights and lowlights:
Best: The National Healthcare Group in Singapore is integrating AI technology into its radiology workflow to enhance lung and heart disease screening. The AI solution, developed by South Korea's Lunit, will be used to analyze chest X-rays, quickly identifying abnormalities indicative of serious conditions. This initiative aims to reduce wait times and improve diagnostic efficiency across their healthcare facilities. The pilot program begins in October at the Geylang Polyclinic and will eventually expand to other polyclinics within the group.
Bust: Donald Trump recently posted AI-generated images on his social media, falsely claiming Taylor Swift endorsed his 2024 campaign. The images, which included Swift in "Swifties for Trump" gear, were entirely fabricated. Swift has not endorsed any candidate and has previously criticized Trump, making this incident a clear example of AI-driven misinformation in the political arena.
The Scoop 🍦
🎬 How AI Is Transforming Hollywood
AI is rapidly changing the landscape of Hollywood, from de-aging actors and creating digital doubles to generating entire scenes. While some fear job losses, experts believe AI will also create new opportunities in production and storytelling, reshaping the future of the entertainment industry.
🚨 Musk’s Grok AI Floods X with Fake Political Images
Elon Musk’s AI chatbot Grok, which now lets users generate images from text prompts, has quickly become a tool for creating misleading and disturbing fake images of political figures like Donald Trump and Kamala Harris. Unlike other AI tools, Grok has minimal safeguards, raising concerns about the spread of false information, especially with the upcoming U.S. elections. Despite some recent restrictions, Grok’s inconsistent enforcement of its rules has sparked debate over the ethical use of AI in digital media.
💰 Bitcoin Miners Eye $13.9B Boost from AI and HPC
Bitcoin miners could generate an additional $13.9 billion annually by shifting 20% of their energy capacity to AI and high-performance computing (HPC) workloads by 2027, according to VanEck. The move could help miners shore up their finances, especially after the recent Bitcoin halving squeezed profits.
🗳️ AI’s Role in Election Misinformation Grows
AI technology is increasingly being used to create deepfake videos and fake images that spread misinformation during elections. With the tools now widely accessible, nearly anyone can generate convincing fake content, raising concerns about the impact on voters and election integrity. Experts are working on strategies to combat this growing threat, but challenges remain as AI continues to evolve.
🤖 Chatbot Interrupts Google Exec During Australian Senate Hearing
During a Senate committee hearing on AI, Google's Australian government affairs director, Lucinda Longcroft, was unexpectedly interrupted by a chatbot. The incident raised questions about the reliability and control of AI technologies, especially during important public proceedings.
🗳️ Pelosi Criticizes California AI Bill as ‘Ill-Informed’
Nancy Pelosi has voiced strong opposition to California's SB 1047, a bill aimed at regulating AI, calling it "well-intentioned but ill-informed." Despite recent amendments, Pelosi and other Bay Area representatives argue that the bill could do more harm than good, urging legislation that supports small entrepreneurs and academia over big tech.
❤️ The Dangers of AI-Generated Romance
AI-generated girlfriends are gaining popularity, providing companionship and emotional support for millions of users. However, experts warn that these virtual relationships may perpetuate loneliness, deter real-life connections, and even lead to harmful psychological effects. As AI continues to evolve, its impact on human intimacy and social development is becoming a growing concern.
Stay tuned for more exciting insights and tools in next week’s edition. Until then, keep overclocking your potential!
Ivan from Overclocked