AI is shaping how we work, sell, and consume content, and its influence is only growing. That’s why I’ve been talking about AI on Millennial Masters quite a bit, whether it’s using it for personal productivity, building AI sales androids (with Ryan Carruthers of AI Response Lab), examining its impact on search (with Zette’s Yehong Zhu), or questioning its darker side in social media algorithms (with Lexie Kirkconnell-Kawana of Impress).
But even now, two years after ChatGPT exploded onto the scene (and with more than 400 million users today), AI is still in its early days. And a lot of it is hype. Companies are slapping “AI-powered” on everything, while investors throw money at startups promising breakthroughs that often don’t hold up under scrutiny.
This conversation couldn’t be more timely, as this week the UK’s biggest media players have united under Make It Fair, a campaign pushing back against government plans to let AI models scrape creative content without permission or payment. With the UK also delaying AI regulation to align with the US, the stakes couldn’t be higher.
That’s exactly why two AI researchers, Arvind Narayanan and Sayash Kapoor, wrote AI Snake Oil, this week’s Book Club pick — to cut through the noise and expose what AI can actually do, what it can’t, and how to separate real value from marketing spin.
Daniel’s verdict ✅
Too many businesses chase AI trends without asking if it actually helps them. AI Snake Oil by Arvind Narayanan & Sayash Kapoor is a must-read for anyone who wants to see through the hype and use AI wisely.
For this week’s Millennial Masters Book Club, I’ve broken down 5 AI red flags and 5 green flags from the book, so you can spot the BS, dodge the hype, and actually make AI work for you.
Use it to automate, optimise, and enhance, but don’t let it kill strategy, creativity, or human connection. 👇🏻
Millennial Masters is brought to you by Jolt ⚡️ The UK’s top web hosting service
🚨 AI red flags
🔴 Don’t adopt AI just because it’s trendy: focus on ROI
Everyone’s slapping “AI-powered” on their product, but that doesn’t mean it delivers. Before investing, define the exact problem AI needs to solve and set measurable success metrics. A CRM promising “AI-driven sales growth” sounds great, but if it’s just automating follow-up emails, is it really adding value?
🔴 Be cautious of AI that makes decisions without transparency
Black-box AI can be dangerously unreliable. If a system can’t explain why it’s approving one job candidate and rejecting another, would you trust it to hire for your company? AI should provide clear reasoning behind decisions, especially in hiring, lending, or pricing models.
🔴 AI should not replace human expertise or interaction
AI can automate and assist, but it can’t replace human intuition. An AI-powered chatbot might handle FAQs well, but relying on it for complex customer service can frustrate clients. Keep humans in the loop for key decisions, relationship-building, and strategy.
🔴 Watch out for biased AI models
AI inherits bias from its training data, which can lead to discrimination. Some AI hiring tools have been found to favour certain demographics over others. If you’re using AI for hiring, marketing, or credit scoring, audit it regularly to ensure fair outcomes.
🔴 Don’t hand over sensitive data without understanding the risks
AI tools often require vast amounts of customer and business data, but where is that data going? Many AI-powered CRMs and marketing tools store sensitive information in ways that could breach the GDPR. Always review where and how an AI tool handles data before integrating it into your workflow.
✅ AI green flags
🟢 Use AI to automate repetitive tasks, not creative or strategic thinking
AI is best for admin work like scheduling, email filtering, CRM updates, and transcription, freeing up time for strategic work. Instead of spending hours manually logging customer interactions, AI-powered CRM tools can do it for you while you focus on closing deals.
🟢 Leverage AI for content ideation, but always refine for originality
ChatGPT, Claude, and other AI tools can generate blog posts, marketing copy, and social media captions, but raw AI output is often generic. A CEO posting unedited AI-generated LinkedIn updates will sound robotic, so use AI as a starting point, then inject your own voice.
🟢 Use AI for insights, not decision-making
AI can process vast amounts of data quickly, but human expertise should drive decisions. If AI flags an employee as ‘underperforming’ based on raw data, dig deeper: maybe they’re on a tough project, not actually struggling. Choose AI tools that explain their reasoning so you can make informed choices.
🟢 Train your team to use AI effectively
AI is only as useful as the people using it. If your team doesn’t know how to craft great prompts or verify AI-generated insights, they won’t get value from AI tools. Invest in AI literacy, teaching employees how to get better outputs and spot errors.
🟢 Stay ahead of AI regulations and ethics
AI laws are evolving fast, and what’s legal today might not be tomorrow. If your startup is using AI-generated marketing content, ensure compliance with copyright laws. If your AI tool collects customer data, be proactive about transparency before regulations force you to.
👤 About the authors
Arvind Narayanan is a professor of computer science at Princeton University, known for his work on AI ethics, privacy, and debunking AI hype.
Sayash Kapoor is a researcher at Princeton, specialising in the risks and limitations of AI systems. His work focuses on exposing flawed AI models and promoting transparency in machine learning.