What GPT won’t say about real AI power 🦾
Supremacy to snake oil: The true dilemma in this AI wave
We’re not short on AI hot takes right now. Beneath the buzzwords and launch announcements, something much bigger is unfolding.
The future of work, power, creativity, and decision-making is being rewritten, and most people haven’t even realised the terms are changing. That’s why I’ve been reading: not to keep up, but to get a better sense of what’s really going on.

One guest on Millennial Masters who gets this is Ryan Carruthers of AI Response Lab, who’s building AI agents that revive dead sales leads. His take? “AI is the worst it’s ever going to be. And it’s already mind-blowing.” He’s seeing firsthand the gap between the promise and the application: “If you're in a sales business, it’s the best place to start. Test it on your dead leads. If it all goes to pot, they’re dead anyway.”
Another Millennial Masters guest with skin in the game is Thibault (Tibo) Louis-Lucas, who helped build Tweet Hunter with GPT-3 back in 2021. As AI coding tools rapidly become powerful enough to spin up full products from a few prompts, he sees the ground shifting fast: “It created a change that we truly do not understand enough right now. Imagine if anyone can create a Slack competitor tomorrow with just a few prompts… where is the value then, if it’s not in the product itself?”
From the trenches: My special dispatch from the Business Innovation Summit
From AI to ROI 🧨 Rebuild or die trying
This shift won’t be reversed. The technology has changed, and so has the pace. What happens next depends on how well your business listens, learns, and moves. There’s no safety net for the slow. But there’s no ceiling for the fast either.
While some are still debating ethical guidelines or whether ChatGPT is a fad, founders like Ryan and Tibo are shipping tools that are already reshaping how teams operate.
The four books in this Booksmart roundup take completely different angles:
Supremacy by Parmy Olson shows how Big Tech swallowed AI and how idealists traded control for compute.
The Coming Wave by Mustafa Suleyman and Michael Bhaskar warns that the speed of innovation has outpaced our ability to contain it.
The AI Dilemma by Juliette Powell and Art Kleiner reveals how automated systems manipulate our choices and erode agency.
AI Snake Oil by Arvind Narayanan and Sayash Kapoor calls out the scams, false promises, and structural incentives driving AI hype.
Together, they paint a messy but essential picture of the path we’re on. If you're navigating AI right now, whether you're building, investing, or adapting, these are the stories, frameworks, and red flags worth your time before you bet your company, your team, or your future on it.
Let’s get to it! 👇🏻
Millennial Masters is brought to you by Jolt ⚡️ The UK’s top web hosting service
Supremacy: Inside the AI takeover
Sam Altman was chasing scale. Demis Hassabis was chasing understanding. Two different instincts, same race. What they built instead was a new power structure, one that could shape humanity’s next chapter.
This is the story Parmy Olson tells in Supremacy: AI, ChatGPT, and the Race That Will Change the World. On the surface, it’s about the two men behind OpenAI and DeepMind. But underneath, it’s a cautionary tale about idealism, ego, and what happens when the most powerful technology on Earth is handed over to the richest corporations.
Olson compares their journey to the War of Currents, a battle of ideas between Edison and Westinghouse, ultimately won not by visionaries but by General Electric. The point? Vision doesn’t matter if you don’t own the infrastructure. And in AI, the infrastructure is now controlled by Google and Microsoft.
Hassabis was building god games at 17. Altman was bluffing his way through high-stakes poker games in his teens. Both saw AI not just as the next big thing, but as the thing. Altman believed that if you could align superintelligence with human values, you could solve everything else. Hassabis believed in a quieter version of the same idea: solve intelligence, and you solve science. But what began in lab coats and hacker houses now runs on pitch decks and board approvals.
To win the AI race, both needed compute. And compute meant capital. That’s where things shifted. OpenAI and DeepMind made their names as independent, research-driven projects. Today, they are corporate labs inside trillion-dollar giants. Microsoft effectively owns a piece of OpenAI. DeepMind answers to Google. The idealism? Still there in press releases. The reality? Traded away for GPUs.
Olson doesn’t go on about killer robots or rogue AGI. Her real concern is quieter and more subtle. It’s the slow creep of AI into everything, from customer service to criminal sentencing, baked in with bias, built on flawed data, and deployed at speed. It’s the replacement of human judgment with opaque algorithms. It’s the fact that, as Olson puts it, "no other organisations in history have amassed so much power or touched so many people as today’s tech giants." The result? AI systems that may shape the future, controlled by companies optimised for shareholder returns.
If you’re building in AI, this book won’t tell you how to code a better model. But it will help you understand the power dynamics behind the tools you're using and who really benefits when they scale. For entrepreneurs, that awareness is the edge.
AI may feel open, but the real power is held by those who own the infrastructure: Microsoft, Google, Nvidia. They control the models, the chips, the compute, and increasingly, the rules. And every time your product grows on their platforms, their grip tightens. Even open-source models struggle to keep up, while startups lean on APIs with shifting prices, throttled limits, and priorities that change without warning.
So who wins when AI scales? Big Tech, first. Enterprise customers, next. And sometimes, startups, at least in the short term. Sure, you can still win. But only if you understand the game and stop pretending it’s being played fairly. You can use their platforms. Just don’t build something they can clone overnight. Keep something they can’t touch. Figure out who’s cashing in on your growth, and whether you’re feeding their machine or building something of your own.
The coming wave: No one’s ready
Mustafa Suleyman isn’t interested in the hype cycle. He’s focused on what comes next, and on how fragile our systems will prove when it arrives. In The Coming Wave, Suleyman (co-founder of DeepMind) lays out a blunt reality: we’re building tools with godlike power, and we have no idea how to control them.
AI is just the beginning. Add in synthetic biology, quantum computing, and robotics, and you get what he calls the 'collision of waves': powerful technologies accelerating at the same time, feeding off one another, and outpacing our ability to manage them.
Suleyman’s core idea is the 'containment problem.' These technologies are powerful, accessible, and increasingly autonomous. You can’t fence them in. Once released, they don’t go back in the box. They scale fast, can be copied or leaked, and require only small teams, or even individuals, to create massive downstream effects.
This is already visible: deepfakes and autonomous drones, AI-written scams, synthetic biology startups that could engineer viruses in a lab. Suleyman argues that our political and regulatory systems were never designed to cope with this level of speed and complexity. As he suggests, we are using 20th-century institutions to manage 21st-century power.
The real challenge? Everyone’s incentivised to move faster. Companies race to deploy before their competitors do. Countries sprint to dominate strategic technologies before rivals get there first. No one wants to be the one that slowed down, especially if the others keep going.
And yet, Suleyman isn’t anti-technology. He’s pro-accountability. He’s warning founders, investors, and policymakers that what we’re building is much bigger than we think. He doesn’t offer easy fixes, but he makes it clear that being a builder today means being part of a much larger story.
For founders in AI, bio, or any frontier tech: this book is a challenge. Not to stop innovating, but to stop pretending the tools we’re releasing won’t reshape society in irreversible ways. Smarter models won’t be enough if we’re still flying blind.
The AI dilemma: You’re not in control
While some books focus on scale and power, The AI Dilemma zooms in on perception and how it’s being distorted. Juliette Powell and Art Kleiner focus on how automated systems quietly shape behaviour, erode trust, and manipulate our sense of agency.
Their core insight: the real danger isn’t just what AI does, it’s what it makes us believe we control. We hand over decisions to automated systems because it feels easier. But in doing so, we lose sight of who’s really in charge. As the authors note, we may feel in control, but the system is nudging us, shaping choices without our knowledge.
They call these technologies Triple-A systems: algorithmic, automated, and autonomous. These systems train on data, operate independently, and influence outcomes at scale, often without any human in the loop. And they’re everywhere.
Behind the scenes, Powell and Kleiner argue, four competing forces are shaping AI:
Engineering logic – solving technical problems with speed and efficiency
Corporate logic – driven by growth, scale, and investor returns
Government logic – seeking control, compliance, and geopolitical advantage
Social justice logic – focused on fairness, harm reduction, and accountability
These forces rarely align. And founders often default to the first two, engineering and corporate, because they feel most immediate. But if you're building something people interact with, you’re also shaping culture and behaviour. You can't avoid the consequences just because they’re not in your sprint backlog.
One of the book’s sharpest insights is that our psychological need for control is being used against us. We click, swipe, accept, and engage not because we fully understand the systems, but because they’re designed to feel intuitive. The illusion of control is powerful. And it gives cover to platforms and companies whose algorithms shape everything from hiring to healthcare to criminal sentencing.
The authors stop short of calling for a halt to innovation. They’re calling for responsibility built into the process, not bolted on after the fact.
For founders and builders, The AI Dilemma is a reminder that design choices aren’t neutral. If you don’t decide how your tech influences people, someone else (regulators, journalists, users) will decide for you. The more power your product has, the more you need to understand the behaviours it shapes.
AI snake oil: Shiny tool, shaky truth
The loudest voice in the AI conversation often belongs to the builders. AI Snake Oil is a sharp counterpoint. Arvind Narayanan and Sayash Kapoor aren’t interested in the dreams of AGI or the scale of frontier labs. They’re focused on what’s already here and what’s already broken. I’ve already broken down their key takeaways here.
Unlike the previous books, which wrestle with power, safety, or governance, AI Snake Oil goes straight for the heart of the hype machine. Its core claim: much of what’s sold as predictive AI today doesn’t work and never will. Especially when it comes to making decisions about people. Hiring, insurance, policing, healthcare: systems in these areas are marketed as objective and data-driven, but often perform no better than random chance.
The problem is systemic. Researchers publish flawed studies, vendors make vague claims, regulators lag behind, and the media repeats press releases without scrutiny. The result? AI systems that feel rigorous on the surface but collapse under pressure.
AI Snake Oil draws a clear line between generative and predictive AI, acknowledging that generative models have potential (with caveats), but calling predictive AI in human contexts a statistical illusion.
The examples are damning. An AI system used to cut off Medicare payments early. A hiring tool that claims it can assess kindness from short video clips. A car insurance model that raises rates for seniors because they’re less likely to shop around. Criminal risk-assessment tools that aren’t much better than coin flips but still help determine who gets bail.
If Supremacy is about power, The Coming Wave about scale, and The AI Dilemma about perception, then AI Snake Oil is the reality check: a timely reminder that some of the most hyped tools simply don’t hold up under scrutiny.
For entrepreneurs and leaders, the lesson is clear: be sceptical. Don’t confuse technical complexity with truth. If you’re building in AI, especially in sensitive sectors, your credibility depends on knowing what your model can actually do — and what it can’t. The market may reward flash in the short term. But trust is the real moat.
Read the map before you build
We’re still at the start of the AI era. That much is clear. None of these books pretend we’ve seen the full picture yet, but they do show just how fast the foundations are being laid and how uneven the playing field already is.
What unites them isn’t a fear of AI. They’re all sounding the same alarm: things are moving too fast, and no one’s really steering. Whether it’s Altman and Hassabis handing over their ideals to tech giants (Supremacy), governments struggling to contain exponential tools (The Coming Wave), users being nudged without knowing it (The AI Dilemma), or predictive models getting it wrong at scale (AI Snake Oil), the common thread is speed without oversight.
It’s tempting to think we’ll rebalance. That open-source models, challengers like DeepSeek, or regulation will level the field. And to some extent, they might. DeepSeek, for example, is showing that low-cost Chinese alternatives can compete on capability. But even there, we face trade-offs: poor transparency on sourcing, unknown data practices, and the influence of state censorship and propaganda. Cheap alternatives come with their own invisible costs.
Meanwhile, the infrastructure gap is growing. The companies that locked in early (OpenAI, Google, Microsoft, Amazon) have already shaped the stack: chips, compute, regulation, public opinion. For new entrants, the cost of catching up isn’t just technical — it’s geopolitical, financial, and cultural.
That doesn’t mean it’s game over. But it does mean you need to know where you sit in the system. If you’re building with AI right now, you’re part of the momentum, whether you like it or not. The question is who benefits when you do.
If you’re a builder, leader, or investor, the next decade will be defined by whether you understand the system you’re operating in, or let it rule you without realising.
Go further: AI isn’t on the sidelines anymore. It’s centre stage, and the stakes just changed. The hesitation’s gone. Now it’s about pace, clarity, and action. Read my special report from the Business Innovation Summit.