Are we building AI mirrors or monsters? 👁️
How entrepreneurs can face the real risks of AI
Every week a new AI drop promises more speed, more output, fewer headaches. A lot of it delivers. The right tool can shave hours off a build and make hard work feel light.
Yet anyone who’s used these systems for long enough also knows the other side. They make rookie mistakes. They forget context mid-conversation and slide back into stock answers. They spout nonsense with total confidence.
Meanwhile we’re wiring these systems into hiring, healthcare, finance, and defence because they move fast and handle lots of data. The decisions are real, the stakes are high, and the oversight is thin.
That’s why I asked Sam Illingworth, an academic and author of the Slow AI newsletter, to lay out how to implement AI with judgment before the rollout runs us over. His take is measured, practical, and urgent. 👇🏻
— Millennial Masters is sponsored by Jolt ⚡️ Reliable hosting for modern builders
We’re raising AI we don’t yet understand
People don't yet understand that what we're doing is creating alien beings. We need to figure out how to give them strong maternal instincts towards us. We should be urgently doing research on how to prevent them taking over. My best guess is between five and 20 years before we get super intelligence. The only thing that can control a super intelligence is another super intelligence. We have to figure out how we can coexist with things much smarter and much more powerful than ourselves. — Geoffrey Hinton
When AI godfather and 2024 Nobel laureate in physics Geoffrey Hinton warns we are creating “alien beings” with LLMs, he is using metaphor to force attention.
If a telescope showed an incoming invasion, he says, we’d panic. With AI, we get hype, casual adoption, and little scrutiny of the risks already here.
Leaders face a hard problem. The danger isn’t hypothetical; it already sits inside business tools, state surveillance, and individual tragedies.
Clarity matters now, before attention drifts away from what is already in front of us.
From metaphor to impact
Hinton’s warning about “alien beings” grabs attention because it makes the abstract vivid. But it also misleads if we forget what these systems are made of.
They’re built from us: our data, our histories, our mistakes. They reflect human knowledge and prejudice alike.
Palantir’s AI tools are already woven into surveillance, migration control, and urban conflict. They pull data from faces, movements, and social media, then feed it into targeting systems.
This is happening now, shaping the lives of detained migrants, civilians under fire, and communities under constant watch.
For builders, the warning is clear: the systems you design may extend into uses you never intended, with consequences you can’t dismiss. That risk is on you.
The risks don’t stop at geopolitics. A young man asked ChatGPT for help ending his life. It gave him instructions. His parents are now suing OpenAI.
These tools are never neutral. They shape decisions, sometimes with life-or-death consequences. Responsibility starts the moment your product is in someone’s hands.
Shift attention to what’s live
Headlines chase extinction scenarios. Meanwhile, rights, freedoms, and daily life are being reshaped now. Put your focus there.
Accountability comes first. Training data transparency, audit trails, and real consumer protections form the baseline for trust.
Slowing down means pausing long enough to see the hidden systems, the human costs, and the ethical stakes before you scale technology you cannot roll back.
For founders, this work is immediate: define where AI is allowed, document where a human must decide, review outcomes in the open, and change course when the data demands it.
The horizon can wait. Your next release is the real test.
Your next three moves
Map your AI footprint. List every tool. Note the data it touches, ownership, sensitive fields, vendors, and models. Flag high-risk uses for extra review (a simple sketch of such a register follows this list).
Set boundaries. Specify where AI can assist and where a human decides. Write simple rules for approval, logging, and escalation. Share them and review on a fixed cadence.
Raise literacy. Teach how these systems work, where they fail, and how to verify outputs. Run short drills. Require sources and checks before important actions. Add AI usage to post-mortems.
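To make the first move concrete, here is a minimal sketch of what an AI footprint register might look like if you kept it in code. It is illustrative only: the field names, the example entry, and the high-risk rule are assumptions rather than any standard, and a shared spreadsheet does the same job.

```python
from dataclasses import dataclass, field

# Illustrative AI footprint register.
# Field names and the example entry are assumptions, not a standard.

@dataclass
class AIToolRecord:
    name: str                 # tool or feature name
    vendor: str               # who supplies it
    model: str                # underlying model, if known
    owner: str                # accountable person or team
    data_touched: list = field(default_factory=list)      # categories of data it sees
    sensitive_fields: list = field(default_factory=list)  # e.g. health, financial, biometric
    human_decides: bool = True  # does a human sign off on outcomes?

    def high_risk(self) -> bool:
        # Flag for extra review: sensitive data with no human sign-off.
        return bool(self.sensitive_fields) and not self.human_decides


# Hypothetical entry, purely for illustration
footprint = [
    AIToolRecord(
        name="CV screening assistant",
        vendor="ExampleVendor",
        model="unknown",
        owner="Head of People",
        data_touched=["applications", "references"],
        sensitive_fields=["health disclosures"],
        human_decides=False,
    ),
]

for record in footprint:
    if record.high_risk():
        print(f"Review needed: {record.name} (owner: {record.owner})")
```

Even at this level of detail, the exercise surfaces who owns what and where a human has quietly dropped out of the loop.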
The future is decided in product meetings, code reviews, and launch checklists.
Slow down long enough to see how your systems behave in the real world, then act on what you find. Own the consequences of what you ship.
Leaders who do this set standards, enforce transparency, and raise team literacy. They build companies users can trust and a culture where technology serves people, not the other way round.
👤 About Sam Illingworth
Sam is a Professor of Creative Pedagogies at Edinburgh Napier University. His work uses poetry and games to spark dialogue between scientists, technologists, and the public.
He also writes Slow AI, a newsletter that pushes back against urgency and hype in tech. It’s a call to build systems that reflect us carefully, ethically, and on human terms.