From MVP to 1M customers 🧑‍💻 What AI vibe coding can and can't do
A founder's roadmap to AI coding that actually works
Vibe coding is having a moment. Tools like Lovable, Cursor, or Bolt let you chat with an AI, spin up a clickable demo in days, and sometimes even ship an MVP after weeks of fumbling with prompts. It feels like magic, until you try putting that code in real customers’ hands.
But in the coming years, the edge won’t come from using the most AI or avoiding it altogether. It will come from founders who understand where AI creates speed and where it creates fragility. Customers won’t ask who wrote the code. They’ll only care that the product works when they use it.
That’s why I asked Tyler Folkman, author of The AI Architect and GM & CTO at TubeBuddy, to map the reality. After a decade leading AI product teams and shipping tools used by millions, his verdict is clear: it’s not about the tools, it’s about the techniques. 👇🏻
— Millennial Masters is sponsored by Jolt ⚡️ Reliable hosting for modern builders
When AI speeds you up (and when it sinks you)
I burned through 59 AI coding tools trying to build a weekend side project: a simple expense tracker for my family.
It should have taken 3 days. It took 3 weeks.
The code looked perfect, compiled clean, even passed basic tests. Then my wife tried to log groceries and the whole app crashed.
That’s when I learned the truth about AI coding in 2025: it’s the fastest accelerator a founder can use, and the fastest way to bury yourself in technical debt if you trust it blindly.
After 10 years building AI products and helping teams scale from MVP to millions of users, I’ve mapped exactly where AI speeds you up, and where it quietly burns your business down.
The 80% zone 🟢 Where AI shines
Boilerplate without the burnout
Real example: I procrastinated for a year on building a chores app for my kids. With Claude Code, I went from idea to production deployment on AWS in 3 hours.
The pattern that works: Give AI well-defined, isolated tasks. "Build a user registration endpoint with email validation" gets perfect results. "Build our entire auth system" creates technical debt that haunts you for months.
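To make "well-defined" concrete, here's a minimal sketch of the kind of output that first prompt tends to produce. The stack (FastAPI) and the exact validation rules are my illustration, not a prescribed setup:

```python
# A sketch of the code a well-scoped prompt produces.
# FastAPI and these rules are illustrative, not a prescribed stack.
import re

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

class RegistrationRequest(BaseModel):
    email: str
    password: str

@app.post("/register")
def register_user(req: RegistrationRequest):
    # Validate before touching anything stateful.
    if not EMAIL_RE.match(req.email):
        raise HTTPException(status_code=422, detail="Invalid email address")
    if len(req.password) < 8:
        raise HTTPException(status_code=422, detail="Password too short")
    # Persistence is deliberately left out: that's where your real system's
    # state and business rules take over.
    return {"status": "registered", "email": req.email}
```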
Greenfield is AI’s playground
AI absolutely dominates fresh projects. Building from zero means no legacy constraints, no existing patterns to learn, and no architectural debt to navigate.
When I built that chores app in 3 hours, AI could make clean architectural decisions because there was no existing codebase to understand. Every component was purpose-built, every pattern was modern, every integration was designed from scratch.
Tests and docs on autopilot
One founder reduced their testing backlog from 6 weeks to 6 days using Claude for systematic test generation. The catch? Only trust it for isolated functions. Integration tests need human oversight, as AI misses state dependencies 73% of the time.
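This is the shape of test generation you can trust: a pure function with no shared state, plus cases you can verify at a glance. The expense-splitting function below is invented for illustration:

```python
# Illustrative only: a pure function plus the kind of unit tests AI generates
# well, because there is no shared state to reason about.
import pytest

def split_expense(total_cents: int, people: int) -> list[int]:
    """Split a charge evenly, giving any remainder to the first payers."""
    if people <= 0:
        raise ValueError("people must be positive")
    base, remainder = divmod(total_cents, people)
    return [base + 1 if i < remainder else base for i in range(people)]

def test_splits_evenly():
    assert split_expense(900, 3) == [300, 300, 300]

def test_distributes_remainder():
    assert split_expense(1000, 3) == [334, 333, 333]

def test_rejects_zero_people():
    with pytest.raises(ValueError):
        split_expense(1000, 0)
```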
Documentation is where AI truly shines. Generate API docs, inline comments, and README files in seconds. Use AI to document code AFTER you write it, not before. Pre-documentation locks you into bad patterns.
Small bites, not whole rewrites
Point AI at your 2019 legacy code. It'll suggest modern patterns, extract methods, and update deprecated libraries faster than any human.
But here's the secret: Give it small, bounded chunks. "Refactor this 50-line function" works perfectly. "Modernise our entire codebase" creates chaos.
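For illustration, this is the scale of chunk that works: one small function, one clear instruction ("flatten the nesting, pull the thresholds out"). The function itself is made up:

```python
# Before: the kind of 2019-era function you hand over in one bounded chunk.
def shipping_cost(order):
    if order is not None:
        if order.get("country") == "US":
            if order.get("total", 0) > 50:
                return 0
            else:
                return 5
        else:
            if order.get("total", 0) > 100:
                return 0
            else:
                return 15
    return None

# After: the refactor you asked for, small enough to review line by line.
FREE_SHIPPING_THRESHOLD = {"US": 50}
DEFAULT_THRESHOLD = 100
FLAT_RATE = {"US": 5}
DEFAULT_RATE = 15

def shipping_cost_refactored(order):
    if order is None:
        return None
    country = order.get("country")
    threshold = FREE_SHIPPING_THRESHOLD.get(country, DEFAULT_THRESHOLD)
    if order.get("total", 0) > threshold:
        return 0
    return FLAT_RATE.get(country, DEFAULT_RATE)
```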
The danger zone 🔴 Where AI burns you
The state problem AI never solves
AI doesn't understand your system architecture. It generates code that works in isolation but explodes when real data hits.
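A sketch of what that looks like in practice (the expense-tracker shape is invented, but it mirrors my groceries crash): the code is correct for the demo data the AI imagined and falls over on the first real record that doesn't match it.

```python
# Works on the demo data the AI imagined, explodes on real input.
CATEGORIES = ["rent", "utilities", "fun"]  # the only categories in the demo

def monthly_summary(expenses: list[dict]) -> dict:
    totals = {c: 0.0 for c in CATEGORIES}
    for e in expenses:
        # KeyError the first time someone logs "groceries".
        totals[e["category"]] += e["amount"]
    return totals

# Real data is messier: unseen categories, string amounts from a CSV import.
def monthly_summary_fixed(expenses: list[dict]) -> dict:
    totals: dict = {}
    for e in expenses:
        category = e.get("category", "uncategorised")
        amount = float(e.get("amount", 0) or 0)
        totals[category] = totals.get(category, 0.0) + amount
    return totals
```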
Stanford research found that developers using AI wrote more insecure code, and many believed their insecure code was safe. Additional studies show roughly 40-50% of AI-generated code fails basic security tests.
AI doesn’t know your business
Your payment system has 37 business rules. AI knows zero of them.
It'll happily generate code that charges customers incorrectly, violates regulations, or ignores your specific edge cases. AI learns from public code examples, not your unique business requirements.
AI’s weakest link: security
AI consistently reproduces common vulnerabilities: SQL injection, exposed API keys, weak encryption patterns.
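The first of those is worth spelling out. In this sketch (table and field names are made up), the first query is the pattern AI assistants keep reproducing; the second is the parameterised version you should insist on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('Ada', 'ada@example.com')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is pasted straight into the SQL string,
    # so a name like "' OR '1'='1" returns every row in the table.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterised query: the driver handles escaping for you.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks the whole table
print(find_user_safe("' OR '1'='1"))    # returns nothing
```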
One e-commerce startup discovered their AI-generated checkout flow was storing sensitive payment data in the browser’s local storage, a critical security flaw that would have violated PCI DSS requirements.
Scale exposes AI’s cracks
AI writes code that works for 10 users. Not 10,000. And it has no understanding of database indexes, caching strategies, or memory management.
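One common shape of that failure is the N+1 query: invisible at demo scale, a meltdown in production. A rough sketch, with an invented schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE expenses (user_id INTEGER, amount REAL);
""")

def totals_naive(user_ids):
    # One query per user: unnoticeable with 10 users, a meltdown with 10,000.
    return {
        uid: conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM expenses WHERE user_id = ?",
            (uid,),
        ).fetchone()[0]
        for uid in user_ids
    }

def totals_batched(user_ids):
    # One grouped query, plus an index so the database can serve it cheaply.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_expenses_user ON expenses(user_id)")
    placeholders = ",".join("?" for _ in user_ids)  # generated "?", not user input
    rows = conn.execute(
        f"SELECT user_id, SUM(amount) FROM expenses "
        f"WHERE user_id IN ({placeholders}) GROUP BY user_id",
        list(user_ids),
    ).fetchall()
    found = dict(rows)
    return {uid: found.get(uid, 0) for uid in user_ids}
```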
Real example: After my 3-hour chores app success, I got cocky and asked AI to "refactor this to be multi-tenant." It failed spectacularly and I had to git reset --hard and start over.
The lesson: Complex architectural changes need the same engineering discipline as before, just faster execution.
AI won’t tell you you’re wrong
Here's the most dangerous trap: AI won't tell you when you're making a mistake.
Ask for a terrible architecture decision and it'll build it flawlessly. Human developers push back on bad ideas. AI enables them with algorithmic confidence.
This politeness becomes technical debt that compounds daily.
Turn AI from parrot to partner
Most teams blame “AI limitations” when code falls apart. What they’re missing is scaffolding. The same model that fumbles business logic becomes sharp once you wrap it in context, checks, and feedback.
Start with a single source of truth: add a CONTEXT.md file to the repo and treat it like memory. Write down the rules humans carry in their heads but never document (billing cadence, refund thresholds, VAT quirks), plus the architecture decisions you don’t want to reopen every sprint.
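One way to make that file do real work, rather than sit in the repo, is to prepend it to every request. A minimal sketch; the helper and layout are my own illustration, not a prescribed tool:

```python
# A minimal sketch of "treat CONTEXT.md like memory": prepend it to every
# AI request so the model sees the rules you never wrote down elsewhere.
from pathlib import Path

def build_prompt(task: str, repo_root: str = ".") -> str:
    context_file = Path(repo_root) / "CONTEXT.md"
    context = context_file.read_text() if context_file.exists() else ""
    return (
        "Project context (business rules and architecture decisions):\n"
        f"{context}\n\n"
        f"Task:\n{task}\n"
        "Follow the context above; do not reopen settled architecture decisions."
    )

print(build_prompt("Build a refund endpoint that respects our refund thresholds."))
```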
Context alone won’t save you. Every AI-generated feature needs a testing safety net. Generate the code, then in a new session ask for full unit coverage and a pass of security tests targeting injections, exposed keys, weak crypto, and sloppy auth. Then add a human check at the seams: wherever the code touches state, payments, files, or external services.
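Some of those security checks are cheap to automate. As an illustration, here's a test that scans the repo for the most obvious leaked secrets; the patterns are a tiny subset, not a complete scanner:

```python
# Illustrative secret scan: fail the test run if obvious credentials are committed.
import re
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key IDs
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key blocks
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
]

def test_no_hardcoded_secrets():
    offenders = []
    for path in Path(".").rglob("*.py"):
        text = path.read_text(errors="ignore")
        if any(p.search(text) for p in SECRET_PATTERNS):
            offenders.append(str(path))
    assert not offenders, f"Possible hardcoded secrets in: {offenders}"
```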
Finally, make the system learn. Keep an AI_NOTES.md in the repo and record what worked, what failed, and the lessons you want to carry forward. Log the exact prompts that produced good output and the ones that created debt. Spell out rules for future requests: break complex features into 3–5 tasks, mirror existing files, require explicit error handling.
Together, these steps turn AI into a capable junior. Fast, consistent, and guided by your playbook. You still decide the edges and review anything that touches money, identity, or scale.
The 3-zone framework 🚦
🟢 Green Zone (AI dominant)
Boilerplate, scaffolding, unit tests, docs, formatting.
🟡 Yellow Zone (AI + human)
API endpoints, database queries, UI components, legacy refactoring.
🔴 Red Zone (human dominant)
Business logic, security, scale, architecture.
👤 About Tyler Folkman
Tyler is GM and CTO at TubeBuddy and the writer of The AI Architect newsletter. He’s led AI product teams for more than a decade, building tools used by millions. After spending 200+ hours mastering AI development workflows, he now helps developers and tech leaders adopt the techniques that actually speed up shipping, without falling into the traps that waste time and create debt.
More on AI from Millennial Masters: