Introduction
AI coding has quickly gone from curiosity to daily habit. Tools like GitHub Copilot, Cursor, and Codex are now part of many developers’ core workflow.
And with that shift, a bold claim is floating around:
“We generate 100% of our code with AI, and it’s reviewed 100% by AI.”
It sounds futuristic. Efficient. Almost magical.
I’ve been using AI for coding for around two years now — and here’s my honest take:
There is no 100%.
The Reality: AI Writes Code That Works… But Not Always Well
Modern AI models are incredibly capable. They can:
- generate complete features
- handle edge cases
- produce syntactically correct, executable code
But “working” code is not the same as optimal or production-grade code.
What I’ve consistently observed:
- Extra CPU cycles due to inefficient logic
- Unnecessary object creation leading to memory overhead
- Over-engineered abstractions that look good but add complexity
AI often solves the problem — but not always in the best way.
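To make the first two points concrete, here is a hypothetical before/after in Python. It is not taken from any real model output, but it mirrors a pattern I see often in generated code: multiple passes and intermediate objects where a single pass would do.

```python
def total_even_squares_naive(numbers):
    # Pattern common in generated code: the problem is solved correctly,
    # but two intermediate lists are allocated for a single aggregate.
    evens = [n for n in numbers if n % 2 == 0]
    squares = [n * n for n in evens]
    return sum(squares)

def total_even_squares(numbers):
    # Same result in one pass: a generator expression feeds sum()
    # directly, with no intermediate lists.
    return sum(n * n for n in numbers if n % 2 == 0)

print(total_even_squares_naive(range(10)))  # 120
print(total_even_squares(range(10)))        # 120
```

Both versions “work”, and an AI reviewer checking correctness alone would wave either through. The difference only matters at scale, which is exactly where human judgment comes in.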
How I See AI Coding: Two Juniors in a Loop
Most AI-driven workflows today can be simplified like this:
1. The Generator (Model 1)
Acts like a junior developer
- Writes complete, working code
- Uses patterns it has learned
- Solves for correctness first
2. The Reviewer (Model 2)
Acts like a junior code reviewer
- Reviews based on predefined rules
- Suggests improvements
- Flags potential issues
They iterate:
- Version 1 → feedback → Version 2 → feedback → … → Version N
- Eventually both “agree” → ready to ship
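The loop above can be sketched in a few lines of Python. To be clear, `generate`, `review`, and `ai_loop` are hypothetical stand-ins I made up for this sketch, not calls to any real API; the fake convergence logic simply stands in for two models iterating toward agreement.

```python
def generate(task: str, feedback: list[str]) -> str:
    # Stand-in for Model 1 (the junior developer): returns code for
    # the task, pretending to address any accumulated feedback.
    return f"# code for {task!r}, revisions applied: {len(feedback)}"

def review(code: str) -> list[str]:
    # Stand-in for Model 2 (the junior reviewer): returns a list of
    # issues. Convergence is faked here by reading the revision count.
    revisions = int(code.rsplit(": ", 1)[1])
    return [] if revisions >= 2 else [f"issue found in pass {revisions}"]

def ai_loop(task: str, max_rounds: int = 5) -> str:
    feedback: list[str] = []
    for _ in range(max_rounds):
        code = generate(task, feedback)
        issues = review(code)
        if not issues:           # both models "agree" -> ready to ship
            return code
        feedback.extend(issues)
    return code                  # round limit hit without agreement
```

Notice what the loop optimizes for: agreement between the two models. Nothing in it measures performance, scalability, or fit with the business.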
On paper, this sounds like a complete system.
But here’s the catch 👇
Why I Don’t Trust a Fully Autonomous AI Loop
Both the generator and reviewer are still operating within:
- learned patterns
- predefined rules
- limited business context
They lack:
- deep system understanding
- real-world tradeoff awareness
- domain-specific nuance
So while the output may pass “AI review,” it doesn’t guarantee:
- performance efficiency
- scalability readiness
- alignment with business constraints
My Approach: AI as an Assistant, Not the Owner
Here’s the workflow that works best for me:
Step 1: Generate Raw Material
I use AI to quickly produce:
- initial implementations
- boilerplate code
- multiple approaches
Step 2: Refine with AI
Then I use AI again to:
- optimize structure
- clean up logic
- identify obvious issues
Step 3: Human Final Pass (Most Important)
This is non-negotiable.
I manually:
- review logic deeply
- remove inefficiencies
- align with business use cases
- ensure long-term maintainability
The Interior Designer Analogy
Think of it like this:
You are a senior interior designer.
AI tools are your assistants:
- they prepare materials
- take measurements
- assemble components
But the final finish — the one clients notice — is yours.
Without that final touch, the output may be complete… but not refined.
Why the 90–10 Rule Matters
I strongly believe:
👉 90% of the work can be done by AI
👉 10% still requires human judgment
And that 10% is where the real engineering happens:
- performance tuning
- architectural decisions
- business logic alignment
Staying Connected to Code (And Why It Matters)
One hidden risk of over-relying on AI is losing touch with your own system.
If you depend entirely on AI:
- You may not fully understand what’s implemented
- Debugging becomes harder
- Decision-making becomes slower
With the 90–10 approach:
- You stay connected to the code
- You understand business flows deeply
- You can confidently make system decisions
So when someone asks:
“How does this feature work?”
You don’t need to go back to AI for answers.
Final Thoughts
AI coding is not a replacement — it’s a multiplier.
It accelerates development, reduces effort, and unlocks productivity.
But it doesn’t eliminate the need for engineering judgment.
The future isn’t:
AI vs Developers
It’s:
AI + Developers — with clear ownership
Over to You
I’m curious:
- Are you using a fully AI-driven workflow?
- Do you trust AI-generated code without manual review?
- Or do you follow a similar human-in-the-loop approach?
Let’s discuss 👇