The trap of delegating critical thinking to AI

Who’s holding the rope?
We are living through one of the most exciting periods in the history of software development. Thanks to LLMs, tasks that used to take hours - such as setting up a new microservice, writing test suites, or summarizing a long meeting - now take minutes.
Don’t get me wrong, I fully support AI as a tool. It’s like having a pair programmer, research assistant, and devil’s advocate all in one. However, we must be mindful of a dangerous shift: the transition from using AI to accelerate our work to using it to replace our judgment.
You have probably seen product roadmaps that feel generic because they were generated by a prompt; architectural documents proposed by engineers who can’t explain the trade-offs because an LLM made the decisions for them; or sloppy code pushed to production without proper review. You are not alone. I have seen it, too.
We must remember that AI is an incredibly efficient tool but a terrible substitute for critical thinking.
In the talk “Model Collapse Ends AI Hype”, the speaker highlights that LLMs are fundamentally next-token predictors that rationalize rather than reason.
The belayer and the climber
Let’s think about it this way: AI is the climber. It’s strong and fast. It acts as if it knows everything. It can scale heights that would take a human ten times longer. It can explore different routes up the cliff and find holds that humans might miss. We humans are the belayers. We are anchored to the ground and hold the safety rope. We see the whole picture, monitor the weather conditions, and, most importantly, catch the climber if they slip.
The danger is that too many engineers and product managers are letting go of the rope. They tie the safety rope to a loose rock and walk away to do something else while the AI keeps climbing. When we let AI take the lead in vision, design or architecture, we have let go of the rope. We stop being the anchor of critical thinking and start passively following wherever the AI leads.
The death of product vision
In product management and high-level engineering, critical thinking means the ability to consolidate messy customer data, market insights, and bold CEO bets into a clear vision. When a product manager asks an LLM to define the product vision, they receive a regression to the statistical mean. An LLM provides the most probable answer based on its training data: it predicts the most likely next word from patterns that already exist. There is no innovation in that.
Moreover, when we delegate idea generation to AI, we detach ourselves from the outcome. When challenges arise, we won’t have the confidence to fight for the idea because it wasn’t ours to begin with; we just follow the AI’s output. Real innovation requires a human to passionately work through a problem.
The fragile over the frugal architecture
The risk is even greater on the engineering side. It’s addictive to watch an LLM generate a complex Kubernetes configuration or design an event-driven architecture in minutes. However, architecture requires a deep understanding of trade-offs. It requires knowing why one option was chosen over another in a specific context, and it requires understanding the failure modes of the system we are building.
If engineers let AI make those structural decisions, they gain speed and lose understanding. They end up building systems that they don’t know how to evolve, debug, or fix. When a production incident occurs at 3 a.m., the LLM won’t be there to drive the debugging process and resolve the issue.
We can’t outsource accountability
Here’s the uncomfortable truth: no matter how much AI contributed to a decision, the consequences are always yours.
- If your company’s new product strategy fails and company resources are drained, you cannot fire the AI agent.
- If your system suffers a data breach due to a security vulnerability introduced by AI, GitHub Copilot or Claude will not be held responsible.
- If you commit AI-generated code without deeply understanding its logic, the production incident it causes is still yours to fix at 3 a.m.
- If a pull request you approved introduces a critical bug, it doesn’t matter that an AI wrote the code; it’s your name on the review.
Ultimately, we are the ones who put our name on the commit, the strategic document, the product roadmap, and the quarterly report. If we are responsible for the outcome, then we must also be responsible for the critical decisions that led to it.
Use the tool, don’t be used by it
Let AI generate boilerplate code, find bugs, refactor, implement proofs of concept, summarize a large amount of documentation you don’t have time to read, and suggest alternative options. Do not let it decide what to build, how to build it, or the architectural foundation of your system.
We are the safety line. The moment we let go of the rope, we risk a fall that no AI can catch. Keep your hands on the rope, keep your eyes on the climber, and never stop doing your own thinking.