AI Tooling Slows You Down Instead of Speeding You Up
What you are seeing
A developer opens an AI chat window to implement a function. They spend ten minutes writing a prompt that describes the requirements, the constraints, the existing patterns in the codebase, and the edge cases. The AI generates code. The developer reads through it line by line because they have no acceptance criteria to verify against. They spot that it uses a different pattern than the rest of the codebase and misses a constraint they mentioned. They refine the prompt. The AI produces a second version. It is better but still wrong in a subtle way. The developer fixes it by hand. Total time: forty minutes. Writing it themselves would have taken fifteen.
This is not a one-time learning curve. It happens repeatedly, on different tasks, across the team. Developers report that AI tools help with boilerplate and unfamiliar syntax but actively slow them down on tasks that require domain knowledge, codebase-specific patterns, or non-obvious constraints. The promise of “10x productivity” collides with the reality that without clear acceptance criteria, reviewing AI output means auditing every implementation detail - which is often harder than writing the code from scratch.
Common causes
Skipping Specification and Prompting Directly
The most common cause of AI slowdown is jumping straight to code generation without defining what the change should do. Instead of writing an intent description, BDD scenarios, and acceptance criteria first, the developer writes a long prompt that mixes requirements, constraints, and implementation hints into a single message. The AI guesses at the scope. The developer reviews line by line because they have no checklist of expected behaviors. The prompt-review-fix cycle repeats until the output is close enough.
The specification workflow from the Agent Delivery Contract exists to prevent this. When the developer defines the intent (what the change should accomplish), the BDD scenarios (observable behaviors), and the acceptance criteria (how to verify correctness) before generating code, the AI has a constrained target and the developer has a checklist. If the specification for a single change takes more than fifteen minutes, the change is too large - split it.
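To make "a constrained target and a checklist" concrete, here is a minimal sketch in Python of acceptance criteria written as executable checks before any code is generated. The function name `slugify` and its rules are hypothetical examples invented for this illustration, not part of the Agent Delivery Contract:

```python
import re

# Intent (defined BEFORE generating code): convert a title into a URL-safe slug.
# Acceptance criteria, phrased as observable behaviors: (name, input, expected)
ACCEPTANCE_CRITERIA = [
    ("lowercases input",         "Hello World", "hello-world"),
    ("collapses runs of spaces", "a   b",       "a-b"),
    ("strips punctuation",       "C# rocks!",   "c-rocks"),
    ("handles empty input",      "",            ""),
]

def verify(candidate):
    """Run every criterion against an AI-generated candidate function."""
    return [name for name, given, expected in ACCEPTANCE_CRITERIA
            if candidate(given) != expected]

# A candidate implementation, e.g. pasted from an AI tool:
def slugify(title):
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

failures = verify(slugify)  # empty list means the candidate meets the spec
```

With the criteria written down first, review becomes running a checklist rather than auditing the implementation line by line - and a second AI-generated candidate can be verified against the same list at no extra cost.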
Agents can help with specification itself. The agent-assisted specification workflow uses agents to find gaps in your intent, draft BDD scenarios, and surface edge cases - all before any code is generated. This front-loads the work where it is cheapest: in conversation, not in implementation review.
Read more: Agent-Assisted Specification
Missing Working Agreements on AI Usage
When the team has no shared understanding of which tasks benefit from AI and which do not, developers default to using AI on everything. Some tasks - writing a parser for a well-defined format, generating test fixtures, scaffolding boilerplate - are good AI targets. Other tasks - implementing complex business rules, debugging production issues, refactoring code with implicit constraints - are poor AI targets because the context transfer cost exceeds the implementation cost.
Without a shared agreement, each developer discovers this boundary independently through wasted time.
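One way to make that boundary explicit is to record the agreement as a simple decision rule the whole team can read and amend. The sketch below is a hypothetical illustration in Python; the task categories come from the examples above, and the encoding is an assumption, not a prescribed format:

```python
# Hypothetical working agreement: which task kinds are good AI targets.
# Categories are drawn from the examples in this section; the rule itself
# is illustrative, not a standard.
GOOD_TARGETS = {"parser-for-defined-format", "test-fixtures", "boilerplate"}
POOR_TARGETS = {"complex-business-rules", "production-debugging",
                "refactor-with-implicit-constraints"}

def use_ai(task_kind):
    """Return True/False per the agreement, or None for undecided kinds."""
    if task_kind in GOOD_TARGETS:
        return True
    if task_kind in POOR_TARGETS:
        return False
    return None  # undecided: discuss as a team and add to the agreement
```

The value is not the code but the shared artifact: each new task kind gets classified once, in a discussion, instead of being rediscovered by every developer through wasted time.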
Read more: No Shared Workflow Expectations
Knowledge Silos
When domain knowledge is concentrated in a few people, the acceptance criteria for domain-heavy work exist only in those people’s heads. They can implement the feature faster than they can articulate the criteria for an AI prompt. For developers who do not have the domain knowledge, using AI is equally slow because they lack the criteria to validate the output against. Both situations produce slowdowns for different reasons - and both trace back to domain knowledge that has not been made explicit.
Read more: Knowledge Silos
How to narrow it down
- Are developers jumping straight to code generation without defining intent, scenarios, and acceptance criteria first? If the prompt-review-fix cycle consistently takes longer than direct implementation, the problem is usually skipped specification, not the AI tool. Start with Agent-Assisted Specification to define what the change should do before generating code.
- Does the team have a shared understanding of which tasks are good AI targets? If individual developers are discovering this through trial and error, the team needs working agreements. Start with the AI Adoption Roadmap to identify appropriate use cases.
- Are the slowest AI interactions on tasks that require deep domain knowledge? If AI struggles most where implicit business rules govern the implementation, the problem is not the AI tool but the knowledge distribution. Start with Knowledge Silos.
Ready to fix this? Start with Agent-Assisted Specification to learn the specification workflow that front-loads clarity before code generation.
Related content
- Agent-Assisted Specification - Using agents to define intent, scenarios, and criteria before generating code
- Agent Delivery Contract - The six artifacts that constrain AI-generated code
- AI-Generated Code Ships Without Developer Understanding - Related symptom where AI speed comes at the cost of comprehension
- Pitfalls and Metrics - Common failure modes when teams adopt AI coding tools
- AI Adoption Roadmap - Staged approach to adopting AI tools safely
- Work Decomposition - Breaking work into pieces small enough for fast feedback