Agentic Coding in Teams — Why Enablement Matters More Than the Tool

Your team has Cursor subscriptions. Or Copilot. Or Claude Code. The invoices are paid, the tools are installed, and now AI is supposed to boost productivity. Except it doesn’t. A few developers experiment, most carry on as before, and nobody feels like anything has changed.
The problem isn’t the tool. The problem is that a subscription isn’t enablement.
A look at the current state
At JavaLand 2026, during my talk “Context Is Everything,” I asked about 50 developers where they stood on the AI adoption scale — using Steve Yegge’s 8-stage model, which ranges from “Zero or Near-Zero AI” to “Building your own orchestrator.” At level 1, all hands went up. By level 3, it got lonely: only four or five hands. From level 6 onward — not a single one.
These are 50 developers who voluntarily attended a talk on agentic coding — people with active interest in the topic. Yet almost none of them work with AI agents at a level beyond occasional chat completion.
This matches what I see at clients. Teams have subscriptions, but usage stays superficial. Agents get used as chat interfaces — a question here, a code snippet there. The full potential of the tools remains untapped, and that level of usage doesn’t justify the cost.
Why subscriptions alone don’t work
Handing a team AI tools and expecting adoption to happen on its own is like giving a junior developer an IntelliJ Ultimate subscription. They’ll discover autocomplete and be pleased. But the advanced refactoring capabilities, the debugging tools, the analysis features — everything you’re actually paying for — go unused. Not because the tool is bad, but because nobody showed them what’s possible.
With AI agents, the pattern is identical. Without structured onboarding, developers use the tools far below their potential. The agent gets demoted to a glorified chatbot: ask a question, draft an email, maybe generate a small code snippet. But the real strength — steering an agent through entire tasks, providing context, building repeatable workflows — that doesn’t happen by itself.
What happens instead is a pattern I’m watching unfold at a client right now: individual developers explore the tools on their own initiative and start producing noticeably more output. But that creates resistance within the existing team. Producing code with an agent instead of typing it yourself gets viewed negatively. The team feels emotionally threatened when someone stops working “with their own hands.”
The dynamic is toxic: developers who get good results with AI agents receive negative feedback from colleagues. They worry about their reputation within the team and stop using the tools. The productivity gain dies before it can take hold — not because the tools don’t work, but because the social dynamic within the team actively prevents adoption.
That’s the difference between unstructured and structured introduction: without a shared framework, you create a two-tier system within the team. Some experiment, others block. With a shared enablement process, everyone learns together, at the same pace, with the same principles. That takes the emotional pressure out of the equation.
The tool trap: Cursor vs. Copilot is the wrong question
When I talk to teams about AI agents, the first question is almost always the same: “Which tool should we pick?” Cursor, Copilot, Claude Code, Windsurf — the list grows monthly, and every vendor explains why their product is the right choice.
The question is understandable, but it leads nowhere. Part of the reason is that the topic is still relatively new, and perception is heavily driven by tool vendors — Anthropic with Claude Code at the forefront. The problem: you quickly fall into chasing the next feature release instead of asking what principles actually drive effective agentic coding.
And those principles are similar across all tools: How do you give an agent context? How do you formulate tasks so the output is production-ready? How do you review AI-generated code? How do you set guardrails instead of approving every step?
These aren’t Cursor skills or Claude Code skills. They’re agentic coding skills. Learning only one tool ties you to that tool. Understanding the principles means you can switch tomorrow without starting over.
In practice, I redirect the conversation like this: I start with the tool the team already knows, because that connects to their existing experience. Then I draw comparisons to other tools — how the same functionality is implemented elsewhere, where the interfaces overlap. Through that comparison, the underlying principles become visible. The effect: the team understands their current tool better and can transfer those insights to any future tool.
What enablement actually means
Enablement isn’t a lecture on prompt engineering. It’s a structured process that equips a team to use AI agents productively in their daily work. Concretely:
1. Hands-on with the team’s own code. Not with demo projects, but with the team’s actual codebase. That’s where the real challenges live — legacy code, organic architectures, unusual build setups. An agent that works on a clean demo project says little about how it handles reality.
2. Project documentation for agents. Teams learn how to give their agent the context it needs: project rules, architecture decisions, code conventions. Not as a one-time setup, but as living documentation that grows with the project. This is often the biggest lever — because without context, even the best agent produces mediocre results.
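To make this concrete: such agent documentation often lives as a plain markdown file versioned alongside the code. The filename, project name, and every entry below are hypothetical — each tool has its own convention for where it looks for project rules — but the shape is representative:

```markdown
# Agent context — example-service (hypothetical)

## Architecture
- Modular monolith; new features go into `modules/`, never into `legacy/`.
- All persistence goes through the repository layer; no raw SQL in services.

## Conventions
- Java 21, Spring Boot; follow the existing package-by-feature layout.
- Every public method gets a unit test; integration tests live in `src/it`.

## Build & verify
- Run `./gradlew check` before declaring a task done.

## Out of scope for agents
- Do not touch `legacy/billing` — any change there requires manual review.
```

The specific entries matter less than the practice: the file is reviewed and updated like code, so the agent’s context evolves with the project instead of going stale after a one-time setup.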
3. Shared standards and workflows. How does the team review AI-generated code? What quality criteria apply? Where do you use the agent, where don’t you? These questions need to be answered as a team, so that workflows don’t fragment into individual habits and quality stays consistent.
4. Honest assessment: what’s worth it, what isn’t? Not every part of the codebase benefits equally from AI agents. You need a realistic evaluation of where the investment pays off and where conventional methods are still faster.
What changes as a result
From my own work, I can describe what happens once you’ve internalized the tools: output increases not just in volume but in structure. AI agents make it possible to automate standard procedures and produce repeatable results — and not just in writing code.
Ticket descriptions, roadmap planning, analysis of existing software, documentation — all of this can be systematized. You build workflows that fit your own approach and your company’s needs, then reuse them consistently. The key difference: results become predictable. Not a new approach every time, but a proven process that works reliably.
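As one hedged illustration of such a reusable workflow, a team might keep a prompt template like this in version control — the task, section names, and step count here are assumptions for the sketch, not a prescribed format:

```markdown
# Workflow: draft a ticket description (hypothetical template)

Input: a one-line feature request from the backlog.

Steps for the agent:
1. Restate the request in one sentence; ask for clarification if ambiguous.
2. List the affected modules, based on the project context file.
3. Propose acceptance criteria as a checklist (max 5 items).
4. Collect open questions for the product owner separately — never guess answers.

Output sections: Summary, Affected areas, Acceptance criteria, Open questions.
```

Because the template is versioned and shared, every developer gets the same structure from the agent — which is exactly where the predictability comes from.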
For a team, that means concretely: less variance in quality, because not every developer improvises their own approach. Less dependence on individual knowledge holders, because project knowledge lives in the agent documentation rather than only in people’s heads. And more capacity for the tasks where human judgment makes the difference — architecture decisions, domain understanding, client communication.
When this doesn’t apply
If the foundation is broken, enablement won’t help much. A team without functioning code reviews, without tests, and without clean version control won’t get better through AI agents — it will just produce more of the wrong thing, faster.
And if resistance within the team is fundamental — not caused by missing guidance but by a culture that actively rejects change — a workshop alone won’t solve that. That requires addressing the team dynamics first.
The first step
The question isn’t “Cursor or Copilot?” The question is: does your team understand the principles that make AI agents productive? And is there a shared standard for how to work with them?
If the answer to both is “No,” the tool isn’t the problem. What’s missing is enablement.
And enablement doesn’t start with a subscription. It starts with taking the team through the process — hands-on, on their own code, with an honest assessment of what works and what doesn’t.
That’s exactly what I built the Agentic Coding Workshop for. If you want to find out whether it’s a fit for your team — get in touch.