AI in Software Development — What SMEs Actually Need to Know

Your development team says: “We should start using AI tools.” The question isn’t whether they’re right. The question is: What does that actually mean? What does it cost if you get it wrong? And what does it cost if you do nothing?
This article is for both sides of the desk. For the CTO who needs to explain to the board why AI tools in the dev team make sense. And for the CEO who wants to understand what the team means when they talk about “coding agents.”
What has actually changed
Two years ago, AI in software development meant ChatGPT. A chat window where you type questions and get code snippets back. Useful for quick answers. But not a tool that changes how you work day to day.
That has changed.
Today there are AI coding agents. These aren’t chatbots anymore — they’re tools that independently write code, create files, run tests, and fix errors. The developer provides goals, context, and constraints. The agent executes.
I switched entirely to coding agents in mid-2025. To understand what’s possible today, I ran an experiment: building a complete food tracking app with an iOS frontend, a backend, and infrastructure-as-code for deployment — all in a monorepo. I had never done iOS development before.
For the experiment, I simulated a full development team using agents. A product manager agent helped me refine stories. Specialized agents handled backend, frontend, security, and DevOps. I created tickets in GitHub the same way I do in real projects — as proper user stories and epics. The agents executed them. I conducted code reviews and had the agents review each other.
The result: within a short time I had a production-grade application. And I had to admit: the code the agents wrote wasn’t worse than what I would have written myself — as long as I provided the right context and goals.
That was the turning point. From that moment, writing code was no longer the part of my work where I create value. The value comes before the code: understanding system design, clarifying requirements, developing creative solutions, making architecture decisions. And after the code: verifying that the result actually works in the system.
One thing I find interesting: developers with leadership experience have a head start with coding agents. You have to lead an agent the same way you lead a team member. Clear goals, clear context, clear boundaries. Just without the human element. Different agents and models also behave differently, and you need to know and account for those differences.
Where the real productivity gains are
From my experience: since working with coding agents, I get roughly ten times more done than before. Without compromising quality.
That sounds like a marketing pitch. It isn’t.
The explanation is straightforward: tasks that used to take hours — setting up a new API endpoint, writing test cases, implementing a feature — take minutes with a well-steered agent. Not because the agent is smarter than a developer. But because it types faster, doesn’t need breaks, and has no context-switching overhead.
But here’s what matters: the productivity gains don’t come from the tool alone. They come from the combination of an experienced developer and a well-steered agent.
The Anthropic Economic Index shows this clearly: the quality of AI output correlates almost perfectly with the expertise level of the input (r > 0.92). More expertise in, better results out. AI doesn’t make everyone equally productive. It amplifies what you bring to the table.
Where the hype ends
This is where it gets relevant for decision-makers.
What’s realistic: Experienced developers who learn to work with AI agents will get significantly more done. Standard tasks get faster. Code quality stays the same or improves because there’s more time for reviews and architecture work.
What’s not realistic: That everyone can become a coder now. That you can build a market-ready application without programming experience or system design knowledge. That you can replace developers with AI.
AI agents are tools, not developers. They need someone who knows what should be built, why, and how it fits into the existing system. Without that knowledge, they produce code that works — but in the wrong place, with the wrong assumptions, with no regard for maintainability.
A comparison that helps me: we don’t discuss the machine code a compiler generates. We evaluate whether the result is correct, maintainable, and reliable. We need the same shift for AI-generated code. The question isn’t “who wrote the lines” but “does the result meet our quality standards.”
Four risks SMEs need to understand
1. Uncontrolled adoption
When individual developers use AI tools without shared standards, you get chaos. Everyone works differently, quality varies, and nobody can trace which code came from an agent and which didn’t.
2. Quality erosion without review culture
AI agents produce code fast. That tempts people to look less carefully. But fast code isn’t automatically good code. Without clear review standards for AI-generated code, technical debt grows instead of shrinking.
3. Vendor lock-in
$25 vs. $3.20 per million output tokens — same quality, different model. The AI model landscape changes fast. Locking into a single provider today is the new vendor lock-in. The principles behind effective agentic coding are similar across all tools: provide context, formulate tasks clearly, review results. Learning only one tool ties you to that tool. Understanding the principles means you can switch tomorrow.
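To make that price gap concrete, here is a minimal sketch of the arithmetic using the two example prices above. The monthly token volume is an invented assumption for illustration, not a benchmark.

```python
# Illustrative cost comparison using the article's two example prices.
# The 200M-tokens/month volume is an assumption, not measured data.

def monthly_output_cost(tokens_millions: float, price_per_million: float) -> float:
    """Cost in dollars for a given volume of output tokens."""
    return tokens_millions * price_per_million

tokens = 200.0  # assumed: 200 million output tokens per month across a team
expensive = monthly_output_cost(tokens, 25.00)
cheaper = monthly_output_cost(tokens, 3.20)

print(f"expensive model: ${expensive:,.0f}/month")   # $5,000/month
print(f"cheaper model:   ${cheaper:,.0f}/month")     # $640/month
print(f"difference:      ${expensive - cheaper:,.0f}/month")
```

At comparable output quality, the model choice alone is worth thousands per month at team scale, which is why portable skills matter more than tool loyalty.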
4. Security
AI agents access your source code. They send parts of it to external servers. And they can introduce code with security vulnerabilities — not intentionally, but because they lack the security context. Without clear rules about what information can be sent to AI services, you create a security risk.
The most common mistake
The mistake I see most often: companies buy subscriptions, distribute them to the team — and expect the rest to happen on its own.
That’s like giving a junior developer an IntelliJ Ultimate subscription. They’ll probably use the autocomplete feature. Not because the tool is bad, but because nobody showed them what’s possible.
At clients, I see this pattern regularly: teams have access to AI agents, but the results are far below what’s achievable. Within the same team, some developers actively use agents and visibly produce more output, while others have gone no further than chat-style code completion. The difference in results is noticeable.
Many experienced developers even actively reject the tools — because without guidance they don’t get good results, and they conclude the tools don’t work. That’s understandable, but wrong. The tools work. They just require a different skillset than what developers have learned so far.
The people who reach out to me for workshops have recognized exactly this: their teams have subscriptions. But adoption isn’t happening. And without outside help, it won’t.
A sensible first step
When a CTO with 20 developers asks me what to do right now, here’s what I say:
1. Give every developer access. Provide subscriptions. No bottleneck on access — everyone needs the option to work with these tools.
2. Start structured enablement. Not a one-off training session, but a guided learning process. Developers need to learn how to steer agents, provide context, and build a closed workflow — from planning through implementation to testing. Concretely: understanding what skills, agents, and commands are for and when to use which. Learning how MCP extends agent capabilities. Building a complete loop that covers planning, implementation, testing, debugging, and review. And above all: understanding how to give the agent the right information at the right time.
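As a purely hypothetical illustration of what steering an agent with goals, context, and constraints can look like, a task brief handed to a coding agent might resemble the following. Every name and detail here is invented for the sketch; the point is the structure, not the specifics.

```text
Goal: Add a GET /reports/monthly endpoint that returns aggregated
      food-log totals per user for a given month.

Context:
- The backend lives in services/api; follow the existing handler
  pattern in services/api/handlers/.
- Aggregation logic belongs in the service layer, not the handler.
- Reuse the existing date-range validation helper.

Constraints:
- Do not modify the database schema.
- Add unit tests for the aggregation logic and one integration
  test for the endpoint.
- Run the full test suite and report the results before declaring
  the task done.
```

Note how closely this mirrors a well-written ticket for a human developer: a clear goal, pointers to where things live, and explicit boundaries the agent must not cross.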
3. Define shared standards. What quality criteria apply to AI-generated code? What review processes? What security rules? This sounds like bureaucracy, but it’s the difference between controlled adoption and chaos.
4. Start small, then scale. A pilot project with motivated developers. Measure results. Then roll out. Don’t switch the entire team at once.
When this doesn’t apply
If your core problem isn’t productivity but fundamentally misstructured software, AI won’t fix it. An agent working on broken architecture just produces more of the wrong thing, faster.
Same if your team lacks basic development practices — no tests, no review process, no clean version control. Then AI isn’t the right first step. Fix the foundation first.
AI agents amplify what’s already there. The good and the bad.
The decision ahead
At JavaLand in March 2026, I asked an audience of about 50 developers how far they had progressed with AI agents, on a scale of eight levels. At level 1, all hands went up. By level 3 of 8, it got lonely.
That’s the current state of the industry. The interest is there. The tools are there. But the gap between “tried it once” and “it’s my daily workflow” is wide.
Companies that enable their developers to learn agentic coding systematically now will ship faster in two years. Not because they have better developers, but because the same developers work with better tools.
Companies that wait will have to close that gap later. Under time pressure. Against competitors who already have the head start.
The question is no longer whether AI changes software development. The question is whether your team is ready when it matters. And “when it matters” is now.