Blog / Apr 6, 2026

Your team is afraid of AI. Here's how to fix that.

The most common failure mode in AI adoption is not a tooling problem. It is a feelings problem. How to close the gap without destroying morale.

Your best engineers are not using AI. Not really. They tried Copilot for a week, decided it was "mostly wrong," and went back to typing everything by hand. Or they use it for boilerplate and tests but would never let it near the real work. Or they're quietly terrified that if they lean on it too hard, they'll stop being good at their job.

This is the most common failure mode I see in teams adopting AI tools in 2026. Not a tooling problem. Not a budget problem. A feelings problem. And if you're leading a team, it's your problem to solve — because the gap between teams that use AI well and teams that don't is already visible in output, and it's widening every quarter.

Here's what I've learned about closing that gap without destroying morale.

Start by understanding the fear

Engineers won't tell you they're afraid. They'll tell you the tools are "not good enough yet" or "not worth the context-switching cost" or "I'm faster without it." Sometimes they're right. But when an entire team is saying the same thing, it's not a technical assessment — it's a defense mechanism.

The fears are real and worth naming out loud:

  • "If AI can do my job, why do they need me?" The existential one. Nobody says it in a standup, but everyone's thinking it. Address it directly: AI can generate code, but it can't own outcomes. It can't decide what to build, how to prioritize, when to push back on a requirement, or what to do when production is on fire at 2am. Your job is not "typing code." Your job is engineering judgment. The typing was never the hard part.
  • "I'll lose my skills." The craft anxiety. Engineers who spent years getting good at something don't want to feel like that investment was wasted. And there's a real version of this concern: if you outsource all the routine work, do you stop understanding the foundations? Yes, if you're careless. No, if you use AI as a force multiplier on the boring work and spend the freed-up time on the hard work you never had time for.
  • "The output is unreliable." The trust problem. Early experiences with AI that confidently generated wrong code create lasting skepticism. This one's legitimate and requires a process answer, not a pep talk.

Don't mandate. Model.

The worst thing you can do is send a Slack message that says "everyone should be using AI tools starting next sprint." Mandates create compliance, not adoption. People will check the box — open the tool, generate something, throw it away, go back to their workflow — and nothing changes.

What works: use the tools visibly, in real work, and share the results honestly.

When I led AI enablement sessions at my last company, the sessions that changed behavior weren't the "here's how to use Claude" tutorials. They were the ones where I pulled up a real ticket from our backlog, worked through it live with an AI tool, made mistakes, hit dead ends, and showed both the wins and the places where the tool was useless. Engineers could see the actual workflow — not a curated demo, but the messy, iterative, "that suggestion is garbage, let me rephrase" reality.

The message that lands: "This is a tool that makes me better at my job. It doesn't replace my judgment. It accelerates the parts I was already going to do. Here's exactly how."

Create safe space to experiment

Engineers won't experiment with AI tools on critical-path work. The risk is too high and the embarrassment of a bad AI-generated PR is too real. You need to create low-stakes contexts where experimentation is safe:

  • Hardening sprints. Dedicate a sprint to tech debt, test coverage, and documentation. Tell the team: "Use AI tools for as much of this as you can. See what works." The work is low-risk (tests, docs, refactors), the cost of failure is low, and the volume of boring tasks is high — which is exactly where AI tools shine.
  • Pair sessions. Put two engineers together — one who's comfortable with AI tools and one who isn't. Not as teacher/student, but as co-pilots exploring together. The social cover of "we're figuring this out together" removes the stigma of not knowing how to prompt.
  • Show and tell. Weekly 15-minute slot where anyone can show something they built with AI assistance. Celebrate the process, not the output. "I asked Claude to write the retry logic and it got the backoff formula wrong, but it saved me 20 minutes on the boilerplate" is a more useful share than "look at this perfect code AI wrote."

Fix the trust problem with process

The "AI output is unreliable" concern is legitimate and won't go away with encouragement. It goes away with process.

Treat AI output like junior-engineer output. You wouldn't merge a junior's PR without review. Same standard. The review posture is: "Is this correct? Does it handle the edge cases? Does it fit the existing patterns?" Not "was this written by AI?" — that question is irrelevant.

Tests are the contract. If the AI-generated code passes the existing test suite, it meets the same bar as human-written code. If the test suite isn't good enough to catch AI mistakes, it wasn't good enough to catch human mistakes either — fix the tests, not the AI policy.

Small PRs, always. AI tools make it easy to generate large diffs fast. Large diffs are hard to review regardless of who wrote them. Keep the PR size small, keep the review quality high, and the trust builds naturally over time.
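One way to make the small-PR norm stick is to enforce it mechanically rather than socially. Here is a minimal sketch of a CI guard that fails when a diff exceeds a line budget. The `MAX_CHANGED_LINES` threshold and the `origin/main` base branch are assumptions for illustration, not a standard:

```python
import subprocess

# Hypothetical line budget for a single PR -- tune to your team's taste.
MAX_CHANGED_LINES = 400

def changed_lines(numstat_output: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output.

    Binary files appear as '-' in numstat; they are counted as 0 here.
    """
    total = 0
    for line in numstat_output.strip().splitlines():
        added, deleted, _path = line.split("\t", 2)
        total += int(added) if added != "-" else 0
        total += int(deleted) if deleted != "-" else 0
    return total

def pr_within_budget(base: str = "origin/main") -> bool:
    """Return True if the diff against the base branch fits the budget."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return changed_lines(out) <= MAX_CHANGED_LINES
```

Run as a CI step, this applies the same bar to every diff regardless of who (or what) wrote it, which is exactly the point: the policy reviews the code, not the author.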

Measure what matters

Don't measure "AI adoption rate." It's a vanity metric that incentivizes performative tool usage. Measure the things you actually care about:

  • Cycle time. Are tickets moving faster from "in progress" to "merged"?
  • Test coverage. Is it going up? (AI is great at writing tests people skip.)
  • Tech debt tickets closed. Are the boring hardening tasks finally getting done?
  • Developer satisfaction. Are people less frustrated with toil? (Anonymous survey, quarterly.)

If AI tools are working, these metrics improve. If they're not improving, the tools aren't being used effectively — and that's a coaching conversation, not a mandate.
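The cycle-time metric above is easy to compute from a ticket export. A minimal sketch, assuming a hypothetical export where each ticket carries ISO timestamps for entering "in progress" and merging (the field names are assumptions about your tracker, not a real schema):

```python
from datetime import datetime
from statistics import median

def cycle_time_days(tickets: list[dict]) -> float:
    """Median days from 'in progress' to 'merged' across a ticket export.

    Each ticket dict is assumed to have ISO-8601 'in_progress_at' and
    'merged_at' fields -- adjust to whatever your tracker actually exports.
    """
    durations = []
    for t in tickets:
        started = datetime.fromisoformat(t["in_progress_at"])
        merged = datetime.fromisoformat(t["merged_at"])
        durations.append((merged - started).total_seconds() / 86400)
    return median(durations)
```

Median rather than mean, because one stuck ticket shouldn't mask an improving trend. Track it quarter over quarter, not week over week.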

Stay honest about the limits

The fastest way to lose credibility is to oversell AI tools. If you tell your team "this will 10x your productivity" and their experience is "it's sometimes useful for boilerplate," you've lost them. They'll dismiss the tool and your judgment.

What I say instead: "This tool is excellent at the boring, repetitive, mechanical parts of our work — tests, documentation, refactoring, boilerplate, and first drafts. It's mediocre at architecture, design, and anything that requires understanding context the tool doesn't have. Use it where it's strong. Don't fight it where it's weak."

That framing does two things. First, it's honest — which builds trust. Second, it positions the engineer's judgment as the critical differentiator — which addresses the existential fear. You're not being replaced. You're being augmented in the parts of the job you didn't like anyway.

(Figure: AI adoption framework)

The leader's job

Adopting AI tools is a change management problem dressed up as a technology problem. The technology is the easy part. The hard part is getting a room full of smart, skeptical, craft-proud engineers to change how they work — without making them feel like their skills don't matter.

Your job as a leader is to make adoption safe, make the benefits visible, make the process trustworthy, and never, ever pretend the tools are better than they are. Do that consistently for a quarter and you won't need a mandate. The team will adopt because the evidence is in front of them, generated by their own colleagues, on their own codebase.

That's how culture changes. Not from the top down. From the inside out.


Marcin Urbanski

Engineering lead. 11+ years shipping distributed systems at scale.