AI isn’t a hypothetical anymore. You’ve run a pilot, launched a workflow, or at least asked ChatGPT to rewrite a vendor email. Maybe it worked. Maybe it fizzled.
Most AI initiatives don’t fail dramatically. They stall quietly. McKinsey’s 2025 State of AI report found 88% of organizations use AI in at least one function, but nearly two-thirds haven’t scaled it past individual experimentation. That tracks with what we see across the SMBs we work with. Two or three power users are driving the whole experiment while the rest of the team watches from a safe distance. The gap between “we’re using AI” and “AI is changing how we work” is where most companies get stuck.
These are the five friction points we see most often, and practical fixes for each.
1. Prompt Paralysis: People Don’t Know What to Say
You bought the license. You showed the team the tools. You even wrote a few prompt examples. But usage is low.
People don’t know what to say. Prompting isn’t intuitive, and even savvy employees freeze: where do you start, how specific should you be, what can the AI actually do? Without guidance, they skip it.
Fix: Build a prompt library around your team’s real workflows. Host a “Prompt Jam” where each department submits its most-used prompts. Encourage reuse, remixing, and annotation. Make it safe to experiment and safer to share what didn’t work.
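One lightweight format for a library entry: record the prompt itself, who owns it, when to use it, and what it’s bad at, so teammates can find it and remix it. (The fields and names below are an illustrative template, not a required schema.)

```text
## Prompt: Vendor email reply (Accounting)
Owner: A. Rivera (example)       Last updated: 2026-01
Use when: responding to routine vendor billing questions.

Prompt:
"You are drafting a reply for our accounting team. Tone: friendly but
brief. Summarize the vendor's question in one line, answer it, and end
with a clear next step. Here is the vendor's email: [paste email]."

Notes: Works poorly on billing disputes — those get a human first draft.
```

The “Notes” line is where the annotation habit lives: it’s how the library captures what didn’t work, not just what did.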
2. Shadow AI: Personal Tools With No Guardrails
If your staff uses AI on personal accounts and free-tier tools, you have a data governance problem.
This is shadow AI: employees paste customer emails into personal ChatGPT accounts, upload financial data to free browser extensions, or run client documents through tools leadership has never heard of. The company has no visibility, no audit trail, no control over where that data ends up.
There’s a hidden-cost dimension too. When the company doesn’t provide AI, people expense personal subscriptions: a $20 ChatGPT here, a $30 Claude Pro there, a handful of $10 specialty tools scattered across the ops team. We’ve walked into companies paying more for fragmented shadow AI across a dozen expense reports than a single company-wide enterprise plan would have cost in the first place.
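The math behind that hidden cost is simple enough to sketch. The prices below are the illustrative figures from the examples above, not vendor quotes, and the head count is an assumption:

```python
# Rough per-user comparison: stacked personal AI subscriptions expensed
# individually vs. one business-tier seat per person.
# All prices are illustrative assumptions, not vendor quotes.

shadow_per_user = 20 + 30 + 10   # ChatGPT + Claude Pro + a specialty tool
enterprise_seat = 30             # assumed business-tier seat price per user

users = 12                       # "a dozen expense reports"
shadow_total = users * shadow_per_user
enterprise_total = users * enterprise_seat

print(f"Fragmented shadow AI: ${shadow_total}/mo")   # $720/mo
print(f"Company-wide plan:    ${enterprise_total}/mo")  # $360/mo
```

The point isn’t the exact numbers; it’s that once a few people are each stacking two or three personal tools, a single standardized plan is often the cheaper option as well as the safer one.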
Fix: Give your people proper tools with business-tier licensing. ChatGPT Business and Claude Team plans include data-protection agreements, audit logs, and a commitment that your data won’t train anyone’s models. When the company provides good tools, people stop using personal accounts. The shadow AI problem solves itself.
“Would you give one kid good AI and one kid not?” That’s Greg Shove, CEO of Section, on the Beyond the Prompt podcast. It sounds absurd, but companies do it constantly — marketing gets the paid tier, sales gets the free version. The people with good AI end up in meetings with the people stuck on bad AI. They notice. And the ones with the bad tools quietly go back to their personal accounts.
Standardize on the best tool you can afford and give everyone access.
3. Trust Breakdown: The AI Got It Wrong
Nothing kills momentum faster than a hallucinated fact or a tone-deaf customer email. When the AI gets something wrong in front of leadership or a client, users lose confidence. Without confidence, adoption stops.
A personal example: I’ve been building a Claude skill to help draft scopes of work, one of the more tedious parts of running an MSP. Early versions consistently overshot. The AI came back with pilot phases, phased rollouts, rollback plans, and labor estimates calibrated for 10,000-user enterprises, not the 25-to-100-user SMBs we actually serve. The outputs weren’t wrong exactly. They were wrong for our clients. Nothing about those SOWs matched how our customers buy, budget, or deploy.
What fixed it wasn’t a better single prompt. It was iteration: feeding the skill real SMB engagements, pushing back on enterprise-flavored language, and teaching it what “right-sized” looks like for a 40-user firm. The AI wasn’t the problem. The context was. That’s the loop most adoption efforts never run.
Fix: Teach your team layered verification. AI writes the first draft. A human is always the final editor. Encourage web-grounded outputs that cite sources you can verify. ChatGPT, Claude, and Copilot all support this now. And budget time to refine the tools themselves. A skill or prompt that’s wrong on day one can be right by day thirty if someone cares enough to keep correcting it.
4. Workflow Gaps: AI Lives in a Silo
AI that sits in a separate tab goes unused. It has to live where the work lives.
That means embedding it into the tools the team already opens every morning: Outlook, Teams, your CRM, your PSA. Not a new tab. Not a new login. Not a new place to remember.
Fix: Identify your team’s three most repetitive workflows and build simple AI assists directly into those flows. For most of our clients, the highest-ROI three are the same: Copilot drafting first-pass email replies in Outlook, Copilot summarizing Teams meetings so nobody takes notes manually, and a structured prompt that turns technician shorthand into billable time entries in ConnectWise. None of those require a separate “AI tool.” They’re integrations already available in the M365 license most SMBs already pay for.
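A minimal version of that time-entry prompt might look like the sketch below. The output fields and formatting rules are illustrative, not ConnectWise-specific; adapt them to whatever your PSA actually requires:

```text
You are converting technician shorthand into a billable time entry.

Shorthand note: "[paste technician note here]"

Return exactly these fields:
- Client:
- Ticket #: (leave blank if not in the note)
- Work performed: (2-3 full sentences, client-readable, no jargon)
- Time spent: (round to the nearest 15 minutes)

Do not invent details that are not in the note. Flag anything
ambiguous for the technician to confirm before the entry is submitted.
```

The last two lines are the important part: a time-entry prompt that guesses at missing details creates billing disputes, so the template forces the AI to surface gaps instead of filling them.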
5. Ownership Ambiguity: Nobody’s Driving
If nobody owns it, nobody improves it.
When no one is responsible for prompt quality, usage tracking, or refinement, things stall. The pilot doesn’t scale. Enthusiasm fades.
Fix: Assign an AI Champion. A role, not a title. They don’t need to be technical. The best champions we’ve seen were an ops manager and a paralegal. What they need is curiosity, consistency, and a standing block of time each week.
Give them a thirty-day ramp. Week one: sit with your five heaviest AI users and your five lightest, and figure out what the difference is. Week two: publish the ten prompts that would save the team the most time, with real examples from real projects. Week three: pair with your slowest adopter and ship one actual workflow together, start to finish. Week four: send out the first monthly AI Win Digest — time saved, manual steps killed, decisions made faster — and copy the whole company.
That’s how a “capability” stops being a PowerPoint bullet and starts being how your team actually works.
High-Momentum Teams Do Things Differently
The best teams don’t wait for the perfect tool or the most advanced use case. They work with what’s available and get better over time.
| Low-Momentum Teams | High-Momentum Teams |
|---|---|
| Wait for the perfect tool | Start with whatever’s already in their M365 license |
| Hide failed experiments | Post the failures in the group chat, then move on |
| Focus on features | Focus on hours saved |
| Expect AI to replace roles | Use AI to make the people they already have better at their jobs |
| Treat AI as a side project | Work it into the first thirty minutes of every day |
If any of these friction points sound uncomfortably familiar, our thinkAI team helps SMBs move from “we’re using AI” to “AI is changing how we work.” Reach out — we’d like to talk about what making AI part of your business actually looks like.
Sources
- The state of AI in 2025: Agents, innovation, and transformation. McKinsey & Company, November 2025. mckinsey.com