What the AI trust gap means for enterprise SaaS

Adoption and trust are moving in diametrically opposed directions, and that gap has real implications for organizations deciding how to spend money on software.

Credit: Alexandra Francis

Something strange is happening in the developer community when it comes to AI coding tools. Stack Overflow's 2025 Developer Survey revealed that adoption of AI tools continues to climb: 84% of developers now use or plan to use AI tools, up from 76% in 2024.

At the same time, however, trust in these tools has fallen sharply. Only 29% of respondents trust AI outputs to be accurate, a big dip from 40% in 2024. Even more revealing, more developers actively distrust the accuracy of AI tools (46%) than trust them (33%), and only 3% report a high level of trust in AI-generated outputs.

In just one year, the developer community went from cautious optimism around AI tools to outright skepticism, even as adoption accelerated. That’s the opposite relationship we’re used to seeing with new tech, where people tend to place more trust in tools as they use them more. In the AI space, adoption and trust are moving in diametrically opposed directions, and that gap has real implications for organizations deciding where and how to spend money on software. Let’s get into it.

Why is trust falling if usage is rising?

At first blush, the disconnect between usage and trust might seem irrational: Why would people keep using tools they don't trust? In a recent deeper dive into the AI trust gap, we explored why developers’ response to the current state of AI is perfectly rational, along with what it reveals about who developers are and what matters to them.

As we wrote, “Developers are neither reflexively change-resistant nor overly eager to integrate AI into their workflows without first ensuring that it adds value. They're professionals trying to navigate a paradigm shift that calls into question core aspects of how they've been trained to think about their work.”

The productivity gains AI tools offer for certain tasks (boilerplate code, documentation, quick lookups and gut checks) are real and measurable. But by now, developers have spent enough time with these tools to recognize a uniquely dangerous failure mode: answers that sound plausible but are actually incorrect.

This failure mode makes AI errors more insidious than, say, a broken function that throws an unambiguous error. Catching a fundamentally flawed but confidently delivered AI output requires a developer who already knows enough to spot the mistake. In this sense, as my colleague Ryan Donovan points out, developers' mistrust of AI can actually be a good thing. For junior developers, or for anyone solving a problem in an unfamiliar domain, that safety net of human experience and judgment evaporates.

As you'd expect, this dynamic erodes confidence not only in specific outputs but also in AI categorically. Once you've been tripped up by a few perfectly plausible hallucinations, you'll start going through AI output with a fine-toothed comb, auditing it for gaps and errors. The time you spend doing that, of course, undercuts the efficiency gains AI promises in the first place.
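To make that failure mode concrete, here's a small, hypothetical illustration (ours, not from the survey): a function of the kind an assistant might generate. It reads as plausible, runs without throwing anything, and is simply wrong for a whole class of inputs.

```python
# Hypothetical example of plausible-but-wrong AI output: this median function
# looks reasonable and never raises an error, but it silently returns the
# wrong answer for every even-length input.

def median(values: list[float]) -> float:
    """Return the median of a list of numbers."""
    ordered = sorted(values)
    # BUG: for even-length lists, the median is the mean of the two middle
    # values; indexing the upper-middle element alone is subtly incorrect.
    return ordered[len(ordered) // 2]

print(median([1, 3, 5]))     # 3 -- correct, so a quick spot check passes
print(median([1, 3, 5, 7]))  # 5 -- plausible, but the true median is 4.0
```

A broken function announces itself with a stack trace; this one passes a casual spot check and fails only when someone who already knows the right answer looks closely. That asymmetry is exactly what drives the auditing overhead described above.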

What the trust gap means for SaaS decisions

If you're evaluating SaaS platforms, especially ones with AI features embedded in their core workflows, this trust paradox should be a factor in your decision. After all, the best way to drive tool adoption and realize a return on AI investment is to give developers the tools they want and need. With that in mind, here are our recommendations for people making SaaS purchasing decisions right now:

  • Ask where AI is doing the work—and what happens when it's wrong. There's a big difference between an AI feature that suggests a subject line for an email and one that generates a compliance report, surfaces a security vulnerability, or populates a customer record. The stakes vary widely. Any vendor worth evaluating should be able to give you a clear answer about where their AI outputs are load-bearing and what guardrails are in place when those outputs are incorrect.
  • You know the skepticism your devs apply to AI outputs? Give vendor claims the same side eye. As anyone who works in AI or marketing can tell you, the marketing language around AI can be very far from technical reality (self-driving cars, anyone?). “AI-powered” tells you nothing about accuracy, reliability, or auditability. Push vendors on specifics: What are the known failure modes? How is accuracy measured? Is there a human review layer? What's the recourse when the AI is wrong?
  • Consider how the tool handles uncertainty. The most trustworthy AI implementations give you more than just an answer. They communicate confidence levels, flag edge cases, and enable observability. A platform that presents every AI output with the same level of confidence should, as we said above, invite serious skepticism. Tools that are aware of and transparent about their own limitations are harder to build, but they hold up better under real-world conditions. (For a rough sketch of what this can look like, see the example after this list.)
  • Factor in the cost of verification. When users lack trust in an AI tool, they compensate by double- and triple-checking the output, which defeats the purpose of using AI to save time and improve accuracy. When you’re weighing AI features in SaaS tools, it’s worth asking how much of the time “saved” by a tool will be spent auditing its output.
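To ground the "handles uncertainty" point, here's a rough sketch of the difference between an opaque AI output and one built for verification. The shape and field names below are hypothetical, ours for illustration, not any vendor's actual API.

```python
# Hypothetical output shapes (illustrative field names, not a real vendor API).

from dataclasses import dataclass, field

@dataclass
class OpaqueOutput:
    answer: str  # just an answer, delivered with implicit full confidence

@dataclass
class ReviewableOutput:
    answer: str
    confidence: float                                 # e.g. 0.0-1.0
    sources: list[str] = field(default_factory=list)  # provenance a reviewer can audit
    caveats: list[str] = field(default_factory=list)  # edge cases the tool knows it hit

def needs_human_review(output: ReviewableOutput, threshold: float = 0.8) -> bool:
    """Route low-confidence or caveat-laden outputs to a person."""
    return output.confidence < threshold or bool(output.caveats)
```

With the second shape, a workflow can enforce a review gate instead of treating every output as equally trustworthy, and it makes the verification cost from the last bullet measurable: you can track how often outputs actually clear the threshold.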

The cost of lost trust

As we wrote in our longer discussion of the AI trust gap, it's nearly impossible to scale AI tools when engineering teams don't trust them. Under pressure, teams revert to manual processes they know and trust. Security and privacy teams are understandably wary of deploying unfamiliar tools, all the more so in highly regulated industries. Pilot programs with narrow scopes might succeed, but unless you can drive adoption across the org, you'll have a hard time realizing any return on your AI investment.

The uncomfortable middle ground

What makes our current moment so complicated is that organizations can neither trust AI tools completely nor dismiss them out of hand. The productivity upside is real (at least for the right tasks), and the tech is improving. That 84% adoption rate is no bubble; it reflects the genuine utility, if not the consistent reliability, of AI tools.

As Stack Overflow's developer survey makes plain, developers intend to keep using AI tools, but they want the ability to verify the outputs and to understand and account for failure modes. Enterprise organizations need to earn developers' trust by matching their sophistication: asking vendors harder questions and working with technical teams to build procurement criteria that reflect what AI tools can actually do, rather than just what they promise.
