
8 lessons from tech leadership on scaling teams and AI

What we learned from the first year of Leaders of Code

Credit: Alexandra Francis

It’s been nearly a year since we launched Leaders of Code, a segment on the Stack Overflow Podcast where we curate candid, illuminating, and (dare we say) inspiring conversations between senior engineering leaders.

An impressive roster of guests from organizations like Google, Cloudflare, GitLab, JPMorgan Chase, Morgan Stanley, and more joined members of our senior leadership team to compare notes on how they build high-performing teams, how they’re leveraging AI and other rapidly emerging tech, and how they drive innovation in their engineering organizations.

To kick off 2026, we wanted to collect some overarching lessons and common themes our guests touched on last year: the importance of high-quality training data, why so many AI initiatives fizzle, and what the trust/adoption gap tells us (and how to bridge it).

Read on for the most important insights we heard last year.

AI initiatives need quality data

Poor data quality undermines even the most sophisticated AI initiatives. That was a unifying theme of our show throughout 2025, beginning with the inaugural Leaders of Code episode. In that conversation, Stack Overflow CEO Prashanth Chandrasekar and Don Woodlock, Head of Global Healthcare Solutions at InterSystems, explored how and why a robust data strategy helps organizations realize successful AI projects.

An out-of-tune guitar is an apt metaphor here: No matter how skilled the musician (or advanced the AI model), if the instrument itself is broken or out of tune, the output will be inherently flawed.

Organizations rushing to implement AI often discover that their data infrastructure is fragmented across siloed systems, inconsistent in format, and devoid of proper governance. These issues prevent AI tools from delivering meaningful business value or winning over skeptical developers.

In the episode, Prashanth and Don emphasized that maintaining a human-centric approach when automating processes with AI requires building trust among users, which, in turn, starts with clean, well-organized data that AI systems can reliably interpret and effectively use.

Most organizations overestimate data readiness

Too many organizations rush into AI implementation without properly assessing whether their data infrastructure can support it, explained Ram Rai, VP of Platform Engineering at JPMorgan Chase. This overconfidence stems from a fundamental misunderstanding: Having data is not the same as having AI-ready data. A centralized, well-maintained knowledge base is essential for getting AI initiatives off the ground successfully, yet most organizations discover this requirement only after launching poorly conceived pilot projects.

Organizations often fail to evaluate whether their AI projects align with core business values. This can lead to wasted investments in tools that cannot access the internal context necessary for meaningful results. In highly regulated environments with heavy compliance requirements like banking and finance, Ram says his team can’t ignore the productivity benefits offered by AI. At the same time, he says, they must “be surgical about it,” particularly when dealing with critical infrastructure where “we can't entirely trust probabilistic AI.”

Internal knowledge is the antidote to AI hallucinations

Enterprise AI models frequently hallucinate because they lack access to internal company knowledge, as Ram points out: “Why does AI hallucinate? Because it lacks the right context, especially your internal context. AI doesn't know your IDP configuration, token lifetimes, your authentication patterns or your load balance settings, so the training data is thin on this proprietary knowledge.”

This gap between general training data and specific organizational knowledge leads AI tools to make convincing-sounding but fundamentally incorrect suggestions. Grounding AI tools in verified, internal documentation significantly improves accuracy and reliability, helping enterprise users realize the value they need from these new tools.
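To make that concrete, here's a minimal sketch (in Python) of what grounding a prompt in internal documentation can look like. The document store, keyword-based retrieval, and prompt format are illustrative assumptions for this post, not the setup Ram's team described; a production system would typically use embeddings and a vector index rather than keyword overlap.

```python
# Minimal sketch: ground a model prompt in verified internal documentation.
# The documents, scoring, and prompt format below are illustrative assumptions.

INTERNAL_DOCS = {
    "auth-patterns": "Service-to-service calls use mTLS; access tokens expire after 15 minutes.",
    "load-balancer": "The edge load balancer drains connections over 30 seconds during deploys.",
    "idp-config": "The IdP issues OIDC tokens; refresh tokens are rotated on every use.",
}

def retrieve(question: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_terms = set(question.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(q_terms & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Prepend verified internal context so the model answers from it, not from guesswork."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question, INTERNAL_DOCS))
    return (
        "Answer using only the internal context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What are our token lifetimes and authentication patterns?"))
```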

The conversation with Ram highlighted how Stack Overflow’s structured Q&A data provides ideal fine-tuning material for next-generation AI models by offering the kind of community-driven, verified knowledge that can bridge this context gap. Organizations that invest in robust internal knowledge systems create a foundation for AI tools that developers can actually trust.

To learn more about how Stack Internal can help you build smarter, more trustworthy AI systems, check out this webinar.

Developers trust AI less than ever

Stack Overflow’s 2025 Developer Survey revealed a striking paradox: more developers actively distrust the accuracy of AI tools (46%) than trust it (33%), while only a tiny fraction (3%) report “highly trusting” the output.

This trust deficit has real consequences for adoption and productivity. The number-one frustration, cited by 66% of developers, is dealing with “AI solutions that are almost right, but not quite,” which often leads directly to the second-biggest frustration: “Debugging AI-generated code.” Many developers find themselves wasting time reviewing and fixing AI-generated code rather than experiencing the promised productivity gains.

Experienced developers are the most skeptical of AI, with the lowest “highly trust” rate (2.6%) and the highest “highly distrust” rate (20%). As Ram Rai of JPMorgan Chase acknowledged, “Many developers distrust AI accuracy—that’s the current reality, and there is a struggle with adoption of AI.”

This decline in trust—down from over 70% positive sentiment in 2023 and 2024 to just 60% in 2025—is a red flag. Organizations must address developers’ valid accuracy and reliability concerns before expecting widespread adoption and the realization of actual business value.

Developers turn to Stack Overflow for human-verified, trusted knowledge, with about 35% reporting that their visits to Stack Overflow are a result of AI-related issues at least some of the time. This pattern reveals a crucial insight: when AI tools fail or produce suspicious results, developers seek validation from community-driven platforms where real humans have vetted the answers through collective scrutiny. By “grounding AI in our internal reality using [a] solid community knowledge system like Stack Overflow,” says JPMorgan Chase’s Ram Rai, his organization can move beyond purely probabilistic AI toward systems that incorporate verified, battle-tested knowledge.

As we mentioned above, the structured nature of community Q&A—with voting, peer review, and iterative refinement—provides exactly the kind of high-quality training data that AI models need to generate trustworthy outputs. Organizations that build or access community-driven knowledge layers provide their AI tools the verified context they need to move from “almost right” to consistently reliable.
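As a rough illustration, here's how those community signals might be turned into fine-tuning pairs. The record shape, field names, and vote threshold are assumptions made for the example, not a description of any actual training pipeline.

```python
# Illustrative sketch: use voting and acceptance signals from community Q&A
# to select high-quality fine-tuning pairs. Fields and thresholds are assumed.

from dataclasses import dataclass

@dataclass
class Answer:
    question: str
    body: str
    score: int          # net community votes
    is_accepted: bool   # marked as accepted by the asker

def to_training_pairs(answers: list[Answer], min_score: int = 5) -> list[dict]:
    """Keep only answers the community has vetted: accepted or well upvoted."""
    return [
        {"prompt": a.question, "completion": a.body}
        for a in answers
        if a.is_accepted or a.score >= min_score
    ]

answers = [
    Answer("How do I retry a failed HTTP request?", "Use exponential backoff...", 42, True),
    Answer("How do I retry a failed HTTP request?", "Just loop forever.", -3, False),
]
print(to_training_pairs(answers))  # only the vetted answer survives
```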

Understanding AI limitations is crucial

Organizations need to recognize what AI can and cannot do well. That was the big takeaway from our conversation with Dan Shiebler, Head of Machine Learning at Abnormal AI.

Leaders who manage expectations and deploy AI strategically—where it provides genuine value rather than where it's merely trendy—see better outcomes. Understanding limitations means acknowledging that AI excels at pattern matching and generating code for well-defined problems but struggles with novel architectural decisions, complex trade-offs, and situations requiring deep contextual judgment.

The most successful AI implementations carefully scope where AI can add value while maintaining human oversight for decisions that require accountability, domain expertise, or creative problem-solving that goes beyond existing patterns.
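One way to picture that scoping is a simple routing rule that only lets AI-generated changes through automatically when the work is low-risk and the model is confident. The risk categories and threshold below are illustrative assumptions, not guidance from the episode.

```python
# Sketch: scope AI to low-risk work and route accountable decisions to humans.
# The risk categories and confidence threshold are illustrative assumptions.

HIGH_RISK = {"schema-migration", "auth-change", "infra-config"}

def route_suggestion(change_type: str, model_confidence: float) -> str:
    """Decide whether an AI-generated change can be applied automatically."""
    if change_type in HIGH_RISK:
        return "human-review"   # accountability and domain expertise required
    if model_confidence < 0.8:
        return "human-review"   # low confidence: don't trust probabilistic output
    return "auto-apply"         # well-defined, low-risk, high-confidence work

print(route_suggestion("boilerplate", 0.93))   # auto-apply
print(route_suggestion("auth-change", 0.99))   # human-review
```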

AI is reshaping team structure and roles

In the two-part conversation between Peter O'Connor, Stack Overflow’s Director of Platform Engineering, and Ryan J. Salva, Senior Director of Product for Developer Experiences at Google, we explored how AI is transforming team structures. From enabling engineering teams to operate effectively with just a handful of people to reducing collaboration overhead and accelerating decision-making, there’s no denying the reality that AI is reshaping how development teams work.

As AI automates routine tasks like boilerplate code generation, bug triage, and basic testing, the role of developers is shifting toward architecture, critical judgment, and cross-functional collaboration.

This transformation doesn't eliminate the need for developers; instead, it elevates the skills that matter most. The 2025 Developer Survey added a new role, “architect,” which is now the fourth most popular role among respondents. That change reflects the industry’s growing recognition of systems-level thinking, design decisions, and integration work. With the benefit of their human experience, senior developers will increasingly focus on strategy, mentorship, and ensuring that AI-augmented teams maintain quality and reliability standards as they pick up even more momentum.

APIs are becoming the backbone of AI integration

Abhinav Asthana, CEO and cofounder of Postman, explained how APIs are the key to enabling LLMs to function as true agents by connecting them to live data and workflows.

Well-designed APIs enable AI agents to interact with systems effectively, transforming AI from purely conversational tools into action-oriented systems capable of executing real-world tasks. In the episode, Abhinav shared how Postman uses AI agents to aggregate and summarize developer feedback, providing organizational clarity, while also detailing how the company scaled from just three founders to over 400 people.

The key lesson from all that? Organizations must prioritize API quality, documentation, and developer experience to achieve widespread adoption of AI tools. Postman’s 2025 State of the API report found that 89% of developers use generative AI in their daily work, yet only 24% actively design APIs with AI agents in mind.

This mismatch creates a critical gap: AI agents require precise, machine-readable signals—explicit schemas, typed errors, and clear behavioral rules—yet most APIs are still designed primarily for human consumption.

The report made the strong case that “APIs must be designed with AI agents in mind” because “APIs designed with machine-readable schemas, predictable patterns, and comprehensive documentation will integrate faster and more reliably than those built only for human consumption.” Organizations that invest in API-first development practices, treating APIs as products with proper governance, versioning, and documentation, therefore position themselves to capitalize on the AI agent revolution while competitors struggle with integration challenges.
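As a sketch of what “designed with AI agents in mind” can look like in practice, the example below defines an explicit request schema and typed errors, then emits machine-readable JSON Schema that an agent’s tool-calling layer could consume directly. It uses Pydantic (v2) as one way to do this; the endpoint shape, fields, and error codes are assumptions for illustration, not any particular company’s API.

```python
# Sketch of an agent-friendly API contract: explicit schema, typed errors,
# and machine-readable documentation. Fields and error codes are illustrative.

from enum import Enum
from pydantic import BaseModel, Field

class ErrorCode(str, Enum):
    RATE_LIMITED = "rate_limited"      # retryable; respect retry_after_seconds
    INVALID_QUERY = "invalid_query"    # not retryable; the request must change

class SearchRequest(BaseModel):
    query: str = Field(min_length=1, description="Full-text search string")
    limit: int = Field(default=10, ge=1, le=100, description="Max results to return")

class ApiError(BaseModel):
    code: ErrorCode
    message: str
    retry_after_seconds: int | None = None

# An agent (or its tool-calling layer) can consume these schemas directly
# instead of parsing prose documentation.
print(SearchRequest.model_json_schema())
print(ApiError.model_json_schema())
```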

