Solving the data doom loop
Ken Stott, Field CTO of API platform Hasura, tells Ryan about the data doom loop: the concept that organizations are spending lots of money on data systems without seeing improvements in data quality or efficiency.

APIs have steadily become the backbone of AI systems, connecting data and tools seamlessly. Discover how they can drive scalable and secure training for AI models and intelligent automation.
Sagar Batchu, CEO and cofounder of API tooling company Speakeasy, talks with Ryan about the evolving API landscape, AI integration, the role of human technologists in an increasingly automated environment, and what people building APIs right now should keep in mind.
Ben and Ryan talk with Geoffrey (Jef) Huck, a software developer turned public speaking coach, about the importance of soft skills in the tech industry—in particular, speaking and communication skills. Their conversation touches on how Huck’s experiences with anxiety shaped his efforts to become a better communicator, practical techniques for dispelling anxiety and connecting with the audience, and the MVP approach to public speaking.
Ryan talks with Sterling Chin, a senior developer advocate at Postman, about the intersection of APIs and AI. They cover the emergence of AI APIs, the importance of quality APIs for AI integrations, and the evolving role of GraphQL in this new landscape. Sterling explains how some organizations are shifting toward an API-first development approach and talks about the future of data access in the agentic era, where APIs will play a crucial role in AI interactions.
In this episode, Ben and Ryan sit down with Inbal Shani, Chief Product Officer and Head of R&D at Twilio. They talk about how Twilio is incorporating AI into its offerings, the enormous importance of data quality in achieving high-quality responses from AI, the challenges of integrating cutting-edge AI technology into legacy systems, and how companies are turning to AI to improve developer productivity and customer engagement.
Happy New Year! In this episode, Ryan talks with Jetify founder and CEO Daniel Loreto, a former engineering lead at Google and Twitter, about what AI applications have in common with Google Search. They also discuss the challenges inherent in developing AI systems, why a data-driven approach to AI development is important, the implications of non-determinism, and the future of test automation.
It’s easy to generate code, but not so easy to generate good code.
Mark Doble, CEO of Alexi, an AI-powered litigation platform, joins Ben to talk about GenAI’s transformative effect on the legal world. Their conversation touches on the importance of ensuring accurate results and eliminating hallucinations when AI tools are used for legal work, how lawyers (like the rest of us) can adapt to GenAI, and what Alexi’s tech stack looks like.
Dan Parsons, co-founder and Chief Experience Officer at Thoughtful AI, talks about how his company is using AI to simplify how providers get paid by insurance companies.
Ben talks with Eran Yahav, a former researcher on IBM Watson who’s now the CTO and cofounder of AI coding company Tabnine. Ben and Eran talk about the intersection of software development and AI, the evolution of program synthesis, and Eran’s path from IBM research to startup CTO. They also discuss how to balance the productivity and learning gains of AI coding tools (especially for junior devs) against very real concerns around quality, security, and tech debt.
Fabrizio Ferri-Benedetti, who spent many years as a technical writer for Splunk and New Relic, joins Ben and Ryan for a conversation about the evolving role of documentation in software development. They explore how documentation can (and should) be integrated with code, the importance of quality control, and the hurdles to maintaining up-to-date documentation. Plus: Why technical writers shouldn’t be afraid of LLMs.
Retrieval-augmented generation (RAG) is one of the best (and easiest) ways to specialize an LLM over your own data, but successfully applying RAG in practice involves more than just stitching together pretrained models.
Ben and Eira talk with LlamaIndex CEO and cofounder Jerry Liu, along with venture capitalist Jerry Chen, about how the company is making it easier for developers to build LLM apps. They touch on the importance of high-quality training data to improve accuracy and relevance, the role of prompt engineering, the impact of larger context windows, and the challenges of setting up retrieval-augmented generation (RAG).
In this episode, Ben chats with Elastic software engineering director Paul Oremland along with Stack Overflow staff software engineer Steffi Grewenig and senior software developer Gregor Časar about vector databases and semantic search from both the vendor and customer perspectives.
Here’s a simple, three-part framework that explains generative language models.
Ben and Ryan are joined by Robin Gupta for a conversation about benchmarking and testing AI systems. They talk through the lack of trust and confidence in AI, the inherent challenges of nondeterministic systems, the role of human verification, and whether we can (or should) expect an AI to be reliable.
Ben and Ryan talk with Vikram Chatterji, founder and CEO of Galileo, a company focused on building and evaluating generative AI apps. They discuss the challenges of benchmarking and evaluating GenAI models, the importance of data quality in AI systems, and the trade-offs between using pre-trained models and fine-tuning models with custom data.
Product manager Ash Zade joins the home team to talk about the journey to OverflowAI, a GenAI-powered add-on for Stack Overflow for Teams that’s available now. Ash describes how his team built Enhanced Search, the problems they set out to solve, how they ensured data quality and accuracy, the role of metadata and prompt engineering, and the feedback they’ve gotten from users so far.
Only about 5% of GenAI projects lead to significant monetization of new product offerings.
The home team talks about the current state of the software job market, the changing sentiments around AI job opportunities, the impact of big players like Facebook and OpenAI on the space, and the challenges for startups. Plus: The philosophical implications of LLMs and the friendship potential of corvids.
Ben talks with Shane McAllister, lead developer advocate at MongoDB; Stanimira Vlaeva, senior developer advocate at MongoDB; and Miku Jha, director of AI/ML and generative AI at Google Cloud, about the challenges and opportunities of operationalizing and scaling generative AI models in enterprise organizations.
On this episode: Stack Overflow senior data scientist Michael Geden tells Ryan and Ben about how data scientists evaluate large language models (LLMs) and their output. They cover the challenges involved in evaluating LLMs, how LLMs are being used to evaluate other LLMs, the importance of data validation, the need for human raters, and the tradeoffs involved in selecting and fine-tuning LLMs.
In this sponsored episode, Ben and Ryan are joined by Ria Cheruvu, an AI evangelist at Intel, to discuss the different approaches to incorporating AI models into organizations.