The Stack Overflow Podcast

OverflowAI and the holy grail of search

Product manager Ash Zade joins the home team to talk about the journey to OverflowAI, a GenAI-powered add-on for Stack Overflow for Teams that’s available now. Ash describes how his team built Enhanced Search, the problems they set out to solve, how they ensured data quality and accuracy, the role of metadata and prompt engineering, and the feedback they’ve gotten from users so far.

The reverse mullet model of software engineering

Ben and Ryan are joined by software developer and listener Patrick Carlile for a conversation about how the job market for software engineers has changed since the dot-com days, navigating boom-and-bust hiring cycles, and the developers finding work at Walmart and In-N-Out. Plus: “Party in the front, business in the back” isn’t just for haircuts anymore.

Why configuration is so complicated

Ben and Ryan explore why configuration is so complicated, the right to repair, the best programming languages for beginners, how AI is grading exams in Texas, Automattic’s $125M acquisition of Beeper, and why a major US city’s train system still relies on floppy disks. Plus: The unique challenge of keeping up with a field that’s changing as rapidly as GenAI.

If everyone is building AI, why aren’t more projects in production?

Ben talks with Shane McAllister, lead developer advocate at MongoDB, Stanimira Vlaeva, senior developer advocate at MongoDB, and Miku Jha, director, AI/ML and generative AI at Google Cloud, about the challenges and opportunities of operationalizing and scaling generative AI models in enterprise organizations.

How do you evaluate an LLM? Try an LLM.

On this episode: Stack Overflow senior data scientist Michael Geden tells Ryan and Ben about how data scientists evaluate large language models (LLMs) and their output. They cover the challenges involved in evaluating LLMs, how LLMs are being used to evaluate other LLMs, the importance of data validation, the need for human raters, and the tradeoffs involved in selecting and fine-tuning LLMs.

Are long context windows the end of RAG?

The home team is joined by Michael Foree, Stack Overflow’s director of data science and data platform, and occasional cohost Cassidy Williams, CTO at Contenda, for a conversation about long context windows, retrieval-augmented generation, and how Databricks’ new open LLM could change the game for developers. Plus: How will FTX co-founder Sam Bankman-Fried’s sentence of 25 years in prison reverberate in the blockchain and crypto spaces?

Data, data everywhere and not a stop to think

Ben and Ryan are joined by Nick Heudecker, Senior Director of Market Strategy and Competitive Intelligence at Cribl, to discuss the state of data and analytics. They cover GenAI, the role of incumbents vs. startups, challenges of data storage and security, data quality and ETL pipelines, measures of data quality for GenAI, and Cribl’s role in the data and observability space.

Is AI making your code worse?

Ben and Ryan are joined by Bill Harding, CEO of GitClear, for a discussion of AI-generated code quality and its impact on productivity. GitClear’s research has highlighted the fact that while AI can suggest valid code, it can’t necessarily reuse and modify existing code—a recipe for long-term challenges in maintainability and test coverage if devs are too dependent on AI code-gen tools.

Why the creator of Node.js® created a new JavaScript runtime

Ryan Dahl, creator of Node.js and Deno, tells us about his journey into software development and the creation of Node.js. He explains why he started Deno, a new JavaScript runtime. Ryan also introduces JSR, an alternative to NPM, and emphasizes the importance of security in the JavaScript ecosystem. Plus: Thoughts on the future of JavaScript, including the role of TypeScript and bridging the gap between server-side and browser JavaScript.