How do you evaluate an LLM? Try an LLM.

On this episode: Stack Overflow senior data scientist Michael Geden tells Ryan and Ben how data scientists evaluate large language models (LLMs) and their output. They cover the challenges involved in evaluating LLMs, how LLMs are being used to evaluate other LLMs, the importance of data validation, the need for human raters, and the tradeoffs involved in selecting and fine-tuning LLMs.
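
To make the "LLMs evaluating LLMs" idea concrete, here is a minimal sketch of the LLM-as-judge pattern: one model grades another model's answer against a rubric. It assumes the `openai` Python package and an API key in the environment; the model name, prompt, and 1-5 rubric are illustrative, not a prescription from the episode.

```python
# Minimal LLM-as-judge sketch: a judge model scores another model's answer.
# Assumes the `openai` package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading an answer for factual accuracy.
Question: {question}
Answer: {answer}
Reply with a single integer from 1 (wrong) to 5 (fully correct)."""

def judge_answer(question: str, answer: str, judge_model: str = "gpt-4o") -> int:
    """Ask a judge model to rate an answer on a 1-5 scale."""
    response = client.chat.completions.create(
        model=judge_model,
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(question=question, answer=answer),
        }],
        temperature=0,  # deterministic grading
    )
    return int(response.choices[0].message.content.strip())
```

In practice such scores are noisy, which is why the episode also stresses validating the data and keeping human raters in the loop rather than trusting the judge model blindly.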

Are long context windows the end of RAG?

The home team is joined by Michael Foree, Stack Overflow’s director of data science and data platform, and occasional cohost Cassidy Williams, CTO at Contenda, for a conversation about long context windows, retrieval-augmented generation (RAG), and how Databricks’ new open LLM could change the game for developers. Plus: How will FTX co-founder Sam Bankman-Fried’s 25-year prison sentence reverberate through the blockchain and crypto spaces?
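
For readers new to the RAG side of that debate, here is a toy retrieve-then-generate loop: instead of stuffing an entire corpus into a long context window, you retrieve only the most relevant documents and prepend them to the prompt. It uses scikit-learn's TF-IDF as a stand-in for a real embedding model; the documents and the final LLM call are illustrative.

```python
# Toy RAG retrieval step: rank documents by similarity to the query,
# then build a prompt from the top hits. TF-IDF stands in for embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Databricks released a new open large language model.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
    "Long context windows let models read more tokens per request.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    top_indices = scores.argsort()[::-1][:k]
    return [documents[i] for i in top_indices]

question = "What did Databricks release?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then be sent to the LLM of your choice.
```

The tradeoff the episode circles is exactly this: a long context window lets you skip the retrieval step, but retrieval keeps prompts small, cheap, and grounded in a specific corpus.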

A leading ML educator on what you need to know about LLMs

Machine learning scientist, author, and LLM developer Maxime Labonne talks with Ben and Ryan about his work as a lead machine learning scientist, his contributions to the open-source community, the value of retrieval-augmented generation (RAG), and the process of fine-tuning LLMs by selectively unfreezing layers. The team talks through the challenges and considerations in implementing GenAI, from data quality to integration.
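
As a rough illustration of the unfreezing technique mentioned above, here is a sketch using Hugging Face `transformers`: freeze every pretrained weight, then unfreeze only the last few transformer blocks so fine-tuning updates high-level representations while most of the model stays fixed. The model name and the number of unfrozen blocks are illustrative assumptions, not Labonne's specific recipe.

```python
# Sketch of selective layer unfreezing for fine-tuning a GPT-2-style model.
# Assumes the Hugging Face `transformers` package; "gpt2" is illustrative.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Freeze all pretrained parameters first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the last two transformer blocks and the final layer norm.
for block in model.transformer.h[-2:]:
    for param in block.parameters():
        param.requires_grad = True
for param in model.transformer.ln_f.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} of {total:,}")
```

Unfreezing fewer layers trains faster and resists overfitting on small datasets; unfreezing more layers lets the model adapt further at the cost of compute and data requirements.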