

How Google is helping developers get better answers from AI

Today’s guest is Logan Kilpatrick, a senior product manager at Google, who tells Ben about his journey from software engineering to machine learning to product management, all with an emphasis on reducing developer friction. They talk through the challenges of non-determinism in AI models and how Google is addressing these issues with a new feature: Grounding with Google Search. Plus, what working at the Apple Store taught Logan about product management.
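For a flavor of the non-determinism Logan and Ben discuss, here is a toy sketch (ours, not from the episode) of how sampling temperature turns the same model scores into different outputs on different runs; a real LLM does the same thing over a vocabulary of tens of thousands of tokens at every decoding step.

```python
# Toy illustration of non-deterministic decoding (not from the episode).
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Pick the next token from raw scores ("logits") via softmax sampling."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token, fully deterministic.
        return max(logits, key=logits.get)
    # Softmax with temperature: higher temperature flattens the distribution.
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    probs = {tok: w / total for tok, w in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Made-up scores for three candidate next tokens.
logits = {"Paris": 3.2, "Lyon": 1.1, "Marseille": 0.4}
print([sample_next_token(logits, temperature=0) for _ in range(5)])    # same token every time
print([sample_next_token(logits, temperature=1.0) for _ in range(5)])  # can vary run to run
```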

Tragedy of the (data) commons

Ben chats with Shayne Longpre and Robert Mahari of the Data Provenance Initiative about what GenAI means for the data commons. They discuss the decline of public datasets, the complexities of fair use in AI training, the challenges researchers face in accessing data, potential applications for synthetic data, and the evolving legal landscape surrounding AI and copyright.

The new pair programming: an AI agent that cleans your code as you write

Ben welcomes Sonar CEO Tariq Shaukat for a conversation about AI coding tools’ potential to boost developer productivity and how to balance those gains against code quality and security concerns. They talk about Sonar’s origins as an open-source code quality tool, the excellent reasons to embrace a “clean as you code” philosophy, and how to determine where AI coding tools can be helpful and where they can’t (yet).

How API security is evolving for the GenAI era

Ben Popper chats with Keith Babo, Head of Product at Solo.io, about how the API security landscape is changing in the era of GenAI. They talk through the role of governance in AI, the importance of data protection, and the role API gateways play in enhancing security and functionality. Keith shares his insights on retrieval-augmented generation (RAG) systems, protecting PII, and the necessity of human-in-the-loop AI development.
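As a rough illustration of the data-protection theme (not Solo.io’s implementation), the sketch below shows the kind of PII scrubbing an API gateway might apply to a prompt before forwarding it to a model provider; the patterns are deliberately simplistic, and production systems typically rely on dedicated PII detectors rather than a handful of regexes.

```python
# Toy sketch of gateway-side PII redaction before a prompt reaches an LLM.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Follow up with jane.doe@example.com, SSN 123-45-6789, phone 555-867-5309."
print(redact_pii(prompt))
# Follow up with [EMAIL REDACTED], SSN [SSN REDACTED], phone [PHONE REDACTED].
```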

Is this the real life? Training autonomous cars with simulations

Ben Popper interviews Vladislav Voroninski, CEO of Helm.ai, about unsupervised learning and the future of AI in autonomous driving. They discuss GenAI’s role in bridging the gap between simulation and reality, the challenges of scaling autonomous driving systems, the commercial potential of partial autonomy, and why software is emerging as a key differentiator in vehicle sales. Vlad spotlights the value of multimodal foundation models and how compute shortages affect AI startups.

He sold his first company for billions. Now he’s building a better developer experience.

Founder and entrepreneur Jyoti Bansal tells Ben, Cassidy, and Eira about the developer challenges he aims to solve with his new venture, Harness, an AI-driven software development platform meant to take the pain out of DevOps. Jyoti shares his journey as a founder, his perspective on the venture capital landscape, and the reasoning behind his decision to raise debt capital for Harness.

Detecting errors in AI-generated code

Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests.
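To make the hallucination-detection idea concrete, here is a minimal baseline check (not the method from Gias’s paper): treat LLM-generated code as untrusted, and reject it if it fails to compile or fails tests you already trust. The candidate function and its tests are invented for illustration.

```python
# Baseline check for LLM-generated code: compile it, then run trusted tests.
llm_generated_code = """
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
"""

def passes_checks(source: str) -> bool:
    # 1. Does the candidate even parse/compile?
    try:
        compiled = compile(source, "<llm_candidate>", "exec")
    except SyntaxError:
        return False
    # 2. Does it pass the hand-written tests? (Runs in an isolated namespace,
    #    but note there is no real sandboxing here.)
    namespace: dict = {}
    exec(compiled, namespace)
    median = namespace.get("median")
    if median is None:
        return False
    try:
        assert median([3, 1, 2]) == 2
        assert median([1, 2, 3, 4]) == 2.5
    except Exception:
        return False
    return True

print(passes_checks(llm_generated_code))  # True for this candidate
```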

Looking under the hood at the tech stack that powers multimodal AI

Ryan chats with Russ d’Sa, cofounder and CEO of LiveKit, about multimodal AI and the technology that makes it possible. They talk through the tech stack required, including the use of WebRTC and UDP protocols for real-time audio and video streaming. They also explore the big challenges involved in keeping streamed data private and secure, particularly around end-to-end encryption and obfuscation.
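To illustrate why real-time media leans on UDP (a toy sketch, not LiveKit’s stack): UDP datagrams are fire-and-forget, so a late or lost packet is simply dropped instead of stalling the stream the way TCP retransmission would; WebRTC then layers RTP on top to add sequence numbers and timestamps so receivers can reorder or discard late packets themselves.

```python
# Minimal UDP send/receive demo on localhost showing the fire-and-forget model.
import socket

ADDR = ("127.0.0.1", 50007)  # arbitrary local port for the demo

# Receiver: a plain UDP socket bound to a local port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(ADDR)
receiver.settimeout(1.0)

# Sender: no connection handshake, no delivery guarantee, no ordering.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(3):
    frame = f"frame-{seq}".encode()  # stand-in for an encoded audio/video frame
    sender.sendto(frame, ADDR)

try:
    while True:
        data, _ = receiver.recvfrom(2048)
        print("received", data.decode())
except socket.timeout:
    print("no more packets (anything lost is gone for good)")
finally:
    sender.close()
    receiver.close()
```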

The world’s largest open-source business has plans for enhancing LLMs

Ben and Ryan talk to Scott McCarty, Global Senior Principal Product Manager for Red Hat Enterprise Linux, about the intersection between LLMs (large language models) and open source. They discuss the challenges and benefits of open-source LLMs, the importance of attribution and transparency, and the revolutionary potential for LLM-driven applications. They also explore the role of LLMs in code generation, testing, and documentation.

OverflowAI and the holy grail of search

Product manager Ash Zade joins the home team to talk about the journey to OverflowAI, a GenAI-powered add-on for Stack Overflow for Teams that’s available now. Ash describes how his team built Enhanced Search, the problems they set out to solve, how they ensured data quality and accuracy, the role of metadata and prompt engineering, and the feedback they’ve gotten from users so far.
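As a generic sketch of retrieval-augmented prompting (not OverflowAI’s actual prompt or data model), the snippet below shows how retrieved results and their metadata might be assembled into a grounded prompt so the model is steered toward a team’s own accepted answers; the field names and example result are hypothetical.

```python
# Generic retrieval-augmented prompt assembly with hypothetical result fields.
def build_prompt(question: str, results: list[dict]) -> str:
    """Combine retrieved snippets and their metadata into a single prompt."""
    context_blocks = [
        f"Source: {r['title']} (votes: {r['score']}, tags: {', '.join(r['tags'])})\n{r['body']}"
        for r in results
    ]
    return (
        "Answer the question using only the sources below.\n"
        "If the sources do not contain the answer, say so.\n\n"
        + "\n\n".join(context_blocks)
        + f"\n\nQuestion: {question}\nAnswer:"
    )

# Hypothetical search results standing in for whatever the retrieval layer returns.
results = [
    {
        "title": "How do I retry a failed request?",
        "score": 42,
        "tags": ["python", "requests"],
        "body": "Use a Session with a retry adapter...",
    },
]
print(build_prompt("How should our service retry failed HTTP calls?", results))
```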