Semantic search without the napalm grandma exploit (Ep. 600)

Ben and senior software engineer Kyle Mitofsky are joined by two people who worked on the launch of Overflow AI: director of data science and data platform Michael Foree and senior software developer Alex Warren. They talk about how and why Stack Overflow launched semantic search, how to ensure a knowledge base is trustworthy, and why user prompts can make LLMs vulnerable to exploits.

Episode notes:

Last month, we announced the launch of OverflowAI from the stage of WeAreDevelopers. To learn more about AI-driven products and features in the works, check out Stack Overflow Labs.

Among the projects Alex works on is a semantic search API and the new search experience on Stack Overflow for Teams.

LLMs can be vulnerable to jailbreak attacks like the napalm grandma exploit.

Kyle is on GitHub, LinkedIn, and text-based social media.

Michael is on LinkedIn.

Alex is on LinkedIn.

Shoutout to Lifeboat badge winner Pushpendra, who scooped Error: Invalid postback or callback argument from a churning ocean of ignorance.
