The hardest part of building software is not coding; it’s requirements
Why replacing programmers with AI won’t be so easy.
From prompt attacks to data leaks, LLMs offer new capabilities and new threats
While there are plenty of dangers out there, it’s not all doom and gloom; we also talk about how to mitigate these threats.
Are LLMs the end of computer programming (as we know it)?
Ben and Ryan discuss how LLMs are changing the industry and practice of software engineering, a notorious Crash Bandicoot bug, and communication via a series of tubes.
Will developers return to hostile offices?
Ben, Ryan, and Eira convene to discuss return-to-office mandates, what’s surprising about employee attrition in 2023, and how technology can preserve digital records of cultural heritage sites before they’re lost for good.
How to scale a business-ready AI platform with watsonx: Q&A with IBM
We chat with IBM about how their watsonx platform makes generative AI more than just a fun toy.
Retrieval augmented generation: Keeping LLMs relevant and current
Retrieval augmented generation (RAG) is a strategy that helps address both LLM hallucinations and out-of-date training data.
USB-C for all, PHP 4EVA, and what do LLMs actually know (if anything)?
Ben and Ryan settle in for a wide-ranging discussion about whether large language models know anything, whether language ability is unique to humans, and what the end of the Hollywood writers’ strike says about the future of AI-generated content.
Fitting AI models in your pocket with quantization
A Qualcomm expert breaks down some of the tools and techniques they use to fit GenAI models on a smartphone.
Semantic search without the napalm grandma exploit (Ep. 600)
Ben and senior software engineer Kyle Mitofsky are joined by two people who worked on the launch of Overflow AI: director of data science and data platform Michael Foree and senior software developer Alex Warren. They talk about how and why Stack Overflow launched semantic search, how to ensure a knowledge base is trustworthy, and why user prompts can make LLMs vulnerable to exploits.
The Overflow #186: Do large language models know what they're talking about?
Knowledge management and AI, VPN security, and an SVG deep dive.
Do large language models know what they are talking about?
Large language models seem to possess the ability to reason intelligently, but does that mean they actually know things?
Let’s talk large language models (Ep. 550)
The home team unpacks their complicated feelings about AI, the Beyoncé deepfake that got K-pop fans’ hopes up, and the pandemic’s ripple effects on today’s teenagers. Ben, the world’s worst coder, tells Cassidy and Ceora about building a web app with an AI assistant.