
The new pair programming: an AI agent that cleans your code as you write

Ben welcomes Sonar CEO Tariq Shaukat for a conversation about AI coding tools’ potential to boost developer productivity—and how to balance those gains against code quality and security concerns. They talk about Sonar’s origins as an open-source code quality tool, the excellent reasons to embrace a “clean as you code” philosophy, and how to determine where AI coding tools can be helpful and where they can’t (yet).

How API security is evolving for the GenAI era

Ben Popper chats with Keith Babo, Head of Product at Solo.io, about how the API security landscape is changing in the era of GenAI. They talk through the role of governance in AI, the importance of data protection, and the role API gateways play in enhancing security and functionality. Keith shares his insights on retrieval-augmented generation (RAG) systems, protecting PII, and the necessity of human-in-the-loop AI development.

What launching rockets taught this CTO about hardware observability

Austin Spiegel, CTO and co-founder of Sift, tells Ben and Ryan about his journey from studying film to working at SpaceX to founding Sift. Austin shares his perspective on software development in high-stakes environments, the challenges of hardware observability, and why paranoia is valuable in safety-critical engineering. Bonus story: Austin invited Elon Musk to speak at his student club…and he came!

Is this the real life? Training autonomous cars with simulations

Ben Popper interviews Vladislav Voroninski, CEO of Helm.ai, about unsupervised learning and the future of AI in autonomous driving. They discuss GenAI’s role in bridging the gap between simulation and reality, the challenges of scaling autonomous driving systems, the commercial potential of partial autonomy, and why software is emerging as a key differentiator in vehicle sales. Vlad spotlights the value of multimodal foundation models and how compute shortages affect AI startups.

Deedy Das: from coding at Meta, to search at Google, to investing with Anthropic

We chat with Deedy Das, a Principal at Menlo Ventures, who began his career as a software engineer at Facebook and Google. He then dipped a toe in the startup world, spending time at the company now known as Glean. More recently, he became a venture capitalist, investing in AI and infrastructure out of the Anthology Fund, a partnership between Menlo Ventures and Anthropic.

He sold his first company for billions. Now he’s building a better developer experience.

Founder and entrepreneur Jyoti Bansal tells Ben, Cassidy, and Eira about the developer challenges he aims to solve with his new venture, Harness, an AI-driven software development platform meant to take the pain out of DevOps. Jyoti shares his journey as a founder, his perspective on the venture capital landscape, and the reasons behind his decision to raise debt capital for Harness.

Detecting errors in AI-generated code

Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests.

Looking under the hood at the tech stack that powers multimodal AI

Ryan chats with Russ d’Sa, cofounder and CEO of LiveKit, about multimodal AI and the technology that makes it possible. They talk through the tech stack required, including the use of WebRTC and UDP protocols for real-time audio and video streaming. They also explore the big challenges involved in ensuring privacy and security in streaming data, namely end-to-end encryption and obfuscation.

The world’s largest open-source business has plans for enhancing LLMs

Ben and Ryan talk to Scott McCarty, Global Senior Principal Product Manager for Red Hat Enterprise Linux, about the intersection of large language models (LLMs) and open source. They discuss the challenges and benefits of open-source LLMs, the importance of attribution and transparency, and the revolutionary potential of LLM-driven applications. They also explore the role of LLMs in code generation, testing, and documentation.