Reliability for unreliable LLMs
Large language models are non-deterministic by design. Here's how you can inject a little bit of determinism into GenAI workflows.
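The post goes into the details; as a rough, non-authoritative sketch of the idea, here is one way to pin down what you can control (sampling parameters) and validate what you can't (output shape), written in Python against a hypothetical `call_model` stand-in rather than any specific client library.

```python
import json

def call_model(prompt: str, temperature: float = 0.0, seed: int = 42) -> str:
    """Hypothetical stand-in for a chat-completion call.

    Real clients expose similar knobs: temperature=0 narrows sampling and a
    fixed seed makes repeated runs more repeatable (though not guaranteed).
    """
    return '{"sentiment": "positive", "confidence": 0.93}'  # canned reply so the sketch runs

def generate_structured(prompt: str, required_keys: set[str], max_attempts: int = 3) -> dict:
    """Retry until the model returns parseable JSON containing the expected keys."""
    for attempt in range(max_attempts):
        raw = call_model(prompt, temperature=0.0, seed=42)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: try again
        if isinstance(data, dict) and required_keys <= data.keys():
            return data  # output passed validation
    raise RuntimeError(f"no valid response after {max_attempts} attempts")

if __name__ == "__main__":
    result = generate_structured(
        "Classify the sentiment of: 'Great docs!' Reply as JSON with keys sentiment and confidence.",
        required_keys={"sentiment", "confidence"},
    )
    print(result)
```

The validation loop doesn't make the model deterministic; it just guarantees that whatever reaches the rest of your workflow has a predictable shape.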

AI is no longer just a luxury for the most tech-savvy companies — it's now a necessity for organizational transformation. How are real teams successfully leveraging and innovating with these new tools?
On this episode, Ryan chats with Vish Abrams, chief architect at Heroku, about all the work that needs to be done after you’ve vibe-coded your dream app.
As a generation characterized as "digital natives," the way Gen Z interacts with and consumes knowledge is rooted in their desire for instant gratification and personalization. How will this affect the future of knowledge management and the technologies of tomorrow?
Ryan Donovan and Ben Popper sit down with Jamie de Guerre, SVP of Product at Together AI, to discuss the evolving landscape of AI and open-source models. They explore the significance of infrastructure in AI, the differences between open-source and closed-source models, and the ethical considerations surrounding AI technology. Jamie emphasizes the importance of leveraging internal data for model training and the need for transparency in AI practices.
Diverse, high-quality data is a prerequisite for reliable, effective, and ethical AI solutions.
Ryan and Ben welcome Tulsee Doshi and Logan Kilpatrick from Google DeepMind to discuss the advanced capabilities of the new Gemini 2.5.
Kathleen Vignos, VP of Software Engineering at Capital One, sits down with Ryan to explore shifting to 100% serverless architecture in enterprise, deploying talent for better customer experience, and fostering AI innovation and tech advancements in a regulated banking environment.
Positioned at the intersection of automation, decision intelligence, and data orchestration, AI agents are quickly emerging as essential tools for aligning business outcomes with technical workflows.
Ryan welcomes Glen Coates, VP of Product at Shopify, to dive into the intricacies of managing a developer-focused product, the challenges of backwards compatibility, and the implications of AI and LLMs in Shopify's development environment.
We’re always trying to make it easy for users to pick out the information they need and gain insights into their processes, so a natural language interface seemed like a dream.
Douwe Kiela, CEO and cofounder of Contextual AI, joins Ryan and Ben to explore the intricacies of retrieval-augmented generation (RAG). They discuss the early research Douwe did at Meta that jump-started the whole thing, the challenges of hallucinations, and the significance of context windows in AI applications.
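For readers new to the topic, here is a minimal sketch of the retrieve-then-prompt loop RAG is built around; the toy corpus and lexical-overlap scoring below are stand-ins for the vector search and embeddings a real system would use.

```python
from collections import Counter

# Toy corpus standing in for a real document store.
DOCS = [
    "RAG retrieves relevant documents and adds them to the prompt as context.",
    "Hallucinations happen when a model generates claims unsupported by its inputs.",
    "Context windows limit how much retrieved text can fit in a single prompt.",
]

def score(query: str, doc: str) -> int:
    """Crude lexical-overlap relevance score (real systems use vector embeddings)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def build_prompt(query: str, k: int = 2) -> str:
    """Retrieve the top-k documents and splice them into the prompt as context."""
    top = sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]
    context = "\n".join(f"- {doc}" for doc in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("Why do context windows matter for RAG?"))
```

Grounding the model in retrieved text is what makes hallucinations easier to catch: answers are supposed to be traceable back to the documents in the prompt.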
Matthew McCullough, VP of Product for Android Developer Experience, sits down with Ryan to talk advancements in Android development, enhancing developer efficiency and reducing routine toil, and the application of Gemini AI models to improve software toolchains.
Ryan is joined by Jeremy Edberg, CEO of DBOS, and Qian Li, the company's cofounder, to discuss durable execution and its use cases, its implementation using technologies like PostgreSQL, and its applications in machine learning pipelines and AI systems for reliability, debugging, and observability.
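As a rough illustration of what durable execution means (this is not DBOS's actual API), the sketch below checkpoints each workflow step in a database table so that re-running the workflow after a crash resumes where it left off instead of repeating completed work; SQLite stands in for the PostgreSQL table a real system would use, and the in-memory database is only to keep the example self-contained.

```python
import json
import sqlite3

# Durable execution, in miniature: persist each completed step's result so a
# re-run can skip it. A real system would use a durable PostgreSQL table.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE checkpoints ("
    "workflow_id TEXT, step TEXT, result TEXT, PRIMARY KEY (workflow_id, step))"
)

def run_step(workflow_id: str, step: str, fn):
    row = db.execute(
        "SELECT result FROM checkpoints WHERE workflow_id = ? AND step = ?",
        (workflow_id, step),
    ).fetchone()
    if row:
        return json.loads(row[0])  # step already finished in a previous run: reuse its result
    result = fn()                  # otherwise execute it and checkpoint the outcome
    db.execute(
        "INSERT INTO checkpoints VALUES (?, ?, ?)",
        (workflow_id, step, json.dumps(result)),
    )
    db.commit()
    return result

if __name__ == "__main__":
    wf = "order-42"
    items = run_step(wf, "fetch_items", lambda: ["gpu", "cable"])
    total = run_step(wf, "price_items", lambda: 1999.00)
    print(items, total)
```

The same checkpoint log doubles as an observability and debugging trail: every step's input state and result is already recorded.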
Christophe Coenraets, SVP of Developer Relations at Salesforce, tells Eira and Ben about building the new Salesforce Developer Edition, which includes access to the company’s agentic AI platform, Agentforce. Christophe explains how they solicited and incorporated feedback from the developer community in building the developer edition, what types of AI agents people are building, and the critical importance of guardrails and prompt engineering.
Money is pouring into the AI industry. Will software survive the disruption it causes?
Maryam Ashoori, Head of Product for watsonx.ai at IBM, joins Ryan and Eira to talk about the complexity of enterprise AI, the role of governance, the AI skill gap among developers, how AI coding tools impact developer productivity, what chain-of-thought reasoning entails, and what observability and monitoring look like for AI.
Ben Popper chats with CTO Abby Kearns about how Alembic is using composite AI and lessons learned from contact tracing and epidemiology to help companies map customer journeys and understand the ROI of their marketing spend. Ben and Abby also talk about where open-source models have the edge and the challenges startups face in building trust with big companies and securing the resources they need to grow.
Ryan welcomes Jeu George, cofounder and CEO of Orkes, to the show for a conversation about microservices orchestration. They talk through the evolution of microservices, the role of orchestration tools, and the importance of reliability in distributed systems. Their discussion also touches on the transition from open-source solutions to managed services, integration opportunities for AI agents, and the future of microservices in cloud computing.
At HumanX 2025, Ryan chatted with Rodrigo Liang, cofounder and CEO of SambaNova, about reimagining 30-year-old hardware architecture for the AI era.
Avoiding bad data is just as important in AI: bad data can open you up to fines, lawsuits, and lost customers.
Ryan talks with Greg Fallon, CEO of Geminus, about the intersection of AI and physical infrastructure, the evolution of simulation technology, the role of synthetic data in machine learning, and the importance of building trust in AI systems. Their conversation also touches on automation, security concerns inherent in AI-driven infrastructure, and AI’s potential to revolutionize how complex infrastructure systems are managed.
Self-supervised learning is a key advancement that revolutionized natural language processing and generative AI. Here’s how it works and two examples of how it is used to train language models.
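The article covers the details; as a quick sketch of the core idea, here is how the two classic objectives, next-token prediction and masked-token prediction, derive training pairs from unlabeled text alone (the whitespace tokenizer and [MASK] marker below are illustrative, not any particular model's vocabulary).

```python
import random

text = "self supervised learning builds its training signal from the data itself".split()

# Causal language modeling: predict each token from the tokens before it.
causal_pairs = [(text[:i], text[i]) for i in range(1, len(text))]

# Masked language modeling: hide one token and predict it from both sides.
random.seed(0)
i = random.randrange(len(text))
masked_input = text[:i] + ["[MASK]"] + text[i + 1:]
masked_pair = (masked_input, text[i])

print(causal_pairs[:2])
print(masked_pair)
```

No human labels appear anywhere: the raw text supplies both the inputs and the targets, which is what lets these objectives scale to web-sized corpora.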
Today’s episode is a roundup of spontaneous, on-the-ground conversations from HumanX 2025, featuring guests from CodeConductor, DDN, Cloudflare, and Galileo.