The server-side rendering equivalent for LLM inference workloads
Ryan is joined by Tuhin Srivastava, CEO and co-founder of Baseten, to explore the evolving landscape of AI infrastructure and inference workloads: how the shift from traditional machine learning models to large-scale neural networks has made efficient GPU usage challenging, and what hardware-specific optimizations could mean for the future of AI.
