
Detecting errors in AI-generated code

Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests.

Hero image credit: Alexandra Francis

Read the paper Gias coauthored about incorrectness in AI-generated code or explore more of his research.

You can connect with Gias via his website.

We previously covered research Gias was involved in on Stack Overflow code snippets and spoke to his team about deriving sentiment from Stack Overflow comments.

Shoutout to Stack Overflow user Adhi Ardiansyah for an excellent explanation of how to update a GitHub access token via the command line.

