In the seventh lesson of the series we'll discuss some methods for comparing linear regression models. In the process, we'll learn about the problem of overfitting and investigate some of the pros and cons of various evaluation metrics (such as R-squared, adjusted R-squared, log-likelihood, AIC, and BIC). We'll also continue to practice our Python skills.
Here are some Stack Overflow questions related to the work we did in today's session:
- How do I calculate the adjusted R-squared score using scikit-learn?
- scikit-learn & statsmodels - which R-squared is correct?
- How to compute AIC for linear regression model in Python?
If you want to ask any questions or provide feedback on the lesson, you are welcome to leave a comment on the YouTube recording of this lesson. If you’d like to watch a session live, follow the Codecademy YouTube channel. We'll be live again on Tuesday, July 13 at 11am EDT to summarize all of the topics we've covered over the past seven weeks. You can join that session here.
Finally, if you want even more linear regression content, you can sign up for the Linear Regression in Python interactive course this series was based on. This course was developed by Sophie and has many more quizzes, projects, and helpful nuggets that we can’t fit into our streams!