
CausalML Book Ch1: Foundations of Linear Regression and Prediction
About this content
This episode explores the foundational concepts of linear regression as a tool for predictive inference and association analysis. It details the Best Linear Prediction (BLP) problem and its finite-sample counterpart, Ordinary Least Squares (OLS), emphasizing their statistical properties, including analysis of variance and the challenges of overfitting when the number of parameters is not small relative to the sample size. The text further introduces sample splitting as a method for robustly evaluating predictive models and clarifies how partialling-out helps in understanding the predictive effects of specific regressors, such as in analyzing wage gaps. Finally, it discusses adaptive statistical inference and the behavior of OLS in high-dimensional settings where traditional assumptions may not hold.
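As a rough companion to the topics listed above, here is a minimal sketch (not from the episode or the book's own code) of two of the ideas mentioned: sample splitting to evaluate an OLS fit out of sample, and partialling-out (Frisch-Waugh-Lovell) to isolate the predictive effect of one regressor. The synthetic data and variable names (D for the regressor of interest, W for controls, y for log wage) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic wage-style data: D is the regressor of interest (e.g., a group
# indicator), W are controls, and the outcome y depends on both.
n = 2000
D = rng.binomial(1, 0.5, size=n)
W = rng.normal(size=(n, 5))
y = 0.2 * D + W @ np.array([0.5, -0.3, 0.1, 0.0, 0.2]) + rng.normal(size=n)

# 1) Sample splitting: fit OLS on one half, report out-of-sample R^2 on the
#    held-out half to guard against an overly optimistic in-sample fit.
X = np.column_stack([D, W])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
ols = LinearRegression().fit(X_tr, y_tr)
print("out-of-sample R^2:", ols.score(X_te, y_te))

# 2) Partialling-out (Frisch-Waugh-Lovell): residualize y and D on the
#    controls W, then regress the y-residuals on the D-residuals; the slope
#    equals the OLS coefficient on D from the full regression.
ry = y - LinearRegression().fit(W, y).predict(W)
rD = D - LinearRegression().fit(W, D).predict(W)
beta_D = np.dot(rD, ry) / np.dot(rD, rD)
print("partialled-out coefficient on D:", beta_D)
```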
Disclosure
- The CausalML Book: Chernozhukov, V., Hansen, C., Kallus, N., Spindler, M., & Syrgkanis, V. (2024). Applied Causal Inference Powered by ML and AI. CausalML-book.org; arXiv:2403.02467.
- The audio summary is generated by Google NotebookLM https://notebooklm.google/
- The episode art is generated by OpenAI ChatGPT