Episodes

  • CausalML Book Ch1: Foundations of Linear Regression and Prediction
    2025/07/01

    This episode explores the foundational concepts of linear regression as a tool for predictive inference and association analysis. It details the Best Linear Prediction (BLP) problem and its finite-sample counterpart, Ordinary Least Squares (OLS), emphasizing their statistical properties, including analysis of variance and the challenges of overfitting when the number of parameters is not small relative to the sample size. The text further introduces sample splitting as a method for robustly evaluating predictive models and clarifies how partialling-out helps in understanding the predictive effects of specific regressors, such as in analyzing wage gaps. Finally, it discusses adaptive statistical inference and the behavior of OLS in high-dimensional settings where traditional assumptions may not hold.
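
    As an illustration of the partialling-out idea mentioned above, the minimal sketch below residualizes both the outcome and the regressor of interest on the remaining controls with OLS and then regresses residual on residual; by the Frisch-Waugh-Lovell theorem this recovers the same coefficient as the full regression. The data are simulated, not the book's wage example.

        # A minimal sketch of partialling-out (Frisch-Waugh-Lovell) with OLS on simulated data.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n, p = 1000, 5
        W = rng.normal(size=(n, p))                                # controls
        d = W @ rng.normal(size=p) + rng.normal(size=n)            # regressor of interest
        y = 2.0 * d + W @ rng.normal(size=p) + rng.normal(size=n)  # outcome

        # Residualize y and d on the controls, then regress residual on residual.
        res_y = y - LinearRegression().fit(W, y).predict(W)
        res_d = d - LinearRegression().fit(W, d).predict(W)
        beta = np.sum(res_d * res_y) / np.sum(res_d ** 2)
        print(f"partialled-out coefficient on d: {beta:.3f}")      # close to 2.0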

    Disclosure

    • The CausalML Book: Chernozhukov, V., Hansen, C., Kallus, N., Spindler, M., & Syrgkanis, V. (2024): Applied Causal Inference Powered by ML and AI. CausalML-book.org; arXiv:2403.02467.
    • Audio summary is generated by Google NotebookLM https://notebooklm.google/
    • The episode art is generated by OpenAI ChatGPT
    15 min
  • CausalML Book Ch17: Regression Discontinuity Designs in Causal Inference
    2025/07/01

    This episode explores regression discontinuity designs (RDDs), a powerful method for identifying causal effects in non-experimental settings. The authors explain the basic RDD framework, in which treatment assignment is determined by a running variable crossing a cutoff value. The text highlights how modern machine learning (ML) methods can enhance RDD analysis, particularly when dealing with numerous covariates, improving efficiency and allowing for the study of heterogeneous treatment effects. An empirical example demonstrates the application of RDD and ML techniques to analyze the impact of an antipoverty program in Mexico.
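
    A minimal sketch of a sharp RDD estimate: fit separate local linear regressions on each side of the cutoff within an illustrative bandwidth and take the difference of the fitted values at the cutoff. The data and the bandwidth are simulated assumptions, not the book's Mexican antipoverty application.

        # A minimal sketch of a sharp RDD estimate via local linear fits on each side of the cutoff.
        import numpy as np

        rng = np.random.default_rng(1)
        n, cutoff, h = 5000, 0.0, 0.3                            # h: illustrative bandwidth
        x = rng.uniform(-1, 1, size=n)                           # running variable
        d = (x >= cutoff).astype(float)                          # sharp assignment at the cutoff
        y = 1.5 * d + 0.8 * x + rng.normal(scale=0.5, size=n)    # true jump of 1.5 at the cutoff

        def value_at_cutoff(xs, ys):
            """Fit y ~ 1 + (x - cutoff) and return the intercept (fitted value at the cutoff)."""
            X = np.column_stack([np.ones_like(xs), xs - cutoff])
            coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
            return coef[0]

        right = (x >= cutoff) & (x < cutoff + h)
        left = (x < cutoff) & (x > cutoff - h)
        tau = value_at_cutoff(x[right], y[right]) - value_at_cutoff(x[left], y[left])
        print(f"estimated jump at the cutoff: {tau:.2f}")        # close to 1.5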

    Disclosure

    • The CausalML Book: Chernozhukov, V., Hansen, C., Kallus, N., Spindler, M., & Syrgkanis, V. (2024): Applied Causal Inference Powered by ML and AI. CausalML-book.org; arXiv:2403.02467.
    • Audio summary is generated by Google NotebookLM https://notebooklm.google/
    • The episode art is generated by OpenAI ChatGPT
    18 min
  • CausalML Book Ch16: Causal Inference with Difference-in-Differences and DML
    2025/07/01

    This episode introduces and explains the Difference-in-Differences (DiD) framework, a widely used method in social sciences for estimating causal effects in situations with treatment and control groups over multiple time periods. It elaborates on the core assumption of "parallel trends" and discusses how Debiased Machine Learning (DML) methods can be used to incorporate high-dimensional control variables, enhancing the robustness of DiD analysis. The text illustrates these concepts with a practical example applying DML to study the impact of minimum wage changes on teen employment, analyzing different machine learning models and assessing their performance. The authors also briefly touch on more advanced DiD settings, such as those involving repeated cross-sections, and provide exercises for further study.
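
    A minimal sketch of the canonical two-group, two-period DiD estimator under parallel trends; the simulated data stand in for the chapter's minimum-wage application, and the DML version with high-dimensional controls is not shown.

        # A minimal sketch of the 2x2 difference-in-differences estimator on simulated data.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 4000
        treated = rng.integers(0, 2, size=n)          # group indicator
        post = rng.integers(0, 2, size=n)             # time indicator
        att = 0.7                                     # true effect on the treated in the post period
        y = (1.0 * treated + 0.5 * post               # group and time effects (parallel trends)
             + att * treated * post
             + rng.normal(scale=0.3, size=n))

        def group_mean(mask):
            return y[mask].mean()

        did = ((group_mean((treated == 1) & (post == 1)) - group_mean((treated == 1) & (post == 0)))
               - (group_mean((treated == 0) & (post == 1)) - group_mean((treated == 0) & (post == 0))))
        print(f"DiD estimate of the ATT: {did:.2f}")  # close to 0.7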

    Disclosure

    • The CausalML Book: Chernozhukov, V., Hansen, C., Kallus, N., Spindler, M., & Syrgkanis, V. (2024): Applied Causal Inference Powered by ML and AI. CausalML-book.org; arXiv:2403.02467.
    • Audio summary is generated by Google NotebookLM https://notebooklm.google/
    • The episode art is generated by OpenAI ChatGPT
    15 min
  • CausalML Book Ch15: Causal Machine Learning: CATE Estimation and Validation
    2025/07/01

    This episode focuses on methods for estimating and validating individualized treatment effects, or conditional average treatment effects (CATEs), using machine learning (ML) techniques. It explores various "meta-learning" strategies such as the S-Learner, T-Learner, Doubly Robust (DR)-Learner, and Residual (R)-Learner, comparing their strengths and weaknesses in different data scenarios. The text also discusses covariate shift and its implications for model performance, proposing adjustments. Finally, it addresses model selection and ensembling for CATE models, along with crucial validation techniques such as heterogeneity tests, calibration checks, and uplift curves to assess model quality and interpret treatment effects.
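
    A minimal sketch of one of the meta-learners, the T-Learner: fit separate outcome models on treated and control units and take the difference of their predictions as the CATE estimate. Simulated data; the S-, DR-, and R-Learners and the validation tools discussed in the episode are not shown.

        # A minimal sketch of a T-Learner for CATE estimation on simulated, randomized data.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)
        n, p = 4000, 5
        X = rng.normal(size=(n, p))
        d = rng.integers(0, 2, size=n)                 # randomized treatment
        tau = 1.0 + X[:, 0]                            # heterogeneous treatment effect
        y = X[:, 1] + tau * d + rng.normal(size=n)

        # Separate outcome models for treated and control units.
        m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[d == 1], y[d == 1])
        m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[d == 0], y[d == 0])
        cate_hat = m1.predict(X) - m0.predict(X)
        print(f"corr(true CATE, estimated CATE): {np.corrcoef(tau, cate_hat)[0, 1]:.2f}")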

    Disclosure

    • The CausalML Book: Chernozhukov, V., Hansen, C., Kallus, N., Spindler, M., & Syrgkanis, V. (2024): Applied Causal Inference Powered by ML and AI. CausalML-book.org; arXiv:2403.02467.
    • Audio summary is generated by Google NotebookLM https://notebooklm.google/
    • The episode art is generated by OpenAI ChatGPT
    28 min
  • CausalML Book Ch14: Statistical Inference on Heterogeneous Treatment Effects
    2025/07/01

    This episode focuses on Conditional Average Treatment Effects (CATEs), which are crucial for understanding how treatments affect different subgroups. It contrasts CATEs with simpler average treatment effects, highlighting the complexity and importance of personalized policy decisions. The text details least squares methods for learning CATEs, including Best Linear Approximations (BLAs) and Group Average Treatment Effects (GATEs), exemplified by a 401(k) study. Furthermore, it explores non-parametric inference for CATEs using Causal Forests and Doubly Robust Forests, demonstrating their application in the 401(k) example and a "welfare" experiment. The authors provide notebook resources for practical implementation of these statistical methods.
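
    A minimal sketch of the GATE idea: build doubly robust scores for the treatment effect (here with a known propensity of 0.5, as in a randomized experiment, and without the cross-fitting used in the chapter) and average them within pre-specified groups. Simulated data, not the 401(k) study.

        # A minimal sketch of Group Average Treatment Effects (GATEs) from doubly robust scores.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(4)
        n = 8000
        X = rng.normal(size=(n, 3))
        d = rng.integers(0, 2, size=n)                           # randomized treatment
        tau = np.where(X[:, 0] > 0, 2.0, 0.5)                    # effect differs by group
        y = X[:, 1] + tau * d + rng.normal(size=n)

        # Outcome regressions under treatment and control (cross-fitting omitted for brevity).
        mu1 = LinearRegression().fit(X[d == 1], y[d == 1]).predict(X)
        mu0 = LinearRegression().fit(X[d == 0], y[d == 0]).predict(X)
        p = 0.5                                                  # known propensity score
        dr_score = (mu1 - mu0
                    + d * (y - mu1) / p
                    - (1 - d) * (y - mu0) / (1 - p))

        group = X[:, 0] > 0
        print(f"GATE(x0 > 0):  {dr_score[group].mean():.2f}")    # near 2.0
        print(f"GATE(x0 <= 0): {dr_score[~group].mean():.2f}")   # near 0.5
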
    Disclosure

    • The CausalML Book: Chernozhukov, V., Hansen, C., Kallus, N., Spindler, M., & Syrgkanis, V. (2024): Applied Causal Inference Powered by ML and AI. CausalML-book.org; arXiv:2403.02467.
    • Audio summary is generated by Google NotebookLM https://notebooklm.google/
    • The episode art is generated by OpenAI ChatGPT
    20 min
  • CausalML Book Ch13: DML Inference Under Weak Identification
    2025/07/01

    This episode explores advanced econometric methods for causal inference using Double/Debiased Machine Learning (DML). It focuses on applying DML to instrumental variable (IV) models, including partially linear IV models and interactive IV regression models (IRM) for estimating Local Average Treatment Effects (LATE). A significant portion addresses robust DML inference under weak identification, a common challenge where instruments provide limited information about the endogenous variable. The chapter revisits classic examples like the effect of institutions on economic growth and 401(k) participation on financial assets, demonstrating how DML can offer more robust and flexible analyses compared to traditional methods, especially in the presence of weak instruments.
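
    A minimal sketch of the partialling-out IV estimator behind a partially linear IV model: residualize the outcome, treatment, and instrument on the controls, then form the usual IV ratio from the residuals. The data are simulated with a reasonably strong instrument, so the chapter's weak-identification-robust inference is not exercised here.

        # A minimal sketch of a partialling-out IV estimate on simulated data.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(5)
        n, p = 5000, 5
        X = rng.normal(size=(n, p))                               # controls
        z = X @ rng.normal(size=p) + rng.normal(size=n)           # instrument
        u = rng.normal(size=n)                                    # unobserved confounder
        d = 0.8 * z + u + X @ rng.normal(size=p) + rng.normal(size=n)   # endogenous treatment
        y = 1.5 * d + u + X @ rng.normal(size=p) + rng.normal(size=n)   # outcome

        def residualize(target):
            """Remove the part of target explained by the controls X."""
            return target - LinearRegression().fit(X, target).predict(X)

        ry, rd, rz = residualize(y), residualize(d), residualize(z)
        theta = np.sum(rz * ry) / np.sum(rz * rd)                 # IV ratio on residuals
        print(f"IV estimate of the treatment effect: {theta:.2f}")  # close to 1.5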

    Disclosure

    • The CausalML Book: Chernozhukov, V., Hansen, C., Kallus, N., Spindler, M., & Syrgkanis, V. (2024): Applied Causal Inference Powered by ML and AI. CausalML-book.org; arXiv:2403.02467.
    • Audio summary is generated by Google NotebookLM https://notebooklm.google/
    • The episode art is generated by OpenAI ChatGPT
    16 min
  • CausalML Book Ch12: Unobserved Confounders, Instrumental Variables, and Proxy Controls
    2025/07/01

    This episode examines methods for causal inference when unobserved confounders, unmeasured variables that affect both treatment and outcome, complicate the identification of true causal relationships. It begins by discussing sensitivity analysis to assess how robust causal inferences are to such unobserved confounders. The text then introduces instrumental variables (IVs) as a technique to identify causal effects in the presence of these hidden factors, offering both partially linear and non-linear models. Furthermore, the chapter explores the use of proxy controls, which are observed variables that act as stand-ins for unobserved confounders, to enable causal identification, extending these methods to non-linear settings. Throughout, the document highlights practical applications and the role of Double Machine Learning (DML) in these advanced causal inference strategies.
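
    A minimal sketch of why unobserved confounding biases a naive regression and how adjusting for a noisy proxy of the confounder can reduce, though not remove, that bias; this is a simplified stand-in for the chapter's proxy-control identification strategy, on simulated data.

        # A minimal sketch of unobserved confounding and a noisy proxy adjustment on simulated data.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(6)
        n = 20000
        u = rng.normal(size=n)                        # unobserved confounder
        d = u + rng.normal(size=n)                    # treatment affected by u
        y = 1.0 * d + 2.0 * u + rng.normal(size=n)    # outcome affected by d and u (true effect 1.0)
        proxy = u + rng.normal(scale=0.3, size=n)     # observed, noisy proxy for u

        naive = LinearRegression().fit(d.reshape(-1, 1), y).coef_[0]
        with_proxy = LinearRegression().fit(np.column_stack([d, proxy]), y).coef_[0]
        print(f"naive OLS (confounded):      {naive:.2f}")        # biased away from 1.0
        print(f"OLS adjusting for the proxy: {with_proxy:.2f}")   # much closer to 1.0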

    Disclosure

    • The CausalML Book: Chernozhukov, V., Hansen, C., Kallus, N., Spindler, M., & Syrgkanis, V. (2024): Applied Causal Inference Powered by ML and AI. CausalML-book.org; arXiv:2403.02467.
    • Audio summary is generated by Google NotebookLM https://notebooklm.google/
    • The episode art is generated by OpenAI ChatGPT
    17 min
  • CausalML Book Ch11: DAGs: Good and Bad Controls for Causal Inference
    2025/06/30

    This episode focuses on causal inference and the selection of control variables within the framework of Directed Acyclic Graphs (DAGs). It explains various strategies for constructing valid adjustment sets to identify average causal effects, such as conditioning on parents or common causes of treatment and outcome variables. The text differentiates between "good" and "bad" controls, emphasizing how conditioning on certain pre-treatment or post-treatment variables can introduce or amplify bias. Through examples like M-bias and collider bias, the authors illustrate scenarios where adjusting for seemingly innocuous variables can lead to incorrect causal conclusions. Ultimately, the chapter provides guidance on robust methods for causal identification while cautioning against common pitfalls in empirical research.
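
    A minimal sketch of collider bias, one of the "bad control" pitfalls: conditioning on a post-treatment variable caused by both treatment and outcome distorts an otherwise correct estimate. Simulated data, illustrative only.

        # A minimal sketch of collider bias from conditioning on a "bad control".
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)
        n = 20000
        d = rng.normal(size=n)                        # treatment
        y = 1.0 * d + rng.normal(size=n)              # outcome, true effect 1.0
        c = d + y + rng.normal(size=n)                # collider: caused by both d and y

        good = LinearRegression().fit(d.reshape(-1, 1), y).coef_[0]
        bad = LinearRegression().fit(np.column_stack([d, c]), y).coef_[0]
        print(f"no adjustment (correct here): {good:.2f}")   # close to 1.0
        print(f"adjusting for the collider:   {bad:.2f}")    # badly biased (pushed toward zero)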

    Disclosure

    • The CausalML Book: Chernozhukov, V., Hansen, C., Kallus, N., Spindler, M., & Syrgkanis, V. (2024): Applied Causal Inference Powered by ML and AI. CausalML-book.org; arXiv:2403.02467.
    • Audio summary is generated by Google NotebookLM https://notebooklm.google/
    • The episode art is generated by OpenAI ChatGPT
    25 min