Basic PLR Models


ATE Coverage

The simulations are based on the make_plr_CCDDHNR2018 DGP with \(500\) observations.
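To illustrate what a single simulation repetition does, here is a simplified numpy-only sketch of DML with the partialling-out score: a linear stand-in for the DGP (the study itself uses the make_plr_CCDDHNR2018 DGP with Lasso and Random Forest learners), 2-fold cross-fitting with plain OLS nuisance estimates, and a coverage check for the 95%-confidence interval. All coefficients and helper names below are illustrative assumptions, not the study's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear stand-in for the PLR DGP: y = theta*d + x @ beta + eps,
# d = x @ gamma + v (coefficients here are illustrative only)
n, p, theta = 500, 20, 0.5
x = rng.standard_normal((n, p))
beta = 1.0 / (1.0 + np.arange(p))
d = x @ beta[::-1] + rng.standard_normal(n)        # treatment equation
y = theta * d + x @ beta + rng.standard_normal(n)  # outcome equation

def ols_predict(x_tr, t_tr, x_te):
    """Fit OLS on the training fold, predict on the held-out fold
    (stand-in for the Lasso / Random Forest learners in the study)."""
    coef, *_ = np.linalg.lstsq(x_tr, t_tr, rcond=None)
    return x_te @ coef

# 2-fold cross-fitting of the nuisances l(x) = E[y|x] and m(x) = E[d|x]
folds = np.arange(n) % 2
l_hat, m_hat = np.empty(n), np.empty(n)
for k in (0, 1):
    tr, te = folds != k, folds == k
    l_hat[te] = ols_predict(x[tr], y[tr], x[te])
    m_hat[te] = ols_predict(x[tr], d[tr], x[te])

# Partialling-out estimate: theta_hat solves
# sum((y - l_hat - theta * (d - m_hat)) * (d - m_hat)) = 0
v, u = d - m_hat, y - l_hat
theta_hat = (v @ u) / (v @ v)

# Sandwich standard error from the orthogonal score, then the 95% CI
# and its coverage indicator for this single repetition
psi = (u - theta_hat * v) * v
se = np.sqrt(np.mean(psi**2)) / np.mean(v**2) / np.sqrt(n)
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)
covered = ci[0] <= theta <= ci[1]
```

The reported coverage is the fraction of repetitions in which `covered` is true; with well-specified learners it should be close to the nominal 95%.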

|                         |                     |
|-------------------------|---------------------|
| DoubleML Version        | 0.8.2               |
| Script                  | plr_ate_coverage.py |
| Date                    | 2024-08-12 16:26:22 |
| Total Runtime (seconds) | 2551.825863         |
| Python Version          | 3.12.4              |

Partialling out

Coverage for 95.0%-Confidence Interval over 1000 Repetitions

| Learner l     | Learner m     | Bias  | CI Length | Coverage |
|---------------|---------------|-------|-----------|----------|
| Lasso         | Lasso         | 0.035 | 0.175     | 0.956    |
| Lasso         | Random Forest | 0.042 | 0.171     | 0.887    |
| Random Forest | Lasso         | 0.036 | 0.181     | 0.952    |
| Random Forest | Random Forest | 0.037 | 0.174     | 0.940    |

Coverage for 90.0%-Confidence Interval over 1000 Repetitions

| Learner l     | Learner m     | Bias  | CI Length | Coverage |
|---------------|---------------|-------|-----------|----------|
| Lasso         | Lasso         | 0.035 | 0.146     | 0.908    |
| Lasso         | Random Forest | 0.042 | 0.143     | 0.816    |
| Random Forest | Lasso         | 0.036 | 0.152     | 0.908    |
| Random Forest | Random Forest | 0.037 | 0.146     | 0.875    |

IV-type

For the IV-type score, the learners ml_l and ml_g are both set to the same type of learner (reported as Learner g in the tables below).
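For context, the two Neyman-orthogonal score functions for the PLR model differ in their nuisance components, with \(\ell(X)=\mathbb{E}[Y\mid X]\) and \(m(X)=\mathbb{E}[D\mid X]\):

```latex
% Partialling-out score (nuisances: \ell and m)
\psi(W; \theta, \eta)
  = \bigl(Y - \ell(X) - \theta\,(D - m(X))\bigr)\,\bigl(D - m(X)\bigr)

% IV-type score (nuisances: g and m)
\psi(W; \theta, \eta)
  = \bigl(Y - \theta D - g(X)\bigr)\,\bigl(D - m(X)\bigr)
```

This is why the IV-type tables report Learner g in place of Learner l.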

Coverage for 95.0%-Confidence Interval over 1000 Repetitions

| Learner g     | Learner m     | Bias  | CI Length | Coverage |
|---------------|---------------|-------|-----------|----------|
| Lasso         | Lasso         | 0.035 | 0.166     | 0.945    |
| Lasso         | Random Forest | 0.036 | 0.175     | 0.962    |
| Random Forest | Lasso         | 0.036 | 0.169     | 0.945    |
| Random Forest | Random Forest | 0.037 | 0.178     | 0.951    |

Coverage for 90.0%-Confidence Interval over 1000 Repetitions

| Learner g     | Learner m     | Bias  | CI Length | Coverage |
|---------------|---------------|-------|-----------|----------|
| Lasso         | Lasso         | 0.035 | 0.139     | 0.881    |
| Lasso         | Random Forest | 0.036 | 0.147     | 0.904    |
| Random Forest | Lasso         | 0.036 | 0.142     | 0.877    |
| Random Forest | Random Forest | 0.037 | 0.149     | 0.898    |

ATE Sensitivity

The simulations are based on the make_confounded_plr_data DGP with \(1000\) observations, as highlighted in the Example Gallery. As the DGP is nonlinear, only correspondingly flexible (nonlinear) learners are used. Since the DGP includes unobserved confounders, we expect a bias in the ATE estimates, leading to low coverage of the true parameter.

Both sensitivity parameters are set to \(cf_y=cf_d=0.1\), such that the robustness value \(RV\) should be approximately \(10\%\). Further, the corresponding confidence intervals are one-sided (since the direction of the bias is unknown), such that only one side should attain the corresponding coverage level (here only the upper coverage is relevant, since the bias is positive). Note that attaining the nominal coverage level requires the value of \(\rho\) to be correctly specified; under the conservative choice \(|\rho|=1\), the coverage will generally be (significantly) larger than the nominal level.
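To make the one-sided construction concrete, here is a stylized numpy sketch (not the DoubleML internals): given a point estimate, its standard error, and a worst-case confounding bias bound implied by \(cf_y\), \(cf_d\) and \(\rho\), the estimate is first shifted by the bias in each direction and a one-sided normal quantile is then applied on each side. All numeric values below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical inputs (illustrative only, not study results)
theta_hat = 0.60   # point estimate of the ATE
se = 0.05          # its standard error
max_bias = 0.08    # worst-case bias implied by cf_y, cf_d and rho
z = 1.645          # one-sided 95% normal quantile

# Shift the estimate by the worst-case bias in each direction ...
theta_lower = theta_hat - max_bias
theta_upper = theta_hat + max_bias

# ... then extend each bound one-sidedly; only one side (here the
# upper one, if the true bias is positive) is expected to attain
# the nominal coverage level
ci_lower = theta_lower - z * se
ci_upper = theta_upper + z * se
print(f"sensitivity interval: [{ci_lower:.3f}, {ci_upper:.3f}]")
```

The tables below report coverage separately for the lower and upper bounds, which is why the side opposite to the bias direction shows coverage of 1.000.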

|                         |                        |
|-------------------------|------------------------|
| DoubleML Version        | 0.8.2                  |
| Script                  | plr_ate_sensitivity.py |
| Date                    | 2024-08-13 12:28:58    |
| Total Runtime (seconds) | 13316.461495           |
| Python Version          | 3.12.4                 |

Partialling out

Coverage for 95.0%-Confidence Interval over 500 Repetitions

| Learner l     | Learner m     | Bias  | Bias (Lower) | Bias (Upper) | Coverage | Coverage (Lower) | Coverage (Upper) | RV    | RVa   |
|---------------|---------------|-------|--------------|--------------|----------|------------------|------------------|-------|-------|
| LGBM          | LGBM          | 0.922 | 1.646        | 0.283        | 0.114    | 1.000            | 0.962            | 0.123 | 0.052 |
| LGBM          | Random Forest | 0.995 | 1.810        | 0.291        | 0.142    | 1.000            | 0.974            | 0.118 | 0.045 |
| Random Forest | LGBM          | 1.572 | 2.774        | 0.402        | 0.008    | 1.000            | 0.946            | 0.128 | 0.067 |
| Random Forest | Random Forest | 1.736 | 3.061        | 0.462        | 0.016    | 1.000            | 0.946            | 0.128 | 0.064 |

Coverage for 90.0%-Confidence Interval over 500 Repetitions

| Learner l     | Learner m     | Bias  | Bias (Lower) | Bias (Upper) | Coverage | Coverage (Lower) | Coverage (Upper) | RV    | RVa   |
|---------------|---------------|-------|--------------|--------------|----------|------------------|------------------|-------|-------|
| LGBM          | LGBM          | 0.922 | 1.646        | 0.283        | 0.052    | 1.000            | 0.878            | 0.123 | 0.067 |
| LGBM          | Random Forest | 0.995 | 1.810        | 0.291        | 0.078    | 1.000            | 0.922            | 0.118 | 0.060 |
| Random Forest | LGBM          | 1.572 | 2.774        | 0.402        | 0.000    | 1.000            | 0.840            | 0.128 | 0.080 |
| Random Forest | Random Forest | 1.736 | 3.061        | 0.462        | 0.000    | 1.000            | 0.824            | 0.128 | 0.078 |

IV-type

For the IV-type score, the learners ml_l and ml_g are both set to the same type of learner (reported as Learner g in the tables below).

Coverage for 95.0%-Confidence Interval over 500 Repetitions

| Learner g     | Learner m     | Bias  | Bias (Lower) | Bias (Upper) | Coverage | Coverage (Lower) | Coverage (Upper) | RV    | RVa   |
|---------------|---------------|-------|--------------|--------------|----------|------------------|------------------|-------|-------|
| LGBM          | LGBM          | 0.643 | 1.345        | 0.271        | 0.650    | 1.000            | 1.000            | 0.088 | 0.014 |
| LGBM          | Random Forest | 0.931 | 1.697        | 0.268        | 0.154    | 1.000            | 0.994            | 0.117 | 0.043 |
| Random Forest | LGBM          | 0.887 | 2.120        | 0.463        | 0.750    | 1.000            | 1.000            | 0.072 | 0.009 |
| Random Forest | Random Forest | 1.613 | 2.942        | 0.395        | 0.036    | 1.000            | 0.974            | 0.119 | 0.056 |

Coverage for 90.0%-Confidence Interval over 500 Repetitions

| Learner g     | Learner m     | Bias  | Bias (Lower) | Bias (Upper) | Coverage | Coverage (Lower) | Coverage (Upper) | RV    | RVa   |
|---------------|---------------|-------|--------------|--------------|----------|------------------|------------------|-------|-------|
| LGBM          | LGBM          | 0.643 | 1.345        | 0.271        | 0.490    | 1.000            | 0.998            | 0.088 | 0.025 |
| LGBM          | Random Forest | 0.931 | 1.697        | 0.268        | 0.070    | 1.000            | 0.938            | 0.117 | 0.058 |
| Random Forest | LGBM          | 0.887 | 2.120        | 0.463        | 0.554    | 1.000            | 0.998            | 0.072 | 0.018 |
| Random Forest | Random Forest | 1.613 | 2.942        | 0.395        | 0.012    | 1.000            | 0.890            | 0.119 | 0.070 |