APO Models
APO Pointwise Coverage
The simulations are based on the make_irm_data_discrete_treatments-DGP with \(500\) observations. Due to the linearity of the DGP, Lasso and Logistic Regression are nearly optimal choices for the nuisance estimation.
Metadata | Value |
---|---|
DoubleML Version | 0.10.dev0 |
Script | irm_apo_coverage.py |
Date | 2025-02-17 14:15:48 |
Total Runtime (seconds) | 5805.248526 |
Python Version | 3.12.9 |
Number of observations | 500 |
Number of repetitions | 1000 |
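For orientation, a single replication of this setup might look roughly as follows. This is a minimal sketch assuming the `DoubleMLAPO` class and the `make_irm_data_discrete_treatments` generator from recent DoubleML releases; the learner choices correspond to the Linear/Logistic rows, and the exact settings of `irm_apo_coverage.py` may differ.

```python
import numpy as np
import pandas as pd
import doubleml as dml
from doubleml.datasets import make_irm_data_discrete_treatments
from sklearn.linear_model import LinearRegression, LogisticRegression

np.random.seed(42)

# One draw from the discrete-treatment IRM DGP with 500 observations.
data = make_irm_data_discrete_treatments(n_obs=500)
df = pd.DataFrame(
    np.column_stack((data['y'], data['d'], data['x'])),
    columns=['y', 'd'] + [f'x{i}' for i in range(data['x'].shape[1])],
)
dml_data = dml.DoubleMLData(df, y_col='y', d_cols='d')

# Nuisance learners: outcome regression g and propensity m
# (the "Linear" / "Logistic" configuration from the tables below).
ml_g = LinearRegression()
ml_m = LogisticRegression()

# One DoubleMLAPO model per treatment level, with pointwise 95% CIs.
for level in (0, 1, 2):
    dml_apo = dml.DoubleMLAPO(dml_data, ml_g=ml_g, ml_m=ml_m,
                              treatment_level=level)
    dml_apo.fit()
    print(level, dml_apo.coef, dml_apo.confint(level=0.95))
```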
Coverage for 95%-Confidence Interval
Learner g | Learner m | Treatment Level | Bias | CI Length | Coverage |
---|---|---|---|---|---|
LGBM | LGBM | 0.000 | 2.077 | 10.316 | 0.966 |
LGBM | LGBM | 1.000 | 9.217 | 45.558 | 0.968 |
LGBM | LGBM | 2.000 | 9.573 | 44.674 | 0.953 |
LGBM | Logistic | 0.000 | 1.342 | 6.704 | 0.955 |
LGBM | Logistic | 1.000 | 1.693 | 8.845 | 0.969 |
LGBM | Logistic | 2.000 | 1.662 | 8.724 | 0.965 |
Linear | LGBM | 0.000 | 1.312 | 6.552 | 0.950 |
Linear | LGBM | 1.000 | 2.129 | 12.751 | 0.983 |
Linear | LGBM | 2.000 | 1.638 | 8.953 | 0.966 |
Linear | Logistic | 0.000 | 1.292 | 6.358 | 0.955 |
Linear | Logistic | 1.000 | 1.280 | 6.455 | 0.957 |
Linear | Logistic | 2.000 | 1.287 | 6.394 | 0.957 |
Coverage for 90%-Confidence Interval
Learner g | Learner m | Treatment Level | Bias | CI Length | Coverage |
---|---|---|---|---|---|
LGBM | LGBM | 0.000 | 2.077 | 8.658 | 0.910 |
LGBM | LGBM | 1.000 | 9.217 | 38.233 | 0.914 |
LGBM | LGBM | 2.000 | 9.573 | 37.492 | 0.894 |
LGBM | Logistic | 0.000 | 1.342 | 5.626 | 0.904 |
LGBM | Logistic | 1.000 | 1.693 | 7.423 | 0.924 |
LGBM | Logistic | 2.000 | 1.662 | 7.321 | 0.920 |
Linear | LGBM | 0.000 | 1.312 | 5.498 | 0.900 |
Linear | LGBM | 1.000 | 2.129 | 10.701 | 0.948 |
Linear | LGBM | 2.000 | 1.638 | 7.514 | 0.934 |
Linear | Logistic | 0.000 | 1.292 | 5.336 | 0.905 |
Linear | Logistic | 1.000 | 1.280 | 5.418 | 0.908 |
Linear | Logistic | 2.000 | 1.287 | 5.366 | 0.906 |
APOS Coverage
The simulations are based on the make_irm_data_discrete_treatments-DGP with \(500\) observations. Due to the linearity of the DGP, Lasso and Logistic Regression are nearly optimal choices for the nuisance estimation.
The non-uniform results (coverage, CI length, and bias) refer to values averaged over all treatment levels (point-wise confidence intervals).
Metadata | Value |
---|---|
DoubleML Version | 0.10.dev0 |
Script | irm_apo_coverage.py |
Date | 2025-02-17 14:15:48 |
Total Runtime (seconds) | 5805.248526 |
Python Version | 3.12.9 |
Number of observations | 500 |
Number of repetitions | 1000 |
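The uniform results reported below correspond to joint confidence intervals over all treatment levels. A minimal sketch of how such intervals can be obtained with the `DoubleMLAPOS` class, assuming the `dml_data` object from the sketch above (bootstrap settings are illustrative):

```python
import doubleml as dml
from sklearn.linear_model import LinearRegression, LogisticRegression

# Joint model over all treatment levels (dml_data as constructed above).
dml_apos = dml.DoubleMLAPOS(dml_data,
                            ml_g=LinearRegression(),
                            ml_m=LogisticRegression(),
                            treatment_levels=[0, 1, 2])
dml_apos.fit()

# Pointwise confidence intervals, one row per treatment level.
print(dml_apos.confint(level=0.95))

# Uniform (joint) confidence intervals via multiplier bootstrap.
dml_apos.bootstrap(n_rep_boot=2000)
print(dml_apos.confint(level=0.95, joint=True))
```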
Coverage for 95%-Confidence Interval
Learner g | Learner m | Bias | CI Length | Coverage | Uniform CI Length | Uniform Coverage |
---|---|---|---|---|---|---|
LGBM | LGBM | 7.118 | 33.615 | 0.958 | 40.868 | 0.973 |
LGBM | Logistic | 1.574 | 8.090 | 0.962 | 9.592 | 0.959 |
Linear | LGBM | 1.730 | 9.418 | 0.968 | 11.234 | 0.974 |
Linear | Logistic | 1.284 | 6.402 | 0.956 | 6.818 | 0.952 |
Coverage for 90%-Confidence Interval
Learner g | Learner m | Bias | CI Length | Coverage | Uniform CI Length | Uniform Coverage |
---|---|---|---|---|---|---|
LGBM | LGBM | 7.118 | 28.210 | 0.906 | 36.194 | 0.926 |
LGBM | Logistic | 1.574 | 6.789 | 0.913 | 8.394 | 0.922 |
Linear | LGBM | 1.730 | 7.903 | 0.927 | 9.850 | 0.936 |
Linear | Logistic | 1.284 | 5.373 | 0.905 | 5.796 | 0.901 |
Causal Contrast Coverage
The simulations are based on the make_irm_data_discrete_treatments-DGP with \(500\) observations. Due to the linearity of the DGP, Lasso and Logistic Regression are nearly optimal choices for the nuisance estimation.
The non-uniform results (coverage, CI length, and bias) refer to values averaged over all causal contrasts (point-wise confidence intervals).
Metadata | Value |
---|---|
DoubleML Version | 0.10.dev0 |
Script | irm_apo_coverage.py |
Date | 2025-02-17 14:15:48 |
Total Runtime (seconds) | 5805.248526 |
Python Version | 3.12.9 |
Number of observations | 500 |
Number of repetitions | 1000 |
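Causal contrasts compare the average potential outcomes of the non-reference treatment levels against a reference level. A minimal sketch, assuming a fitted `DoubleMLAPOS` object `dml_apos` as above and that `causal_contrast` returns a framework object with the usual `confint`/`bootstrap` interface:

```python
# Contrasts APO(1) - APO(0) and APO(2) - APO(0), with level 0 as reference.
contrasts = dml_apos.causal_contrast(reference_levels=0)

# Pointwise confidence intervals for the contrasts.
print(contrasts.confint(level=0.95))

# Uniform intervals over all contrasts via multiplier bootstrap.
contrasts.bootstrap(n_rep_boot=2000)
print(contrasts.confint(level=0.95, joint=True))
```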
Coverage for 95%-Confidence Interval
Learner g | Learner m | Bias | CI Length | Coverage | Uniform CI Length | Uniform Coverage |
---|---|---|---|---|---|---|
LGBM | LGBM | 9.786 | 45.131 | 0.949 | 51.424 | 0.965 |
LGBM | Logistic | 1.268 | 6.822 | 0.962 | 7.765 | 0.961 |
Linear | LGBM | 1.507 | 8.855 | 0.989 | 10.090 | 0.992 |
Linear | Logistic | 0.298 | 1.361 | 0.933 | 1.550 | 0.916 |
Coverage for 90%-Confidence Interval
Learner g | Learner m | Bias | CI Length | Coverage | Uniform CI Length | Uniform Coverage |
---|---|---|---|---|---|---|
LGBM | LGBM | 9.786 | 37.875 | 0.888 | 44.829 | 0.898 |
LGBM | Logistic | 1.268 | 5.725 | 0.927 | 6.774 | 0.926 |
Linear | LGBM | 1.507 | 7.431 | 0.958 | 8.799 | 0.975 |
Linear | Logistic | 0.298 | 1.143 | 0.873 | 1.351 | 0.871 |
Causal Contrast Sensitivity
The simulations are based on the ADD-DGP with \(10,000\) observations. As the DGP is nonlinear, we only use corresponding learners. Since the DGP includes an unobserved confounder, we would expect a bias in the ATE estimates, leading to low coverage of the true parameter.
The confounding is set such that both sensitivity parameters are approximately \(cf_y=cf_d=0.1\), so the robustness value \(RV\) should be approximately \(10\%\). Further, the corresponding confidence intervals are one-sided (since the direction of the bias is unknown), such that only one side should attain the corresponding coverage level (here, only the lower coverage is relevant since the bias is positive). Note that the coverage level is only valid if the value of \(\rho\) is correctly specified; under the conservative choice \(|\rho|=1\), the coverage will generally be (significantly) larger than the nominal level.
ATE
Metadata | Value |
---|---|
DoubleML Version | 0.10.dev0 |
Script | irm_apo_sensitivity.py |
Date | 2025-02-17 15:18:22 |
Total Runtime (seconds) | 9570.017815 |
Python Version | 3.12.9 |
Sensitivity Errors | 0 |
Number of observations | 10000 |
Number of repetitions | 100 |
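As a rough illustration of the benchmark setup, a sensitivity analysis for an ATE-type causal contrast might be run as follows. This sketch assumes a fitted `DoubleMLAPOS` object on data from the confounded DGP (which is not fully specified here) and that the contrast's framework object exposes DoubleML's `sensitivity_analysis` interface; the values \(cf_y=cf_d=0.1\) and \(\rho=1\) follow the description above.

```python
# ATE-type causal contrast with treatment level 0 as reference
# (dml_apos fitted on data from the confounded DGP).
ate_framework = dml_apos.causal_contrast(reference_levels=0)

# Sensitivity bounds under the assumed confounding strength; rho=1.0 is the
# conservative choice discussed above and yields one-sided over-coverage.
ate_framework.sensitivity_analysis(cf_y=0.1, cf_d=0.1, rho=1.0)

# The summary reports the bias bounds, the one-sided confidence bounds, and
# the robustness values RV and RVa.
print(ate_framework.sensitivity_summary)
```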
Coverage for 95%-Confidence Interval
Learner l | Learner m | Bias | Bias (Lower) | Bias (Upper) | Coverage | Coverage (Lower) | Coverage (Upper) | RV | RVa |
---|---|---|---|---|---|---|---|---|---|
LGBM | LGBM | 0.173 | 0.032 | 0.317 | 0.000 | 0.980 | 1.000 | 0.119 | 0.057 |
LGBM | Logistic Regr. | 0.151 | 0.021 | 0.296 | 0.030 | 1.000 | 1.000 | 0.104 | 0.043 |
Linear Reg. | LGBM | 0.175 | 0.033 | 0.319 | 0.000 | 0.990 | 1.000 | 0.121 | 0.058 |
Linear Reg. | Logistic Regr. | 0.089 | 0.057 | 0.235 | 0.710 | 1.000 | 1.000 | 0.063 | 0.008 |
Coverage for 90%-Confidence Interval
Learner l | Learner m | Bias | Bias (Lower) | Bias (Upper) | Coverage | Coverage (Lower) | Coverage (Upper) | RV | RVa |
---|---|---|---|---|---|---|---|---|---|
LGBM | LGBM | 0.173 | 0.032 | 0.317 | 0.000 | 0.940 | 1.000 | 0.119 | 0.072 |
LGBM | Logistic Regr. | 0.151 | 0.021 | 0.296 | 0.000 | 0.990 | 1.000 | 0.104 | 0.057 |
Linear Reg. | LGBM | 0.175 | 0.033 | 0.319 | 0.000 | 0.920 | 1.000 | 0.121 | 0.072 |
Linear Reg. | Logistic Regr. | 0.089 | 0.057 | 0.235 | 0.560 | 1.000 | 1.000 | 0.063 | 0.016 |