Confidence bands and multiplier bootstrap for valid simultaneous inference#
DoubleML provides methods to perform valid simultaneous inference for multiple treatment variables. As an example, consider a PLR with \(p_1\) causal parameters of interest \(\theta_{0,1}, \ldots, \theta_{0,p_1}\) associated with treatment variables \(D_1, \ldots, D_{p_1}\). Inference on multiple target coefficients can be performed by iteratively applying the DML inference procedure over the target variables of interest: Each of the coefficients of interest, \(\theta_{0,j}\), with \(j \in \lbrace 1, \ldots, p_1 \rbrace\), solves a corresponding moment condition

\[\mathbb{E}[\psi_j(W; \theta_{0,j}, \eta_{0,j})] = 0.\]
Analogously to the case with a single parameter of interest, the PLR model with multiple treatment variables includes two regression steps to achieve orthogonality. First, the main regression is given by

\[Y = \theta_{0,j} D_j + g_{0,j}([D_k, X]) + \zeta_j, \quad \mathbb{E}(\zeta_j \mid D, X) = 0,\]
with \([D_k, X]\) being a matrix comprising the confounders, \(X\), and all remaining treatment variables \(D_k\) with \(k \in \lbrace 1, \ldots, p_1\rbrace \setminus \lbrace j \rbrace\), by default. Second, the relationship between the treatment variable \(D_j\) and the remaining explanatory variables is determined by the equation

\[D_j = m_{0,j}([D_k, X]) + V_j, \quad \mathbb{E}(V_j \mid D_k, X) = 0.\]
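For illustration, assuming the default partialling-out score of the PLR model, the score function entering the moment condition for \(\theta_{0,j}\) can be sketched as

\[\psi_j(W; \theta_{0,j}, \eta_{0,j}) = \big(Y - \ell_{0,j}([D_k, X]) - \theta_{0,j} (D_j - m_{0,j}([D_k, X]))\big) \big(D_j - m_{0,j}([D_k, X])\big),\]

where \(\ell_{0,j}([D_k, X]) = \mathbb{E}(Y \mid D_k, X)\) is notation introduced here for the conditional expectation of the outcome, i.e. the nuisance component fitted by the learner ml_l in the examples below, and \(\eta_{0,j} = (\ell_{0,j}, m_{0,j})\) collects the nuisance functions.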
For further details, we refer to Belloni et al. (2018). Simultaneous inference can be based on a multiplier bootstrap procedure introduced in Chernozhukov et al. (2013, 2014). Alternatively, traditional correction approaches, for example the Bonferroni correction, can be used to adjust p-values.
Multiplier bootstrap and joint confidence intervals#
The bootstrap() method provides an implementation of a multiplier bootstrap for double machine learning models. For \(b=1, \ldots, B\), weights \(\xi_{i, b}\) are generated according to a normal (Gaussian) bootstrap, a wild bootstrap or an exponential bootstrap. The number of bootstrap samples is provided as input n_rep_boot, and for method one can choose 'Bayes', 'normal' or 'wild'.
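For example, assuming a fitted DoubleMLPLR object dml_plr as constructed in the example further below, the bootstrap type and the number of repetitions can be set explicitly (a minimal usage sketch):

# Draw 2000 wild-bootstrap weight vectors for the fitted model
dml_plr.bootstrap(method='wild', n_rep_boot=2000)

The stored bootstrap draws are subsequently reused by confint(joint=True) and p_adjust().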
Based on the estimates of the standard errors, \(\hat{\sigma}_j\), and of \(\hat{J}_{0,j} = \mathbb{E}_N(\psi_{a,j}(W; \eta_{0,j}))\), obtained from DML, we construct bootstrap coefficients \(\theta^{*,b}_j\) and bootstrap t-statistics \(t^{*,b}_j\) for \(j=1, \ldots, p_1\). The output of the multiplier bootstrap can be used to determine the constant, \(c_{1-\alpha}\), that is required for the construction of a simultaneous \((1-\alpha)\) confidence band

\[\left[\hat{\theta}_j \pm c_{1-\alpha} \cdot \hat{\sigma}_j / \sqrt{N}\right],\]

where \(\hat{\theta}_j\) denotes the DML estimate of \(\theta_{0,j}\).
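As a sketch of the standard construction, \(c_{1-\alpha}\) can be obtained as the empirical \((1-\alpha)\) quantile, over the \(B\) bootstrap repetitions, of the maximal absolute bootstrap t-statistic across the \(p_1\) coefficients,

\[c_{1-\alpha} = \inf \left\{ c \in \mathbb{R} : \frac{1}{B} \sum_{b=1}^{B} \mathbf{1}\left( \max_{1 \le j \le p_1} |t^{*,b}_j| \le c \right) \ge 1-\alpha \right\},\]

so that the resulting band covers all \(p_1\) coefficients simultaneously with approximate probability \(1-\alpha\).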
To demonstrate the bootstrap, we simulate data from a sparse partially linear regression model.
Then we estimate the PLR model and perform the multiplier bootstrap.
Joint confidence intervals based on the multiplier bootstrap are then obtained by setting the option joint when calling the method confint. Moreover, a multiple hypotheses testing adjustment of p-values from a high-dimensional model can be obtained with the method p_adjust. By default, DoubleML performs a version of the Romano-Wolf stepdown adjustment, which is based on the multiplier bootstrap. Alternatively, p_adjust allows users to apply traditional corrections via the option method.
In [1]: import doubleml as dml
In [2]: import numpy as np
In [3]: from sklearn.base import clone
In [4]: from sklearn.linear_model import LassoCV
# Simulate data
In [5]: np.random.seed(1234)
In [6]: n_obs = 500
In [7]: n_vars = 100
In [8]: X = np.random.normal(size=(n_obs, n_vars))
In [9]: theta = np.array([3., 3., 3.])
In [10]: y = np.dot(X[:, :3], theta) + np.random.standard_normal(size=(n_obs,))
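# Use the first 10 columns of X as treatment variables (only the first three have a non-zero effect of 3) and the remaining columns as confounders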
In [11]: dml_data = dml.DoubleMLData.from_arrays(X[:, 10:], y, X[:, :10])
In [12]: learner = LassoCV()
In [13]: ml_l = clone(learner)
In [14]: ml_m = clone(learner)
In [15]: dml_plr = dml.DoubleMLPLR(dml_data, ml_l, ml_m)
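# Fit the model, run the multiplier bootstrap and compute joint confidence intervals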
In [16]: print(dml_plr.fit().bootstrap().confint(joint=True))
2.5 % 97.5 %
d1 2.813342 3.055680
d2 2.815224 3.083258
d3 2.860663 3.109069
d4 -0.141546 0.091391
d5 -0.060845 0.176929
d6 -0.158697 0.078474
d7 -0.172022 0.062964
d8 -0.067721 0.174499
d9 -0.092365 0.139491
d10 -0.110717 0.138698
In [17]: print(dml_plr.p_adjust())
thetas pval
d1 2.934511 0.000
d2 2.949241 0.000
d3 2.984866 0.000
d4 -0.025077 0.902
d5 0.058042 0.784
d6 -0.040112 0.808
d7 -0.054529 0.784
d8 0.053389 0.784
d9 0.023563 0.902
d10 0.013990 0.902
In [18]: print(dml_plr.p_adjust(method='bonferroni'))
thetas pval
d1 2.934511 0.0
d2 2.949241 0.0
d3 2.984866 0.0
d4 -0.025077 1.0
d5 0.058042 1.0
d6 -0.040112 1.0
d7 -0.054529 1.0
d8 0.053389 1.0
d9 0.023563 1.0
d10 0.013990 1.0
library(DoubleML)
library(mlr3)
library(mlr3learners)
library(data.table)
lgr::get_logger("mlr3")$set_threshold("warn")
set.seed(3141)
n_obs = 500
n_vars = 100
theta = rep(3, 3)
X = matrix(stats::rnorm(n_obs * n_vars), nrow = n_obs, ncol = n_vars)
y = X[, 1:3, drop = FALSE] %*% theta + stats::rnorm(n_obs)
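# Use the first 10 columns of X as treatment variables (only the first three have a non-zero effect of 3) and the remaining columns as confounders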
dml_data = double_ml_data_from_matrix(X = X[, 11:n_vars], y = y, d = X[,1:10])
learner = lrn("regr.cv_glmnet", s="lambda.min")
ml_l = learner$clone()
ml_m = learner$clone()
dml_plr = DoubleMLPLR$new(dml_data, ml_l, ml_m)
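# Fit the model, run the multiplier bootstrap, compute joint confidence intervals and adjusted p-values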
dml_plr$fit()
dml_plr$bootstrap()
dml_plr$confint(joint=TRUE)
dml_plr$p_adjust()
dml_plr$p_adjust(method="bonferroni")
           2.5 %     97.5 %
d1    2.89027368 3.14532650
d2    2.90794478 3.14368145
d3    2.87430335 3.12752825
d4   -0.14790924 0.07828372
d5   -0.09779675 0.16803512
d6   -0.12105472 0.12539340
d7   -0.16536299 0.09310496
d8   -0.10127930 0.14200098
d9   -0.13868238 0.09980311
d10  -0.04444978 0.19680840
        Estimate.  pval
d1    3.017800092 0.000
d2    3.025813114 0.000
d3    3.000915799 0.000
d4   -0.034812763 0.938
d5    0.035119185 0.938
d6    0.002169338 0.958
d7   -0.036129015 0.938
d8    0.020360838 0.954
d9   -0.019439633 0.954
d10   0.076179312 0.428
        Estimate.      pval
d1    3.017800092 0.0000000
d2    3.025813114 0.0000000
d3    3.000915799 0.0000000
d4   -0.034812763 1.0000000
d5    0.035119185 1.0000000
d6    0.002169338 1.0000000
d7   -0.036129015 1.0000000
d8    0.020360838 1.0000000
d9   -0.019439633 1.0000000
d10   0.076179312 0.8116912
References#
Belloni, A., Chernozhukov, V., Chetverikov, D., Wei, Y. (2018), Uniformly valid post-regularization confidence regions for many functional parameters in z-estimation framework. The Annals of Statistics, 46 (6B): 3643-3675, doi: 10.1214/17-AOS1671.
Chernozhukov, V., Chetverikov, D., Kato, K. (2013), Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors. The Annals of Statistics, 41 (6): 2786-2819, doi: 10.1214/13-AOS1161.
Chernozhukov, V., Chetverikov, D., Kato, K. (2014), Gaussian approximation of suprema of empirical processes. The Annals of Statistics, 42 (4): 1564-1597, doi: 10.1214/14-AOS1230.