Note: Download Jupyter notebook: https://docs.doubleml.org/stable/examples/did/py_panel_data_example.ipynb
Python: Real-Data Example for Multi-Period Difference-in-Differences#
In this example, we replicate a real-data demo notebook from the did-R-package in order to illustrate the use of DoubleML for multi-period difference-in-differences (DiD) models.
The notebook requires the following packages:
[1]:
import pyreadr
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.dummy import DummyRegressor, DummyClassifier
from sklearn.linear_model import LassoCV, LogisticRegressionCV
from doubleml.data import DoubleMLPanelData
from doubleml.did import DoubleMLDIDMulti
Causal Research Question#
Callaway and Sant’Anna (2021) study the causal effect of raising the minimum wage on teen employment in the US using county data over a period from 2001 to 2007. A county is defined as treated if its minimum wage is above the federal minimum wage. We focus on a preprocessed balanced panel data set as provided by the did-R-package. The corresponding documentation for the mpdta data is available from the did package website. We use this data solely as a demonstration example to help readers understand differences between the DoubleML and did packages. An analogous notebook using the same data is available from the did documentation.
We follow the original notebook and provide results under identification based on unconditional and conditional parallel trends. For the Double Machine Learning (DML) Difference-in-Differences estimator, we demonstrate two different specifications: one based on linear and logistic regression, and one based on their \(\ell_1\)-penalized variants, Lasso and logistic regression with cross-validated penalty choice. The results for the former are expected to be very similar to those in the did data example; minor differences might arise due to the use of sample splitting in the DML estimation.
Data#
We will download and read a preprocessed data file as provided by the did-R-package.
[2]:
# download file from did package for R
url = "https://github.com/bcallaway11/did/raw/refs/heads/master/data/mpdta.rda"
pyreadr.download_file(url, "mpdta.rda")
mpdta = pyreadr.read_r("mpdta.rda")["mpdta"]
mpdta.head()
[2]:
|   | year | countyreal | lpop | lemp | first.treat | treat |
|---|---|---|---|---|---|---|
| 0 | 2003 | 8001.0 | 5.896761 | 8.461469 | 2007.0 | 1.0 |
| 1 | 2004 | 8001.0 | 5.896761 | 8.336870 | 2007.0 | 1.0 |
| 2 | 2005 | 8001.0 | 5.896761 | 8.340217 | 2007.0 | 1.0 |
| 3 | 2006 | 8001.0 | 5.896761 | 8.378161 | 2007.0 | 1.0 |
| 4 | 2007 | 8001.0 | 5.896761 | 8.487352 | 2007.0 | 1.0 |
To work with DoubleML, we initialize a DoubleMLPanelData object. The input data has to satisfy some requirements: it should be in long format, with every row containing the information of one unit at one time period, and it should contain a column with a unit identifier and a column with the time period. The requirements are virtually identical to those of the did-R-package, as listed in their data example. In line with the naming conventions of DoubleML, the treatment group indicator is passed to DoubleMLPanelData via the d_cols argument. To flexibly handle different formats for time periods, the time variable t_col supports float, int and datetime formats. More information is available in the user guide. To indicate never-treated units, we set their value for the treatment group variable to np.inf.
Now, we can initialize the DoubleMLPanelData object, specifying
y_col: the outcome
d_cols: the group variable indicating the first treated period for each unit
id_col: the unique identification column for each unit
t_col: the time column
x_cols: the additional pre-treatment controls
[3]:
# Set values for treatment group indicator for never-treated to np.inf
mpdta.loc[mpdta['first.treat'] == 0, 'first.treat'] = np.inf
dml_data = DoubleMLPanelData(
data=mpdta,
y_col="lemp",
d_cols="first.treat",
id_col="countyreal",
t_col="year",
x_cols=['lpop']
)
print(dml_data)
================== DoubleMLPanelData Object ==================
------------------ Data summary ------------------
Outcome variable: lemp
Treatment variable(s): ['first.treat']
Covariates: ['lpop']
Instrument variable(s): None
Time variable: year
Id variable: countyreal
No. Unique Ids: 500
No. Observations: 2500
------------------ DataFrame info ------------------
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2500 entries, 0 to 2499
Columns: 6 entries, year to treat
dtypes: float64(5), int32(1)
memory usage: 107.6 KB
Note that we specified a pre-treatment confounding variable lpop through the x_cols argument. To consider cases under unconditional parallel trends, we can use dummy learners to ignore the pre-treatment confounding variable. This is illustrated below.
ATT Estimation: Unconditional Parallel Trends#
We start with identification under the unconditional parallel trends assumption. To do so, we initialize a DoubleMLDIDMulti object (see the model documentation), which takes the previously initialized DoubleMLPanelData object as input. We use scikit-learn’s DummyRegressor and DummyClassifier to ignore the pre-treatment confounding variable. At this stage, we can also pass further options, for example specifying the number of folds and repetitions used for cross-fitting.
When calling the fit() method, the model estimates standard combinations of \(ATT(g,t)\) parameters, which correspond to the defaults in the did-R-package. These combinations can also be customized through the gt_combinations argument, see the user guide and the sketch below.
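For example, the estimation can be restricted to a user-specified subset of group-time combinations. A minimal sketch, assuming (as described in the user guide) that gt_combinations also accepts a list of \((g, t_\text{pre}, t_\text{eval})\) tuples:

# Hedged sketch: estimate only selected ATT(g, t_pre, t_eval) combinations.
# Assumes gt_combinations accepts a list of (g, t_pre, t_eval) tuples.
dml_obj_subset = DoubleMLDIDMulti(
    obj_dml_data=dml_data,
    ml_g=DummyRegressor(),
    ml_m=DummyClassifier(),
    gt_combinations=[(2004.0, 2003, 2004), (2004.0, 2003, 2005)],
    control_group="never_treated",
    n_folds=10
)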
[4]:
dml_obj = DoubleMLDIDMulti(
obj_dml_data=dml_data,
ml_g=DummyRegressor(),
ml_m=DummyClassifier(),
control_group="never_treated",
n_folds=10
)
dml_obj.fit()
print(dml_obj.summary.round(4))
coef std err t P>|t| 2.5 % 97.5 %
ATT(2004.0,2003,2004) -0.0105 0.0233 -0.4487 0.6536 -0.0562 0.0353
ATT(2004.0,2003,2005) -0.0704 0.0310 -2.2675 0.0234 -0.1312 -0.0095
ATT(2004.0,2003,2006) -0.1373 0.0364 -3.7715 0.0002 -0.2086 -0.0659
ATT(2004.0,2003,2007) -0.1009 0.0346 -2.9141 0.0036 -0.1687 -0.0330
ATT(2006.0,2003,2004) 0.0065 0.0233 0.2799 0.7796 -0.0391 0.0522
ATT(2006.0,2004,2005) -0.0027 0.0196 -0.1391 0.8893 -0.0411 0.0356
ATT(2006.0,2005,2006) -0.0046 0.0178 -0.2588 0.7958 -0.0394 0.0302
ATT(2006.0,2005,2007) -0.0412 0.0202 -2.0370 0.0417 -0.0809 -0.0016
ATT(2007.0,2003,2004) 0.0305 0.0150 2.0293 0.0424 0.0010 0.0599
ATT(2007.0,2004,2005) -0.0028 0.0164 -0.1691 0.8657 -0.0349 0.0293
ATT(2007.0,2005,2006) -0.0312 0.0179 -1.7458 0.0808 -0.0663 0.0038
ATT(2007.0,2006,2007) -0.0261 0.0167 -1.5628 0.1181 -0.0587 0.0066
The summary displays estimates of the \(ATT(g,t_\text{eval})\) effects for different combinations of \((g,t_\text{eval})\) via \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\), where
\(\mathrm{g}\) specifies the group
\(t_\text{pre}\) specifies the corresponding pre-treatment period
\(t_\text{eval}\) specifies the evaluation period
This corresponds to the estimates given by the att_gt function in the did-R-package, where the standard choice is \(t_\text{pre} = \min(\mathrm{g}, t_\text{eval}) - 1\) (without anticipation). For example, for \(\mathrm{g}=2006\) and \(t_\text{eval}=2007\), we have \(t_\text{pre} = \min(2006, 2007) - 1 = 2005\), matching \(ATT(2006.0,2005,2007)\) in the summary above.
Note that this includes pre-test effects if \(\mathrm{g} > t_\text{eval}\), e.g. \(ATT(2007,2005)\).
As usual for the DoubleML-package, you can obtain joint confidence intervals via bootstrap.
[5]:
level = 0.95
ci = dml_obj.confint(level=level)
dml_obj.bootstrap(n_rep_boot=5000)
ci_joint = dml_obj.confint(level=level, joint=True)
print(ci_joint)
2.5 % 97.5 %
ATT(2004.0,2003,2004) -0.076441 0.055494
ATT(2004.0,2003,2005) -0.158141 0.017351
ATT(2004.0,2003,2006) -0.240123 -0.034396
ATT(2004.0,2003,2007) -0.198695 -0.003037
ATT(2006.0,2003,2004) -0.059330 0.072370
ATT(2006.0,2004,2005) -0.058024 0.052579
ATT(2006.0,2005,2006) -0.054790 0.045596
ATT(2006.0,2005,2007) -0.098391 0.015971
ATT(2007.0,2003,2004) -0.011971 0.072926
ATT(2007.0,2004,2005) -0.049075 0.043535
ATT(2007.0,2005,2006) -0.081798 0.019332
ATT(2007.0,2006,2007) -0.073191 0.021071
A visualization of the effects can be obtained via the plot_effects() method.
Note that the plot uses joint confidence intervals by default.
[6]:
fig, ax = dml_obj.plot_effects()
Effect Aggregation#
As in the did-R-package, the \(ATT\)s can be aggregated to summarize multiple effects. For details on the different aggregations and their interpretation, see Callaway and Sant’Anna (2021).
The aggregations are implemented via the aggregate() method. We follow the structure of the did package notebook and start with an aggregation relative to the treatment timing.
Event Study Aggregation#
We can aggregate the \(ATT\)s relative to the treatment timing by setting aggregation="eventstudy" in the aggregate() method. This aggregates \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\) based on the exposure time \(e = t_\text{eval} - \mathrm{g}\) (respecting group size).
[7]:
# rerun bootstrap for valid simultaneous inference (as values are not saved)
dml_obj.bootstrap(n_rep_boot=5000)
aggregated_eventstudy = dml_obj.aggregate("eventstudy")
# run bootstrap to obtain simultaneous confidence intervals
aggregated_eventstudy.aggregated_frameworks.bootstrap()
print(aggregated_eventstudy)
fig, ax = aggregated_eventstudy.plot_effects()
================== DoubleMLDIDAggregation Object ==================
Event Study Aggregation
------------------ Overall Aggregated Effects ------------------
coef std err t P>|t| 2.5 % 97.5 %
-0.077249 0.019999 -3.862648 0.000112 -0.116447 -0.038052
------------------ Aggregated Effects ------------------
coef std err t P>|t| 2.5 % 97.5 %
-3.0 0.030477 0.015019 2.029297 0.042428 0.001041 0.059913
-2.0 -0.000597 0.013274 -0.044967 0.964134 -0.026613 0.025419
-1.0 -0.024564 0.014212 -1.728387 0.083919 -0.052419 0.003291
0.0 -0.019933 0.011825 -1.685729 0.091848 -0.043109 0.003243
1.0 -0.050939 0.016811 -3.030104 0.002445 -0.083887 -0.017990
2.0 -0.137260 0.036394 -3.771486 0.000162 -0.208591 -0.065929
3.0 -0.100866 0.034613 -2.914131 0.003567 -0.168705 -0.033026
------------------ Additional Information ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0
Alternatively, the \(ATT\)s can also be aggregated according to (calendar) time periods or treatment groups, see the user guide and the sketch below.
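A minimal sketch of these alternatives, assuming "group" and "time" are valid aggregation choices analogous to "eventstudy":

# Hedged sketch: aggregate effects per treatment group and per calendar period.
# Assumes "group" and "time" are accepted aggregation choices, see the user guide.
aggregated_group = dml_obj.aggregate("group")
aggregated_time = dml_obj.aggregate("time")
print(aggregated_group)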
Aggregation Details#
The DoubleMLDIDAggregation objects include several DoubleMLFrameworks which support methods like bootstrap() or confint(). Further, the weights can be accessed via the properties
overall_aggregation_weights: weights for the overall aggregation
aggregation_weights: weights for the aggregation
To clarify, consider the event-study aggregation: to see how the aggregated effect with \(e=-1\) is computed, one has to look at the third set of weights within the aggregation_weights property.
[8]:
aggregated_eventstudy.aggregation_weights[2]
[8]:
array([0. , 0. , 0. , 0. , 0. ,
0.23391813, 0. , 0. , 0. , 0. ,
0.76608187, 0. ])
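As a quick cross-check, the aggregated coefficient for \(e=-1\) can be reproduced as the weighted sum of the underlying \(ATT\) estimates. A minimal sketch, assuming dml_obj.coef returns the estimates in the order of the fit summary:

# Hedged sketch: reproduce the e=-1 aggregated coefficient as a weighted sum.
# Assumes dml_obj.coef is ordered as in the fit summary above.
weights = aggregated_eventstudy.aggregation_weights[2]
print(np.sum(weights * dml_obj.coef))  # approx. -0.0246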
ATT Estimation: Conditional Parallel Trends#
We briefly demonstrate how to use the DoubleMLDIDMulti model with conditional parallel trends. As the rationale behind DML is to flexibly model nuisance components as prediction problems, the DML DiD estimator includes pre-treatment covariates by default. In DiD, the nuisance components are the outcome regression and the propensity score estimation for the treatment group variable. This is why we had to enforce dummy learners in the unconditional parallel trends case to ignore the pre-treatment covariates. Now, we can replicate the classical doubly robust DiD estimator of Callaway and Sant’Anna (2021) by using linear and logistic regression for the nuisance components, i.e., by setting ml_g to LinearRegression() and ml_m to LogisticRegression(). Similarly, we can choose other learners, for example by setting ml_g and ml_m to LassoCV() and LogisticRegressionCV(). We present the results for the ATTs and their event-study aggregation in the corresponding effect plots.
Please note that this example is meant to illustrate the usage of the DoubleMLDIDMulti model in combination with ML learners. In real-data applications, careful choice and empirical evaluation of the learners are required. Default measures for the prediction of the nuisance components are printed in the model summary, as briefly illustrated below.
[9]:
dml_obj_linear_logistic = DoubleMLDIDMulti(
obj_dml_data=dml_data,
ml_g=LinearRegression(),
ml_m=LogisticRegression(penalty=None),
control_group="never_treated",
n_folds=10
)
dml_obj_linear_logistic.fit()
dml_obj_linear_logistic.bootstrap(n_rep_boot=5000)
dml_obj_linear_logistic.plot_effects(title="Estimated ATTs by Group, Linear and logistic Regression")
[9]:
(<Figure size 1200x800 with 4 Axes>,
[<Axes: title={'center': 'First Treated: 2004.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2006.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2007.0'}, xlabel='Evaluation Period', ylabel='Effect'>])
We briefly look at the model summary, which includes some standard diagnostics for the prediction of the nuisance components.
[10]:
print(dml_obj_linear_logistic)
================== DoubleMLDIDMulti Object ==================
------------------ Data summary ------------------
Outcome variable: lemp
Treatment variable(s): ['first.treat']
Covariates: ['lpop']
Instrument variable(s): None
Time variable: year
Id variable: countyreal
No. Unique Ids: 500
No. Observations: 2500
------------------ Score & algorithm ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0
------------------ Machine learner ------------------
Learner ml_g: LinearRegression()
Learner ml_m: LogisticRegression(penalty=None)
Out-of-sample Performance:
Regression:
Learner ml_g0 RMSE: [[0.17234672 0.18201945 0.25776653 0.2585917 0.17161403 0.15278096
0.20173732 0.20652577 0.17236027 0.15189273 0.2006259 0.16358882]]
Learner ml_g1 RMSE: [[0.11042218 0.12785964 0.13803254 0.15336298 0.13900455 0.11202501
0.08695402 0.10686485 0.13232116 0.16399192 0.16317426 0.16092767]]
Classification:
Learner ml_m Log Loss: [[0.23192271 0.22935895 0.22919091 0.23172733 0.34912542 0.34917713
0.35047496 0.34920303 0.60645272 0.60839745 0.60869083 0.60788417]]
------------------ Resampling ------------------
No. folds: 10
No. repeated sample splits: 1
------------------ Fit summary ------------------
coef std err t P>|t| 2.5 % \
ATT(2004.0,2003,2004) -0.017013 0.022370 -0.760502 0.446955 -0.060858
ATT(2004.0,2003,2005) -0.074440 0.028362 -2.624586 0.008675 -0.130029
ATT(2004.0,2003,2006) -0.139863 0.034912 -4.006143 0.000062 -0.208289
ATT(2004.0,2003,2007) -0.105978 0.032622 -3.248627 0.001160 -0.169917
ATT(2006.0,2003,2004) -0.001152 0.022275 -0.051699 0.958769 -0.044811
ATT(2006.0,2004,2005) -0.005436 0.018542 -0.293181 0.769384 -0.041778
ATT(2006.0,2005,2006) -0.002113 0.019419 -0.108838 0.913331 -0.040173
ATT(2006.0,2005,2007) -0.037322 0.019732 -1.891410 0.058570 -0.075996
ATT(2007.0,2003,2004) 0.026911 0.014033 1.917765 0.055141 -0.000592
ATT(2007.0,2004,2005) -0.004941 0.015727 -0.314146 0.753410 -0.035766
ATT(2007.0,2005,2006) -0.027957 0.018542 -1.507786 0.131609 -0.064299
ATT(2007.0,2006,2007) -0.029652 0.016262 -1.823419 0.068240 -0.061524
97.5 %
ATT(2004.0,2003,2004) 0.026832
ATT(2004.0,2003,2005) -0.018850
ATT(2004.0,2003,2006) -0.071436
ATT(2004.0,2003,2007) -0.042039
ATT(2006.0,2003,2004) 0.042507
ATT(2006.0,2004,2005) 0.030905
ATT(2006.0,2005,2006) 0.035946
ATT(2006.0,2005,2007) 0.001353
ATT(2007.0,2003,2004) 0.054415
ATT(2007.0,2004,2005) 0.025884
ATT(2007.0,2005,2006) 0.008384
ATT(2007.0,2006,2007) 0.002220
[11]:
es_linear_logistic = dml_obj_linear_logistic.aggregate("eventstudy")
es_linear_logistic.aggregated_frameworks.bootstrap()
es_linear_logistic.plot_effects(title="Estimated ATTs by Group, Linear and logistic Regression")
[11]:
(<Figure size 1200x600 with 1 Axes>,
<Axes: title={'center': 'Estimated ATTs by Group, Linear and logistic Regression'}, ylabel='Effect'>)
[12]:
dml_obj_lasso = DoubleMLDIDMulti(
obj_dml_data=dml_data,
ml_g=LassoCV(),
ml_m=LogisticRegressionCV(),
control_group="never_treated",
n_folds=10
)
dml_obj_lasso.fit()
dml_obj_lasso.bootstrap(n_rep_boot=5000)
dml_obj_lasso.plot_effects(title="Estimated ATTs by Group, LassoCV and LogisticRegressionCV()")
[12]:
(<Figure size 1200x800 with 4 Axes>,
[<Axes: title={'center': 'First Treated: 2004.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2006.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2007.0'}, xlabel='Evaluation Period', ylabel='Effect'>])
[13]:
# Model summary
print(dml_obj_lasso)
================== DoubleMLDIDMulti Object ==================
------------------ Data summary ------------------
Outcome variable: lemp
Treatment variable(s): ['first.treat']
Covariates: ['lpop']
Instrument variable(s): None
Time variable: year
Id variable: countyreal
No. Unique Ids: 500
No. Observations: 2500
------------------ Score & algorithm ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0
------------------ Machine learner ------------------
Learner ml_g: LassoCV()
Learner ml_m: LogisticRegressionCV()
Out-of-sample Performance:
Regression:
Learner ml_g0 RMSE: [[0.17244361 0.18206397 0.25950511 0.25740778 0.1717156 0.15183651
0.20071667 0.20546727 0.17319337 0.15339537 0.20031397 0.16600689]]
Learner ml_g1 RMSE: [[0.09979259 0.12939432 0.13900145 0.1496077 0.14063979 0.11737932
0.08913772 0.11269852 0.13130725 0.16041908 0.15915473 0.15947581]]
Classification:
Learner ml_m Log Loss: [[0.22913904 0.22913697 0.22913589 0.22913948 0.35596105 0.35596776
0.35595655 0.35595957 0.60888183 0.6088583 0.60884011 0.6088703 ]]
------------------ Resampling ------------------
No. folds: 10
No. repeated sample splits: 1
------------------ Fit summary ------------------
coef std err t P>|t| 2.5 % \
ATT(2004.0,2003,2004) -0.012666 0.022585 -0.560817 0.574922 -0.056932
ATT(2004.0,2003,2005) -0.076787 0.028960 -2.651474 0.008014 -0.133547
ATT(2004.0,2003,2006) -0.137672 0.035303 -3.899662 0.000096 -0.206865
ATT(2004.0,2003,2007) -0.107403 0.033169 -3.238099 0.001203 -0.172412
ATT(2006.0,2003,2004) -0.001770 0.023243 -0.076141 0.939307 -0.047325
ATT(2006.0,2004,2005) -0.004272 0.019477 -0.219330 0.826393 -0.042447
ATT(2006.0,2005,2006) -0.004557 0.017884 -0.254796 0.798880 -0.039608
ATT(2006.0,2005,2007) -0.041254 0.020309 -2.031336 0.042221 -0.081059
ATT(2007.0,2003,2004) 0.027550 0.015032 1.832736 0.066842 -0.001913
ATT(2007.0,2004,2005) -0.002961 0.016507 -0.179361 0.857655 -0.035314
ATT(2007.0,2005,2006) -0.031016 0.017866 -1.735965 0.082570 -0.066033
ATT(2007.0,2006,2007) -0.028889 0.016860 -1.713423 0.086635 -0.061934
97.5 %
ATT(2004.0,2003,2004) 0.031600
ATT(2004.0,2003,2005) -0.020026
ATT(2004.0,2003,2006) -0.068478
ATT(2004.0,2003,2007) -0.042394
ATT(2006.0,2003,2004) 0.043786
ATT(2006.0,2004,2005) 0.033903
ATT(2006.0,2005,2006) 0.030494
ATT(2006.0,2005,2007) -0.001449
ATT(2007.0,2003,2004) 0.057012
ATT(2007.0,2004,2005) 0.029393
ATT(2007.0,2005,2006) 0.004002
ATT(2007.0,2006,2007) 0.004157
[14]:
es_lasso = dml_obj_lasso.aggregate("eventstudy")
es_lasso.aggregated_frameworks.bootstrap()
es_lasso.plot_effects(title="Estimated ATTs by Group, LassoCV and LogisticRegressionCV()")
[14]:
(<Figure size 1200x600 with 1 Axes>,
<Axes: title={'center': 'Estimated ATTs by Group, LassoCV and LogisticRegressionCV()'}, ylabel='Effect'>)