Note

Download Jupyter notebook: https://docs.doubleml.org/stable/examples/did/py_panel_data_example.ipynb
Python: Real-Data Example for Multi-Period Difference-in-Differences#
In this example, we replicate a real-data demo notebook from the did-R-package in order to illustrate the use of DoubleML
for multi-period difference-in-differences (DiD) models.
The notebook requires the following packages:
[1]:
import pyreadr
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.dummy import DummyRegressor, DummyClassifier
from sklearn.linear_model import LassoCV, LogisticRegressionCV
from doubleml.data import DoubleMLPanelData
from doubleml.did import DoubleMLDIDMulti
Causal Research Question#
Callaway and Sant’Anna (2021) study the causal effect of raising the minimum wage on teen employment in the US using county data over the period from 2001 to 2007. A county is defined as treated if its minimum wage is above the federal minimum wage. We focus on a preprocessed, balanced panel data set as provided by the did-R-package. The corresponding documentation for the mpdta data is available from the did package website. We use this data solely as a demonstration example to help readers understand differences between the DoubleML and did packages. An analogous notebook using the same data is available from the did documentation.
We follow the original notebook and provide results under identification based on unconditional and conditional parallel trends. For the Double Machine Learning (DML) Difference-in-Differences estimator, we demonstrate two different specifications: one based on linear and logistic regression, and one based on their \(\ell_1\)-penalized variants, Lasso and logistic regression with cross-validated penalty choice. The results for the former are expected to be very similar to those in the did data example; minor differences might arise due to the use of sample splitting in the DML estimation.
Data#
We will download and read a preprocessed data file as provided by the did-R-package.
[2]:
# download file from did package for R
url = "https://github.com/bcallaway11/did/raw/refs/heads/master/data/mpdta.rda"
pyreadr.download_file(url, "mpdta.rda")
mpdta = pyreadr.read_r("mpdta.rda")["mpdta"]
mpdta.head()
[2]:
|   | year | countyreal | lpop | lemp | first.treat | treat |
|---|------|------------|------|------|-------------|-------|
| 0 | 2003 | 8001.0 | 5.896761 | 8.461469 | 2007.0 | 1.0 |
| 1 | 2004 | 8001.0 | 5.896761 | 8.336870 | 2007.0 | 1.0 |
| 2 | 2005 | 8001.0 | 5.896761 | 8.340217 | 2007.0 | 1.0 |
| 3 | 2006 | 8001.0 | 5.896761 | 8.378161 | 2007.0 | 1.0 |
| 4 | 2007 | 8001.0 | 5.896761 | 8.487352 | 2007.0 | 1.0 |
To work with DoubleML, we initialize a DoubleMLPanelData object. The input data has to satisfy some requirements, i.e., it should be in long format with every row containing the information of one unit at one time period. Moreover, the data should contain a column with the unit identifier and a column with the time period. The requirements are virtually identical to those of the did-R-package, as listed in their data example. In line with the naming conventions of DoubleML, the treatment group indicator is passed to DoubleMLPanelData via the d_cols argument. To flexibly handle different formats for time periods, the time variable t_col can be of float, int, or datetime type. More information is available in the user guide. To indicate never-treated units, we set their value of the treatment group variable to np.inf.
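For illustration only, the following is a minimal hedged sketch of how the year column could be converted to datetime before constructing the data object; the variable mpdta_dt is hypothetical and the remainder of this notebook keeps the original numeric years.

# Hedged sketch: encode the time periods as datetime instead of numeric years.
# Not used in the rest of this notebook.
mpdta_dt = mpdta.copy()
mpdta_dt["year"] = pd.to_datetime(mpdta_dt["year"].astype(int).astype(str), format="%Y")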
Now, we can initialize the DoubleMLPanelData object, specifying

y_col: the outcome
d_cols: the group variable indicating the first treated period for each unit
id_col: the unique identification column for each unit
t_col: the time column
x_cols: the additional pre-treatment controls
[3]:
# Set values for treatment group indicator for never-treated to np.inf
mpdta.loc[mpdta['first.treat'] == 0, 'first.treat'] = np.inf
dml_data = DoubleMLPanelData(
data=mpdta,
y_col="lemp",
d_cols="first.treat",
id_col="countyreal",
t_col="year",
x_cols=['lpop']
)
print(dml_data)
================== DoubleMLPanelData Object ==================
------------------ Data summary ------------------
Outcome variable: lemp
Treatment variable(s): ['first.treat']
Covariates: ['lpop']
Instrument variable(s): None
Time variable: year
Id variable: countyreal
No. Observations: 500
------------------ DataFrame info ------------------
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2500 entries, 0 to 2499
Columns: 6 entries, year to treat
dtypes: float64(5), int32(1)
memory usage: 107.6 KB
Note that we specified a pre-treatment confounding variable lpop through the x_cols argument. To consider cases under unconditional parallel trends, we can use dummy learners to ignore the pre-treatment confounding variable. This is illustrated below.
ATT Estimation: Unconditional Parallel Trends#
We start with identification under the unconditional parallel trends assumption. To do so, we initialize a DoubleMLDIDMulti object (see model documentation), which takes the previously initialized DoubleMLPanelData object as input. We use scikit-learn’s DummyRegressor (documentation here) and DummyClassifier (documentation here) to ignore the pre-treatment confounding variable. At this stage, we can also pass further options, for example specifying the number of folds and repetitions used for cross-fitting.
When calling the fit() method, the model estimates the standard combinations of \(ATT(g,t)\) parameters, which correspond to the defaults in the did-R-package. These combinations can also be customized through the gt_combinations argument, see the user guide; a brief hedged sketch follows.
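As a minimal hedged sketch (not executed in this notebook), assuming gt_combinations accepts an explicit list of \((\mathrm{g}, t_\text{pre}, t_\text{eval})\) tuples as described in the user guide, one could restrict the estimation to selected combinations; the object name dml_obj_subset is hypothetical.

# Hedged sketch: restrict estimation to two ATT(g, t_pre, t_eval) combinations
# for the 2004 treatment group (assumes the list-of-tuples interface from the user guide).
dml_obj_subset = DoubleMLDIDMulti(
    obj_dml_data=dml_data,
    ml_g=DummyRegressor(),
    ml_m=DummyClassifier(),
    gt_combinations=[(2004.0, 2003, 2004), (2004.0, 2003, 2005)],
    control_group="never_treated",
    n_folds=10
)
dml_obj_subset.fit()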
[4]:
dml_obj = DoubleMLDIDMulti(
obj_dml_data=dml_data,
ml_g=DummyRegressor(),
ml_m=DummyClassifier(),
control_group="never_treated",
n_folds=10
)
dml_obj.fit()
print(dml_obj.summary.round(4))
coef std err t P>|t| 2.5 % 97.5 %
ATT(2004.0,2003,2004) -0.0105 0.0232 -0.4516 0.6516 -0.0560 0.0350
ATT(2004.0,2003,2005) -0.0704 0.0310 -2.2728 0.0230 -0.1311 -0.0097
ATT(2004.0,2003,2006) -0.1372 0.0371 -3.7010 0.0002 -0.2099 -0.0646
ATT(2004.0,2003,2007) -0.1008 0.0344 -2.9314 0.0034 -0.1682 -0.0334
ATT(2006.0,2003,2004) 0.0065 0.0233 0.2797 0.7797 -0.0392 0.0522
ATT(2006.0,2004,2005) -0.0027 0.0195 -0.1395 0.8890 -0.0410 0.0356
ATT(2006.0,2005,2006) -0.0046 0.0178 -0.2578 0.7966 -0.0394 0.0303
ATT(2006.0,2005,2007) -0.0412 0.0202 -2.0458 0.0408 -0.0807 -0.0017
ATT(2007.0,2003,2004) 0.0306 0.0151 2.0219 0.0432 0.0009 0.0602
ATT(2007.0,2004,2005) -0.0027 0.0164 -0.1620 0.8713 -0.0348 0.0295
ATT(2007.0,2005,2006) -0.0311 0.0179 -1.7350 0.0827 -0.0662 0.0040
ATT(2007.0,2006,2007) -0.0259 0.0167 -1.5561 0.1197 -0.0586 0.0067
The summary displays estimates of the \(ATT(g,t_\text{eval})\) effects for different combinations of \((g,t_\text{eval})\) via \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\), where
\(\mathrm{g}\) specifies the group
\(t_\text{pre}\) specifies the corresponding pre-treatment period
\(t_\text{eval}\) specifies the evaluation period
This corresponds to the estimates produced by the att_gt function in the did-R-package, where the standard choice is \(t_\text{pre} = \min(\mathrm{g}, t_\text{eval}) - 1\) (without anticipation). For example, for group \(\mathrm{g}=2006\) and evaluation period \(t_\text{eval}=2007\), the corresponding pre-treatment period is \(t_\text{pre} = \min(2006, 2007) - 1 = 2005\).
Note that this includes pre-test effects if \(\mathrm{g} > t_\text{eval}\), e.g. \(ATT(2007,2005)\).
As usual for the DoubleML-package, you can obtain joint confidence intervals via bootstrap.
[5]:
level = 0.95
ci = dml_obj.confint(level=level)
dml_obj.bootstrap(n_rep_boot=5000)
ci_joint = dml_obj.confint(level=level, joint=True)
print(ci_joint)
2.5 % 97.5 %
ATT(2004.0,2003,2004) -0.076871 0.055907
ATT(2004.0,2003,2005) -0.159040 0.018202
ATT(2004.0,2003,2006) -0.243304 -0.031176
ATT(2004.0,2003,2007) -0.199188 -0.002446
ATT(2006.0,2003,2004) -0.060185 0.073229
ATT(2006.0,2004,2005) -0.058632 0.053178
ATT(2006.0,2005,2006) -0.055458 0.046289
ATT(2006.0,2005,2007) -0.098879 0.016415
ATT(2007.0,2003,2004) -0.012668 0.073769
ATT(2007.0,2004,2005) -0.049574 0.044258
ATT(2007.0,2005,2006) -0.082285 0.020151
ATT(2007.0,2006,2007) -0.073607 0.021736
A visualization of the effects can be obtained via the plot_effects() method.
Note that the plot uses joint confidence intervals by default.
[6]:
fig, ax = dml_obj.plot_effects()
/opt/hostedtoolcache/Python/3.12.10/x64/lib/python3.12/site-packages/matplotlib/cbook.py:1719: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
return math.isfinite(val)

Effect Aggregation#
As in the did-R-package, the \(ATT\)s can be aggregated to summarize multiple effects. For details on the different aggregation schemes and their interpretation, see Callaway and Sant’Anna (2021).
The aggregations are implemented via the aggregate() method. We follow the structure of the did package notebook and start with an aggregation relative to the treatment timing.
Event Study Aggregation#
We can aggregate the \(ATT\)s relative to the treatment timing. This is done by setting aggregation="eventstudy" in the aggregate() method. Setting aggregation="eventstudy" aggregates the \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\) based on the exposure time \(e = t_\text{eval} - \mathrm{g}\) (respecting group sizes).
[7]:
# rerun bootstrap for valid simultaneous inference (as values are not saved)
dml_obj.bootstrap(n_rep_boot=5000)
aggregated_eventstudy = dml_obj.aggregate("eventstudy")
# run bootstrap to obtain simultaneous confidence intervals
aggregated_eventstudy.aggregated_frameworks.bootstrap()
print(aggregated_eventstudy)
fig, ax = aggregated_eventstudy.plot_effects()
================== DoubleMLDIDAggregation Object ==================
Event Study Aggregation
------------------ Overall Aggregated Effects ------------------
coef std err t P>|t| 2.5 % 97.5 %
-0.077216 0.020125 -3.836778 0.000125 -0.11666 -0.037771
------------------ Aggregated Effects ------------------
coef std err t P>|t| 2.5 % 97.5 %
-3.0 0.030551 0.015110 2.021922 0.043184 0.000936 0.060166
-2.0 -0.000510 0.013291 -0.038396 0.969372 -0.026561 0.025540
-1.0 -0.024438 0.014226 -1.717846 0.085825 -0.052320 0.003444
0.0 -0.019846 0.011813 -1.679995 0.092958 -0.042999 0.003307
1.0 -0.050961 0.016759 -3.040777 0.002360 -0.083808 -0.018114
2.0 -0.137240 0.037082 -3.701018 0.000215 -0.209919 -0.064561
3.0 -0.100817 0.034392 -2.931394 0.003374 -0.168224 -0.033410
------------------ Additional Information ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0

Alternatively, the \(ATT\)s could also be aggregated according to (calendar) time periods or treatment groups, see the user guide; a minimal hedged sketch is given below.
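As a hedged sketch (not executed here), these aggregations should be available through the same aggregate() method; the exact option names are documented in the user guide, and the object names aggregated_group and aggregated_time are hypothetical.

# Hedged sketch: aggregate by treatment group or by calendar time period
# (assumes "group" and "time" are the corresponding aggregation options).
aggregated_group = dml_obj.aggregate(aggregation="group")
aggregated_time = dml_obj.aggregate(aggregation="time")
print(aggregated_group)
print(aggregated_time)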
Aggregation Details#
The DoubleMLDIDAggregation objects include several DoubleMLFramework objects which support methods like bootstrap() or confint(). Further, the weights can be accessed via the properties

overall_aggregation_weights: weights for the overall aggregation
aggregation_weights: weights for the aggregation

To clarify, consider the event study aggregation above: to see how a particular aggregated effect is computed, one inspects the corresponding set of weights in the aggregation_weights property. For instance, the third set of weights (index 2, shown below) belongs to exposure time \(e=-1\), since the exposure times range from \(e=-3\) to \(e=3\); it places weight only on the underlying \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\) estimates with \(e=-1\), in proportion to the group sizes.
[8]:
aggregated_eventstudy.aggregation_weights[2]
[8]:
array([0. , 0. , 0. , 0. , 0. ,
0.23391813, 0. , 0. , 0. , 0. ,
0.76608187, 0. ])
ATT Estimation: Conditional Parallel Trends#
We briefly demonstrate how to use the DoubleMLDIDMulti model with conditional parallel trends. As the rationale behind DML is to flexibly model nuisance components as prediction problems, the DML DiD estimator includes pre-treatment covariates by default. In DiD, the nuisance components are the outcome regression and the propensity score for the treatment group variable. This is why we had to enforce dummy learners in the unconditional parallel trends case to ignore the pre-treatment covariates. Now, we can replicate the classical doubly robust DiD estimator of Callaway and Sant’Anna (2021) by using linear and logistic regression for the nuisance components. This is done by setting ml_g to LinearRegression() and ml_m to LogisticRegression(). Similarly, we can also choose other learners, for example by setting ml_g and ml_m to LassoCV() and LogisticRegressionCV(), respectively. We present the results for the ATTs and their event-study aggregation in the corresponding effect plots.
Please note that this example is meant to illustrate the usage of the DoubleMLDIDMulti model in combination with ML learners. In real-data applications, a careful choice and empirical evaluation of the learners are required. Default measures for the prediction of the nuisance components are printed in the model summary, as briefly illustrated below.
[9]:
dml_obj_linear_logistic = DoubleMLDIDMulti(
obj_dml_data=dml_data,
ml_g=LinearRegression(),
ml_m=LogisticRegression(penalty=None),
control_group="never_treated",
n_folds=10
)
dml_obj_linear_logistic.fit()
dml_obj_linear_logistic.bootstrap(n_rep_boot=5000)
dml_obj_linear_logistic.plot_effects(title="Estimated ATTs by Group, Linear and logistic Regression")
/opt/hostedtoolcache/Python/3.12.10/x64/lib/python3.12/site-packages/matplotlib/cbook.py:1719: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
return math.isfinite(val)
[9]:
(<Figure size 1200x800 with 4 Axes>,
[<Axes: title={'center': 'First Treated: 2004.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2006.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2007.0'}, xlabel='Evaluation Period', ylabel='Effect'>])

We briefly look at the model summary, which includes some standard diagnostics for the prediction of the nuisance components.
[10]:
print(dml_obj_linear_logistic)
================== DoubleMLDIDMulti Object ==================
------------------ Data summary ------------------
Outcome variable: lemp
Treatment variable(s): ['first.treat']
Covariates: ['lpop']
Instrument variable(s): None
Time variable: year
Id variable: countyreal
No. Observations: 500
------------------ Score & algorithm ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0
------------------ Machine learner ------------------
Learner ml_g: LinearRegression()
Learner ml_m: LogisticRegression(penalty=None)
Out-of-sample Performance:
Regression:
Learner ml_g0 RMSE: [[0.17226484 0.18166755 0.25783132 0.25758358 0.17220879 0.15168775
0.2012599 0.20593027 0.17155829 0.1514822 0.20084731 0.16406172]]
Learner ml_g1 RMSE: [[0.10216842 0.12879101 0.14425857 0.13766611 0.13862368 0.11669285
0.08577943 0.10432464 0.13512132 0.16147053 0.16200373 0.16111342]]
Classification:
Learner ml_m Log Loss: [[0.22941482 0.23091908 0.23213826 0.23184114 0.34832641 0.34858298
0.34858917 0.3493391 0.6052269 0.6058827 0.60544154 0.60677155]]
------------------ Resampling ------------------
No. folds: 10
No. repeated sample splits: 1
------------------ Fit summary ------------------
coef std err t P>|t| 2.5 % \
ATT(2004.0,2003,2004) -0.014054 0.022150 -0.634506 0.525750 -0.057466
ATT(2004.0,2003,2005) -0.076466 0.028540 -2.679268 0.007378 -0.132403
ATT(2004.0,2003,2006) -0.146698 0.034441 -4.259399 0.000020 -0.214201
ATT(2004.0,2003,2007) -0.105661 0.033077 -3.194448 0.001401 -0.170490
ATT(2006.0,2003,2004) -0.001484 0.022190 -0.066859 0.946694 -0.044975
ATT(2006.0,2004,2005) -0.006535 0.018536 -0.352557 0.724421 -0.042864
ATT(2006.0,2005,2006) 0.002804 0.019890 0.141000 0.887870 -0.036179
ATT(2006.0,2005,2007) -0.040352 0.019923 -2.025390 0.042827 -0.079401
ATT(2007.0,2003,2004) 0.027132 0.014105 1.923550 0.054411 -0.000514
ATT(2007.0,2004,2005) -0.004200 0.015737 -0.266882 0.789560 -0.035044
ATT(2007.0,2005,2006) -0.028712 0.018166 -1.580518 0.113988 -0.064317
ATT(2007.0,2006,2007) -0.028947 0.016303 -1.775575 0.075803 -0.060900
97.5 %
ATT(2004.0,2003,2004) 0.029358
ATT(2004.0,2003,2005) -0.020529
ATT(2004.0,2003,2006) -0.079195
ATT(2004.0,2003,2007) -0.040832
ATT(2006.0,2003,2004) 0.042008
ATT(2006.0,2004,2005) 0.029794
ATT(2006.0,2005,2006) 0.041788
ATT(2006.0,2005,2007) -0.001303
ATT(2007.0,2003,2004) 0.054777
ATT(2007.0,2004,2005) 0.026644
ATT(2007.0,2005,2006) 0.006893
ATT(2007.0,2006,2007) 0.003006
[11]:
es_linear_logistic = dml_obj_linear_logistic.aggregate("eventstudy")
es_linear_logistic.aggregated_frameworks.bootstrap()
es_linear_logistic.plot_effects(title="Estimated ATTs by Group, Linear and logistic Regression")
[11]:
(<Figure size 1200x600 with 1 Axes>,
<Axes: title={'center': 'Estimated ATTs by Group, Linear and logistic Regression'}, ylabel='Effect'>)

[12]:
dml_obj_lasso = DoubleMLDIDMulti(
obj_dml_data=dml_data,
ml_g=LassoCV(),
ml_m=LogisticRegressionCV(),
control_group="never_treated",
n_folds=10
)
dml_obj_lasso.fit()
dml_obj_lasso.bootstrap(n_rep_boot=5000)
dml_obj_lasso.plot_effects(title="Estimated ATTs by Group, LassoCV and LogisticRegressionCV()")
/opt/hostedtoolcache/Python/3.12.10/x64/lib/python3.12/site-packages/matplotlib/cbook.py:1719: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
return math.isfinite(val)
[12]:
(<Figure size 1200x800 with 4 Axes>,
[<Axes: title={'center': 'First Treated: 2004.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2006.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2007.0'}, xlabel='Evaluation Period', ylabel='Effect'>])

[13]:
# Model summary
print(dml_obj_lasso)
================== DoubleMLDIDMulti Object ==================
------------------ Data summary ------------------
Outcome variable: lemp
Treatment variable(s): ['first.treat']
Covariates: ['lpop']
Instrument variable(s): None
Time variable: year
Id variable: countyreal
No. Observations: 500
------------------ Score & algorithm ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0
------------------ Machine learner ------------------
Learner ml_g: LassoCV()
Learner ml_m: LogisticRegressionCV()
Out-of-sample Performance:
Regression:
Learner ml_g0 RMSE: [[0.17334516 0.18182436 0.25907442 0.25916087 0.17240567 0.15208742
0.20111859 0.20599422 0.17393255 0.15246867 0.20196347 0.16375282]]
Learner ml_g1 RMSE: [[0.1020218 0.12694835 0.14055715 0.15369487 0.13955543 0.11426874
0.08906546 0.1057638 0.13274251 0.16319667 0.1596991 0.15915781]]
Classification:
Learner ml_m Log Loss: [[0.22913291 0.22913943 0.22913294 0.22913756 0.35595646 0.35596884
0.35597197 0.35596843 0.60885715 0.60884359 0.61317944 0.61330627]]
------------------ Resampling ------------------
No. folds: 10
No. repeated sample splits: 1
------------------ Fit summary ------------------
coef std err t P>|t| 2.5 % \
ATT(2004.0,2003,2004) -0.013439 0.022971 -0.585035 0.558524 -0.058461
ATT(2004.0,2003,2005) -0.075213 0.028823 -2.609472 0.009068 -0.131705
ATT(2004.0,2003,2006) -0.137772 0.035654 -3.864143 0.000111 -0.207653
ATT(2004.0,2003,2007) -0.109893 0.034053 -3.227120 0.001250 -0.176635
ATT(2006.0,2003,2004) -0.001413 0.023393 -0.060385 0.951849 -0.047261
ATT(2006.0,2004,2005) -0.004386 0.019365 -0.226500 0.820813 -0.042342
ATT(2006.0,2005,2006) -0.004544 0.017827 -0.254911 0.798792 -0.039484
ATT(2006.0,2005,2007) -0.041241 0.020216 -2.040035 0.041347 -0.080863
ATT(2007.0,2003,2004) 0.025978 0.015149 1.714852 0.086372 -0.003713
ATT(2007.0,2004,2005) -0.005121 0.016412 -0.312012 0.755031 -0.037289
ATT(2007.0,2005,2006) -0.028285 0.017837 -1.585766 0.112792 -0.063245
ATT(2007.0,2006,2007) -0.030477 0.016885 -1.804972 0.071079 -0.063572
97.5 %
ATT(2004.0,2003,2004) 0.031583
ATT(2004.0,2003,2005) -0.018721
ATT(2004.0,2003,2006) -0.067892
ATT(2004.0,2003,2007) -0.043150
ATT(2006.0,2003,2004) 0.044436
ATT(2006.0,2004,2005) 0.033569
ATT(2006.0,2005,2006) 0.030396
ATT(2006.0,2005,2007) -0.001619
ATT(2007.0,2003,2004) 0.055669
ATT(2007.0,2004,2005) 0.027047
ATT(2007.0,2005,2006) 0.006675
ATT(2007.0,2006,2007) 0.002617
[14]:
es_rf = dml_obj_lasso.aggregate("eventstudy")
es_rf.aggregated_frameworks.bootstrap()
es_rf.plot_effects(title="Estimated ATTs by Group, LassoCV and LogisticRegressionCV()")
[14]:
(<Figure size 1200x600 with 1 Axes>,
<Axes: title={'center': 'Estimated ATTs by Group, LassoCV and LogisticRegressionCV()'}, ylabel='Effect'>)
