Note
Download Jupyter notebook:
https://docs.doubleml.org/stable/examples/did/py_panel_data_example.ipynb.
Python: Real-Data Example for Multi-Period Difference-in-Differences#
In this example, we replicate a real-data demo notebook from the did-R-package in order to illustrate the use of DoubleML
for multi-period difference-in-differences (DiD) models.
The notebook requires the following packages:
[1]:
import pyreadr
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.dummy import DummyRegressor, DummyClassifier
from sklearn.linear_model import LassoCV, LogisticRegressionCV
from doubleml.data import DoubleMLPanelData
from doubleml.did import DoubleMLDIDMulti
Causal Research Question#
Callaway and Sant’Anna (2021) study the causal effect of raising the minimum wage on teen employment in the US using county data over a period from 2001 to 2007. A county is defined as treated if the minimum wage in that county is above the federal minimum wage. We focus on a preprocessed balanced panel data set as provided by the did-R-package. The corresponding documentation for the mpdta
data is available from the did package website. We use this data solely as a demonstration example to help readers understand differences in the DoubleML
and did
packages. An analogous notebook using the same data is available from the did documentation.
We follow the original notebook and provide results under identification based on unconditional and conditional parallel trends. For the Double Machine Learning (DML) Difference-in-Differences estimator, we demonstrate two specifications: one based on linear and logistic regression, and one based on their \(\ell_1\)-penalized variants, Lasso and logistic regression with cross-validated penalty choice. The results for the former are expected to be very similar to those in the did data example; minor differences might arise from the use of sample splitting in the DML estimation.
Data#
We will download and read a preprocessed data file as provided by the did-R-package.
[2]:
# download file from did package for R
url = "https://github.com/bcallaway11/did/raw/refs/heads/master/data/mpdta.rda"
pyreadr.download_file(url, "mpdta.rda")
mpdta = pyreadr.read_r("mpdta.rda")["mpdta"]
mpdta.head()
[2]:
|   | year | countyreal | lpop     | lemp     | first.treat | treat |
|---|------|------------|----------|----------|-------------|-------|
| 0 | 2003 | 8001.0     | 5.896761 | 8.461469 | 2007.0      | 1.0   |
| 1 | 2004 | 8001.0     | 5.896761 | 8.336870 | 2007.0      | 1.0   |
| 2 | 2005 | 8001.0     | 5.896761 | 8.340217 | 2007.0      | 1.0   |
| 3 | 2006 | 8001.0     | 5.896761 | 8.378161 | 2007.0      | 1.0   |
| 4 | 2007 | 8001.0     | 5.896761 | 8.487352 | 2007.0      | 1.0   |
To work with DoubleML, we initialize a DoubleMLPanelData
object. The input data has to satisfy some requirements: it should be in long format, with every row containing the information on one unit at one time period. Moreover, the data should contain a column with the unit identifier and a column with the time period. The requirements are virtually identical to those of the
did-R-package, as listed in their data example. In line with the naming conventions of DoubleML, the treatment group indicator is passed to DoubleMLPanelData
by the d_cols
argument. To flexibly handle different time period formats, the time variable t_col supports float, int and datetime formats. More information is available in the user guide. To indicate never-treated units, we set their value for the treatment group variable to np.inf.
Now, we can initialize the DoubleMLPanelData
object, specifying:
- y_col: the outcome
- d_cols: the group variable indicating the first treated period for each unit
- id_col: the unique identification column for each unit
- t_col: the time column
- x_cols: the additional pre-treatment controls
[3]:
# Set values for treatment group indicator for never-treated to np.inf
mpdta.loc[mpdta['first.treat'] == 0, 'first.treat'] = np.inf
dml_data = DoubleMLPanelData(
data=mpdta,
y_col="lemp",
d_cols="first.treat",
id_col="countyreal",
t_col="year",
x_cols=['lpop']
)
print(dml_data)
================== DoubleMLPanelData Object ==================
------------------ Data summary ------------------
Outcome variable: lemp
Treatment variable(s): ['first.treat']
Covariates: ['lpop']
Instrument variable(s): None
Time variable: year
Id variable: countyreal
No. Observations: 500
------------------ DataFrame info ------------------
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2500 entries, 0 to 2499
Columns: 6 entries, year to treat
dtypes: float64(5), int32(1)
memory usage: 107.6 KB
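As a quick sanity check (a small sketch using plain pandas, not part of the original notebook), we can count how many counties fall into each treatment group after recoding never-treated units to np.inf:
# number of counties per first-treatment period (np.inf = never treated)
print(mpdta.groupby('first.treat')['countyreal'].nunique())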
Note that we specified a pre-treatment confounding variable lpop
through the x_cols
argument. To consider cases under unconditional parallel trends, we can use dummy learners to ignore the pre-treatment confounding variable. This is illustrated below.
ATT Estimation: Unconditional Parallel Trends#
We start with identification under the unconditional parallel trends assumption. To do so, initialize a DoubleMLDIDMulti
object (see model documentation), which takes the previously initialized DoubleMLPanelData
object as input. We use scikit-learn’s DummyRegressor
(documentation here) and
DummyClassifier
(documentation here) to ignore the pre-treatment confounding variable. At this stage, we can also pass further options, for example specifying the number of folds and repetitions used for cross-fitting.
When calling the fit() method, the model estimates the standard combinations of \(ATT(g,t)\) parameters, which correspond to the defaults in the did-R-package. These combinations can also be customized through the gt_combinations argument; see the user guide and the sketch below.
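Although such a customization is not needed for this example, it could look roughly as follows. This is only a sketch: the string option "all" for estimating all possible combinations is taken from the user guide and should be treated as an assumption here.
# hedged sketch: request all ATT(g, t_pre, t_eval) combinations instead of the standard set
dml_obj_all = DoubleMLDIDMulti(
    obj_dml_data=dml_data,
    ml_g=DummyRegressor(),
    ml_m=DummyClassifier(),
    gt_combinations="all",  # assumption: string option described in the user guide
    control_group="never_treated",
    n_folds=10
)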
[4]:
dml_obj = DoubleMLDIDMulti(
obj_dml_data=dml_data,
ml_g=DummyRegressor(),
ml_m=DummyClassifier(),
control_group="never_treated",
n_folds=10
)
dml_obj.fit()
print(dml_obj.summary.round(4))
coef std err t P>|t| 2.5 % 97.5 %
ATT(2004.0,2003,2004) -0.0105 0.0232 -0.4529 0.6506 -0.0560 0.0350
ATT(2004.0,2003,2005) -0.0704 0.0310 -2.2704 0.0232 -0.1312 -0.0096
ATT(2004.0,2003,2006) -0.1372 0.0364 -3.7740 0.0002 -0.2085 -0.0660
ATT(2004.0,2003,2007) -0.1008 0.0347 -2.9033 0.0037 -0.1689 -0.0328
ATT(2006.0,2003,2004) 0.0066 0.0234 0.2809 0.7788 -0.0392 0.0523
ATT(2006.0,2004,2005) -0.0028 0.0196 -0.1411 0.8878 -0.0412 0.0357
ATT(2006.0,2005,2006) -0.0046 0.0178 -0.2615 0.7937 -0.0394 0.0302
ATT(2006.0,2005,2007) -0.0412 0.0203 -2.0270 0.0427 -0.0811 -0.0014
ATT(2007.0,2003,2004) 0.0305 0.0151 2.0243 0.0429 0.0010 0.0600
ATT(2007.0,2004,2005) -0.0027 0.0164 -0.1653 0.8687 -0.0349 0.0295
ATT(2007.0,2005,2006) -0.0313 0.0179 -1.7482 0.0804 -0.0663 0.0038
ATT(2007.0,2006,2007) -0.0261 0.0167 -1.5631 0.1180 -0.0588 0.0066
The summary displays estimates of the \(ATT(g,t_\text{eval})\) effects for different combinations of \((g,t_\text{eval})\) via \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\), where
- \(\mathrm{g}\) specifies the group
- \(t_\text{pre}\) specifies the corresponding pre-treatment period
- \(t_\text{eval}\) specifies the evaluation period
This corresponds to the estimates produced by the att_gt function in the did-R-package, where the standard choice is \(t_\text{pre} = \min(\mathrm{g}, t_\text{eval}) - 1\) (without anticipation). For example, for group \(\mathrm{g}=2007\) and evaluation period \(t_\text{eval}=2006\), this yields \(t_\text{pre} = 2005\), i.e. the row \(ATT(2007.0,2005,2006)\) in the summary above.
Note that this also includes pre-test effects whenever \(\mathrm{g} > t_\text{eval}\), e.g. \(ATT(2007,2005)\).
As usual for the DoubleML-package, you can obtain joint confidence intervals via bootstrap.
[5]:
level = 0.95
ci = dml_obj.confint(level=level)
dml_obj.bootstrap(n_rep_boot=5000)
ci_joint = dml_obj.confint(level=level, joint=True)
print(ci_joint)
2.5 % 97.5 %
ATT(2004.0,2003,2004) -0.076096 0.055083
ATT(2004.0,2003,2005) -0.158133 0.017272
ATT(2004.0,2003,2006) -0.240050 -0.034433
ATT(2004.0,2003,2007) -0.199012 -0.002646
ATT(2006.0,2003,2004) -0.059462 0.072582
ATT(2006.0,2004,2005) -0.058209 0.052676
ATT(2006.0,2005,2006) -0.054835 0.045549
ATT(2006.0,2005,2007) -0.098765 0.016280
ATT(2007.0,2003,2004) -0.012093 0.073076
ATT(2007.0,2004,2005) -0.049154 0.043724
ATT(2007.0,2005,2006) -0.081829 0.019296
ATT(2007.0,2006,2007) -0.073240 0.021089
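For illustration, the pointwise intervals ci and the joint intervals ci_joint computed above can be compared directly; a small sketch using only the two DataFrames already in memory (the joint intervals are wider by construction):
# ratio of joint to pointwise confidence interval widths
width_pointwise = ci["97.5 %"] - ci["2.5 %"]
width_joint = ci_joint["97.5 %"] - ci_joint["2.5 %"]
print((width_joint / width_pointwise).round(3))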
A visualization of the effects can be obtained via the plot_effects()
method.
Note that the plot uses joint confidence intervals by default.
[6]:
fig, ax = dml_obj.plot_effects()
[Figure: Estimated ATTs by group and evaluation period]
Effect Aggregation#
As in the did-R-package, the \(ATT\)s can be aggregated to summarize multiple effects. For details on the different aggregations and their interpretation, see Callaway and Sant’Anna (2021).
The aggregations are implemented via the aggregate()
method. We follow the structure of the did package notebook and start with an aggregation relative to the treatment timing.
Event Study Aggregation#
We can aggregate the \(ATT\)s relative to the treatment timing by setting aggregation="eventstudy" in the aggregate() method. This aggregates \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\) based on the exposure time \(e = t_\text{eval} - \mathrm{g}\), weighting by group size.
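To make this concrete, consider exposure time \(e = 1\): only the groups first treated in 2004 and 2006 are observed one year after treatment (group 2007 is not), and their effects are weighted by relative group size (20 and 40 treated counties in mpdta, respectively). A back-of-the-envelope check based on the summary above gives
\(\widehat{\theta}_{es}(1) \approx \tfrac{20}{60} \cdot (-0.0704) + \tfrac{40}{60} \cdot (-0.0412) \approx -0.051,\)
which reproduces, up to rounding, the entry for \(e = 1\) in the aggregation output below.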
[7]:
# rerun bootstrap for valid simultaneous inference (as values are not saved)
dml_obj.bootstrap(n_rep_boot=5000)
aggregated_eventstudy = dml_obj.aggregate("eventstudy")
# run bootstrap to obtain simultaneous confidence intervals
aggregated_eventstudy.aggregated_frameworks.bootstrap()
print(aggregated_eventstudy)
fig, ax = aggregated_eventstudy.plot_effects()
================== DoubleMLDIDAggregation Object ==================
Event Study Aggregation
------------------ Overall Aggregated Effects ------------------
coef std err t P>|t| 2.5 % 97.5 %
-0.07725 0.020031 -3.856522 0.000115 -0.11651 -0.03799
------------------ Aggregated Effects ------------------
coef std err t P>|t| 2.5 % 97.5 %
-3.0 0.030492 0.015063 2.024336 0.042936 0.000970 0.060014
-2.0 -0.000545 0.013308 -0.040976 0.967315 -0.026629 0.025539
-1.0 -0.024600 0.014215 -1.730528 0.083536 -0.052461 0.003261
0.0 -0.019957 0.011821 -1.688268 0.091360 -0.043125 0.003212
1.0 -0.050972 0.016867 -3.022040 0.002511 -0.084030 -0.017914
2.0 -0.137242 0.036365 -3.774049 0.000161 -0.208515 -0.065969
3.0 -0.100829 0.034729 -2.903340 0.003692 -0.168895 -0.032762
------------------ Additional Information ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0
[Figure: Event study aggregation of the estimated ATTs]
Alternatively, the \(ATT\)s could also be aggregated according to (calendar) time periods or treatment groups; see the user guide and the sketch below.
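A hedged sketch of these alternatives (the aggregation names "group" and "time" are taken from the user guide and should be verified there):
# alternative aggregations across treatment groups or calendar time periods
agg_group = dml_obj.aggregate(aggregation="group")
agg_time = dml_obj.aggregate(aggregation="time")
print(agg_group)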
Aggregation Details#
The DoubleMLDIDAggregation
objects include several DoubleMLFrameworks
which support methods like bootstrap()
or confint()
. Further, the weights can be accessed via the properties
- overall_aggregation_weights: weights for the overall aggregation
- aggregation_weights: weights for the aggregation
To clarify, consider the event study aggregation: to see how the aggregated effect for exposure time \(e=-1\) (the third aggregated effect above) is computed, one has to look at the third set of weights within the aggregation_weights property.
[8]:
aggregated_eventstudy.aggregation_weights[2]
[8]:
array([0. , 0. , 0. , 0. , 0. ,
0.23391813, 0. , 0. , 0. , 0. ,
0.76608187, 0. ])
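These weights are proportional to the sizes of the two groups contributing to \(e = -1\) (first treated in 2006 and 2007). As a small sketch, assuming the basic \(ATT\) estimates are exposed via dml_obj.coef in the same order as the fit summary, the aggregated effect can be reconstructed by hand:
# weighted sum of the basic ATT estimates; should match the e = -1.0 entry above
weights_e_minus1 = aggregated_eventstudy.aggregation_weights[2]
print(np.sum(weights_e_minus1 * dml_obj.coef))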
ATT Estimation: Conditional Parallel Trends#
We briefly demonstrate how to use the DoubleMLDIDMulti
model with conditional parallel trends. As the rationale behind DML is to flexibly model nuisance components as prediction problems, the DML DiD estimator includes pre-treatment covariates by default. In DiD, the nuisance components are the outcome regression and the propensity score estimation for the treatment group variable. This is why we had to enforce dummy learners in the unconditional parallel trends case to ignore the
pre-treatment covariates. Now, we can replicate the classical doubly robust DiD estimator of Callaway and Sant’Anna (2021) by using linear and logistic regression for the nuisance components. This is done by setting ml_g
to LinearRegression()
and ml_m
to LogisticRegression()
. Similarly, we can also choose other learners, for example by setting ml_g
and ml_m
to LassoCV()
and LogisticRegressionCV()
. We present
the results for the ATTs and their event-study aggregation in the corresponding effect plots.
Please note that the example is meant to illustrate the usage of the DoubleMLDIDMulti
model in combination with ML learners. In real-data applications, careful choice and empirical evaluation of the learners are required. Default performance measures for the nuisance predictions are printed in the model summary, as briefly illustrated below.
[9]:
dml_obj_linear_logistic = DoubleMLDIDMulti(
obj_dml_data=dml_data,
ml_g=LinearRegression(),
ml_m=LogisticRegression(penalty=None),
control_group="never_treated",
n_folds=10
)
dml_obj_linear_logistic.fit()
dml_obj_linear_logistic.bootstrap(n_rep_boot=5000)
dml_obj_linear_logistic.plot_effects(title="Estimated ATTs by Group, Linear and logistic Regression")
[9]:
(<Figure size 1200x800 with 4 Axes>,
[<Axes: title={'center': 'First Treated: 2004.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2006.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2007.0'}, xlabel='Evaluation Period', ylabel='Effect'>])
[Figure: Estimated ATTs by Group, Linear and logistic Regression]
We briefly look at the model summary, which includes some standard diagnostics for the prediction of the nuisance components.
[10]:
print(dml_obj_linear_logistic)
================== DoubleMLDIDMulti Object ==================
------------------ Data summary ------------------
Outcome variable: lemp
Treatment variable(s): ['first.treat']
Covariates: ['lpop']
Instrument variable(s): None
Time variable: year
Id variable: countyreal
No. Observations: 500
------------------ Score & algorithm ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0
------------------ Machine learner ------------------
Learner ml_g: LinearRegression()
Learner ml_m: LogisticRegression(penalty=None)
Out-of-sample Performance:
Regression:
Learner ml_g0 RMSE: [[0.17310774 0.18138656 0.26005339 0.25843799 0.17168557 0.15189413
0.20100187 0.20664644 0.17207549 0.15167985 0.20104387 0.16454762]]
Learner ml_g1 RMSE: [[0.10038215 0.12688924 0.13735069 0.15347984 0.13836257 0.11056343
0.08712773 0.10433097 0.13370003 0.16245033 0.16080991 0.16124508]]
Classification:
Learner ml_m Log Loss: [[0.22910947 0.23019094 0.23021164 0.23239595 0.34903417 0.34805949
0.34863355 0.3497071 0.60483543 0.60563165 0.60700881 0.60702616]]
------------------ Resampling ------------------
No. folds: 10
No. repeated sample splits: 1
------------------ Fit summary ------------------
coef std err t P>|t| 2.5 % \
ATT(2004.0,2003,2004) -0.015259 0.022429 -0.680323 0.496300 -0.059218
ATT(2004.0,2003,2005) -0.077849 0.028347 -2.746326 0.006027 -0.133408
ATT(2004.0,2003,2006) -0.140491 0.035129 -3.999354 0.000064 -0.209342
ATT(2004.0,2003,2007) -0.104971 0.033841 -3.101876 0.001923 -0.171298
ATT(2006.0,2003,2004) 0.000917 0.022310 0.041121 0.967199 -0.042809
ATT(2006.0,2004,2005) -0.005136 0.018502 -0.277580 0.781335 -0.041398
ATT(2006.0,2005,2006) 0.001733 0.019831 0.087376 0.930373 -0.037136
ATT(2006.0,2005,2007) -0.040430 0.019662 -2.056202 0.039763 -0.078967
ATT(2007.0,2003,2004) 0.026940 0.014060 1.916133 0.055348 -0.000616
ATT(2007.0,2004,2005) -0.004465 0.015707 -0.284298 0.776182 -0.035250
ATT(2007.0,2005,2006) -0.028889 0.018307 -1.578061 0.114552 -0.064770
ATT(2007.0,2006,2007) -0.028560 0.016158 -1.767570 0.077133 -0.060230
97.5 %
ATT(2004.0,2003,2004) 0.028701
ATT(2004.0,2003,2005) -0.022291
ATT(2004.0,2003,2006) -0.071641
ATT(2004.0,2003,2007) -0.038644
ATT(2006.0,2003,2004) 0.044644
ATT(2006.0,2004,2005) 0.031127
ATT(2006.0,2005,2006) 0.040601
ATT(2006.0,2005,2007) -0.001892
ATT(2007.0,2003,2004) 0.054497
ATT(2007.0,2004,2005) 0.026320
ATT(2007.0,2005,2006) 0.006991
ATT(2007.0,2006,2007) 0.003109
[11]:
es_linear_logistic = dml_obj_linear_logistic.aggregate("eventstudy")
es_linear_logistic.aggregated_frameworks.bootstrap()
es_linear_logistic.plot_effects(title="Estimated ATTs by Group, Linear and logistic Regression")
[11]:
(<Figure size 1200x600 with 1 Axes>,
<Axes: title={'center': 'Estimated ATTs by Group, Linear and logistic Regression'}, ylabel='Effect'>)
[Figure: Estimated ATTs by Group, Linear and logistic Regression (event study aggregation)]
[12]:
dml_obj_lasso = DoubleMLDIDMulti(
obj_dml_data=dml_data,
ml_g=LassoCV(),
ml_m=LogisticRegressionCV(),
control_group="never_treated",
n_folds=10
)
dml_obj_lasso.fit()
dml_obj_lasso.bootstrap(n_rep_boot=5000)
dml_obj_lasso.plot_effects(title="Estimated ATTs by Group, LassoCV and LogisticRegressionCV()")
[12]:
(<Figure size 1200x800 with 4 Axes>,
[<Axes: title={'center': 'First Treated: 2004.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2006.0'}, ylabel='Effect'>,
<Axes: title={'center': 'First Treated: 2007.0'}, xlabel='Evaluation Period', ylabel='Effect'>])
[Figure: Estimated ATTs by Group, LassoCV and LogisticRegressionCV]
[13]:
# Model summary
print(dml_obj_lasso)
================== DoubleMLDIDMulti Object ==================
------------------ Data summary ------------------
Outcome variable: lemp
Treatment variable(s): ['first.treat']
Covariates: ['lpop']
Instrument variable(s): None
Time variable: year
Id variable: countyreal
No. Observations: 500
------------------ Score & algorithm ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0
------------------ Machine learner ------------------
Learner ml_g: LassoCV()
Learner ml_m: LogisticRegressionCV()
Out-of-sample Performance:
Regression:
Learner ml_g0 RMSE: [[0.17337434 0.18189187 0.25837942 0.26033963 0.17423946 0.15127751
0.20094498 0.2060928 0.17447537 0.1525446 0.20048868 0.16299535]]
Learner ml_g1 RMSE: [[0.10056645 0.12254485 0.14047189 0.15126726 0.14180365 0.11151809
0.08723916 0.1060294 0.1326365 0.16162204 0.15912645 0.15939675]]
Classification:
Learner ml_m Log Loss: [[0.22913701 0.22914032 0.22913825 0.22913749 0.35597208 0.35595841
0.35595271 0.35595694 0.60884536 0.60887943 0.60884129 0.60884323]]
------------------ Resampling ------------------
No. folds: 10
No. repeated sample splits: 1
------------------ Fit summary ------------------
coef std err t P>|t| 2.5 % \
ATT(2004.0,2003,2004) -0.013391 0.022920 -0.584253 0.559050 -0.058313
ATT(2004.0,2003,2005) -0.076858 0.029177 -2.634209 0.008433 -0.134044
ATT(2004.0,2003,2006) -0.137151 0.035721 -3.839463 0.000123 -0.207163
ATT(2004.0,2003,2007) -0.106440 0.032870 -3.238269 0.001203 -0.170864
ATT(2006.0,2003,2004) 0.003189 0.023550 0.135413 0.892285 -0.042968
ATT(2006.0,2004,2005) -0.004944 0.019319 -0.255922 0.798011 -0.042809
ATT(2006.0,2005,2006) -0.004300 0.017750 -0.242239 0.808595 -0.039090
ATT(2006.0,2005,2007) -0.041221 0.020407 -2.019926 0.043391 -0.081219
ATT(2007.0,2003,2004) 0.026642 0.015284 1.743168 0.081304 -0.003313
ATT(2007.0,2004,2005) -0.004285 0.016428 -0.260840 0.794216 -0.036484
ATT(2007.0,2005,2006) -0.031044 0.017893 -1.734969 0.082746 -0.066115
ATT(2007.0,2006,2007) -0.028140 0.016833 -1.671646 0.094594 -0.061133
97.5 %
ATT(2004.0,2003,2004) 0.031531
ATT(2004.0,2003,2005) -0.019672
ATT(2004.0,2003,2006) -0.067138
ATT(2004.0,2003,2007) -0.042017
ATT(2006.0,2003,2004) 0.049345
ATT(2006.0,2004,2005) 0.032921
ATT(2006.0,2005,2006) 0.030490
ATT(2006.0,2005,2007) -0.001224
ATT(2007.0,2003,2004) 0.056598
ATT(2007.0,2004,2005) 0.027914
ATT(2007.0,2005,2006) 0.004026
ATT(2007.0,2006,2007) 0.004853
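Before aggregating, it can be instructive to put the ATT estimates of the two specifications side by side; a small sketch using the summary DataFrames shown above:
# compare coefficient estimates of the two nuisance specifications
comparison = pd.concat(
    {"linear_logistic": dml_obj_linear_logistic.summary["coef"],
     "lasso_cv": dml_obj_lasso.summary["coef"]},
    axis=1
)
print(comparison.round(4))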
[14]:
es_lasso = dml_obj_lasso.aggregate("eventstudy")
es_lasso.aggregated_frameworks.bootstrap()
es_lasso.plot_effects(title="Estimated ATTs by Group, LassoCV and LogisticRegressionCV()")
[14]:
(<Figure size 1200x600 with 1 Axes>,
<Axes: title={'center': 'Estimated ATTs by Group, LassoCV and LogisticRegressionCV()'}, ylabel='Effect'>)
[Figure: Estimated ATTs by Group, LassoCV and LogisticRegressionCV (event study aggregation)]