Python: Panel Data with Multiple Time Periods#

This example provides a detailed guide to Difference-in-Differences with multiple time periods using the DoubleML package. The implementation is based on Callaway and Sant’Anna (2021).

The notebook requires the following packages:

[1]:
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

from lightgbm import LGBMRegressor, LGBMClassifier
from sklearn.linear_model import LinearRegression, LogisticRegression

from doubleml.did import DoubleMLDIDMulti
from doubleml.data import DoubleMLPanelData

from doubleml.did.datasets import make_did_CS2021

Data#

We will rely on the make_did_CS2021 DGP, which is inspired by Callaway and Sant’Anna (2021) (Appendix SC) and Sant’Anna and Zhao (2020).

We will observe n_obs units over n_periods time periods. Note that the dataframe includes observations of the potential outcomes y0 and y1, so that we can use oracle estimates as comparisons.

[2]:
n_obs = 5000
n_periods = 6

df = make_did_CS2021(n_obs, dgp_type=4, n_periods=n_periods, n_pre_treat_periods=3, time_type="datetime")
df["ite"] = df["y1"] - df["y0"]

print(df.shape)
df.head()
(30000, 11)
[2]:
id y y0 y1 d t Z1 Z2 Z3 Z4 ite
0 0 207.918657 207.918657 208.754770 2025-05-01 2025-01-01 0.066515 0.285642 -0.408956 -0.002421 0.836113
1 0 209.737010 209.737010 210.531176 2025-05-01 2025-02-01 0.066515 0.285642 -0.408956 -0.002421 0.794166
2 0 210.939501 210.939501 209.541660 2025-05-01 2025-03-01 0.066515 0.285642 -0.408956 -0.002421 -1.397842
3 0 208.311300 208.311300 209.637482 2025-05-01 2025-04-01 0.066515 0.285642 -0.408956 -0.002421 1.326183
4 0 209.909184 208.150339 209.909184 2025-05-01 2025-05-01 0.066515 0.285642 -0.408956 -0.002421 1.758845

Data Details#

Here, we slightly abuse the definition of the potential outcomes. \(Y_{i,t}(1)\) corresponds to the (potential) outcome if unit \(i\) would have received treatment at time period \(\mathrm{g}\) (where the group \(\mathrm{g}\) is drawn with probabilities based on \(Z\)).

More specifically

\[\begin{split}\begin{align*} Y_{i,t}(0)&:= f_t(Z) + \delta_t + \eta_i + \epsilon_{i,t,0}\\ Y_{i,t}(1)&:= Y_{i,t}(0) + \theta_{i,t,\mathrm{g}} + \epsilon_{i,t,1} - \epsilon_{i,t,0} \end{align*}\end{split}\]

where

  • \(f_t(Z)\) depends on pre-treatment observable covariates \(Z_1,\dots, Z_4\) and time \(t\)

  • \(\delta_t\) is a time fixed effect

  • \(\eta_i\) is a unit fixed effect

  • \(\epsilon_{i,t,\cdot}\) are time-varying unobservables (i.i.d. \(N(0,1)\))

  • \(\theta_{i,t,\mathrm{g}}\) corresponds to the exposure effect of unit \(i\) based on group \(\mathrm{g}\) at time \(t\)

For the pre-treatment periods the exposure effect is set to

\[\theta_{i,t,\mathrm{g}}:= 0 \text{ for } t<\mathrm{g}\]

such that

\[\mathbb{E}[Y_{i,t}(1) - Y_{i,t}(0)] = \mathbb{E}[\epsilon_{i,t,1} - \epsilon_{i,t,0}]=0 \text{ for } t<\mathrm{g}\]
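A schematic simulation (not the package DGP, just the structure of the equations above) illustrates that the pre-treatment effect averages to zero once \(\theta_{i,t,\mathrm{g}} = 0\):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# pre-treatment (t < g): theta = 0, so Y(1) - Y(0) = eps1 - eps0
eps0 = rng.standard_normal(n)  # epsilon_{i,t,0}
eps1 = rng.standard_normal(n)  # epsilon_{i,t,1}
theta = 0.0                    # exposure effect before treatment
ite_pre = theta + eps1 - eps0

# the average pre-treatment effect is numerically close to zero
mean_pre = ite_pre.mean()
```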

The DoubleML Coverage Repository includes coverage simulations based on this DGP.

Data Description#

The data is a balanced panel where each unit is observed over n_periods, starting in January 2025.

[3]:
df.groupby("t").size()
[3]:
t
2025-01-01    5000
2025-02-01    5000
2025-03-01    5000
2025-04-01    5000
2025-05-01    5000
2025-06-01    5000
dtype: int64

The treatment column d indicates the first treatment period of the corresponding unit, whereas NaT indicates that a unit is never treated.

Generally, never-treated units should take on either the value ``np.inf`` or ``pd.NaT``, depending on the data type (``float`` or ``datetime``).
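A minimal check of this coding convention (toy series, not the dataset above):

```python
import numpy as np
import pandas as pd

# datetime-typed treatment column: never-treated is pd.NaT
d_datetime = pd.Series(pd.to_datetime(["2025-04-01", None]))
never_treated_dt = d_datetime.isna()

# float-typed treatment column: never-treated is np.inf
d_float = pd.Series([4.0, np.inf])
never_treated_float = np.isinf(d_float)
```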

The individual units are roughly uniformly divided between the groups, where treatment assignment depends on the pre-treatment covariates Z1 to Z4.

[4]:
df.groupby("d", dropna=False).size()
[4]:
d
2025-04-01    7548
2025-05-01    7542
2025-06-01    7248
NaT           7662
dtype: int64

Here, the group indicates the first treated period and NaT units are never treated. To simplify plotting, we add a readable group label below.


To get a better understanding of the underlying data and true effects, we will compare the unconditional averages and the true effects based on the oracle values of individual effects ite.

[6]:
# rename for plotting
df["First Treated"] = df["d"].dt.strftime("%Y-%m").fillna("Never Treated")

# Create aggregation dictionary for means
def agg_dict(col_name):
    return {
        f'{col_name}_mean': (col_name, 'mean'),
        f'{col_name}_lower_quantile': (col_name, lambda x: x.quantile(0.05)),
        f'{col_name}_upper_quantile': (col_name, lambda x: x.quantile(0.95))
    }

# Calculate means and confidence intervals
agg_dictionary = agg_dict("y") | agg_dict("ite")

agg_df = df.groupby(["t", "First Treated"]).agg(**agg_dictionary).reset_index()
agg_df.head()
[6]:
t First Treated y_mean y_lower_quantile y_upper_quantile ite_mean ite_lower_quantile ite_upper_quantile
0 2025-01-01 2025-04 208.549412 198.630328 217.896541 -0.030008 -2.454188 2.200424
1 2025-01-01 2025-05 210.675512 200.717314 221.121875 -0.028021 -2.364197 2.304282
2 2025-01-01 2025-06 212.390878 202.486847 221.996130 -0.052932 -2.324266 2.285985
3 2025-01-01 Never Treated 214.127059 203.924385 224.101361 0.050714 -2.248176 2.166753
4 2025-02-01 2025-04 208.063276 188.700010 226.596263 -0.012369 -2.323319 2.156246
[7]:
def plot_data(df, col_name='y'):
    """
    Create an improved plot with colorblind-friendly features

    Parameters:
    -----------
    df : DataFrame
        The dataframe containing the data
    col_name : str, default='y'
        Column name to plot (will use '{col_name}_mean')
    """
    plt.figure(figsize=(12, 7))
    n_colors = df["First Treated"].nunique()
    color_palette = sns.color_palette("colorblind", n_colors=n_colors)

    sns.lineplot(
        data=df,
        x='t',
        y=f'{col_name}_mean',
        hue='First Treated',
        style='First Treated',
        palette=color_palette,
        markers=True,
        dashes=True,
        linewidth=2.5,
        alpha=0.8
    )

    plt.title(f'Average Values {col_name} by Group Over Time', fontsize=16)
    plt.xlabel('Time', fontsize=14)
    plt.ylabel(f'Average Value {col_name}', fontsize=14)


    plt.legend(title='First Treated', title_fontsize=13, fontsize=12,
               frameon=True, framealpha=0.9, loc='best')

    plt.grid(alpha=0.3, linestyle='-')
    plt.tight_layout()

    plt.show()

Let us take a look at the average values over time:

[8]:
plot_data(agg_df, col_name='y')
../../_images/examples_did_py_panel_16_0.png

Instead, the true average treatment effects can be obtained by averaging the (usually unobserved) ite values.

The true effect just equals the exposure time (in months):

\[ATT(\mathrm{g}, t) = \max(t - \mathrm{g} + 1, 0)\]
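A small helper (hypothetical, using integer period indices) makes this pattern explicit; for the group first treated in period 4, the true effects over periods 1 to 6 are 0, 0, 0, 1, 2, 3, consistent with the oracle ite averages in the plots:

```python
def att_oracle(g, t):
    """True ATT(g, t) in this DGP: exposure time in months, zero before treatment."""
    return max(t - g + 1, 0)

# group first treated in period 4, evaluated over periods 1..6
effects = [att_oracle(4, t) for t in range(1, 7)]  # [0, 0, 0, 1, 2, 3]
```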
[9]:
plot_data(agg_df, col_name='ite')
../../_images/examples_did_py_panel_18_0.png

DoubleMLPanelData#

Finally, we can construct our DoubleMLPanelData, specifying

  • y_col : the outcome

  • d_cols: the group variable indicating the first treated period for each unit

  • id_col: the unique identification column for each unit

  • t_col : the time column

  • x_cols: the additional pre-treatment controls

  • datetime_unit: unit required for datetime columns and plotting

[10]:
dml_data = DoubleMLPanelData(
    data=df,
    y_col="y",
    d_cols="d",
    id_col="id",
    t_col="t",
    x_cols=["Z1", "Z2", "Z3", "Z4"],
    datetime_unit="M"
)
print(dml_data)
================== DoubleMLPanelData Object ==================

------------------ Data summary      ------------------
Outcome variable: y
Treatment variable(s): ['d']
Covariates: ['Z1', 'Z2', 'Z3', 'Z4']
Instrument variable(s): None
Time variable: t
Id variable: id
No. Unique Ids: 5000
No. Observations: 30000

------------------ DataFrame info    ------------------
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 30000 entries, 0 to 29999
Columns: 12 entries, id to First Treated
dtypes: datetime64[s](2), float64(8), int64(1), object(1)
memory usage: 2.7+ MB

ATT Estimation#

The DoubleML-package implements estimation of group-time average treatment effects via the DoubleMLDIDMulti class (see model documentation).

Basics#

The class basically behaves like other DoubleML classes and requires the specification of two learners (for more details on the regression elements, see score documentation).

The basic arguments of a DoubleMLDIDMulti object include

  • ml_g “outcome” regression learner

  • ml_m propensity score learner

  • control_group the control group for the parallel trend assumption

  • gt_combinations combinations of \((\mathrm{g},t_\text{pre}, t_\text{eval})\)

  • anticipation_periods number of anticipation periods

We will construct a dict with “default” arguments.

[11]:
default_args = {
    "ml_g": LGBMRegressor(n_estimators=500, learning_rate=0.01, verbose=-1, random_state=123),
    "ml_m": LGBMClassifier(n_estimators=500, learning_rate=0.01, verbose=-1, random_state=123),
    "control_group": "never_treated",
    "gt_combinations": "standard",
    "anticipation_periods": 0,
    "n_folds": 5,
    "n_rep": 1,
}

The model will be estimated using the fit() method.

[12]:
dml_obj = DoubleMLDIDMulti(dml_data, **default_args)
dml_obj.fit()
print(dml_obj)
================== DoubleMLDIDMulti Object ==================

------------------ Data summary      ------------------
Outcome variable: y
Treatment variable(s): ['d']
Covariates: ['Z1', 'Z2', 'Z3', 'Z4']
Instrument variable(s): None
Time variable: t
Id variable: id
No. Unique Ids: 5000
No. Observations: 30000

------------------ Score & algorithm ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0

------------------ Machine learner   ------------------
Learner ml_g: LGBMRegressor(learning_rate=0.01, n_estimators=500, random_state=123,
              verbose=-1)
Learner ml_m: LGBMClassifier(learning_rate=0.01, n_estimators=500, random_state=123,
               verbose=-1)
Out-of-sample Performance:
Regression:
Learner ml_g0 RMSE: [[1.84029513 1.9385089  1.85441491 2.84059297 3.80117909 1.88449977
  1.94420632 1.8708672  1.84767927 2.77314687 1.87508173 1.95079708
  1.87648198 1.86156218 1.88397239]]
Learner ml_g1 RMSE: [[1.92270892 1.96586194 2.02150029 3.07596974 4.1848624  1.86922927
  1.95104742 1.92566844 1.89756162 2.80994067 1.89639464 1.90925087
  1.90283043 1.93191324 1.82731919]]
Classification:
Learner ml_m Log Loss: [[0.66198912 0.66612794 0.66540882 0.6634429  0.66696117 0.71585901
  0.71618378 0.71788533 0.70757981 0.72344628 0.7203548  0.72314033
  0.72314685 0.71918986 0.71880047]]

------------------ Resampling        ------------------
No. folds: 5
No. repeated sample splits: 1

------------------ Fit summary       ------------------
                                  coef   std err          t         P>|t|  \
ATT(2025-04,2025-01,2025-02) -0.088919  0.120041  -0.740738  4.588525e-01
ATT(2025-04,2025-02,2025-03) -0.230994  0.125153  -1.845694  6.493665e-02
ATT(2025-04,2025-03,2025-04)  0.911047  0.132078   6.897811  5.281109e-12
ATT(2025-04,2025-03,2025-05)  1.788483  0.221126   8.088054  6.661338e-16
ATT(2025-04,2025-03,2025-06)  2.695702  0.290385   9.283189  0.000000e+00
ATT(2025-05,2025-01,2025-02) -0.001312  0.096575  -0.013590  9.891570e-01
ATT(2025-05,2025-02,2025-03) -0.088336  0.096565  -0.914784  3.603048e-01
ATT(2025-05,2025-03,2025-04) -0.033562  0.099822  -0.336216  7.367077e-01
ATT(2025-05,2025-04,2025-05)  0.890718  0.096599   9.220777  0.000000e+00
ATT(2025-05,2025-04,2025-06)  1.924818  0.219230   8.779902  0.000000e+00
ATT(2025-06,2025-01,2025-02) -0.004552  0.087617  -0.051950  9.585684e-01
ATT(2025-06,2025-02,2025-03)  0.032390  0.086574   0.374128  7.083091e-01
ATT(2025-06,2025-03,2025-04) -0.096029  0.086900  -1.105053  2.691368e-01
ATT(2025-06,2025-04,2025-05) -0.095587  0.085713  -1.115193  2.647676e-01
ATT(2025-06,2025-05,2025-06)  0.955500  0.089862  10.633021  0.000000e+00

                                 2.5 %    97.5 %
ATT(2025-04,2025-01,2025-02) -0.324195  0.146357
ATT(2025-04,2025-02,2025-03) -0.476289  0.014301
ATT(2025-04,2025-03,2025-04)  0.652180  1.169915
ATT(2025-04,2025-03,2025-05)  1.355083  2.221883
ATT(2025-04,2025-03,2025-06)  2.126557  3.264847
ATT(2025-05,2025-01,2025-02) -0.190596  0.187971
ATT(2025-05,2025-02,2025-03) -0.277599  0.100927
ATT(2025-05,2025-03,2025-04) -0.229208  0.162085
ATT(2025-05,2025-04,2025-05)  0.701387  1.080048
ATT(2025-05,2025-04,2025-06)  1.495135  2.354501
ATT(2025-06,2025-01,2025-02) -0.176278  0.167174
ATT(2025-06,2025-02,2025-03) -0.137292  0.202071
ATT(2025-06,2025-03,2025-04) -0.266351  0.074292
ATT(2025-06,2025-04,2025-05) -0.263581  0.072408
ATT(2025-06,2025-05,2025-06)  0.779375  1.131626

The summary displays estimates of the \(ATT(g,t_\text{eval})\) effects for different combinations of \((g,t_\text{eval})\) via \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\), where

  • \(\mathrm{g}\) specifies the group

  • \(t_\text{pre}\) specifies the corresponding pre-treatment period

  • \(t_\text{eval}\) specifies the evaluation period

The choice gt_combinations="standard" estimates all possible combinations of \(ATT(g,t_\text{eval})\) via \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\), where the standard choice is \(t_\text{pre} = \min(\mathrm{g}, t_\text{eval}) - 1\) (without anticipation).

Remark that this includes pre-test effects if \(\mathrm{g} > t_\text{eval}\), e.g. \(\widehat{ATT}(g=\text{2025-04}, t_{\text{pre}}=\text{2025-01}, t_{\text{eval}}=\text{2025-02})\), which estimates the pre-trend from January to February even though the actual treatment occurred in April.
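The resulting set of combinations can be sketched with integer period indices (periods 1 to 6, groups first treated in 4, 5, and 6); this reproduces the 15 rows of the fit summary above:

```python
groups = [4, 5, 6]

# standard choice: t_pre = min(g, t_eval) - 1;
# the first period has no earlier period to compare against
gt_combinations = [
    (g, min(g, t_eval) - 1, t_eval)
    for g in groups
    for t_eval in range(2, 7)
]

n_combinations = len(gt_combinations)  # 15
```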

As usual for the DoubleML-package, you can obtain joint confidence intervals via bootstrap.

[13]:
level = 0.95

ci = dml_obj.confint(level=level)
dml_obj.bootstrap(n_rep_boot=5000)
ci_joint = dml_obj.confint(level=level, joint=True)
ci_joint
[13]:
2.5 % 97.5 %
ATT(2025-04,2025-01,2025-02) -0.434242 0.256404
ATT(2025-04,2025-02,2025-03) -0.591023 0.129035
ATT(2025-04,2025-03,2025-04) 0.531098 1.290997
ATT(2025-04,2025-03,2025-05) 1.152366 2.424599
ATT(2025-04,2025-03,2025-06) 1.860348 3.531057
ATT(2025-05,2025-01,2025-02) -0.279130 0.276505
ATT(2025-05,2025-02,2025-03) -0.366124 0.189453
ATT(2025-05,2025-03,2025-04) -0.320719 0.253596
ATT(2025-05,2025-04,2025-05) 0.612831 1.168605
ATT(2025-05,2025-04,2025-06) 1.294157 2.555479
ATT(2025-06,2025-01,2025-02) -0.256600 0.247497
ATT(2025-06,2025-02,2025-03) -0.216658 0.281437
ATT(2025-06,2025-03,2025-04) -0.346016 0.153958
ATT(2025-06,2025-04,2025-05) -0.342158 0.150985
ATT(2025-06,2025-05,2025-06) 0.696995 1.214006

A visualization of the effects can be obtained via the plot_effects() method.

Remark that the plot uses joint confidence intervals by default.

[14]:
dml_obj.plot_effects()
/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/matplotlib/cbook.py:1719: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
  return math.isfinite(val)
[14]:
(<Figure size 1200x800 with 4 Axes>,
 [<Axes: title={'center': 'First Treated: 2025-04'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-05'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-06'}, xlabel='Evaluation Period', ylabel='Effect'>])
../../_images/examples_did_py_panel_30_2.png

Sensitivity Analysis#

As described in the Sensitivity Guide, robustness checks on omitted confounding/parallel trend violations are available via the standard sensitivity_analysis() method.

[15]:
dml_obj.sensitivity_analysis()
print(dml_obj.sensitivity_summary)
================== Sensitivity Analysis ==================

------------------ Scenario          ------------------
Significance Level: level=0.95
Sensitivity parameters: cf_y=0.03; cf_d=0.03, rho=1.0

------------------ Bounds with CI    ------------------
                              CI lower  theta lower     theta  theta upper  \
ATT(2025-04,2025-01,2025-02) -0.401677    -0.203490 -0.088919     0.025652
ATT(2025-04,2025-02,2025-03) -0.543680    -0.339912 -0.230994    -0.122076
ATT(2025-04,2025-03,2025-04)  0.598865     0.806389  0.911047     1.015705
ATT(2025-04,2025-03,2025-05)  1.273616     1.625695  1.788483     1.951270
ATT(2025-04,2025-03,2025-06)  2.030685     2.514192  2.695702     2.877213
ATT(2025-05,2025-01,2025-02) -0.262094    -0.104475 -0.001312     0.101850
ATT(2025-05,2025-02,2025-03) -0.347309    -0.190918 -0.088336     0.014246
ATT(2025-05,2025-03,2025-04) -0.295408    -0.133793 -0.033562     0.066670
ATT(2025-05,2025-04,2025-05)  0.627332     0.786242  0.890718     0.995193
ATT(2025-05,2025-04,2025-06)  1.500458     1.808306  1.924818     2.041331
ATT(2025-06,2025-01,2025-02) -0.253658    -0.109820 -0.004552     0.100717
ATT(2025-06,2025-02,2025-03) -0.213976    -0.072785  0.032390     0.137565
ATT(2025-06,2025-03,2025-04) -0.342704    -0.199869 -0.096029     0.007810
ATT(2025-06,2025-04,2025-05) -0.340159    -0.199547 -0.095587     0.008373
ATT(2025-06,2025-05,2025-06)  0.705910     0.853024  0.955500     1.057976

                              CI upper
ATT(2025-04,2025-01,2025-02)  0.223726
ATT(2025-04,2025-02,2025-03)  0.089591
ATT(2025-04,2025-03,2025-04)  1.244915
ATT(2025-04,2025-03,2025-05)  2.329210
ATT(2025-04,2025-03,2025-06)  3.358594
ATT(2025-05,2025-01,2025-02)  0.262436
ATT(2025-05,2025-02,2025-03)  0.176825
ATT(2025-05,2025-03,2025-04)  0.234030
ATT(2025-05,2025-04,2025-05)  1.154525
ATT(2025-05,2025-04,2025-06)  2.464599
ATT(2025-06,2025-01,2025-02)  0.245425
ATT(2025-06,2025-02,2025-03)  0.281602
ATT(2025-06,2025-03,2025-04)  0.151202
ATT(2025-06,2025-04,2025-05)  0.150119
ATT(2025-06,2025-05,2025-06)  1.206831

------------------ Robustness Values ------------------
                              H_0     RV (%)    RVa (%)
ATT(2025-04,2025-01,2025-02)  0.0   2.336346   0.000357
ATT(2025-04,2025-02,2025-03)  0.0   6.254804   0.673535
ATT(2025-04,2025-03,2025-04)  0.0  23.232415  18.506136
ATT(2025-04,2025-03,2025-05)  0.0  28.331221  23.303721
ATT(2025-04,2025-03,2025-06)  0.0  36.148569  26.584944
ATT(2025-05,2025-01,2025-02)  0.0   0.038605   0.000511
ATT(2025-05,2025-02,2025-03)  0.0   2.588916   0.000599
ATT(2025-05,2025-03,2025-04)  0.0   1.014895   0.000590
ATT(2025-05,2025-04,2025-05)  0.0  22.815156  18.963884
ATT(2025-05,2025-04,2025-06)  0.0  39.228597  27.611722
ATT(2025-06,2025-01,2025-02)  0.0   0.131475   0.000546
ATT(2025-06,2025-02,2025-03)  0.0   0.933810   0.000489
ATT(2025-06,2025-03,2025-04)  0.0   2.777608   0.000629
ATT(2025-06,2025-04,2025-05)  0.0   2.761827   0.000652
ATT(2025-06,2025-05,2025-06)  0.0  24.653276  21.204486

In this example, one can clearly distinguish the robustness of the non-zero effects from that of the pre-treatment periods.

Control Groups#

The current implementation supports the following control groups:

  • "never_treated"

  • "not_yet_treated"

Remark that the ``"not_yet_treated"`` control group depends on the number of anticipation periods.

For differences and recommendations, we refer to Callaway and Sant’Anna (2021).

[16]:
dml_obj_nyt = DoubleMLDIDMulti(dml_data, **(default_args | {"control_group": "not_yet_treated"}))
dml_obj_nyt.fit()
dml_obj_nyt.bootstrap(n_rep_boot=5000)
dml_obj_nyt.plot_effects()
/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/matplotlib/cbook.py:1719: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
  return math.isfinite(val)
[16]:
(<Figure size 1200x800 with 4 Axes>,
 [<Axes: title={'center': 'First Treated: 2025-04'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-05'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-06'}, xlabel='Evaluation Period', ylabel='Effect'>])
../../_images/examples_did_py_panel_35_2.png

Linear Covariate Adjustment#

Remark that we relied on boosted trees to adjust for conditional parallel trends, which allows for a nonlinear adjustment. For comparison, we can instead rely on linear learners.

Remark that the DGP (``dgp_type=4``) is based on nonlinear conditional expectations, such that the estimates will be biased.

[17]:
linear_learners = {
    "ml_g": LinearRegression(),
    "ml_m": LogisticRegression(),
}

dml_obj_linear = DoubleMLDIDMulti(dml_data, **(default_args | linear_learners))
dml_obj_linear.fit()
dml_obj_linear.bootstrap(n_rep_boot=5000)
dml_obj_linear.plot_effects()
/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/matplotlib/cbook.py:1719: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
  return math.isfinite(val)
[17]:
(<Figure size 1200x800 with 4 Axes>,
 [<Axes: title={'center': 'First Treated: 2025-04'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-05'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-06'}, xlabel='Evaluation Period', ylabel='Effect'>])
../../_images/examples_did_py_panel_37_2.png

Aggregated Effects#

As in the did R package, the \(ATT\)’s can be aggregated to summarize multiple effects. For details on the different aggregations and their interpretations, see Callaway and Sant’Anna (2021).

The aggregations are implemented via the aggregate() method.

Group Aggregation#

To obtain group-specific effects, one would like to average \(ATT(\mathrm{g}, t_\text{eval})\) over \(t_\text{eval}\). As a sample oracle, we will combine all ite’s based on group \(\mathrm{g}\).

[18]:
df_post_treatment = df[df["t"] >= df["d"]]
df_post_treatment.groupby("d")["ite"].mean()
[18]:
d
2025-04-01    1.948194
2025-05-01    1.445309
2025-06-01    0.987942
Name: ite, dtype: float64

To obtain group-specific effects it is possible to aggregate several \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\) values based on the group \(\mathrm{g}\) by setting the aggregation="group" argument.

[19]:
aggregated_group = dml_obj.aggregate(aggregation="group")
print(aggregated_group)
_ = aggregated_group.plot_effects()
================== DoubleMLDIDAggregation Object ==================
 Group Aggregation

------------------ Overall Aggregated Effects ------------------
    coef  std err         t  P>|t|    2.5 %   97.5 %
1.393019 0.114029 12.216409    0.0 1.169527 1.616511
------------------ Aggregated Effects         ------------------
             coef   std err          t  P>|t|     2.5 %    97.5 %
2025-04  1.798411  0.191624   9.385119    0.0  1.422835  2.173986
2025-05  1.407768  0.139840  10.066966    0.0  1.133686  1.681850
2025-06  0.955500  0.089862  10.633021    0.0  0.779375  1.131626
------------------ Additional Information     ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0

/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/doubleml/did/did_aggregation.py:368: UserWarning: Joint confidence intervals require bootstrapping which hasn't been performed yet. Automatically applying '.aggregated_frameworks.bootstrap(method="normal", n_rep_boot=500)' with default values. For different bootstrap settings, call bootstrap() explicitly before plotting.
  warnings.warn(
../../_images/examples_did_py_panel_42_2.png

The output is a DoubleMLDIDAggregation object which includes an overall aggregation summary based on group size.

Time Aggregation#

To obtain time-specific effects, one would like to average \(ATT(\mathrm{g}, t_\text{eval})\) over \(\mathrm{g}\) (respecting group size). As oracle values, we obtain

[20]:
df_post_treatment.groupby("t")["ite"].mean()
[20]:
t
2025-04-01    0.986403
2025-05-01    1.437635
2025-06-01    1.966929
Name: ite, dtype: float64

Aggregating \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\) based on \(t_\text{eval}\), weighted with respect to group size, corresponds to the Calendar Time Effects from the did R package.

For calendar time effects set aggregation="time".

[21]:
aggregated_time = dml_obj.aggregate("time")
print(aggregated_time)
fig, ax = aggregated_time.plot_effects()
================== DoubleMLDIDAggregation Object ==================
 Time Aggregation

------------------ Overall Aggregated Effects ------------------
    coef  std err         t  P>|t|    2.5 %   97.5 %
1.373871 0.127152 10.804944    0.0 1.124657 1.623084
------------------ Aggregated Effects         ------------------
             coef   std err          t         P>|t|     2.5 %    97.5 %
2025-04  0.911047  0.132078   6.897811  5.281109e-12  0.652180  1.169915
2025-05  1.339779  0.136448   9.818953  0.000000e+00  1.072345  1.607213
2025-06  1.870786  0.162970  11.479337  0.000000e+00  1.551371  2.190201
------------------ Additional Information     ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0

/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/doubleml/did/did_aggregation.py:368: UserWarning: Joint confidence intervals require bootstrapping which hasn't been performed yet. Automatically applying '.aggregated_frameworks.bootstrap(method="normal", n_rep_boot=500)' with default values. For different bootstrap settings, call bootstrap() explicitly before plotting.
  warnings.warn(
../../_images/examples_did_py_panel_47_2.png
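The group-size weighting can be checked by hand. For \(t_\text{eval} = \text{2025-05}\), only the groups first treated in 2025-04 and 2025-05 contribute; weighting their \(\widehat{ATT}\) estimates from the fit summary by the row counts from the group table reproduces the aggregated effect (a back-of-the-envelope check, not the package computation):

```python
# row counts per group (proportional to the number of units) and the
# ATT(g, ., 2025-05) estimates from the fit summary
sizes = {"2025-04": 7548, "2025-05": 7542}
atts = {"2025-04": 1.788483, "2025-05": 0.890718}

total = sum(sizes.values())
weighted = sum(sizes[g] / total * atts[g] for g in sizes)
# close to the reported calendar-time effect for 2025-05 (1.339779)
```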

Event Study Aggregation#

To obtain event-study-type effects, one would like to aggregate \(ATT(\mathrm{g}, t_\text{eval})\) over \(e = t_\text{eval} - \mathrm{g}\) (respecting group size). As oracle values, we obtain

[22]:
df["e"] = pd.to_datetime(df["t"]).values.astype("datetime64[M]") - \
    pd.to_datetime(df["d"]).values.astype("datetime64[M]")
df.groupby("e")["ite"].mean()[1:]
[22]:
e
-122 days   -0.015748
-92 days    -0.015152
-61 days     0.010536
-31 days    -0.021038
0 days       0.969509
31 days      1.947854
59 days      2.918196
Name: ite, dtype: float64
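The differences above are day-based timedeltas; integer month differences can be computed directly from the year and month components (a sketch on toy columns, not the dataframe above):

```python
import pandas as pd

t = pd.to_datetime(pd.Series(["2025-03-01", "2025-06-01"]))
d = pd.to_datetime(pd.Series(["2025-04-01", "2025-04-01"]))

# event time in whole months: e = t - d
e_months = (t.dt.year - d.dt.year) * 12 + (t.dt.month - d.dt.month)
# -> [-1, 2]
```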

Analogously, aggregation="eventstudy" aggregates \(\widehat{ATT}(\mathrm{g},t_\text{pre},t_\text{eval})\) based on exposure time \(e = t_\text{eval} - \mathrm{g}\) (respecting group size).

[23]:
aggregated_eventstudy = dml_obj.aggregate("eventstudy")
print(aggregated_eventstudy)
aggregated_eventstudy.plot_effects()
================== DoubleMLDIDAggregation Object ==================
 Event Study Aggregation

------------------ Overall Aggregated Effects ------------------
    coef  std err         t  P>|t|   2.5 %   97.5 %
1.823644 0.168689 10.810675    0.0 1.49302 2.154269
------------------ Aggregated Effects         ------------------
               coef   std err          t     P>|t|     2.5 %    97.5 %
-4 months -0.004552  0.087617  -0.051950  0.958568 -0.176278  0.167174
-3 months  0.015204  0.069976   0.217270  0.827998 -0.121946  0.152353
-2 months -0.091029  0.068167  -1.335379  0.181752 -0.224635  0.042576
-1 months -0.120399  0.066892  -1.799900  0.071876 -0.251505  0.010707
0 months   0.918607  0.071809  12.792308  0.000000  0.777863  1.059351
1 months   1.856624  0.191100   9.715465  0.000000  1.482075  2.231172
2 months   2.695702  0.290385   9.283189  0.000000  2.126557  3.264847
------------------ Additional Information     ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0

/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/doubleml/did/did_aggregation.py:368: UserWarning: Joint confidence intervals require bootstrapping which hasn't been performed yet. Automatically applying '.aggregated_frameworks.bootstrap(method="normal", n_rep_boot=500)' with default values. For different bootstrap settings, call bootstrap() explicitly before plotting.
  warnings.warn(
[23]:
(<Figure size 1200x600 with 1 Axes>,
 <Axes: title={'center': 'Aggregated Treatment Effects'}, ylabel='Effect'>)
../../_images/examples_did_py_panel_51_3.png

Aggregation Details#

The DoubleMLDIDAggregation objects include several DoubleMLFrameworks which support methods like bootstrap() or confint(). Further, the weights can be accessed via the properties

  • overall_aggregation_weights: weights for the overall aggregation

  • aggregation_weights: weights for the aggregation

To illustrate, consider the event study aggregation:

[24]:
print(aggregated_eventstudy)
================== DoubleMLDIDAggregation Object ==================
 Event Study Aggregation

------------------ Overall Aggregated Effects ------------------
    coef  std err         t  P>|t|   2.5 %   97.5 %
1.823644 0.168689 10.810675    0.0 1.49302 2.154269
------------------ Aggregated Effects         ------------------
               coef   std err          t     P>|t|     2.5 %    97.5 %
-4 months -0.004552  0.087617  -0.051950  0.958568 -0.176278  0.167174
-3 months  0.015204  0.069976   0.217270  0.827998 -0.121946  0.152353
-2 months -0.091029  0.068167  -1.335379  0.181752 -0.224635  0.042576
-1 months -0.120399  0.066892  -1.799900  0.071876 -0.251505  0.010707
0 months   0.918607  0.071809  12.792308  0.000000  0.777863  1.059351
1 months   1.856624  0.191100   9.715465  0.000000  1.482075  2.231172
2 months   2.695702  0.290385   9.283189  0.000000  2.126557  3.264847
------------------ Additional Information     ------------------
Score function: observational
Control group: never_treated
Anticipation periods: 0

Here, the overall effect aggregation averages each effect with non-negative exposure \(e \geq 0\):

[25]:
print(aggregated_eventstudy.overall_aggregation_weights)
[0.         0.         0.         0.         0.33333333 0.33333333
 0.33333333]

To see how the aggregated effect for \(e=0\) is computed, one has to look at the corresponding set of weights within the aggregation_weights property.

[26]:
# the weights for e=0 correspond to the fifth element of the aggregation weights
aggregated_eventstudy.aggregation_weights[4]
[26]:
array([0.        , 0.        , 0.33789954, 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.33763094, 0.        ,
       0.        , 0.        , 0.        , 0.        , 0.32446951])

Taking a look at the original dml_obj, one can see that this combines the following estimates (only showing the month):

  • \(\widehat{ATT}(04,03,04)\)

  • \(\widehat{ATT}(05,04,05)\)

  • \(\widehat{ATT}(06,05,06)\)
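These weights mirror the treated-group shares; combining the three coefficients with group-share weights reproduces the aggregated \(e=0\) effect (a sanity check using the numbers printed above, not the package computation):

```python
sizes = [7548, 7542, 7248]              # rows per treated group (from the d table)
coefs = [0.911047, 0.890718, 0.955500]  # ATT(04,03,04), ATT(05,04,05), ATT(06,05,06)

total = sum(sizes)
weights = [s / total for s in sizes]    # ~[0.3379, 0.3376, 0.3245], as above
agg_e0 = sum(w * c for w, c in zip(weights, coefs))
# close to the reported e = 0 effect (0.918607)
```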

[27]:
print(dml_obj.summary["coef"])
ATT(2025-04,2025-01,2025-02)   -0.088919
ATT(2025-04,2025-02,2025-03)   -0.230994
ATT(2025-04,2025-03,2025-04)    0.911047
ATT(2025-04,2025-03,2025-05)    1.788483
ATT(2025-04,2025-03,2025-06)    2.695702
ATT(2025-05,2025-01,2025-02)   -0.001312
ATT(2025-05,2025-02,2025-03)   -0.088336
ATT(2025-05,2025-03,2025-04)   -0.033562
ATT(2025-05,2025-04,2025-05)    0.890718
ATT(2025-05,2025-04,2025-06)    1.924818
ATT(2025-06,2025-01,2025-02)   -0.004552
ATT(2025-06,2025-02,2025-03)    0.032390
ATT(2025-06,2025-03,2025-04)   -0.096029
ATT(2025-06,2025-04,2025-05)   -0.095587
ATT(2025-06,2025-05,2025-06)    0.955500
Name: coef, dtype: float64

Anticipation#

As described in the Model Guide, one can include anticipation periods \(\delta>0\) by setting the anticipation_periods parameter.

Data with Anticipation#

The DGP allows the inclusion of anticipation periods via the anticipation_periods parameter. In this case, the observations are “shifted” such that units anticipate the effect earlier, and the exposure effect is increased by the number of anticipated periods.

[28]:
n_obs = 4000
n_periods = 6

df_anticipation = make_did_CS2021(n_obs, dgp_type=4, n_periods=n_periods, n_pre_treat_periods=3, time_type="datetime", anticipation_periods=1)

print(df_anticipation.shape)
df_anticipation.head()

(19410, 10)
[28]:
id y y0 y1 d t Z1 Z2 Z3 Z4
1 0 211.690470 211.690470 211.091121 2025-06-01 2025-01-01 -0.710666 0.556344 -0.209281 0.382169
2 0 211.772159 211.772159 211.619900 2025-06-01 2025-02-01 -0.710666 0.556344 -0.209281 0.382169
3 0 211.698657 211.698657 211.281907 2025-06-01 2025-03-01 -0.710666 0.556344 -0.209281 0.382169
4 0 209.093983 209.093983 209.745386 2025-06-01 2025-04-01 -0.710666 0.556344 -0.209281 0.382169
5 0 212.399239 211.565163 212.399239 2025-06-01 2025-05-01 -0.710666 0.556344 -0.209281 0.382169

To visualize the anticipation, we will again plot the “oracle” values.

[29]:
df_anticipation["ite"] = df_anticipation["y1"] - df_anticipation["y0"]
df_anticipation["First Treated"] = df_anticipation["d"].dt.strftime("%Y-%m").fillna("Never Treated")
agg_df_anticipation = df_anticipation.groupby(["t", "First Treated"]).agg(**agg_dictionary).reset_index()
agg_df_anticipation.head()
[29]:
t First Treated y_mean y_lower_quantile y_upper_quantile ite_mean ite_lower_quantile ite_upper_quantile
0 2025-01-01 2025-04 208.535356 192.063882 224.770222 -0.019135 -2.375457 2.315003
1 2025-01-01 2025-05 210.601045 193.781049 227.723712 -0.014827 -2.333886 2.186788
2 2025-01-01 2025-06 212.284276 193.644525 229.807207 0.047682 -2.119362 2.249240
3 2025-01-01 Never Treated 217.025829 200.380353 233.456594 0.074036 -2.297915 2.226158
4 2025-02-01 2025-04 208.253662 184.114287 232.236940 0.041466 -2.388959 2.271953

One can see that the effect is already anticipated one period before the actual treatment assignment.

[30]:
plot_data(agg_df_anticipation, col_name='ite')
../../_images/examples_did_py_panel_66_0.png

Initialize a corresponding DoubleMLPanelData object.

[31]:
dml_data_anticipation = DoubleMLPanelData(
    data=df_anticipation,
    y_col="y",
    d_cols="d",
    id_col="id",
    t_col="t",
    x_cols=["Z1", "Z2", "Z3", "Z4"],
    datetime_unit="M"
)

ATT Estimation#

Let us first take a look at the estimation without accounting for anticipation.

[32]:
dml_obj_anticipation = DoubleMLDIDMulti(dml_data_anticipation, **default_args)
dml_obj_anticipation.fit()
dml_obj_anticipation.bootstrap(n_rep_boot=5000)
dml_obj_anticipation.plot_effects()
/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/doubleml/double_ml.py:1479: UserWarning: The estimated nu2 for d is not positive. Re-estimation based on riesz representer (non-orthogonal).
  warnings.warn(msg, UserWarning)
/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/matplotlib/cbook.py:1719: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
  return math.isfinite(val)
[32]:
(<Figure size 1200x800 with 4 Axes>,
 [<Axes: title={'center': 'First Treated: 2025-04'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-05'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-06'}, xlabel='Evaluation Period', ylabel='Effect'>])
../../_images/examples_did_py_panel_70_2.png

The effects are clearly biased. To account for anticipation, one can adjust the anticipation_periods parameter; the outcome regression (and the set of not-yet-treated units) is adjusted correspondingly.

[33]:
dml_obj_anticipation = DoubleMLDIDMulti(dml_data_anticipation, **(default_args| {"anticipation_periods": 1}))
dml_obj_anticipation.fit()
dml_obj_anticipation.bootstrap(n_rep_boot=5000)
dml_obj_anticipation.plot_effects()
/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/matplotlib/cbook.py:1719: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
  return math.isfinite(val)
[33]:
(<Figure size 1200x800 with 4 Axes>,
 [<Axes: title={'center': 'First Treated: 2025-04'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-05'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-06'}, xlabel='Evaluation Period', ylabel='Effect'>])
../../_images/examples_did_py_panel_72_2.png

Group-Time Combinations#

The default option gt_combinations="standard" includes all group-time values with the specific choice \(t_\text{pre} = \min(\mathrm{g}, t_\text{eval}) - 1\) (without anticipation), which corresponds to the weakest possible parallel trend assumption.
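As a sketch, this rule can be enumerated directly, using integer period indices 1 to 6 and groups first treated in periods 4 to 6 as stand-ins for the months above (illustrative only):

```python
periods = range(1, 7)   # t = 1, ..., 6 (stand-ins for 2025-01, ..., 2025-06)
groups = [4, 5, 6]      # first-treatment periods

# standard rule: t_pre = min(g, t_eval) - 1
standard = [
    (g, min(g, t_eval) - 1, t_eval)
    for g in groups
    for t_eval in periods
    if t_eval >= 2      # evaluation starts in the second period
]
# for g=4 this yields (4,1,2), (4,2,3), (4,3,4), (4,3,5), (4,3,6),
# matching the 15 coefficients printed in the summary above
```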

Other options are possible, e.g. restricting the estimation to specific combinations of \((\mathrm{g},t_\text{pre},t_\text{eval})\).

All Combinations#

The option gt_combinations="all" includes all relevant group time values with \(t_\text{pre} < \min(\mathrm{g}, t_\text{eval})\), including longer parallel trend assumptions. This can result in multiple estimates for the same \(ATT(\mathrm{g},t)\), which have slightly different assumptions (length of parallel trends).
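In the same illustrative integer notation as above, the "all" option corresponds to enumerating every admissible pre-treatment period (a sketch, not the package's internal code):

```python
periods = range(1, 7)   # t = 1, ..., 6
groups = [4, 5, 6]      # first-treatment periods

# "all" rule: every t_pre with t_pre < min(g, t_eval)
all_combos = [
    (g, t_pre, t_eval)
    for g in groups
    for t_eval in periods
    if t_eval >= 2
    for t_pre in periods
    if t_pre < min(g, t_eval)
]
# e.g. ATT(g=4, t_eval=5) now appears with t_pre in {1, 2, 3},
# i.e. under parallel trend assumptions of different lengths
```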

[34]:
dml_obj_all = DoubleMLDIDMulti(dml_data, **(default_args| {"gt_combinations": "all"}))
dml_obj_all.fit()
dml_obj_all.bootstrap(n_rep_boot=5000)
dml_obj_all.plot_effects()
[34]:
(<Figure size 1200x800 with 4 Axes>,
 [<Axes: title={'center': 'First Treated: 2025-04'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-05'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-06'}, xlabel='Evaluation Period', ylabel='Effect'>])
../../_images/examples_did_py_panel_75_1.png

Universal Base Period#

The option gt_combinations="universal" sets \(t_\text{pre} = \mathrm{g} - \delta - 1\), corresponding to a universal/constant comparison or base period.

Remark that this implies \(t_\text{pre} > t_\text{eval}\) for all pre-treatment periods (accounting for anticipation). Therefore, these effects do not have the same straightforward interpretation as ATTs.
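Continuing the illustrative integer notation, the universal rule fixes a single base period per group. This is a sketch under the assumption \(\delta = 0\); the exact set of combinations used by the package may differ at the boundaries:

```python
periods = range(1, 7)   # t = 1, ..., 6
groups = [4, 5, 6]      # first-treatment periods
delta = 0               # no anticipation

# universal rule: constant base period t_pre = g - delta - 1
universal = [
    (g, g - delta - 1, t_eval)
    for g in groups
    for t_eval in periods
    if t_eval != g - delta - 1   # skip the base period itself
]
# for g=4 the base period is t=3; pre-treatment effects such as
# (4, 3, 2) compare "backwards", i.e. t_pre > t_eval
```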

[35]:
dml_obj_universal = DoubleMLDIDMulti(dml_data, **(default_args| {"gt_combinations": "universal"}))
dml_obj_universal.fit()
dml_obj_universal.bootstrap(n_rep_boot=5000)
dml_obj_universal.plot_effects()
/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/doubleml/double_ml.py:1479: UserWarning: The estimated nu2 for d is not positive. Re-estimation based on riesz representer (non-orthogonal).
  warnings.warn(msg, UserWarning)
/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/matplotlib/cbook.py:1719: FutureWarning: Calling float on a single element Series is deprecated and will raise a TypeError in the future. Use float(ser.iloc[0]) instead
  return math.isfinite(val)
[35]:
(<Figure size 1200x800 with 4 Axes>,
 [<Axes: title={'center': 'First Treated: 2025-04'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-05'}, ylabel='Effect'>,
  <Axes: title={'center': 'First Treated: 2025-06'}, xlabel='Evaluation Period', ylabel='Effect'>])
../../_images/examples_did_py_panel_77_2.png

Selected Combinations#

Alternatively, it is possible to submit a list of tuples containing \((\mathrm{g}, t_\text{pre}, t_\text{eval})\) combinations, e.g. only two combinations:

[36]:
gt_dict = {
    "gt_combinations": [
        (np.datetime64('2025-04'),
         np.datetime64('2025-01'),
         np.datetime64('2025-02')),
        (np.datetime64('2025-04'),
         np.datetime64('2025-02'),
         np.datetime64('2025-03')),
    ]
}

dml_obj_all = DoubleMLDIDMulti(dml_data, **(default_args| gt_dict))
dml_obj_all.fit()
dml_obj_all.bootstrap(n_rep_boot=5000)
dml_obj_all.plot_effects()
[36]:
(<Figure size 1200x800 with 2 Axes>,
 [<Axes: title={'center': 'First Treated: 2025-04'}, xlabel='Evaluation Period', ylabel='Effect'>])
../../_images/examples_did_py_panel_79_1.png