In this notebook, we demonstrate how the DoubleML package can be used to estimate the causal effect of seeing a new ad design on customers' purchases in a webshop. The estimation steps of our analysis follow the DoubleML workflow.
Let's consider the following stylized scenario. The manager of a webshop performs an A/B test to estimate the average effect a new ad design $A$ has on customers' purchases (in $100\$$), $Y$. This effect is called the **A**verage **T**reatment **E**ffect (**ATE**). The treatment is assigned randomly conditional on the visitors' characteristics, which we call $V$. Such characteristics could be collected from a customer's shopper account, for example. These might include the number of previous purchases, the time since the last purchase, the length of stay on a page, as well as whether a customer has a rewards card, among other characteristics.
In the following, we use a Directed Acyclic Graph (DAG) to illustrate our assumptions on the causal structure of the scenario. As not only the outcome but also the treatment depends on the individual characteristics, there are arrows going from $V$ to both $A$ and $Y$. In our example, we also assume that the treatment $A$ is a direct cause of the customers' purchases $Y$.
Let's assume the conditional randomization has been conducted properly, such that a tidy data set has been collected. Now, a data scientist wants to use the DoubleML package to evaluate whether the new ad design causally affected sales.
Before we start the case study, let us briefly address the question of why we need to include individual characteristics in our analysis at all. There are mainly two reasons to control for observable characteristics. First, so-called confounders, i.e., variables that have a causal effect on both the treatment variable and the outcome variable, can bias our estimate. In order to uncover the true causal effect of the treatment, our causal framework must take all confounding variables into account; otherwise, the average causal effect of the treatment on the outcome is not identified. A second reason to include individual characteristics is efficiency. The more of the variation our causal framework can explain, the more precise the resulting estimate will be. In practical terms, greater efficiency leads to tighter confidence intervals, smaller standard errors and smaller p-values. This can help to improve the power of A/B tests even if the treatment is unconditionally assigned to individuals.
ML methods have turned out to be very flexible in modeling complex relationships between explanatory variables and dependent variables and, thus, have exhibited great predictive performance in many applications. In the double machine learning approach (Chernozhukov et al. (2018)), ML methods are used to model so-called nuisance functions. In terms of the A/B case study considered here, ML tools can be used to flexibly control for confounding variables. For example, a linear parametric specification as in a standard linear regression model might not be correct and, hence, not sufficient to account for the underlying confounding. Moreover, by using powerful ML techniques, the causal model will likely explain a greater share of the total variation and, hence, lead to more precise estimation.
As an illustrative example we use a data set from the ACIC 2019 Data Challenge. In this challenge, a great number of data sets were generated to mimic the distributional relationships found in many real economic data applications. Although the data have not been generated explicitly to address an A/B testing case study, they are well-suited for demonstration purposes. We will focus on one of the many different data generating processes (DGPs), which we picked at random; in this particular case, a data set called high42. An advantage of using the synthetic ACIC 2019 data is that we know the true average treatment effect, which is 0.8 in our data set.
# Load required modules
import numpy as np
import pandas as pd
import doubleml as dml
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LassoCV, LogisticRegressionCV
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error
from sklearn.metrics import log_loss
from xgboost import XGBClassifier, XGBRegressor
import matplotlib.pyplot as plt
import scipy.stats as stats
First, we load the data.
# Load data set from url (internet connection required)
url = 'https://raw.githubusercontent.com/DoubleML/doubleml-docs/master/doc/examples/data/high42.CSV'
df = pd.read_csv(url)
print(df.shape)
df.head()
We see that the data set consists of 1000 observations (= website visitors) and 202 variables:

- Y: A customer's purchases (in $100\$$)
- A: Binary treatment variable with value 1 indicating that a customer has been exposed to the new ad design (and value 0 otherwise)
- V1, ..., V200: The remaining 200 columns $V$ represent individual characteristics of the customers (= confounders)

To start our analysis, we initialize the data backend from the previously loaded data set, i.e., we create a new instance of a DoubleMLData object. During initialization, we specify the roles of the variables in the data set: in our example, the outcome variable $Y$ via the parameter y_col, the treatment variable $A$ via d_cols and the confounding variables $V$ via x_cols.
# Specify explanatory variables for data-backend
features_base = list(df.columns.values)[2:]
# TODO: Initialize DoubleMLData (data-backend of DoubleML)
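A minimal sketch of this step could look as follows; the object name dml_data and the use of all remaining columns as controls are our choices.
# initialize the data backend with the roles of the variables
dml_data = dml.DoubleMLData(df, y_col='Y', d_cols='A', x_cols=features_base)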
We can print the data-backend to see the variables, which we have assigned as outcome, treatment and controls.
# TODO: print data backend
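Assuming the data backend from the previous step is stored as dml_data, a simple print shows the assigned roles:
print(dml_data)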
The inference problem is to determine the causal effect of seeing the new ad design $A$ on customers' purchases $Y$ once we control for individual characteristics $V$. In our example, we are interested in the average treatment effect. There are two causal models available in DoubleML that can be used to estimate the ATE.
The so-called interactive regression model (IRM), implemented in DoubleMLIRM, is a flexible (nonparametric) model to estimate this causal quantity. The model does not impose functional form restrictions on the underlying regression relationships, such as linearity or additivity as in a standard linear regression model. This means that the model accommodates heterogeneous treatment effects, i.e., it accounts for variation in the effect of the new ad design across customers. Moreover, the IRM makes it possible to estimate other causal parameters as well, for example the average treatment effect on the treated (= those customers who have been exposed to the new ad), which might be of interest too.
We briefly introduce the interactive regression model. The main regression relationship of interest is provided by

$$Y = g_0(A, V) + U_1, \quad \mathbb{E}(U_1 \mid V, A) = 0,$$

where the treatment variable is binary, $A \in \lbrace 0,1 \rbrace$. We consider estimation of the average treatment effect (ATE)

$$\theta_0 = \mathbb{E}[g_0(1, V) - g_0(0, V)]$$

when treatment effects are heterogeneous. In order to be able to use ML methods, the estimation framework generally requires a property called "double robustness" or "Neyman orthogonality". In the IRM, double robustness can be achieved by including the first-stage estimation

$$A = m_0(V) + U_2, \quad \mathbb{E}(U_2 \mid V) = 0,$$

which amounts to estimation of the propensity score, i.e., the probability that a customer is exposed to the treatment given her observed characteristics. Both predictions are then combined in the doubly robust score for the average treatment effect, which is given by

$$\psi(W; \theta, \eta) := g(1, V) - g(0, V) + \frac{A (Y - g(1, V))}{m(V)} - \frac{(1 - A)(Y - g(0, V))}{1 - m(V)} - \theta.$$

As a naive estimate, we could calculate the unconditional average treatment effect. In other words, we simply take the difference between the average of $Y$ for the customers who have been exposed to the treatment $(A=1)$ and for those who haven't been exposed $(A=0)$.
Since the unconditional ATE does not account for the confounding variables, it will generally not correspond to the true ATE (only under unconditionally random treatment assignment would the two coincide). For example, if the unconditional ATE estimate is greater than the actual ATE, the manager would overstate the effect of the new ad design and probably make misleading decisions about the marketing budget in the future.
# TODO: Calculate unconditional average treatment effect
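One way to compute this naive estimate is a simple difference in group means; the variable name naive_ate is our choice:
# naive estimate: difference in mean purchases between treated and untreated customers
naive_ate = df.loc[df['A'] == 1, 'Y'].mean() - df.loc[df['A'] == 0, 'Y'].mean()
print(f'Unconditional ATE estimate: {naive_ate:.4f}')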
In this step, we define the learners that will be used for estimation of the nuisance functions later.
Let us first start with a benchmark model based on (unpenalized) linear and logistic regression. Hence, we estimate the function $g_0(A,V)$ using a linear regression model and $m_0(V)$ using an (unpenalized) logistic regression. In both cases, we include all available characteristics $V$. We will later compare the performance of this model to that of more advanced ML methods.
# TODO: Initialize Linear and Logistic Regression learners
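A possible benchmark specification, assuming a recent scikit-learn version (penalty=None yields an unpenalized logistic regression; older versions use penalty='none'):
# learner for the main regression g_0(A, V)
ml_g_lin = LinearRegression()
# learner for the propensity score m_0(V), unpenalized (scikit-learn >= 1.2)
ml_m_log = LogisticRegression(penalty=None, max_iter=1000)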
# TODO: Initialize one ML learner of your choice
# TODO: Initialize a second ML learner of your choice
# (proceed as long as you like)
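For example, random forests and extreme gradient boosting could serve as more flexible learners; the hyperparameters below are illustrative and not tuned:
# random forest learners
ml_g_rf = RandomForestRegressor(n_estimators=500, max_depth=10)
ml_m_rf = RandomForestClassifier(n_estimators=500, max_depth=10)
# extreme gradient boosting learners
ml_g_xgb = XGBRegressor(n_estimators=200, learning_rate=0.05)
ml_m_xgb = XGBClassifier(n_estimators=200, learning_rate=0.05)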
At this stage, we instantiate a causal model object of the class DoubleMLIRM. Provide the learners via the parameters ml_g and ml_m. You can either stick with the default settings or change the parameters; see the API documentation for details.
Hint: Use numpy.random.seed to set a random seed prior to your initialization. This makes the sample splits of the different models comparable. Also try to use the same DML specifications in all models to ensure comparability.
# TODO: Initialize benchmark DoubleMLIRM model
# TODO: Initialize a DoubleMLIRM model using the ML learners of your choice
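A sketch of both initializations, assuming the learner and data-backend names from the previous sketches; we keep the default score ('ATE') and use the same seed and DML specifications for comparability:
# benchmark model with linear/logistic learners
np.random.seed(1234)
dml_irm_lin = dml.DoubleMLIRM(dml_data, ml_g=ml_g_lin, ml_m=ml_m_log, n_folds=5, n_rep=1)
# model with random forest learners
np.random.seed(1234)
dml_irm_rf = dml.DoubleMLIRM(dml_data, ml_g=ml_g_rf, ml_m=ml_m_rf, n_folds=5, n_rep=1)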
Proceed with the models using the other ML learners.
# TODO: Fit benchmark DoubleMLIRM model using the fit() method
# HINT: set parameter 'store_predictions = True' for later model diagnostics
# TODO: Summarize your results
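Assuming the benchmark model object dml_irm_lin from the sketch above, fitting and summarizing could look like this:
# store predictions for the model diagnostics below
dml_irm_lin.fit(store_predictions=True)
print(dml_irm_lin.summary)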
To evaluate the different models we can compare how well the employed estimators fit the nuisance functions $g_0(\cdot)$ and $m_0(\cdot)$. Use the following helper function to compare the predictive performance of your models.
def pred_acc_irm(DoubleML, prop):
    """
    A function to calculate prediction accuracy values for every repetition
    of a Double Machine Learning model using IRM, DoubleMLIRM
    ...
    Parameters
    ----------
    DoubleML : doubleml.double_ml_irm.DoubleMLIRM
        The IRM Double Machine Learning model
    prop : bool
        If True, compute log loss values for the propensity score;
        if False, compute RMSE values for the main regression
    """
    # export data and predictions of the DoubleML model
    y = DoubleML._dml_data.y
    d = DoubleML._dml_data.d
    g0 = DoubleML.predictions.get('ml_g0')
    g1 = DoubleML.predictions.get('ml_g1')
    m = DoubleML.predictions.get('ml_m')
    # dimensions of prediction array
    h = g0.shape[0]
    w = DoubleML.n_rep
    # check whether treatment is binary
    if not np.isin(d, [0, 1]).all():
        raise ValueError("Treatment must be a binary variable.")
    # prepare array to store prediction accuracy measure values
    pred_acc_array = np.zeros((w,))
    # check whether to assess main regression or propensity score accuracy
    if not prop:
        # evaluate main regression accuracy
        # export an array with correctly picked prediction values
        export_pred_array = np.zeros((h, w))
        for i in range(w):
            for j in range(h):
                if d[j] == 0:
                    export_pred_array[j, i] = g0[j, i]
                else:
                    export_pred_array[j, i] = g1[j, i]
        # fill array that contains rmse of each repetition
        for i in range(w):
            pred_acc_array[i] = mean_squared_error(y, export_pred_array[:, i], squared=False)
    else:
        # evaluate propensity score accuracy
        # fill array that contains log loss of each repetition
        for i in range(w):
            pred_acc_array[i] = log_loss(d, m[:, i], eps=0.025)
    return pred_acc_array
# TODO: Evaluate the predictive performance for `ml_g` and `ml_m` using the
# helper function `pred_acc_irm()`.
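For instance, for the benchmark model (object names as in the previous sketches):
# RMSE of the main regression and log loss of the propensity score, one value per repetition
rmse_g_lin = pred_acc_irm(dml_irm_lin, prop=False)
logloss_m_lin = pred_acc_irm(dml_irm_lin, prop=True)
print('RMSE ml_g (benchmark):', rmse_g_lin)
print('Log loss ml_m (benchmark):', logloss_m_lin)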
The propensity score $m_0(V)$ plays an important role in the score of the IRM model. Try to summarize the estimates for $m_0(V)$ using some descriptive statistics or a visualization. You can use the following helper function for visualizing the propensity score estimates.
def rep_propscore_plot(DoubleML):
    """
    A function to create histograms as subplots for every repetition's propensity score density
    of a Double Machine Learning model
    ...
    Parameters
    ----------
    DoubleML : doubleml
        The Double Machine Learning model
    """
    # export nuisance part from the DoubleML model
    m = DoubleML.predictions.get('ml_m')
    # dimensions of nuisance array
    h = m.shape[0]
    rep = DoubleML.n_rep
    i = 0
    # create histograms as subplots covering the propensity score densities of all repetitions
    if rep > 1:
        fig, ax = plt.subplots(1, rep, figsize=[20, 4.8])
        for i in range(rep):
            ax[i].hist(np.reshape(m[:, i], h), range=[0, 1], bins=25, density=False)
            ax[i].set_title('repetition ' + str(i + 1))
            ax[i].set_xlabel("prop_score")
            ax[i].set_ylabel("count")
    else:
        fig, ax = plt.subplots(figsize=[20, 4.8])
        ax.hist(np.reshape(m[:, i], h), range=[0, 1], bins=25, density=False)
        ax.set_title('repetition ' + str(i + 1))
        ax.set_xlabel("prop_score")
        ax.set_ylabel("count")
    plt.show()
# (TODO): Summarize the propensity score estimates
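For example, assuming the benchmark model object dml_irm_lin:
# descriptive statistics of the estimated propensity scores (one column per repetition)
m_hat = dml_irm_lin.predictions.get('ml_m')
print(pd.DataFrame(m_hat.reshape(m_hat.shape[0], -1)).describe())
# histogram of the propensity score estimates per repetition
rep_propscore_plot(dml_irm_lin)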
# TODO: Fit the ML DoubleMLIRM model using the fit() method
# TODO: Summarize your results
# TODO: Evaluate the predictive performance for `ml_g` and `ml_m` using the
# helper function `pred_acc_irm()`.
# (TODO): Summarize the propensity score estimates
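The same steps could be applied to a model with more flexible learners, e.g., the random forest model dml_irm_rf from the earlier sketch:
dml_irm_rf.fit(store_predictions=True)
print(dml_irm_rf.summary)
# nuisance accuracy and propensity score diagnostics
rmse_g_rf = pred_acc_irm(dml_irm_rf, prop=False)
logloss_m_rf = pred_acc_irm(dml_irm_rf, prop=True)
rep_propscore_plot(dml_irm_rf)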
Proceed with the models using the other ML learners.
Provide a brief summary of your estimation results, for example by creating a table or figure.
# TODO: Summarize the results on the nuisance estimation in a table or figure
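A minimal sketch of such a table, assuming the accuracy arrays from the previous sketches:
# average nuisance accuracy across repetitions per model
nuisance_summary = pd.DataFrame(
    {'RMSE ml_g': [rmse_g_lin.mean(), rmse_g_rf.mean()],
     'log loss ml_m': [logloss_m_lin.mean(), logloss_m_rf.mean()]},
    index=['linear/logistic', 'random forest'])
print(nuisance_summary)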
Summarize your results on the coefficient estimate for $\theta_0$ as well as the standard errors and / or confidence intervals, respectively. You can create a table or a figure illustrating your findings.
Try to answer the following questions:
## TODO: After calling fit(), access the coefficient parameter,
## the standard error and confidence interval by accessing the fields
## `coef` and `summary`.
## TODO: After calling fit(), access the coefficient parameter,
## the standard error and confidence interval by accessing the fields
## `coef` and `summary`.
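For example, for the benchmark model (the confint() method returns the confidence interval):
print('Coefficient estimate:', dml_irm_lin.coef)
print('Standard error:', dml_irm_lin.se)
print(dml_irm_lin.confint())
print(dml_irm_lin.summary)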
Proceed with the models using the other ML learners.
Notes and Acknowledgement
We would like to thank the organizers of the ACIC 2019 Data Challenge for setting up this data challenge and making the numerous synthetic data examples publicly available. Although the data examples in the ACIC 2019 Data Challenge do not explicitly address A/B testing, we put the data example in this context to give a tractable example of the use of causal machine learning in practice. The parameters for the random forest and extreme gradient boosting learners have been tuned externally. The corresponding tuning notebook will be uploaded to the examples gallery in the future.
Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W. and Robins, J. (2018), Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21: C1-C68. doi:10.1111/ectj.12097.