9. Sample-splitting, cross-fitting and repeated cross-fitting#

Sample-splitting and the application of cross-fitting is a central part of double/debiased machine learning (DML). For all DML models DoubleMLPLR, DoubleMLPLIV, DoubleMLIRM, and DoubleMLIIVM, the specification is done via the parameters n_folds and n_rep. Advanced resampling techniques can be obtained via the boolean parameters draw_sample_splitting and apply_cross_fitting as well as the methods draw_sample_splitting() and set_sample_splitting().

As an example we consider a partially linear regression model (PLR) implemented in DoubleMLPLR.

In [1]: import doubleml as dml

In [2]: import numpy as np

In [3]: from doubleml.datasets import make_plr_CCDDHNR2018

In [4]: from sklearn.ensemble import RandomForestRegressor

In [5]: from sklearn.base import clone

In [6]: learner = RandomForestRegressor(n_estimators=100, max_features=20, max_depth=5, min_samples_leaf=2)

In [7]: ml_l = clone(learner)

In [8]: ml_m = clone(learner)

In [9]: np.random.seed(1234)

In [10]: obj_dml_data = make_plr_CCDDHNR2018(alpha=0.5, n_obs=100)
library(DoubleML)
library(mlr3)
lgr::get_logger("mlr3")$set_threshold("warn")
library(mlr3learners)
library(data.table)

learner = lrn("regr.ranger", num.trees = 100, mtry = 20, min.node.size = 2, max.depth = 5)
ml_l = learner
ml_m = learner
data = make_plr_CCDDHNR2018(alpha=0.5, n_obs=100, return_type = "data.table")
obj_dml_data = DoubleMLData$new(data,
                                y_col = "y",
                                d_cols = "d")

9.1. Cross-fitting with K folds#

The default setting is n_folds = 5 and n_rep = 1, i.e., \(K=5\) folds and no repeated cross-fitting.

In [11]: dml_plr_obj = dml.DoubleMLPLR(obj_dml_data, ml_l, ml_m, n_folds = 5, n_rep = 1)

In [12]: print(dml_plr_obj.n_folds)
5

In [13]: print(dml_plr_obj.n_rep)
1
dml_plr_obj = DoubleMLPLR$new(obj_dml_data, ml_l, ml_m, n_folds = 5, n_rep = 1)
print(dml_plr_obj$n_folds)
print(dml_plr_obj$n_rep)
[1] 5
[1] 1

During the initialization of a DML model like DoubleMLPLR a \(K\)-fold random partition \((I_k)_{k=1}^{K}\) of observation indices is generated. The \(K\)-fold random partition is stored in the smpls attribute of the DML model object.

In [14]: print(dml_plr_obj.smpls)
[[(array([ 0,  2,  3,  4,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 19,
       21, 22, 23, 24, 25, 27, 28, 29, 31, 32, 34, 35, 36, 37, 38, 40, 44,
       46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 58, 59, 60, 61, 62, 63, 64,
       65, 67, 68, 69, 70, 71, 72, 74, 75, 76, 77, 79, 80, 81, 82, 84, 85,
       86, 88, 89, 90, 91, 92, 93, 94, 95, 97, 98, 99]), array([ 1,  5, 18, 20, 26, 30, 33, 39, 41, 42, 43, 45, 56, 57, 66, 73, 78,
       83, 87, 96])), (array([ 0,  1,  2,  3,  4,  5,  6,  7,  8, 10, 11, 12, 13, 14, 15, 16, 17,
       18, 19, 20, 21, 22, 26, 29, 30, 31, 32, 33, 34, 36, 37, 38, 39, 40,
       41, 42, 43, 44, 45, 46, 48, 53, 54, 55, 56, 57, 58, 60, 61, 62, 63,
       65, 66, 69, 70, 71, 72, 73, 77, 78, 79, 80, 81, 82, 83, 84, 85, 87,
       88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]), array([ 9, 23, 24, 25, 27, 28, 35, 47, 49, 50, 51, 52, 59, 64, 67, 68, 74,
       75, 76, 86])), (array([ 0,  1,  2,  3,  5,  6,  7,  9, 10, 11, 12, 14, 16, 17, 18, 20, 21,
       22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 35, 38, 39, 40, 41, 42,
       43, 44, 45, 47, 48, 49, 50, 51, 52, 53, 56, 57, 58, 59, 60, 61, 62,
       63, 64, 65, 66, 67, 68, 69, 71, 73, 74, 75, 76, 77, 78, 79, 80, 81,
       83, 84, 86, 87, 88, 89, 90, 91, 92, 96, 98, 99]), array([ 4,  8, 13, 15, 19, 32, 34, 36, 37, 46, 54, 55, 70, 72, 82, 85, 93,
       94, 95, 97])), (array([ 0,  1,  3,  4,  5,  6,  8,  9, 13, 14, 15, 17, 18, 19, 20, 21, 22,
       23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 39, 41,
       42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59, 64,
       65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 82,
       83, 85, 86, 87, 88, 89, 93, 94, 95, 96, 97, 98]), array([ 2,  7, 10, 11, 12, 16, 38, 40, 48, 58, 60, 61, 62, 63, 81, 84, 90,
       91, 92, 99])), (array([ 1,  2,  4,  5,  7,  8,  9, 10, 11, 12, 13, 15, 16, 18, 19, 20, 23,
       24, 25, 26, 27, 28, 30, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42,
       43, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61,
       62, 63, 64, 66, 67, 68, 70, 72, 73, 74, 75, 76, 78, 81, 82, 83, 84,
       85, 86, 87, 90, 91, 92, 93, 94, 95, 96, 97, 99]), array([ 0,  3,  6, 14, 17, 21, 22, 29, 31, 44, 53, 65, 69, 71, 77, 79, 80,
       88, 89, 98]))]]
dml_plr_obj$smpls
  1. $train_ids
      1. 1
      2. 5
      3. 12
      4. 15
      5. 21
      6. 27
      7. 28
      8. 31
      9. 40
      10. 50
      11. 52
      12. 59
      13. 66
      14. 69
      15. 73
      16. 86
      17. 87
      18. 89
      19. 95
      20. 97
      21. 2
      22. 9
      23. 10
      24. 14
      25. 18
      26. 22
      27. 23
      28. 26
      29. 34
      30. 36
      31. 38
      32. 42
      33. 51
      34. 71
      35. 74
      36. 76
      37. 81
      38. 90
      39. 93
      40. 94
      41. 3
      42. 7
      43. 13
      44. 17
      45. 30
      46. 35
      47. 37
      48. 41
      49. 43
      50. 44
      51. 45
      52. 46
      53. 53
      54. 68
      55. 70
      56. 72
      57. 77
      58. 78
      59. 91
      60. 99
      61. 4
      62. 6
      63. 8
      64. 11
      65. 16
      66. 24
      67. 29
      68. 33
      69. 54
      70. 57
      71. 58
      72. 60
      73. 61
      74. 62
      75. 63
      76. 65
      77. 75
      78. 80
      79. 82
      80. 96
      1. 19
      2. 20
      3. 25
      4. 32
      5. 39
      6. 47
      7. 48
      8. 49
      9. 55
      10. 56
      11. 64
      12. 67
      13. 79
      14. 83
      15. 84
      16. 85
      17. 88
      18. 92
      19. 98
      20. 100
      21. 2
      22. 9
      23. 10
      24. 14
      25. 18
      26. 22
      27. 23
      28. 26
      29. 34
      30. 36
      31. 38
      32. 42
      33. 51
      34. 71
      35. 74
      36. 76
      37. 81
      38. 90
      39. 93
      40. 94
      41. 3
      42. 7
      43. 13
      44. 17
      45. 30
      46. 35
      47. 37
      48. 41
      49. 43
      50. 44
      51. 45
      52. 46
      53. 53
      54. 68
      55. 70
      56. 72
      57. 77
      58. 78
      59. 91
      60. 99
      61. 4
      62. 6
      63. 8
      64. 11
      65. 16
      66. 24
      67. 29
      68. 33
      69. 54
      70. 57
      71. 58
      72. 60
      73. 61
      74. 62
      75. 63
      76. 65
      77. 75
      78. 80
      79. 82
      80. 96
      1. 19
      2. 20
      3. 25
      4. 32
      5. 39
      6. 47
      7. 48
      8. 49
      9. 55
      10. 56
      11. 64
      12. 67
      13. 79
      14. 83
      15. 84
      16. 85
      17. 88
      18. 92
      19. 98
      20. 100
      21. 1
      22. 5
      23. 12
      24. 15
      25. 21
      26. 27
      27. 28
      28. 31
      29. 40
      30. 50
      31. 52
      32. 59
      33. 66
      34. 69
      35. 73
      36. 86
      37. 87
      38. 89
      39. 95
      40. 97
      41. 3
      42. 7
      43. 13
      44. 17
      45. 30
      46. 35
      47. 37
      48. 41
      49. 43
      50. 44
      51. 45
      52. 46
      53. 53
      54. 68
      55. 70
      56. 72
      57. 77
      58. 78
      59. 91
      60. 99
      61. 4
      62. 6
      63. 8
      64. 11
      65. 16
      66. 24
      67. 29
      68. 33
      69. 54
      70. 57
      71. 58
      72. 60
      73. 61
      74. 62
      75. 63
      76. 65
      77. 75
      78. 80
      79. 82
      80. 96
      1. 19
      2. 20
      3. 25
      4. 32
      5. 39
      6. 47
      7. 48
      8. 49
      9. 55
      10. 56
      11. 64
      12. 67
      13. 79
      14. 83
      15. 84
      16. 85
      17. 88
      18. 92
      19. 98
      20. 100
      21. 1
      22. 5
      23. 12
      24. 15
      25. 21
      26. 27
      27. 28
      28. 31
      29. 40
      30. 50
      31. 52
      32. 59
      33. 66
      34. 69
      35. 73
      36. 86
      37. 87
      38. 89
      39. 95
      40. 97
      41. 2
      42. 9
      43. 10
      44. 14
      45. 18
      46. 22
      47. 23
      48. 26
      49. 34
      50. 36
      51. 38
      52. 42
      53. 51
      54. 71
      55. 74
      56. 76
      57. 81
      58. 90
      59. 93
      60. 94
      61. 4
      62. 6
      63. 8
      64. 11
      65. 16
      66. 24
      67. 29
      68. 33
      69. 54
      70. 57
      71. 58
      72. 60
      73. 61
      74. 62
      75. 63
      76. 65
      77. 75
      78. 80
      79. 82
      80. 96
      1. 19
      2. 20
      3. 25
      4. 32
      5. 39
      6. 47
      7. 48
      8. 49
      9. 55
      10. 56
      11. 64
      12. 67
      13. 79
      14. 83
      15. 84
      16. 85
      17. 88
      18. 92
      19. 98
      20. 100
      21. 1
      22. 5
      23. 12
      24. 15
      25. 21
      26. 27
      27. 28
      28. 31
      29. 40
      30. 50
      31. 52
      32. 59
      33. 66
      34. 69
      35. 73
      36. 86
      37. 87
      38. 89
      39. 95
      40. 97
      41. 2
      42. 9
      43. 10
      44. 14
      45. 18
      46. 22
      47. 23
      48. 26
      49. 34
      50. 36
      51. 38
      52. 42
      53. 51
      54. 71
      55. 74
      56. 76
      57. 81
      58. 90
      59. 93
      60. 94
      61. 3
      62. 7
      63. 13
      64. 17
      65. 30
      66. 35
      67. 37
      68. 41
      69. 43
      70. 44
      71. 45
      72. 46
      73. 53
      74. 68
      75. 70
      76. 72
      77. 77
      78. 78
      79. 91
      80. 99
    $test_ids
      1. 19
      2. 20
      3. 25
      4. 32
      5. 39
      6. 47
      7. 48
      8. 49
      9. 55
      10. 56
      11. 64
      12. 67
      13. 79
      14. 83
      15. 84
      16. 85
      17. 88
      18. 92
      19. 98
      20. 100
      1. 1
      2. 5
      3. 12
      4. 15
      5. 21
      6. 27
      7. 28
      8. 31
      9. 40
      10. 50
      11. 52
      12. 59
      13. 66
      14. 69
      15. 73
      16. 86
      17. 87
      18. 89
      19. 95
      20. 97
      1. 2
      2. 9
      3. 10
      4. 14
      5. 18
      6. 22
      7. 23
      8. 26
      9. 34
      10. 36
      11. 38
      12. 42
      13. 51
      14. 71
      15. 74
      16. 76
      17. 81
      18. 90
      19. 93
      20. 94
      1. 3
      2. 7
      3. 13
      4. 17
      5. 30
      6. 35
      7. 37
      8. 41
      9. 43
      10. 44
      11. 45
      12. 46
      13. 53
      14. 68
      15. 70
      16. 72
      17. 77
      18. 78
      19. 91
      20. 99
      1. 4
      2. 6
      3. 8
      4. 11
      5. 16
      6. 24
      7. 29
      8. 33
      9. 54
      10. 57
      11. 58
      12. 60
      13. 61
      14. 62
      15. 63
      16. 65
      17. 75
      18. 80
      19. 82
      20. 96

For each \(k \in [K] = \lbrace 1, \ldots, K]\) the nuisance ML estimator

\[\hat{\eta}_{0,k} = \hat{\eta}_{0,k}\big((W_i)_{i\not\in I_k}\big)\]

is based on the observations of all other \(k-1\) folds. The values of the two score function components \(\psi_a(W_i; \hat{\eta}_0)\) and \(\psi_b(W_i; \hat{\eta}_0))\) for each observation index \(i \in I_k\) are computed and stored in the attributes psi_a and psi_b.

In [15]: dml_plr_obj.fit();

In [16]: print(dml_plr_obj.psi_elements['psi_a'][:5])
[[[-1.81676552e+00]]

 [[-1.56414991e+00]]

 [[-2.18956777e-02]]

 [[-4.92738338e-04]]

 [[-5.83772121e+00]]]

In [17]: print(dml_plr_obj.psi_elements['psi_b'][:5])
[[[-0.40976446]]

 [[-1.1688736 ]]

 [[ 0.06896522]]

 [[ 0.00804723]]

 [[ 3.61077157]]]
dml_plr_obj$fit()
print(dml_plr_obj$psi_a[1:5, ,1])
print(dml_plr_obj$psi_b[1:5, ,1])
[1] -0.86072929 -0.35820012 -0.01745738 -0.11944335 -8.36676485
[1]  0.42845644 -0.63847511 -0.09455589 -0.30801750  4.19188941

9.2. Repeated cross-fitting with K folds and M repetitions#

Repeated cross-fitting is obtained by choosing a value \(M>1\) for the number of repetition n_rep. It results in \(M\) random \(K\)-fold partitions being drawn.

In [18]: dml_plr_obj = dml.DoubleMLPLR(obj_dml_data, ml_l, ml_m, n_folds = 5, n_rep = 10)

In [19]: print(dml_plr_obj.n_folds)
5

In [20]: print(dml_plr_obj.n_rep)
10
dml_plr_obj = DoubleMLPLR$new(obj_dml_data, ml_l, ml_m, n_folds = 5, n_rep = 10)
print(dml_plr_obj$n_folds)
print(dml_plr_obj$n_rep)
[1] 5
[1] 10

For each of the \(M\) partitions, the nuisance ML models are estimated and score functions computed as described in Cross-fitting with K folds. The resulting values of the score functions are stored in 3-dimensional arrays psi_a and psi_b, where the row index corresponds the observation index \(i \in [N] = \lbrace 1, \ldots, N\rbrace\) and the column index to the partition \(m \in [M] = \lbrace 1, \ldots, M\rbrace\). The third dimension refers to the treatment variable and becomes non-singleton in case of multiple treatment variables.

In [21]: dml_plr_obj.fit();

In [22]: print(dml_plr_obj.psi_elements['psi_a'][:5, :, 0])
[[-1.75490070e+00 -1.41046758e+00 -2.30057541e+00 -1.84698342e+00
  -9.43985220e-01 -1.48134201e+00 -1.76481851e+00 -1.35664442e+00
  -1.73792324e+00 -1.59224373e+00]
 [-2.94576968e+00 -2.58969994e+00 -1.55676843e+00 -1.49426619e+00
  -2.37648108e+00 -1.50168661e+00 -1.22543208e+00 -2.22021494e+00
  -1.85400337e+00 -2.51688163e+00]
 [-4.86109800e-02 -1.06587889e-01 -5.14026461e-03 -2.15093765e-04
  -2.15055603e-02 -9.40346195e-04 -1.36994549e-02 -7.70372751e-02
  -2.58359745e-02 -7.55802542e-02]
 [-2.75087861e-03 -7.04862071e-03 -9.44349653e-03 -1.90481011e-02
  -2.65583829e-02 -2.46873669e-04 -8.86709699e-03 -7.64509433e-02
  -2.44940659e-04 -2.90090862e-03]
 [-4.30174871e+00 -4.17270587e+00 -5.72245056e+00 -4.77260527e+00
  -5.06534446e+00 -6.18323415e+00 -4.84307870e+00 -5.86429142e+00
  -3.98316935e+00 -5.27227828e+00]]

In [23]: print(dml_plr_obj.psi_elements['psi_b'][:5, :, 0])
[[-0.28378624 -0.24084412 -0.1018566  -0.40081686 -0.43963575 -0.3872246
  -0.02147332 -0.13683567 -0.29446732 -0.37594642]
 [-1.01968693 -1.4900045  -1.33823902 -1.39842187 -1.12067512 -1.10956349
  -0.82152436 -1.40255781 -1.08036685 -1.00298768]
 [ 0.03132564  0.06366163 -0.01874402  0.00511567 -0.04258852  0.00332808
   0.05211249  0.00557391  0.10301607  0.03437452]
 [-0.01397145 -0.02567156  0.04625777  0.05247536  0.0972051  -0.00396731
  -0.03634815  0.13911694 -0.00847932  0.02628082]
 [ 1.58456203  1.83112225  1.89529699  1.71005405  1.97301552  2.97344746
   1.65843094  2.24245294  0.80014581  3.1556604 ]]
dml_plr_obj$fit()
print(dml_plr_obj$psi_a[1:5, ,1])
print(dml_plr_obj$psi_b[1:5, ,1])
              [,1]          [,2]        [,3]         [,4]          [,5]
[1,] -0.1529986596 -6.689733e-02 -0.07688151 -0.005650784 -0.3691745773
[2,] -0.0022737584 -2.287695e-03 -0.03204565 -0.002177185 -0.0004371602
[3,] -0.1160690359 -2.678948e-04 -0.01890436 -0.013425744 -0.2756656444
[4,] -0.0002543341 -7.617910e-02 -0.13629629 -0.143788007 -0.4001450208
[5,] -8.8009777955 -1.208460e+01 -8.67701945 -8.827509606 -8.5922535586
             [,6]          [,7]          [,8]        [,9]         [,10]
[1,] -0.005984041 -0.0001297184 -0.1048828723 -0.06636678 -2.872744e-01
[2,] -0.164748823 -0.0393147231 -0.0020410731 -0.02225574 -4.548386e-02
[3,] -0.209665143 -0.0001433717 -0.0805805956 -0.04338652 -3.794776e-04
[4,] -0.024157297 -0.0326925565 -0.0005366127 -0.32759922 -6.725272e-02
[5,] -7.231111055 -7.8588816075 -6.1453664913 -7.01730577 -1.182343e+01
            [,1]        [,2]        [,3]        [,4]        [,5]         [,6]
[1,]  0.10657527  0.09025901  0.15868907  0.02685168  0.16991789  0.000758713
[2,] -0.05499411 -0.05346551  0.24184336 -0.05112213  0.02251331  0.615797460
[3,] -0.20443592 -0.01247294 -0.06198249 -0.11920082 -0.31910821 -0.361877476
[4,]  0.01643848 -0.25727742 -0.40984479  0.60087704 -0.55376292 -0.123275863
[5,]  4.61048756  3.37022438  3.43127280  4.62868962  4.43851949  2.744123873
             [,7]        [,8]         [,9]       [,10]
[1,]  0.006688332  0.08141542 -0.001123019 -0.03324812
[2,] -0.281400144 -0.03136796  0.187950618 -0.27756759
[3,] -0.005835778 -0.23347784 -0.132880567  0.01326637
[4,] -0.170560527 -0.01165857 -0.313886175 -0.08657353
[5,]  3.617882968  3.26289240  3.357555200  3.14418115

We estimate the causal parameter \(\tilde{\theta}_{0,m}\) for each of the \(M\) partitions with a DML algorithm as described in Double machine learning algorithms. Standard errors are obtained as described in Variance estimation and confidence intervals. The aggregation of the estimates of the causal parameter and its standard errors is done using the median

\[ \begin{align}\begin{aligned}\tilde{\theta}_{0} &= \text{Median}\big((\tilde{\theta}_{0,m})_{m \in [M]}\big),\\\hat{\sigma} &= \sqrt{\text{Median}\big((\hat{\sigma}_m^2 + (\tilde{\theta}_{0,m} - \tilde{\theta}_{0})^2)_{m \in [M]}\big)}.\end{aligned}\end{align} \]

The estimate of the causal parameter \(\tilde{\theta}_{0}\) is stored in the coef attribute and the asymptotic standard error \(\hat{\sigma}/\sqrt{N}\) in se.

In [24]: print(dml_plr_obj.coef)
[0.45939615]

In [25]: print(dml_plr_obj.se)
[0.08026436]
print(dml_plr_obj$coef)
print(dml_plr_obj$se)
        d 
0.5250477 
         d 
0.09816093 

The parameter estimates \((\tilde{\theta}_{0,m})_{m \in [M]}\) and asymptotic standard errors \((\hat{\sigma}_m/\sqrt{N})_{m \in [M]}\) for each of the \(M\) partitions are stored in the attributes _all_coef and _all_se, respectively.

In [26]: print(dml_plr_obj._all_coef)
[[0.45359775 0.40424675 0.47058888 0.47592974 0.40950183 0.48226638
  0.46588939 0.39593046 0.44972133 0.46519454]]

In [27]: print(dml_plr_obj._all_se)
[[0.07988863 0.07959541 0.08098413 0.08746582 0.08206336 0.07636116
  0.07433834 0.08038611 0.08434861 0.07689183]]
print(dml_plr_obj$all_coef)
print(dml_plr_obj$all_se)
          [,1]      [,2]      [,3]      [,4]      [,5]      [,6]      [,7]
[1,] 0.5121955 0.4858236 0.5795521 0.5912784 0.5150819 0.5168029 0.5332925
          [,8]      [,9]     [,10]
[1,] 0.5726396 0.5373433 0.4752931
          [,1]      [,2]       [,3]      [,4]       [,5]       [,6]       [,7]
[1,] 0.1029839 0.1009658 0.09860963 0.1011335 0.09127731 0.09595082 0.09755455
          [,8]       [,9]      [,10]
[1,] 0.1000506 0.09445192 0.09121904

9.3. Externally provide a sample splitting / partition#

All DML models allow a partition to be provided externally via the method set_sample_splitting(). In Python we can for example use the K-Folds cross-validator of sklearn KFold in order to generate a sample splitting and provide it to the DML model object. Note that by setting draw_sample_splitting = False one can prevent that a partition is drawn during initialization of the DML model object. The following calls are equivalent. In the first sample code, we use the standard interface and draw the sample-splitting with \(K=4\) folds during initialization of the DoubleMLPLR object.

In [28]: np.random.seed(314)

In [29]: dml_plr_obj_internal = dml.DoubleMLPLR(obj_dml_data, ml_l, ml_m, n_folds = 4)

In [30]: print(dml_plr_obj_internal.fit().summary)
       coef   std err         t         P>|t|     2.5 %   97.5 %
d  0.422609  0.083002  5.091553  3.551434e-07  0.259928  0.58529
set.seed(314)
dml_plr_obj_internal = DoubleMLPLR$new(obj_dml_data, ml_l, ml_m, n_folds = 4)
dml_plr_obj_internal$fit()
dml_plr_obj_internal$summary()
Estimates and significance testing of the effect of target variables
  Estimate. Std. Error t value Pr(>|t|)    
d   0.57386    0.09039   6.349 2.17e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1


In the second sample code, we use the K-Folds cross-validator of sklearn KFold and set the partition via the set_sample_splitting() method.

In [31]: dml_plr_obj_external = dml.DoubleMLPLR(obj_dml_data, ml_l, ml_m, draw_sample_splitting = False)

In [32]: from sklearn.model_selection import KFold

In [33]: np.random.seed(314)

In [34]: kf = KFold(n_splits=4, shuffle=True)

In [35]: smpls = [(train, test) for train, test in kf.split(obj_dml_data.x)]

In [36]: dml_plr_obj_external.set_sample_splitting(smpls);

In [37]: print(dml_plr_obj_external.fit().summary)
       coef   std err         t         P>|t|     2.5 %   97.5 %
d  0.422609  0.083002  5.091553  3.551434e-07  0.259928  0.58529
dml_plr_obj_external = DoubleMLPLR$new(obj_dml_data, ml_l, ml_m, draw_sample_splitting = FALSE)

set.seed(314)
# set up a task and cross-validation resampling scheme in mlr3
my_task = Task$new("help task", "regr", data)
my_sampling = rsmp("cv", folds = 4)$instantiate(my_task)

train_ids = lapply(1:4, function(x) my_sampling$train_set(x))
test_ids = lapply(1:4, function(x) my_sampling$test_set(x))
smpls = list(list(train_ids = train_ids, test_ids = test_ids))

dml_plr_obj_external$set_sample_splitting(smpls)
dml_plr_obj_external$fit()
dml_plr_obj_external$summary()
Estimates and significance testing of the effect of target variables
  Estimate. Std. Error t value Pr(>|t|)    
d   0.57386    0.09039   6.349 2.17e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1


9.4. Sample-splitting without cross-fitting#

The boolean flag apply_cross_fitting allows to estimate DML models without applying cross-fitting. It results in randomly splitting the sample into two parts. The first half of the data is used for the estimation of the nuisance ML models and the second half for estimating the causal parameter. Note that cross-fitting performs well empirically and is recommended to remove bias induced by overfitting, see also Sample splitting to remove bias induced by overfitting.

In [38]: np.random.seed(314)

In [39]: dml_plr_obj_external = dml.DoubleMLPLR(obj_dml_data, ml_l, ml_m,
   ....:                                     n_folds = 2, apply_cross_fitting = False)
   ....: 

In [40]: print(dml_plr_obj_external.fit().summary)
       coef   std err         t     P>|t|     2.5 %   97.5 %
d  0.411615  0.112551  3.657148  0.000255  0.191019  0.63221
dml_plr_obj_external = DoubleMLPLR$new(obj_dml_data, ml_l, ml_m,
                                    n_folds = 2, apply_cross_fitting = FALSE)
dml_plr_obj_external$fit()
dml_plr_obj_external$summary()
Estimates and significance testing of the effect of target variables
  Estimate. Std. Error t value Pr(>|t|)    
d    0.5988     0.1632   3.669 0.000243 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1


Note, that in order to split data unevenly into train and test sets the interface to externally set the sample splitting via set_sample_splitting() needs to be applied, like for example:

In [41]: np.random.seed(314)

In [42]: dml_plr_obj_external = dml.DoubleMLPLR(obj_dml_data, ml_l, ml_m,
   ....:                                     n_folds = 2, apply_cross_fitting = False, draw_sample_splitting = False)
   ....: 

In [43]: from sklearn.model_selection import train_test_split

In [44]: smpls = train_test_split(np.arange(obj_dml_data.n_obs), train_size=0.8)

In [45]: dml_plr_obj_external.set_sample_splitting(tuple(smpls));

In [46]: print(dml_plr_obj_external.fit().summary)
       coef   std err         t     P>|t|     2.5 %    97.5 %
d  0.599666  0.144894  4.138647  0.000035  0.315679  0.883653
dml_plr_obj_external = DoubleMLPLR$new(obj_dml_data, ml_l, ml_m,
                                        n_folds = 2, apply_cross_fitting = FALSE,
                                        draw_sample_splitting = FALSE)

set.seed(314)
# set up a task and cross-validation resampling scheme in mlr3
my_task = Task$new("help task", "regr", data)
my_sampling = rsmp("holdout", ratio = 0.8)$instantiate(my_task)

train_ids = list(my_sampling$train_set(1))
test_ids = list(my_sampling$test_set(1))
smpls = list(list(train_ids = train_ids, test_ids = test_ids))

dml_plr_obj_external$set_sample_splitting(smpls)
dml_plr_obj_external$fit()
dml_plr_obj_external$summary()
Estimates and significance testing of the effect of target variables
  Estimate. Std. Error t value Pr(>|t|)   
d    0.3841     0.1187   3.237  0.00121 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1


9.5. Estimate DML models without sample-splitting#

The implementation of the DML models allows the estimation without sample splitting, i.e., all observations are used for learning the nuisance models as well as for the estimation of the causal parameter. Note that this approach usually results in a bias and is therefore not recommended without appropriate theoretical justification, see also Sample splitting to remove bias induced by overfitting.

In [47]: np.random.seed(314)

In [48]: dml_plr_no_split = dml.DoubleMLPLR(obj_dml_data, ml_l, ml_m,
   ....:                                 n_folds = 1, apply_cross_fitting = False)
   ....: 

In [49]: print(dml_plr_obj_external.fit().summary)
       coef   std err         t     P>|t|     2.5 %    97.5 %
d  0.566846  0.133116  4.258282  0.000021  0.305943  0.827749
dml_plr_no_split = DoubleMLPLR$new(obj_dml_data, ml_l, ml_m,
                                n_folds = 1, apply_cross_fitting = FALSE)

set.seed(314)
dml_plr_no_split$fit()
dml_plr_no_split$summary()
Estimates and significance testing of the effect of target variables
  Estimate. Std. Error t value Pr(>|t|)    
d   0.43938    0.08895    4.94 7.82e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1