Front-door Versus Back-door Adjustment with Unmeasured Confounding: Bias Formulas for Front-door and Hybrid Adjustments (with Adam Glynn)

In this paper, we develop bias formulas for front-door and front-door/back-door hybrid estimators that utilize mechanistic information from post-treatment variables under general patterns of measured and unmeasured confounding. These bias formulas allow for sensitivity analysis, and also allow for comparisons to the bias resulting from standard pre-treatment covariate adjustments such as matching or regression adjustments (also known as back-door adjustments). We also present these bias comparisons in two special cases: nonrandomized program evaluations with one-sided noncompliance and linear structural equation models. These comparisons demonstrate that there are broad classes of applications for which the front-door or hybrid adjustments will be preferred to back-door adjustments. These comparisons also have surprising implications for the design of observational studies. First, the measurement of auxiliary post-treatment variables may be as important as the measurement of some pre-treatment covariates, even in the assessment of total effects. Second, in some applications it will not be necessary to collect any information on control units. We illustrate these points with an application to the National JTPA (Job Training Partnership Act) Study.
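For readers unfamiliar with the estimators being compared, the following sketch computes Pearl's standard front-door adjustment, which these bias formulas build on. All probability tables here are hypothetical illustrative values, not quantities from the paper or the JTPA data.

```python
# Hedged sketch: front-door adjustment for binary treatment X, mediator M,
# and outcome Y, using Pearl's formula
#   P(Y = 1 | do(X = x)) = sum_m P(m | x) * sum_{x'} P(Y = 1 | x', m) P(x').
# All numbers below are made-up conditional probability tables for illustration.

p_x = {0: 0.6, 1: 0.4}                      # P(X = x)
p_m_given_x = {0: {0: 0.8, 1: 0.2},         # P(M = m | X = x)
               1: {0: 0.3, 1: 0.7}}
p_y_given_xm = {(0, 0): 0.20, (0, 1): 0.50,  # P(Y = 1 | X = x, M = m)
                (1, 0): 0.25, (1, 1): 0.55}

def front_door(x):
    """Return P(Y = 1 | do(X = x)) via the front-door formula."""
    total = 0.0
    for m in (0, 1):
        # Inner back-door sum over x' blocks the unmeasured X-Y confounding.
        inner = sum(p_y_given_xm[(xp, m)] * p_x[xp] for xp in (0, 1))
        total += p_m_given_x[x][m] * inner
    return total

# Average treatment effect on P(Y = 1) under the front-door assumptions.
ate = front_door(1) - front_door(0)
```

Note that the formula uses only P(x), P(m | x), and P(y | x, m); this is why, as the abstract observes, front-door-style adjustments can sometimes dispense with data on control units that back-door adjustments would require.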


Front-door Difference-in-Differences Estimators (with Adam Glynn)

In this paper, we develop front-door difference-in-differences estimators that utilize information from post-treatment variables in addition to information from pre-treatment covariates. Even when the front-door criterion does not hold, these estimators allow the identification of causal effects by utilizing assumptions that are analogous to standard difference-in-differences assumptions. We also demonstrate that causal effects can sometimes be bounded by front-door and front-door difference-in-differences estimators under relaxed assumptions. We illustrate these points with an application to the National JTPA (Job Training Partnership Act) Study and with an application to Florida's early in-person voting program. For the JTPA study, we show that an experimental benchmark can be bracketed with front-door and front-door difference-in-differences estimates. Surprisingly, neither of these estimates uses control units. For the Florida program, we find some evidence that early in-person voting had small positive effects on turnout in 2008 and 2012. This provides a counterpoint to recent claims that early voting had a negative effect on turnout in 2008.