The journal Health Services Research has decided that studies using logistic regression should report marginal effects rather than odds ratios. Why did it make this decision? A paper by Norton et al. (2024) identifies three key factors.
- Intelligibility. Consider a study of how a hospital’s location in a disadvantaged area affects readmission rates. Say the estimated coefficient is -0.2. This corresponds to an odds ratio of exp(-0.2) ≈ 0.82, or about an 18% reduction in the odds of readmission. However, the real-world magnitude of that change is not clear from the odds ratio alone. Instead, the authors recommend that researchers “…report marginal effects in terms of a percentage point change in the probability of readmission, along with the base readmission rate for context.” (A worked sketch of this reporting appears after the list.)
- Impact of covariates. When conducting a linear regression (e.g., ordinary least squares), adding new covariates should not change the coefficient of interest as long as the additional covariates are not mediators or confounders. This is not the case for logistic regression. “The reason that the odds ratios change is because the estimated coefficients in a logistic regression are scaled by an arbitrary factor equal to the square root of the variance of the unexplained part of binary outcome, or σ. That is, logistic regressions estimate β/σ, not β…Furthermore and more problematic, σ is unknown to the researcher.” Because the coefficients are scaled by σ, so are the odds ratios (exp(β/σ)); adding more variables increases the logistic model’s ability to explain variation, so σ shrinks and the odds ratio moves further from 1. (A simulated illustration appears after the list.)
- Ability to compare across studies. Because the covariates included in a regression affect the estimated odds ratios, it is difficult to compare odds ratios across studies that adjust for different sets of covariates.
- Sensitivity to rare or common events. Other papers have noted that odds ratios may be highly sensitive when outcomes are very rare or very common. Premier Insights gives the following example: “For example, denial rates of 2.5% vs. 0.5% yields an odds ratio of 5.103 despite only 2 applicants out of 100 being affected. Denial rates of 99.5% vs 97.5% yields the exact same Odds Ratio. However, denial rates of 60% vs. 30% (a 30% disparity) only yield an odds ratio of 3.5. It is clear from this that the Odds Ratio can not only be misleading but has little, if any, economic meaning.” (The arithmetic is reproduced in a short snippet after the list.)
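
To make the intelligibility point concrete, here is a minimal Python sketch using statsmodels on simulated data. The variable names (`readmit`, `disadvantaged`, `severity`) and the coefficient values are hypothetical, not taken from Norton et al.; the point is simply to show the odds ratio reported next to the average marginal effect and the base readmission rate, which is the reporting format the authors recommend.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50_000
disadvantaged = rng.binomial(1, 0.3, n)               # hospital in a disadvantaged area (hypothetical)
severity = rng.normal(0, 1, n)                        # patient severity index (hypothetical)
# Simulated "truth": coefficient of -0.2 on the disadvantaged indicator
logit_p = -1.0 - 0.2 * disadvantaged + 0.8 * severity
readmit = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"readmit": readmit,
                   "disadvantaged": disadvantaged,
                   "severity": severity})

fit = smf.logit("readmit ~ disadvantaged + severity", data=df).fit(disp=False)

odds_ratio = np.exp(fit.params["disadvantaged"])
# Average marginal effect; margeff excludes the intercept, so index 0 is `disadvantaged`
ame = fit.get_margeff(at="overall").margeff[0]

print(f"Base readmission rate: {df['readmit'].mean():.3f}")
print(f"Odds ratio:            {odds_ratio:.3f}")     # roughly exp(-0.2) = 0.82
print(f"Avg marginal effect:   {ame:.4f}")            # change in Pr(readmit), in probability units
```

The marginal effect here reads directly as a percentage-point change in the probability of readmission, which is what makes it easier to interpret against the base rate.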
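The rescaling issue can also be illustrated by simulation. In the hedged sketch below, `severity` is generated independently of `disadvantaged`, so it is neither a confounder nor a mediator; adding it still pushes the estimated odds ratio further from 1, while the average marginal effect barely moves. (Again, all names and numbers are hypothetical.)

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200_000
disadvantaged = rng.binomial(1, 0.3, n)
severity = rng.normal(0, 1, n)                        # independent of `disadvantaged` by construction
logit_p = -1.0 - 0.2 * disadvantaged + 1.5 * severity
readmit = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"readmit": readmit,
                   "disadvantaged": disadvantaged,
                   "severity": severity})

short_fit = smf.logit("readmit ~ disadvantaged", data=df).fit(disp=False)
long_fit = smf.logit("readmit ~ disadvantaged + severity", data=df).fit(disp=False)

for label, fit in [("without severity", short_fit), ("with severity", long_fit)]:
    odds_ratio = np.exp(fit.params["disadvantaged"])
    ame = fit.get_margeff(at="overall").margeff[0]    # effect of `disadvantaged`
    print(f"{label:>16}: odds ratio = {odds_ratio:.3f}, avg marginal effect = {ame:.4f}")
# The odds ratio moves further from 1 once `severity` is added (sigma shrinks),
# while the average marginal effect changes very little.
```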
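The odds ratios in the Premier Insights quote are easy to verify; this short snippet reproduces that arithmetic and prints the percentage-point gap alongside each ratio.

```python
def odds_ratio(p1, p2):
    """Odds ratio comparing two event rates p1 and p2."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

for p1, p2 in [(0.025, 0.005), (0.995, 0.975), (0.60, 0.30)]:
    gap = 100 * (p1 - p2)
    print(f"{p1:.1%} vs {p2:.1%}: odds ratio = {odds_ratio(p1, p2):.3f}, "
          f"gap = {gap:.1f} percentage points")
# 2.5% vs 0.5%   -> OR ~ 5.103 on a 2-point gap
# 99.5% vs 97.5% -> OR ~ 5.103 on a 2-point gap
# 60%  vs 30%    -> OR = 3.5   on a 30-point gap
```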
I agree with the authors that a move to marginal effects is clearer and suffers from fewer technical issues. However, I do see two issues with the proposal. The first is precedent. In many medical journals, odds ratios are the more common convention, and getting those researchers to change may be difficult. Second, odds ratios may be easier to extrapolate to other settings. For instance, you may come across an odds ratio in a clinical trial measuring impacts on readmissions. Because clinical trials are a somewhat artificial setting, you may believe the proportional (but not absolute) reduction from the trial is correct and want to extrapolate that impact to real-world data. In this case, having an odds ratio may make that extrapolation easier, although any extrapolation exercise should be done with caution (a rough sketch of this conversion appears below).
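As a rough illustration of that extrapolation use case, the sketch below applies a hypothetical trial odds ratio of 0.82 to several candidate real-world baseline readmission rates to back out the implied absolute reduction. The numbers are made up; the point is only that an odds ratio plus a local base rate implies a local absolute effect.

```python
def apply_odds_ratio(base_rate, odds_ratio):
    """Counterfactual rate implied by applying an odds ratio to a baseline rate."""
    base_odds = base_rate / (1 - base_rate)
    new_odds = odds_ratio * base_odds
    return new_odds / (1 + new_odds)

trial_or = 0.82                          # hypothetical odds ratio reported by a trial
for base_rate in (0.10, 0.20, 0.30):     # candidate real-world baseline readmission rates
    new_rate = apply_odds_ratio(base_rate, trial_or)
    print(f"baseline {base_rate:.0%} -> {new_rate:.1%} "
          f"({100 * (base_rate - new_rate):.1f} percentage-point reduction)")
```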
Nevertheless, I think increasing the use and reporting of marginal effects would be a good thing.