Implicit versus explicit Bayesian priors for epistemic uncertainty estimation in clinical decision support.
Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Semmelweisstraße 14, Leipzig, Germany.
Deep learning models offer transformative potential for personalized medicine by providing automated, data-driven support for complex clinical decision-making. However, their reliability degrades on out-of-distribution inputs, and traditional point-estimate predictors can produce overconfident outputs even in regions where the model has little evidence. This shortcoming highlights the need for decision-support systems that quantify and communicate per-query epistemic (knowledge) uncertainty. Approximate Bayesian deep learning methods address this need by providing principled uncertainty estimates over the model's function. In this work, we compare three such methods on the task of predicting prostate cancer-specific mortality for treatment planning, using data from the PLCO cancer screening trial. All approaches achieve strong discriminative performance (AUROC = 0.86) and produce well-calibrated probabilities in-distribution, yet they differ markedly in the fidelity of their epistemic uncertainty estimates. We show that methods with implicit functional priors, namely neural network ensembles and variational Bayesian neural networks with factorized weight priors, approximate the posterior distribution with reduced fidelity and yield systematically biased estimates of epistemic uncertainty. By contrast, models employing explicitly defined, distance-aware priors, such as spectral-normalized neural Gaussian processes (SNGP), provide more accurate posterior approximations and more reliable uncertainty quantification. These properties make explicitly distance-aware architectures particularly promising for building trustworthy clinical decision-support tools.
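To make the notion of per-query epistemic uncertainty concrete, the following is a minimal illustrative sketch (not code from this work) of how such uncertainty is commonly extracted from an approximate Bayesian predictor like an ensemble: the predictive entropy decomposes into an aleatoric part (average per-member entropy) and an epistemic part (disagreement among members, i.e., the mutual information between the prediction and the model). The function name and the toy inputs are hypothetical.

```python
import numpy as np

def ensemble_epistemic_uncertainty(member_probs):
    """Epistemic uncertainty of one prediction from an ensemble.

    member_probs: shape (n_members, n_classes); each row is one
    member's predictive distribution for the same input.
    Returns the mutual-information term: total predictive entropy
    minus the mean per-member entropy (in nats).
    """
    member_probs = np.asarray(member_probs, dtype=float)
    eps = 1e-12  # guard against log(0)

    # Predictive distribution: average over ensemble members.
    mean_probs = member_probs.mean(axis=0)

    # Total uncertainty: entropy of the averaged prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + eps))

    # Aleatoric part: average entropy of the individual members.
    aleatoric = -np.mean(
        np.sum(member_probs * np.log(member_probs + eps), axis=1)
    )

    # Epistemic part: what remains is member disagreement.
    return total - aleatoric

# Members that agree -> epistemic uncertainty near zero.
agree = [[0.9, 0.1], [0.9, 0.1], [0.9, 0.1]]
# Confident but contradictory members -> high epistemic uncertainty.
disagree = [[0.95, 0.05], [0.05, 0.95]]
print(ensemble_epistemic_uncertainty(agree))
print(ensemble_epistemic_uncertainty(disagree))
```

An in-distribution query typically yields agreeing members (low epistemic uncertainty), while an out-of-distribution query yields disagreement; the paper's claim is that implicit-prior methods bias exactly this quantity, whereas distance-aware priors such as SNGP track it more faithfully.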