Model selection in the presence of intractable likelihoods remains a central challenge in Bayesian inference. Approximate Bayesian computation (ABC) provides a flexible likelihood-free framework, but its use for model choice is known to be sensitive to the choice of summary statistics, often leading to poorly calibrated posterior model probabilities. Recent ABC variants based on statistical distances allow comparisons to be performed directly on empirical distributions, avoiding data reduction and offering improved theoretical guarantees under suitable conditions. This paper provides a systematic evaluation of discrepancy-based ABC methods for Bayesian model selection, focusing on their empirical behavior across a range of simulation settings and levels of model complexity. We compare full-data ABC approaches based on the Wasserstein, Cramér-von Mises, and maximum mean discrepancy metrics with summary-statistic-based ABC and neural network classifiers. The results highlight settings in which full-data ABC yields stable and well-calibrated posterior model probabilities, as well as scenarios in which performance degrades due to model overlap or data dependence. An application to toad movement models illustrates the practical implications of these findings. Overall, the study clarifies the strengths and limitations of discrepancy-based ABC for likelihood-free model choice and provides guidance for its use in realistic inferential settings.
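To make the idea of discrepancy-based ABC concrete, the following is a minimal rejection-ABC sketch in which the acceptance rule compares the full observed and simulated empirical distributions via the one-dimensional Wasserstein-1 distance (for equally sized samples, the mean absolute difference between sorted values), with no summary statistics. All function names, the toy Gaussian model, and the tolerance value are illustrative assumptions, not the paper's implementation.

```python
import random
import statistics

def wasserstein_1d(x, y):
    """Empirical 1-D Wasserstein-1 distance for equally sized samples:
    the mean absolute difference between the sorted samples."""
    assert len(x) == len(y)
    return sum(abs(a - b) for a, b in zip(sorted(x), sorted(y))) / len(x)

def rejection_abc(observed, simulate, prior_sample, n_draws=2000, eps=0.5):
    """Accept a prior draw whenever the simulated data set lies within
    distance eps of the observed data under the chosen discrepancy."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()          # draw a parameter from the prior
        x = simulate(theta)             # simulate a full data set
        if wasserstein_1d(observed, x) < eps:
            accepted.append(theta)
    return accepted

# Toy usage: infer the mean of a unit-variance Gaussian (true mean = 1.0).
random.seed(0)
obs = [random.gauss(1.0, 1.0) for _ in range(200)]
post = rejection_abc(
    obs,
    simulate=lambda th: [random.gauss(th, 1.0) for _ in range(200)],
    prior_sample=lambda: random.uniform(-5.0, 5.0),
)
```

The accepted draws in `post` approximate the posterior; for model choice, the same acceptance rule is applied per candidate model and the acceptance frequencies estimate posterior model probabilities.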