Scientific modeling applications often require estimating a distribution of parameters consistent with a dataset of observations - an inference task also known as source distribution estimation. This problem can be ill-posed, however, since many different source distributions might produce the same distribution of data-consistent simulations. To make a principled choice among many equally valid sources, we propose an approach which targets the maximum entropy distribution, i.e., prioritizes retaining as much uncertainty as possible. Our method is purely sample-based - leveraging the Sliced-Wasserstein distance to measure the discrepancy between the dataset and simulations - and thus suitable for simulators with intractable likelihoods. We benchmark our method on several tasks, and show that it can recover source distributions with substantially higher entropy than recent source estimation methods, without sacrificing the fidelity of the simulations. Finally, to demonstrate the utility of our approach, we infer source distributions for parameters of the Hodgkin-Huxley model from experimental datasets with thousands of single-neuron measurements. In summary, we propose a principled method for inferring source distributions of scientific simulator parameters while retaining as much uncertainty as possible.
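The Sliced-Wasserstein distance mentioned above can be estimated purely from samples, which is what makes it suitable for simulators with intractable likelihoods. A minimal NumPy sketch of the standard Monte Carlo estimator (not code from the paper; the function name, defaults, and equal-sample-size assumption are illustrative):

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the Sliced-Wasserstein-p distance
    between two equal-sized sample sets x, y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    _, d = x.shape
    # Draw random unit directions on the (d-1)-sphere.
    dirs = rng.normal(size=(n_projections, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Project both sample sets onto each direction: shape (n, n_projections).
    xp = x @ dirs.T
    yp = y @ dirs.T
    # In 1D, the Wasserstein-p distance between equal-sized empirical
    # distributions pairs sorted samples with sorted samples.
    xp.sort(axis=0)
    yp.sort(axis=0)
    # Average |difference|^p over samples and projections, then take the p-th root.
    return np.mean(np.abs(xp - yp) ** p) ** (1.0 / p)
```

Because the estimator only needs samples from the simulator and from the dataset, it can serve as the discrepancy term in a sample-based objective such as the one described above.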