Scientific modeling applications often require estimating a distribution of parameters consistent with a dataset of observations, an inference task also known as source distribution estimation. This problem can be ill-posed, however, since many different source distributions might produce the same distribution of data-consistent simulations. To make a principled choice among many equally valid sources, we propose an approach that targets the maximum entropy distribution, i.e., prioritizes retaining as much uncertainty as possible. Our method is purely sample-based, leveraging the Sliced-Wasserstein distance to measure the discrepancy between the dataset and simulations, and is thus suitable for simulators with intractable likelihoods. We benchmark our method on several tasks and show that it can recover source distributions with substantially higher entropy without sacrificing the fidelity of the simulations. Finally, to demonstrate the utility of our approach, we infer source distributions for parameters of the Hodgkin-Huxley neuron model from experimental datasets comprising thousands of measurements. In summary, we propose a principled framework for inferring unique source distributions of scientific simulator parameters while retaining as much uncertainty as possible.
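To illustrate the sample-based discrepancy the abstract refers to, the following is a minimal sketch of a standard Monte Carlo estimator of the Sliced-Wasserstein-2 distance between two sample sets; it is not the paper's implementation, and the function name and defaults here are illustrative assumptions:

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, rng=None):
    """Monte Carlo estimate of the Sliced-Wasserstein-2 distance
    between two equal-size sample sets x, y of shape (n, d).

    Illustrative sketch only; the paper's estimator may differ."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # Draw random unit directions uniformly on the sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction and sort: in 1-D,
    # the Wasserstein-2 distance between empirical distributions is
    # the L2 gap between sorted projections (quantile matching).
    px = np.sort(x @ theta.T, axis=0)
    py = np.sort(y @ theta.T, axis=0)
    # Average the squared 1-D distances over projections.
    return np.sqrt(np.mean((px - py) ** 2))
```

Because this estimator only needs samples from the two distributions, it can compare simulator outputs to observed data without evaluating any likelihood, which is what makes it suitable for the intractable-likelihood setting described above.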