Extracting time-varying latent variables from computational cognitive models is a key step in model-based neural analysis, which aims to understand the neural correlates of cognitive processes. However, existing methods only allow researchers to infer latent variables that explain subjects' behavior in a relatively small class of cognitive models. For example, a broad class of relevant cognitive models with analytically intractable likelihoods is currently out of reach of standard techniques based on maximum a posteriori parameter estimation. Here, we present an approach that extends neural Bayes estimation to learn a direct mapping between experimental data and the targeted latent variable space, using recurrent neural networks and simulated datasets. We show that our approach achieves competitive performance in inferring latent variable sequences in both tractable and intractable models. Furthermore, the approach generalizes across different computational models and is adaptable to both continuous and discrete latent spaces. We then demonstrate its applicability to real-world datasets. Our work underscores that combining recurrent neural networks with simulation-based inference to identify latent variable sequences can give researchers access to a wider class of cognitive models for model-based neural analyses, and thus allow them to test a broader set of theories.
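To make the training scheme concrete, the sketch below illustrates the general recipe the abstract describes: simulate behavior from a cognitive model, then train a recurrent network to map the observed behavioral sequence directly to the per-trial latent variable sequence. This is a minimal illustration under assumed choices, not the paper's implementation; the two-armed-bandit Q-learning simulator, the `LatentEstimator` architecture, and all hyperparameters are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Hypothetical simulator: a Q-learning agent on a two-armed bandit.
# The latent variables of interest are the per-trial Q-values.
def simulate_session(n_trials=100):
    alpha = torch.rand(1).item()               # learning rate ~ U(0, 1)
    beta = 1.0 + 4.0 * torch.rand(1).item()    # inverse temperature ~ U(1, 5)
    q = torch.zeros(2)
    obs, latents = [], []
    for _ in range(n_trials):
        p = torch.softmax(beta * q, dim=0)     # softmax choice policy
        a = torch.multinomial(p, 1).item()
        r = float(torch.rand(1).item() < (0.8 if a == 0 else 0.2))
        obs.append([float(a), r])              # observable: action and reward
        latents.append(q.clone())              # latent: Q-values entering the trial
        q[a] += alpha * (r - q[a])             # Rescorla-Wagner update
    return torch.tensor(obs), torch.stack(latents)

class LatentEstimator(nn.Module):
    """GRU mapping a behavioral sequence to per-trial latent estimates."""
    def __init__(self, in_dim=2, hidden=64, out_dim=2):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)

model = LatentEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    # Fresh simulated batch each step: model parameters are redrawn from the
    # prior every session, so the network amortizes inference across them.
    batch = [simulate_session() for _ in range(32)]
    x = torch.stack([b[0] for b in batch])     # (32, T, 2) actions and rewards
    y = torch.stack([b[1] for b in batch])     # (32, T, 2) true latent Q-values
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because only the simulator's forward samples are needed, the same recipe applies when the model's likelihood is analytically intractable. Under squared-error loss, the trained network approximates the posterior mean of the latent sequence given the behavior, which is the sense in which such a regressor acts as a neural Bayes estimator.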