Extracting time-varying latent variables from computational cognitive models is a key step in model-based neural analysis, which aims to understand the neural correlates of cognitive processes. However, existing methods allow researchers to infer latent variables that explain subjects' behavior only in a relatively small class of cognitive models. For example, a broad class of relevant cognitive models with analytically intractable likelihoods is currently out of reach of standard techniques based on maximum a posteriori (MAP) parameter estimation. Here, we present an approach that extends neural Bayes estimation to learn a direct mapping from experimental data to the targeted latent-variable space using recurrent neural networks and simulated datasets. We show that our approach achieves competitive performance in inferring latent-variable sequences in both tractable and intractable models. Furthermore, the approach generalizes across different computational models and is adaptable to both continuous and discrete latent spaces. We then demonstrate its applicability to real-world datasets. Our work underscores that combining recurrent neural networks with simulation-based inference to identify latent-variable sequences can give researchers access to a wider class of cognitive models for model-based neural analyses, and thus allow them to test a broader set of theories.
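To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of the training recipe the abstract describes: simulate behavior from a cognitive model with parameters drawn from a prior, then train a recurrent network to map the observed trial sequence directly to the latent-variable sequence. The toy Q-learning bandit simulator, the `LatentEstimator` network, and all hyperparameters below are illustrative assumptions.

```python
# Minimal sketch, assuming a toy two-armed-bandit Q-learning model as the
# cognitive model whose per-trial Q-values are the latent variables of interest.
import torch
import torch.nn as nn

def simulate_q_learning(n_trials, alpha=0.3, beta=3.0):
    """Simulate a softmax Q-learner; return observations (choice, reward)
    and the latent Q-value sequence that generated them."""
    q = torch.zeros(2)
    p_reward = torch.tensor([0.8, 0.2])  # illustrative reward probabilities
    obs, latents = [], []
    for _ in range(n_trials):
        probs = torch.softmax(beta * q, dim=0)
        a = torch.multinomial(probs, 1).item()
        r = float(torch.rand(1) < p_reward[a])
        latents.append(q.clone())                     # latent state this trial
        obs.append(torch.tensor([float(a), r]))       # what the experimenter sees
        q[a] += alpha * (r - q[a])                    # Q-learning update
    return torch.stack(obs), torch.stack(latents)

class LatentEstimator(nn.Module):
    """GRU that reads the trial sequence and emits a latent estimate per trial."""
    def __init__(self, obs_dim=2, hidden=64, latent_dim=2):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)

net = LatentEstimator()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    # Fresh simulated batch each step, with learning rates drawn from a prior,
    # so the network must generalize across model parameters.
    batch_obs, batch_lat = [], []
    for _ in range(32):
        o, z = simulate_q_learning(100, alpha=float(torch.rand(1)) * 0.5 + 0.05)
        batch_obs.append(o)
        batch_lat.append(z)
    x, z = torch.stack(batch_obs), torch.stack(batch_lat)
    loss = nn.functional.mse_loss(net(x), z)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The squared-error loss is the standard choice in neural Bayes estimation: minimized over simulations, it drives the network toward the posterior mean of the latent sequence given the data, and it requires only the ability to simulate from the model, never to evaluate its likelihood. A discrete latent space would swap the MSE for a cross-entropy loss over latent states.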