It has been proposed that, when processing a stream of events, humans divide their experiences according to inferred latent causes (LCs) to support context-dependent learning. However, when shared structure is present across contexts, it remains unclear how the "splitting" of LCs and the learning of shared structure can be achieved simultaneously. Here, we present the Latent Cause Network (LCNet), a neural network model of LC inference. Through learning, LCNet naturally stores structure that is shared across tasks in its network weights. In addition, it represents context-specific structure in a context module, controlled by a Bayesian nonparametric inference algorithm that assigns a unique context vector to each inferred LC. Across three simulations, we found that LCNet can (1) extract shared structure across LCs in a function-learning task while avoiding catastrophic interference, (2) capture human data on curriculum effects in schema learning, and (3) infer the underlying event structure when processing naturalistic videos of daily events. Overall, these results demonstrate a computationally feasible approach to reconciling shared structure and context-specific structure in a model of LCs that scales from laboratory experiments to naturalistic settings.
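To illustrate the kind of Bayesian nonparametric controller described above, the following is a minimal sketch, not the authors' implementation: a Chinese Restaurant Process (CRP) prior over latent causes combined with a simple Gaussian likelihood around each LC's running mean. Each inferred LC receives a unique integer index, standing in for the unique context vector LCNet would feed to its context module; the class name, parameters, and the new-LC likelihood heuristic are all assumptions for illustration.

```python
import math

class LatentCauseInference:
    """Hypothetical sketch of CRP-based latent cause (LC) inference.

    Each LC is identified by an integer index (a stand-in for LCNet's
    unique context vector) and summarized by a running mean observation.
    Observations are assigned to the MAP latent cause under a CRP prior
    combined with a Gaussian likelihood; a new LC is spawned when the
    CRP's new-table option wins.
    """

    def __init__(self, alpha=1.0, sigma=1.0):
        self.alpha = alpha   # CRP concentration: propensity to spawn new LCs
        self.sigma = sigma   # width of the Gaussian likelihood
        self.counts = []     # number of observations assigned to each LC
        self.means = []      # running mean observation per LC

    def _loglik(self, x, mean):
        # Gaussian log-likelihood up to an additive constant
        return -((x - mean) ** 2) / (2 * self.sigma ** 2)

    def assign(self, x):
        """Return the MAP latent cause index for observation x."""
        n = sum(self.counts)
        # Score existing LCs: CRP prior (proportional to counts) + likelihood
        scores = [
            math.log(c / (n + self.alpha)) + self._loglik(x, m)
            for c, m in zip(self.counts, self.means)
        ]
        # New-LC option: prior mass alpha; crude likelihood centered on x itself
        scores.append(math.log(self.alpha / (n + self.alpha)) + self._loglik(x, x))
        k = max(range(len(scores)), key=scores.__getitem__)
        if k == len(self.counts):  # spawn a new LC with a fresh context index
            self.counts.append(0)
            self.means.append(x)
        # Incrementally update the chosen LC's running mean
        self.means[k] += (x - self.means[k]) / (self.counts[k] + 1)
        self.counts[k] += 1
        return k
```

Under these assumptions, a stream of observations alternating between two regimes (e.g., values near 0, then near 10, then near 0 again) is split into two LCs, and the model returns to the first LC when the first regime reappears, mirroring how LC inference supports context-dependent learning without overwriting old contexts.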