Deep learning models struggle with systematic compositional generalization, a hallmark of human cognition. We propose \textsc{Mirage}, a neuro-inspired dual-process model that offers a processing account for this ability. It combines a fast, intuitive ``System~1'' (a meta-trained Transformer) with a deliberate, rule-based ``System~2'' (a Schema Engine), mirroring the brain's neocortical and hippocampal--prefrontal circuits. Trained to perform general, single-step decomposition on a stream of random grammars, Mirage achieves $>$99\% accuracy on all splits of the SCAN benchmark in a task-agnostic setting. Ablations confirm that the model's systematic behavior emerges from the architectural interplay of its two systems, particularly its use of explicit, prioritized schemas and iterative refinement. In line with recent progress on recursive/recurrent Transformer approaches, Mirage preserves an iterative neural update while externalizing declarative control into an interpretable schema module. Our work provides a concrete computational model for interpreting how compositional reasoning can arise from a modular cognitive architecture.