A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience. This has often been framed in terms of a dichotomy between connectionist and symbolic cognitive models. Here, we highlight a recently emerging line of work that suggests a novel reconciliation of these approaches, by exploiting an inductive bias that we term the relational bottleneck. In that approach, neural networks are constrained via their architecture to focus on relations between perceptual inputs, rather than the attributes of individual inputs. We review a family of models that employ this approach to induce abstractions in a data-efficient manner, emphasizing their potential as candidate models for the acquisition of abstract concepts in the human mind and brain.
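The core architectural idea can be illustrated in a few lines of NumPy. This is a toy sketch, not the architecture of any of the reviewed models: the "bottleneck" here is simply an inner-product relation matrix, so downstream computation sees only pairwise relations between embeddings, never the embeddings themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def relational_bottleneck(z):
    """Pass only the matrix of pairwise relations (here, inner
    products) between input embeddings downstream, discarding the
    embeddings -- and hence the individual inputs' attributes."""
    return z @ z.T

# Toy embeddings for 4 perceptual inputs, each an 8-dim feature vector.
z = rng.normal(size=(4, 8))

# An orthogonal change of basis alters every individual attribute
# while preserving all pairwise relations between inputs.
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
z_rotated = z @ Q

R1 = relational_bottleneck(z)
R2 = relational_bottleneck(z_rotated)

# The bottleneck's output is unchanged: only relational structure,
# not attribute values, passes through.
print(np.allclose(R1, R2))  # → True
```

The invariance shown in the last line is the point of the bias: two stimulus sets whose attributes differ but whose relational structure coincides are indistinguishable to everything downstream of the bottleneck, which is what encourages data-efficient relational abstraction.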