Can neural networks systematically capture discrete, compositional task structure despite their continuous, distributed nature? The impressive capabilities of large-scale neural networks suggest that the answer is yes. However, even the most capable models still exhibit frequent failure cases that raise doubts about their compositionality. Here, we seek to understand what it takes for a standard neural network to generalize across tasks that share compositional structure. We find that simply scaling data and model size leads to compositional generalization. We show that this holds across different task encodings as long as the training distribution sufficiently covers the task space. In line with this finding, we prove that standard multilayer perceptrons can approximate a general class of compositional task families to arbitrary precision using a number of neurons that is only linear in the number of task modules. Finally, we find that when networks successfully generalize compositionally, the constituents of a task can be linearly decoded from the network's hidden activations. We show that this decodability metric correlates with the failure of text-to-image generation models to compose known concepts.
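A minimal sketch of the kind of linear-decodability probe referred to above: fit one linear classifier per task constituent on frozen hidden activations and report held-out accuracy as a decodability score. The function name, the use of scikit-learn's LogisticRegression, and the toy "color"/"shape" constituents are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def constituent_decodability(hidden, constituent_labels, seed=0):
    """Fit one linear probe per task constituent on frozen hidden activations
    and return each probe's held-out accuracy (higher = more linearly decodable)."""
    accuracies = {}
    for name, labels in constituent_labels.items():
        X_tr, X_te, y_tr, y_te = train_test_split(
            hidden, labels, test_size=0.25, random_state=seed, stratify=labels)
        probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        accuracies[name] = probe.score(X_te, y_te)
    return accuracies

# Toy usage with random activations and two hypothetical constituents
# (e.g., the "color" and "shape" of a composed concept).
rng = np.random.default_rng(0)
H = rng.normal(size=(512, 64))                      # hidden activations
labels = {"color": rng.integers(0, 4, size=512),    # 4 color classes
          "shape": rng.integers(0, 3, size=512)}    # 3 shape classes
print(constituent_decodability(H, labels))
```

Under this reading of the metric, near-chance probe accuracy on some constituent would be the signal associated with a model failing to compose that concept.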