Characterizing the computational power of neural network architectures in terms of formal language theory remains a crucial line of research, as it establishes lower and upper bounds on the reasoning capabilities of modern AI. However, when these bounds are tested empirically, existing work often leaves a discrepancy between the experiments and the formal claims they are meant to support. The problem is that formal language theory pertains specifically to recognizers: machines that receive a string as input and classify whether or not it belongs to a language. It is common, however, to instead use proxy tasks that are similar only in an informal sense, such as language modeling or sequence-to-sequence transduction. We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings, using a general method that can be applied to a wide variety of languages. As part of this, we extend an algorithm recently proposed by Snæbjarnarson et al. (2024) to perform length-controlled sampling of strings from regular languages, with much better asymptotic time complexity than previous methods. We provide results on a variety of languages across the Chomsky hierarchy for three neural architectures: a simple RNN, an LSTM, and a causally masked transformer. We find that the RNN and LSTM often outperform the transformer, and that auxiliary training objectives such as language modeling can help, although no single objective uniformly improves performance across languages and architectures. Our contributions will facilitate theoretically sound empirical testing of language recognition claims in future work. We have released our datasets as a benchmark called FLaRe (Formal Language Recognition), along with our code.
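To make the sampling task concrete: the standard baseline for length-controlled uniform sampling from a regular language is dynamic programming over a DFA, counting accepted completions from each state and then walking forward with proportional choices. The sketch below illustrates that classic baseline only; it is not the extended Snæbjarnarson et al. (2024) algorithm, whose improved asymptotic complexity is the paper's contribution. The `dfa` dictionary layout (`start`, `accept`, `delta`) is a hypothetical encoding chosen for the example.

```python
import random

def uniform_sample(dfa, n, rng=random):
    """Sample a length-n string uniformly from the language of a DFA.

    `dfa` is a hypothetical encoding: {'start': initial state,
    'accept': set of accepting states,
    'delta': {(state, symbol): next_state}}.
    Classic DP baseline: count[k][q] = number of length-k strings
    accepted starting from state q; runs in O(n * |Q| * |alphabet|).
    """
    states = ({q for (q, _) in dfa['delta']}
              | set(dfa['delta'].values()) | {dfa['start']})
    alphabet = sorted({a for (_, a) in dfa['delta']})
    # Base case: the empty string is accepted exactly from accepting states.
    count = [{q: int(q in dfa['accept']) for q in states}]
    for _ in range(n):
        prev = count[-1]
        count.append({
            q: sum(prev[dfa['delta'][(q, a)]]
                   for a in alphabet if (q, a) in dfa['delta'])
            for q in states
        })
    if count[n][dfa['start']] == 0:
        raise ValueError(f"no strings of length {n} in the language")
    # Walk forward, picking each symbol with probability proportional
    # to the number of accepted completions after taking it.
    out, q = [], dfa['start']
    for k in range(n, 0, -1):
        options = [(a, count[k - 1][dfa['delta'][(q, a)]])
                   for a in alphabet if (q, a) in dfa['delta']]
        symbols, weights = zip(*options)
        a = rng.choices(symbols, weights=weights)[0]
        out.append(a)
        q = dfa['delta'][(q, a)]
    return ''.join(out)
```

For example, with a two-state DFA for (ab)*, the only length-4 string is "abab", and the sampler returns it with probability 1; for a language like "even number of a's", each length-n member is drawn with equal probability.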