The present paper evaluates the learning behaviour of a transformer-based neural network with respect to an irregular inflectional paradigm. We apply the paradigm cell filling problem to irregular patterns, approaching it via the morphological reinflection task and modelling it as a character-level sequence-to-sequence learning problem. The test case under investigation is irregular verbs in Spanish: besides its many regular verbs, Spanish has L-shaped verbs, in which the first person singular indicative stem irregularly matches the subjunctive paradigm, while the other indicative forms remain unaltered. We examine the role of frequency during learning and compare models under differing input frequency conditions: we train a model on a corpus of Spanish with a realistic distribution of regular and irregular verbs and compare it with models trained on input with augmented distributions of (ir)regular words. We explore how the neural models learn this L-shaped pattern using post-hoc analyses. Our experiments show that, across frequency conditions, the models are surprisingly capable of learning the irregular pattern. Furthermore, our post-hoc analyses reveal possible sources of errors. All code and data are available at \url{https://anonymous.4open.science/r/modeling_spanish_acl-7567/} under an MIT license.
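To make the task framing concrete, the following is a minimal sketch (not the paper's actual code) of how a single paradigm cell can be encoded for character-level sequence-to-sequence reinflection: the source sequence is the lemma spelled out character by character followed by morphosyntactic feature tags, and the target is the inflected form. The UniMorph-style tag strings and the verb \textit{hacer} are illustrative assumptions; \textit{hacer} shows the L-shape, since the 1SG indicative \textit{hago} shares its stem with the subjunctive \textit{haga}, while \textit{haces} does not.

```python
# Minimal sketch: encoding paradigm cells as character-level
# seq2seq pairs for morphological reinflection. Tag names are
# hypothetical UniMorph-style labels, not the paper's exact format.

def to_seq2seq_pair(lemma: str, tags: list[str], form: str):
    """Encode one paradigm cell as (source tokens, target tokens).

    The source mixes single characters with atomic feature tags;
    the target is the inflected form as a character sequence.
    """
    source = list(lemma) + tags  # e.g. ['h','a','c','e','r','V','IND',...]
    target = list(form)
    return source, target

# 'hacer' illustrates the L-shaped pattern: the 1SG indicative stem
# ('hag-') matches the subjunctive, while other indicative forms keep
# the regular stem ('hac-').
pairs = [
    to_seq2seq_pair("hacer", ["V", "IND", "PRS", "1", "SG"], "hago"),
    to_seq2seq_pair("hacer", ["V", "IND", "PRS", "2", "SG"], "haces"),
    to_seq2seq_pair("hacer", ["V", "SBJV", "PRS", "1", "SG"], "haga"),
]

for src, tgt in pairs:
    print(" ".join(src), "->", "".join(tgt))
```

A transformer trained on such pairs must learn that the combination of tags (here, 1SG indicative or any subjunctive cell) selects the irregular stem, which is precisely the generalisation the post-hoc analyses probe.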