We analyze the operation of transformer language adapters, which are small modules trained on top of a frozen language model to adapt its predictions to new target languages. We show that adapted predictions mostly evolve in the source language the model was trained on, while the target language becomes pronounced only in the very last layers of the model. Moreover, the adaptation process is gradual and distributed across layers, such that small groups of adapters can be skipped without degrading adaptation performance. Last, we show that adapters operate on top of the model's frozen representation space while largely preserving its structure, rather than on an 'isolated' subspace. Our findings provide a deeper view into the adaptation process of language models to new languages, showcase the constraints imposed on it by the underlying model, and suggest practical implications for improving its efficiency.
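For readers unfamiliar with the setup, the following is a minimal sketch of a language adapter of the kind described above: a small trainable bottleneck module applied to the hidden states of a frozen transformer layer. The specific design (down-projection, GELU, up-projection, residual connection) and the dimensions are illustrative assumptions, not necessarily the exact configuration used in this work.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable module applied to a frozen transformer layer's output.

    Illustrative sketch: down-project, nonlinearity, up-project, residual add.
    """
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the frozen model's representation
        # largely intact; the adapter only adds a small correction on top of it.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Usage: the base model's parameters stay frozen; only the adapters are trained.
hidden_dim = 768
adapter = BottleneckAdapter(hidden_dim)
frozen_hidden = torch.randn(2, 10, hidden_dim)  # (batch, seq, dim) from a frozen layer
adapted_hidden = adapter(frozen_hidden)
```

Because the adapter output is added residually to the frozen hidden states, it operates on top of the model's existing representation space rather than replacing it, which is consistent with the structure-preserving behavior described above.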