Smaller LLMs still face significant challenges even in medium-resourced languages, particularly when it comes to language-specific knowledge -- a problem not easily resolved with machine-translated data. In this case study on Icelandic, we aim to enhance the generation performance of an LLM by specialising it using unstructured text corpora. A key focus is on preventing interference with the model's ability to handle longer contexts during this adaptation. Through ablation studies using various parameter-efficient fine-tuning (PEFT) methods and setups, we find that increasing the number of trainable parameters leads to better and more robust language adaptation. LoRAs placed in the feed-forward layers and bottleneck adapters show promising results given sufficient parameters, while prefix tuning and (IA)³ are not suitable. Although improvements are consistent in zero-shot summarisation, some adapted models struggle with longer context lengths, an issue that can be mitigated by adapting only the final layers.
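To make the LoRA-in-feed-forward idea concrete, the following is a minimal sketch of how a low-rank update modifies a frozen weight matrix. The dimensions, names, and scaling convention (`alpha / r`) are illustrative assumptions, not the paper's actual configuration: in LoRA, only the small matrices `A` and `B` are trained, while `W` stays frozen.

```python
def lora_apply(W, A, B, x, alpha=16, r=2):
    """Compute y = W x + (alpha / r) * B (A x) with plain Python lists.

    W: frozen d_out x d_in weight matrix (e.g. a feed-forward projection)
    A: trainable r x d_in matrix (down-projection)
    B: trainable d_out x r matrix (up-projection, initialised to zeros
       so the adapted model starts identical to the base model)
    x: input vector of length d_in
    """
    # Frozen path: the original feed-forward computation W x
    base = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    # Low-rank path: project down with A, then up with B
    ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    delta = [sum(b * ai for b, ai in zip(row, ax)) for row in B]
    # Scale the update and add it to the frozen output
    scale = alpha / r
    return [b0 + scale * d for b0, d in zip(base, delta)]
```

Because `B` is initialised to zeros, the low-rank path contributes nothing at the start of training, so adaptation begins from the base model's behaviour; the number of trainable parameters grows with the rank `r` and with how many layers receive adapters, which is the axis the ablations above vary.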