Indigenous languages are a fundamental legacy in the development of human communication, embodying the unique identity and culture of local communities in the Americas. The Second AmericasNLP (Americas Natural Language Processing) Competition, Track 1, at NeurIPS (Neural Information Processing Systems) 2022 proposed the task of training automatic speech recognition (ASR) systems for five Indigenous languages: Quechua, Guarani, Bribri, Kotiria, and Wa'ikhana. In this paper, we describe the fine-tuning of a state-of-the-art ASR model for each target language, using approximately 36.65 hours of transcribed speech data from diverse sources, enriched with data augmentation methods. Using Bayesian search, we systematically investigate the impact of different hyperparameters on the 300M- and 1B-parameter variants of Wav2vec2.0 XLS-R (Cross-Lingual Speech Representations). Our findings indicate that data volume and careful hyperparameter tuning significantly affect ASR accuracy, but language complexity determines the final result. The Quechua model achieved the lowest character error rate (CER) (12.14), while the Kotiria model, despite having the most extensive dataset during the fine-tuning phase, showed the highest CER (36.59). Conversely, with the smallest dataset, the Guarani model achieved a CER of 15.59, while Bribri and Wa'ikhana obtained CERs of 34.70 and 35.23, respectively. Additionally, Sobol' sensitivity analysis highlighted the crucial roles of freeze fine-tuning updates and dropout rates. We release our best models for each language, marking the first open ASR models for Wa'ikhana and Kotiria. This work opens avenues for future research to advance ASR techniques in preserving minority Indigenous languages.
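The CER figures above follow the standard definition: the character-level edit distance between hypothesis and reference transcriptions, normalized by reference length. The snippet below is a minimal illustrative sketch of that metric, not the competition's official scorer; the function names are our own.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via classic dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edits to turn the hypothesis into the
    reference, divided by the reference length (often reported x100)."""
    return levenshtein(hypothesis, reference) / len(reference)
```

For example, `cer("hola", "holo")` yields 0.25 (one substitution over four reference characters); scores like the 12.14 reported for Quechua correspond to this ratio expressed as a percentage.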