This study explores transformer-based models, including BERT, mBERT, and XLM-R, for multilingual sentiment analysis across diverse linguistic structures. Key contributions include identifying XLM-R's superior adaptability to morphologically complex languages, where it achieves accuracy above 88%. The work also examines fine-tuning strategies and emphasizes their significance for improving sentiment classification in underrepresented languages.
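As a minimal illustration of the kind of fine-tuning setup the study emphasizes, the sketch below fine-tunes an XLM-R checkpoint for sentence-level sentiment classification using the Hugging Face Transformers `Trainer`. The dataset name, label scheme, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch: fine-tuning XLM-R for multilingual sentiment classification.
# Dataset and hyperparameters are assumptions for illustration only.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "xlm-roberta-base"  # XLM-R checkpoint of the kind evaluated in the study

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=3,  # assumed label scheme: negative / neutral / positive
)

# Any multilingual sentiment corpus with "text" and "label" columns works here;
# "tyqiangz/multilingual-sentiments" is one public example (an assumption,
# not necessarily the corpus used in the paper).
dataset = load_dataset("tyqiangz/multilingual-sentiments", "all")

def tokenize(batch):
    # Truncate long inputs; 128 tokens is a common budget for sentiment tasks.
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xlmr-sentiment",
    learning_rate=2e-5,              # a common fine-tuning rate for XLM-R
    per_device_train_batch_size=16,
    num_train_epochs=3,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)

trainer.train()
```

For underrepresented languages, the same pipeline applies unchanged; only the training corpus differs, which is why the abstract frames fine-tuning strategy, rather than architecture, as the main lever for those settings.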