South Africa and the Democratic Republic of Congo (DRC) present a complex linguistic landscape, with languages such as Zulu, Sepedi, Afrikaans, French, English, and Tshiluba (Ciluba). The scarcity of accurately labeled data in these languages creates unique challenges for AI-driven translation and sentiment analysis systems. This study addresses these challenges by developing a multilingual lexicon, initially designed for French and Tshiluba and expanded to include translations in English, Afrikaans, Sepedi, and Zulu. The lexicon enhances cultural relevance in sentiment classification by integrating language-specific sentiment scores. A comprehensive testing corpus was created to support translation and sentiment analysis tasks, and machine learning models, including Random Forest, Support Vector Machine (SVM), Decision Tree, and Gaussian Naive Bayes (GNB), were trained to predict sentiment across low-resource languages (LRLs). Among these, the Random Forest model performed particularly well, capturing sentiment polarity and handling language-specific nuances effectively. In addition, Bidirectional Encoder Representations from Transformers (BERT), a large language model (LLM), was applied to predict context-based sentiment, achieving 99% accuracy and 98% precision and outperforming the other models. BERT's predictions were interpreted using Explainable AI (XAI) techniques, improving transparency and fostering confidence in sentiment classification. Overall, the findings demonstrate that the proposed lexicon and machine learning models significantly enhance translation and sentiment analysis for LRLs in South Africa and the DRC, laying a foundation for future AI models that support underrepresented languages, with applications in education, governance, and business in multilingual contexts.
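The comparison of classical classifiers described above can be sketched as follows. This is an illustrative example only, not the study's actual pipeline: the texts, labels, and TF-IDF featurization are placeholder assumptions (the real work uses a lexicon-labeled multilingual corpus covering French, Tshiluba, English, Afrikaans, Sepedi, and Zulu).

```python
# Hedged sketch: training the four classical models named in the abstract
# (Random Forest, SVM, Decision Tree, Gaussian NB) on toy sentiment data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical lexicon-labeled examples (1 = positive, 0 = negative);
# a real corpus would also include Tshiluba, Sepedi, and Zulu text.
texts = [
    "je suis tres content",       # French: I am very happy
    "ce service est terrible",    # French: this service is terrible
    "this is wonderful",
    "this is awful",
    "uitstekende diens",          # Afrikaans: excellent service
    "slegte ervaring",            # Afrikaans: bad experience
    "great work",
    "very disappointing result",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# TF-IDF features; GaussianNB requires a dense array, hence .toarray().
X = TfidfVectorizer().fit_transform(texts).toarray()
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.25, random_state=0
)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Gaussian NB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```

In practice, per-language sentiment scores from the lexicon would be appended to (or replace) the TF-IDF features, which is what gives the classifiers their language-specific signal.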