Large Language Models (LLMs) have achieved significant success in recent years, yet intrinsic gender bias persists, especially in non-English languages. While current research focuses largely on English, the linguistic and cultural biases inherent in Global South languages such as Bengali remain little examined. This study investigates the nature and extent of gender bias in Bengali and evaluates the effectiveness of existing approaches for detecting and mitigating it. We extract gender-biased utterances using several methods: lexicon-based mining, computational classification models, translation-based comparative analysis, and GPT-based bias generation. Our findings indicate that directly applying English-centric bias detection frameworks to Bengali is severely constrained by linguistic differences and by socio-cultural factors that shape implicit bias. To address these challenges, we conducted two field studies in rural and low-income communities, gathering first-hand insights into gender bias. The results show that gender bias in Bengali exhibits characteristics distinct from those in English, requiring a more localized, context-sensitive methodology. Our work further underscores the need to integrate community-driven research methods to surface culturally specific biases that automated systems often overlook. By demonstrating the need for linguistic tools tailored to underrepresented languages, this work contributes to the ongoing discussion of gender bias in AI. It lays a foundation for future research on bias mitigation in Bengali and other Indic languages, supporting the development of more inclusive and fair NLP systems.