Previous discussions have highlighted the need for generative AI tools to become more culturally sensitive, yet they often neglect the complexities of handling content about minorities, who are perceived differently across cultures and religions. Our study examined how two generative AI systems respond to homophobic statements accompanied by varying cultural and religious context information. Findings showed that ChatGPT 3.5's replies exhibited cultural relativism, in contrast to Bard's, which stressed human rights and offered more support for LGBTQ+ issues. Both systems changed their responses significantly depending on the contextual information provided in the prompts, suggesting that AI systems may adjust the degree and form of support they express for LGBTQ+ people according to information they receive about a user's background. The study contributes to understanding the social and ethical implications of AI responses and argues that any effort to make generative AI outputs more culturally diverse requires a grounding in fundamental human rights.