Previous discussions have highlighted the need for generative AI tools to become more culturally sensitive, yet they often neglect the complexities of handling content about minorities, who are perceived differently across cultures and religions. Our study examined how two generative AI systems respond to homophobic statements accompanied by varying cultural and religious context information. Findings showed that ChatGPT 3.5's replies exhibited cultural relativism, in contrast to Bard's, which stressed human rights and provided more support for LGBTQ+ issues. Both systems changed their responses significantly based on the contextual information provided in the prompts, suggesting that AI systems may adjust the degree and form of support they express for LGBTQ+ people according to the information they receive about a user's background. The study contributes to understanding the social and ethical implications of AI responses and argues that any work to make generative AI outputs more culturally diverse requires a grounding in fundamental human rights. A revised edition of this preprint is available open access in Big Data & Society at https://doi.org/10.1177/20539517251396069