Large language models (LLMs) are increasingly deployed as analytical tools across multilingual contexts, yet their outputs may carry systematic biases conditioned by the language of the prompt. This study presents an experimental comparison of LLM-generated political analyses of a Ukrainian civil society document, using semantically equivalent prompts in Russian and Ukrainian. Despite identical source material and parallel query structures, the resulting analyses varied substantially in rhetorical positioning, ideological orientation, and interpretive conclusions. The Russian-language output echoed narratives common in Russian state discourse, characterizing civil society actors as illegitimate elites undermining democratic mandates. The Ukrainian-language output adopted vocabulary characteristic of Western liberal-democratic political science, treating the same actors as legitimate stakeholders within democratic contestation. These findings demonstrate that prompt language alone can produce systematically different ideological orientations from identical models analyzing identical content, with significant implications for AI deployment in polarized information environments, cross-lingual research applications, and the governance of AI systems in multilingual societies.