When LLMs judge moral dilemmas, do they reach different conclusions in different languages, and if so, why? Two factors could drive such differences: the language of the dilemma itself, or the language in which the model reasons. Standard evaluation conflates these by testing only matched conditions (e.g., an English dilemma with English reasoning). We introduce a methodology that manipulates each factor separately, also covering mismatched conditions (e.g., an English dilemma with Chinese reasoning), enabling a decomposition of their contributions. To study \emph{what} changes, we propose an approach to interpreting moral judgments in terms of Moral Foundations Theory. As a secondary finding, we identify evidence for splitting the Authority dimension into a family-related and an institutional dimension. Applying this methodology to English-Chinese moral judgment with 13 LLMs, we demonstrate its diagnostic power: (1) the framework isolates reasoning-language effects, which contribute twice the variance of input-language effects; (2) it detects context-dependency, missed by standard evaluation, in nearly half of the models; and (3) a diagnostic taxonomy translates these patterns into deployment guidance. We release our code and datasets at https://anonymous.4open.science/r/CrossCulturalMoralJudgement.