Large Language Models (LLMs) increasingly support culturally sensitive decision making, yet often exhibit misalignment due to skewed pretraining data and the absence of structured value representations. Existing methods can steer outputs, but often lack demographic grounding and treat values as independent, unstructured signals, reducing consistency and interpretability. We propose OG-MAR, an Ontology-Guided Multi-Agent Reasoning framework. OG-MAR summarizes respondent-specific values from the World Values Survey (WVS) and constructs a global cultural ontology by eliciting relations over a fixed taxonomy via competency questions. At inference time, it retrieves ontology-consistent relations and demographically similar profiles to instantiate multiple value-persona agents, whose outputs are synthesized by a judgment agent that enforces ontology consistency and demographic proximity. Experiments on regional social-survey benchmarks across four LLM backbones show that OG-MAR improves cultural alignment and robustness over competitive baselines, while producing more transparent reasoning traces.
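The inference-time flow described above (retrieve demographically similar profiles, instantiate value-persona agents, synthesize via a judgment agent) can be sketched as follows. This is a minimal illustrative sketch: all function names, the toy similarity measure, and the placeholder agent/judge logic are assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    demographics: dict   # e.g. {"region": "A", "age": "young"}
    value_summary: str   # respondent-specific value summary derived from WVS

def demographic_similarity(a: dict, b: dict) -> float:
    # Toy proximity measure: fraction of shared demographic attributes.
    keys = set(a) & set(b)
    if not keys:
        return 0.0
    return sum(a[k] == b[k] for k in keys) / len(keys)

def retrieve_profiles(query: dict, pool: list, k: int = 3) -> list:
    # Top-k demographically similar respondent profiles.
    return sorted(pool,
                  key=lambda p: -demographic_similarity(query, p.demographics))[:k]

def persona_agent(profile: Profile, question: str) -> str:
    # Placeholder for an LLM call conditioned on the profile's values
    # and on retrieved ontology-consistent relations.
    return f"[{profile.value_summary}] answer to: {question}"

def judge(answers: list, relations: list) -> str:
    # Placeholder judgment agent: in OG-MAR this enforces ontology
    # consistency and demographic proximity; here we return the answer
    # from the most demographically proximate persona as a stand-in.
    return answers[0]

pool = [
    Profile({"region": "A", "age": "young"}, "tradition-oriented"),
    Profile({"region": "A", "age": "old"}, "security-oriented"),
    Profile({"region": "B", "age": "young"}, "autonomy-oriented"),
]
query = {"region": "A", "age": "young"}
selected = retrieve_profiles(query, pool, k=2)
answers = [persona_agent(p, "Is X acceptable?") for p in selected]
final = judge(answers, relations=[])
```

The actual framework replaces the placeholders with LLM calls and ontology-grounded constraints; the sketch only shows how retrieval, persona instantiation, and judgment compose.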