The rapid advancement of large language models (LLMs) has opened new possibilities for AI-for-good applications. As LLMs increasingly mediate online communication, their potential to foster empathy and constructive dialogue becomes an important frontier for responsible AI research. This work explores whether LLMs can serve not only as moderators that detect harmful content, but also as mediators capable of understanding and de-escalating online conflicts. Our framework decomposes mediation into two subtasks: judgment, where an LLM evaluates the fairness and emotional dynamics of a conversation, and steering, where it generates empathetic, de-escalatory messages to guide participants toward resolution. To assess mediation quality, we construct a large Reddit-based dataset and propose a multi-stage evaluation pipeline combining principle-based scoring, user simulation, and human comparison. Experiments show that API-based models outperform their open-source counterparts in both mediation reasoning and intervention alignment. Our findings highlight both the promise and limitations of current LLMs as emerging agents for online social mediation.
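To make the two-subtask decomposition concrete, the following is a minimal sketch of how judgment and steering could be chained in practice. All names here (Turn, judge, steer, call_llm) and the prompt wording are illustrative assumptions, not the paper's actual implementation or API.

```python
# Hypothetical sketch of the judgment-then-steering mediation loop described above.
# call_llm is a placeholder for any chat-completion backend (API-based or open-source).
from dataclasses import dataclass
from typing import List


@dataclass
class Turn:
    speaker: str
    text: str


def call_llm(prompt: str) -> str:
    """Placeholder: substitute a real model call here."""
    return "(model output)"


def judge(conversation: List[Turn]) -> str:
    """Subtask 1 (judgment): assess fairness and emotional dynamics of the exchange."""
    history = "\n".join(f"{t.speaker}: {t.text}" for t in conversation)
    prompt = (
        "You are a neutral mediator. Assess who, if anyone, is being unfair, "
        "and describe the emotional trajectory of this conversation.\n\n" + history
    )
    return call_llm(prompt)


def steer(conversation: List[Turn], judgment: str) -> str:
    """Subtask 2 (steering): generate an empathetic, de-escalatory intervention."""
    history = "\n".join(f"{t.speaker}: {t.text}" for t in conversation)
    prompt = (
        "Based on the assessment below, write a short empathetic message that "
        "de-escalates the conflict and guides both participants toward resolution.\n\n"
        f"Assessment: {judgment}\n\nConversation:\n{history}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    convo = [
        Turn("A", "You clearly didn't read my comment."),
        Turn("B", "I did, it just doesn't make any sense."),
    ]
    verdict = judge(convo)
    print(steer(convo, verdict))
```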