This study proposes a novel methodology for generating personalized fake news debunking messages by prompting Large Language Models (LLMs) with persona-based inputs aligned to the Big Five personality traits: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness. Our approach guides LLMs to transform generic debunking content into personalized versions tailored to specific personality profiles. To assess the effectiveness of these transformations, we employ a separate LLM as an automated evaluator that simulates the corresponding personality traits, thereby eliminating the need for costly human evaluation panels. Our results show that personalized messages are generally judged more persuasive than generic ones. We also find that traits such as Openness tend to increase persuadability, while Neuroticism can lower it. Differences between LLM evaluators suggest that using multiple models provides a more reliable picture. Overall, this work demonstrates a practical way to create more targeted debunking messages using LLMs, while also raising important ethical questions about how such technology might be used.
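The pipeline described above can be sketched in a few lines: one prompt template asks a generator LLM to personalize a generic debunking message for a given Big Five trait, and a second template asks a separate evaluator LLM to role-play that trait and rate persuasiveness. This is a minimal illustrative sketch; all function names and prompt wording are our own assumptions, not the authors' released code.

```python
# Hypothetical sketch of persona-based prompt construction for the
# generator LLM and the LLM-as-evaluator, as described in the abstract.

BIG_FIVE = ["Extraversion", "Agreeableness", "Conscientiousness",
            "Neuroticism", "Openness"]

def personalization_prompt(generic_debunk: str, trait: str) -> str:
    """Prompt a generator LLM to rewrite a generic debunking message
    for a reader who scores high in the given Big Five trait."""
    return (
        f"Rewrite the debunking message below so it is maximally "
        f"persuasive for a reader who scores high in {trait}. "
        f"Keep all factual content unchanged.\n\n"
        f"Message:\n{generic_debunk}"
    )

def evaluation_prompt(message: str, trait: str) -> str:
    """Prompt a separate evaluator LLM to role-play the trait and rate
    persuasiveness, standing in for a human evaluation panel."""
    return (
        f"Role-play a person who scores high in {trait}. On a 1-7 scale, "
        f"rate how persuasive you find this debunking message. Answer "
        f"with the number only.\n\n"
        f"Message:\n{message}"
    )

# Build one personalized-generation prompt per trait for a sample message.
generic = "Claim X is false: the cited report says the opposite."
prompts = {trait: personalization_prompt(generic, trait) for trait in BIG_FIVE}
```

In a full experiment, each prompt in `prompts` would be sent to the generator LLM, and each resulting message would be scored via `evaluation_prompt` by one or more evaluator LLMs, so that personalized and generic variants can be compared per trait.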