Generative Social Agents (GSAs) increasingly influence human users through persuasive means. On the one hand, they can motivate users to pursue personal goals, such as healthier lifestyles. On the other hand, they carry risks such as manipulation and deception, stemming from limited control over probabilistic agent outputs. However, because GSAs derive their communicative patterns from the knowledge available to them, their behavior may be regulated by controlling their access to such knowledge. Following this approach, we explored persuasive ChatGPT-generated messages in the context of human-robot physiotherapy motivation, comparing ChatGPT-generated responses to predefined inputs from a hypothetical physiotherapy patient. In Study 1, we qualitatively analyzed 13 ChatGPT-generated dialogue scripts with varying knowledge configurations with respect to persuasive message characteristics. In Study 2, third-party observers (N = 27) rated a selection of these dialogues on the agent's expressiveness, assertiveness, and persuasiveness. Our findings indicate that LLM-based GSAs can adopt assertive and expressive personality traits, significantly enhancing perceived persuasiveness. Moreover, persuasiveness benefited significantly from the availability of information about the patient's age and past profession, an effect mediated by perceived assertiveness and expressiveness. Contextual knowledge about physiotherapy benefits did not significantly affect persuasiveness, possibly because the LLM possessed inherent knowledge of such benefits even without explicit prompting. Overall, the study highlights the importance of empirically studying the behavioral patterns of GSAs, specifically in terms of what information generative AI systems require for consistent and responsible communication.