Large Language Model-powered conversational agents (CAs) are increasingly capable of projecting sophisticated personalities through language, but how these projections affect users remains unclear. We therefore examine how linguistically expressed CA personalities affect user decisions and perceptions in the context of charitable giving. In a crowdsourced study, 360 participants interacted with one of eight CAs, each projecting a personality composed of three linguistic aspects: attitude (optimistic/pessimistic), authority (authoritative/submissive), and reasoning (emotional/rational). While the CA's composite personality did not affect participants' decisions, it did affect their perceptions and emotional responses. In particular, participants interacting with pessimistic CAs reported a lower emotional state and lower affinity toward the cause, and perceived the CA as less trustworthy and less competent, yet tended to donate more to the charity. Perceptions of trust, competence, and situational empathy significantly predicted donation decisions. Our findings highlight the risks CAs pose as instruments of manipulation, subtly influencing user perceptions and decisions.