People increasingly turn to conversational agents such as ChatGPT for guidance on personal problems. As these systems grow in capability, many now display elements of "thinking": short reflective statements that reveal a model's intentions or values before it responds. While initially introduced to promote transparency, such visible thinking can also anthropomorphise the agent and shape user expectations. Yet little is known about how these displays affect user perceptions in help-seeking contexts. We conducted a 3 × 2 mixed-design experiment examining the effects of Thinking Content (None, Emotionally Supportive, Expertise Supportive) and Conversation Context (habit-related vs. feelings-related problems) on users' perceptions of empathy, warmth, competence, and engagement. Participants interacted with a chatbot that either showed no visible thinking or presented value-oriented reflections before responding. Our findings contribute to understanding how thinking transparency influences user experience in supportive dialogues, and offer implications for designing conversational agents that communicate intentions in sensitive, help-seeking scenarios.