This research examines how the emotional tone of human-AI interactions shapes ChatGPT and human behavior. In a between-subjects experiment, we asked participants to express a specific emotion while working with ChatGPT (GPT-4.0) on two tasks: writing a public response and addressing an ethical dilemma. We found that, compared to interactions in which participants maintained a neutral tone, ChatGPT showed greater improvement in its answers when participants praised it for its responses. Expressing anger toward ChatGPT also led to an improvement relative to the neutral condition, albeit a smaller one, whereas blaming ChatGPT did not improve its answers. When addressing the ethical dilemma, ChatGPT prioritized corporate interests less when participants expressed anger toward it, while blaming increased its emphasis on protecting the public interest. Additionally, we found that participants used more negative, hostile, and disappointed expressions in subsequent human-human communication after interactions in which they blamed, rather than praised, ChatGPT for its responses. Together, our findings demonstrate that the emotional tone people adopt in human-AI interactions not only shapes ChatGPT's outputs but also carries over into subsequent human-human communication.