ChatGPT disrupted the application of machine-learning methods and drastically reduced the usage barrier. Chatbots are now widely used in many different situations. They provide advice, assist in writing source code, or assess and summarize information from various sources. However, their scope is not limited to aiding humans; they can also take on tasks such as negotiating or bargaining. To understand the implications of chatbot usage for bargaining situations, we conduct a laboratory experiment based on the ultimatum game. In the ultimatum game, two human players interact: the receiver decides whether to accept or reject a monetary offer from the proposer. To shed light on the new bargaining situation, we let ChatGPT make an offer to a human player. In our novel design, we vary the wealth of the receivers. Our results indicate that humans hold the same beliefs about other humans and about chatbots. However, our results contradict these beliefs in one important respect: humans favor poor receivers, as other humans correctly anticipate, whereas ChatGPT favors rich receivers, which humans do not expect. These results imply that ChatGPT's answers are not aligned with those of humans and that humans do not anticipate this difference.