Advancements in robotic capabilities for providing physical assistance, psychological support, and daily health management are making the deployment of intelligent healthcare robots in home environments increasingly feasible in the near future. However, challenges arise when the information provided by these robots contradicts users' memory, raising concerns about user trust and decision-making. This paper presents a study that examines how varying a robot's level of transparency and sociability influences user interpretation, decision-making, and perceived trust when users are faced with conflicting information from a robot. In a 2×2 between-subjects online study, 176 participants watched videos of a Furhat robot acting as a family healthcare assistant and suggesting that a fictional user take medication at a different time from the one the user remembered. Results indicate that robot transparency influenced users' interpretation of information discrepancies: with the low-transparency robot, the most frequent assumption was that the user had not correctly remembered the time, while with the high-transparency robot, participants were more likely to attribute the discrepancy to external factors, such as a partner or another household member modifying the robot's information. Additionally, participants exhibited a tendency toward overtrust, often prioritizing the robot's recommendations over the user's memory even when suspecting system malfunctions or third-party interference. These findings highlight the impact of transparency mechanisms in robotic systems, the complexity and importance of system access control for multi-user robots deployed in home environments, and the potential risks of users' overreliance on robots in sensitive domains such as healthcare.