This paper explores how a recent European Union proposal, the so-called Chat Control, which creates regulatory incentives for providers to implement content detection and communication scanning, could transform the foundations of human-robot interaction (HRI). As robots increasingly act as interpersonal communication channels in care, education, and telepresence, they convey not only speech but also gesture, emotion, and contextual cues. We argue that extending digital surveillance laws to such embodied systems would entail continuous monitoring, embedding observation into the very design of everyday robots. This regulation blurs the line between protection and control, turning companions into potential informants. At the same time, monitoring mechanisms that undermine end-to-end encryption function as de facto backdoors, expanding the attack surface and allowing adversaries to exploit legally induced monitoring infrastructures. This creates a paradox of safety through insecurity: systems introduced to protect users may instead compromise their privacy, autonomy, and trust. This work does not aim to predict the future, but to raise awareness and help prevent certain futures from materialising.