This paper explores how a recent European Union proposal, the so-called Chat Control law, could transform the foundations of human-robot interaction (HRI). The proposal creates regulatory incentives for providers to implement content detection and communication scanning. As robots increasingly act as interpersonal communication channels in care, education, and telepresence, they convey not only speech but also gesture, emotion, and contextual cues. We argue that extending digital surveillance law to such embodied systems would entail continuous monitoring, embedding observation into the very design of everyday robots. Such regulation blurs the line between protection and control, turning companions into potential informants. At the same time, monitoring mechanisms that undermine end-to-end encryption function as de facto backdoors, expanding the attack surface and allowing adversaries to exploit legally induced monitoring infrastructure. The result is a paradox of safety through insecurity: systems introduced to protect users may instead compromise their privacy, autonomy, and trust. This work does not aim to predict the future, but to raise awareness and help prevent certain futures from materialising.