Mixed reality systems support shared anchors and co-located interaction, yet they lack a socially legible protocol for entering another person's mixed reality in public settings. We frame this as a protocol problem: co-located MR sharing requires a staged sequence -- Discover, Consent, Confirm, Allow, Spatial Colocation, Sync Objects, Permission Management -- each demanding user understanding and agreement. Using AirDrop and Apple Vision Pro SharePlay as a baseline, we show that MR encounter complexity far exceeds file transfer, yet must feel equally effortless. We present TouchPort, an embodied sharing protocol that collapses this multi-stage sequence into a single gesture: a handshake and pull that simultaneously signals intent, negotiates consent, and initiates a temporary shared encounter layer between otherwise separate mixed realities. Through three implied scenarios, we demonstrate the protocol's expressive range in the transition from isolated to spontaneously shared realities. We discuss how embodied gestures can address the consent problem in ubiquitous MR and examine the ethical tensions of encounter protocols for MR futures.
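The staged sequence above can be sketched as a minimal state machine. This is an illustrative model, not the paper's implementation: the stage names follow the abstract, while the `Encounter` class and its `handshake_and_pull` method are hypothetical, showing how a single gesture might stand in for every remaining stage's intent, consent, and initiation.

```python
from enum import Enum, auto


class Stage(Enum):
    # The staged sequence named in the abstract, in order.
    DISCOVER = auto()
    CONSENT = auto()
    CONFIRM = auto()
    ALLOW = auto()
    SPATIAL_COLOCATION = auto()
    SYNC_OBJECTS = auto()
    PERMISSION_MANAGEMENT = auto()


SEQUENCE = list(Stage)  # Enum members iterate in definition order.


class Encounter:
    """Tracks progress through the staged co-located MR sharing sequence."""

    def __init__(self) -> None:
        self.completed: list[Stage] = []

    def advance(self, stage: Stage) -> None:
        # Baseline flow: each stage must be reached in order, and each
        # demands separate user understanding and agreement.
        expected = SEQUENCE[len(self.completed)]
        if stage is not expected:
            raise ValueError(f"expected {expected.name}, got {stage.name}")
        self.completed.append(stage)

    def handshake_and_pull(self) -> None:
        # TouchPort's collapse: one embodied gesture completes every
        # remaining stage at once, signalling intent and consent together.
        self.completed.extend(SEQUENCE[len(self.completed):])

    @property
    def shared(self) -> bool:
        # The temporary shared encounter layer exists only once all
        # stages have been completed.
        return self.completed == SEQUENCE
```

Contrasting the two paths makes the protocol argument concrete: the baseline requires seven explicit `advance` calls, while the TouchPort gesture reaches the same end state in one act.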