The rapid development of generative artificial intelligence (AI) and large language models (LLMs), together with the availability of services that make them accessible, has led the general public to begin incorporating them into everyday life. The extended reality (XR) community has likewise sought to integrate LLMs, particularly in the form of conversational agents, to enhance user experience and task efficiency. When interacting with such conversational agents, users may easily disclose sensitive information due to the naturalistic flow of the conversation, and combining such conversational data with fine-grained sensor data may give rise to novel privacy issues. Addressing these issues requires a user-centric understanding of technology acceptance and concerns. To this end, we conducted a large-scale crowdsourcing study with 1,036 participants, examining user decision-making regarding LLM-powered conversational agents in XR across three factors: XR setting type, speech interaction type, and data processing location. We found that while users generally accept these technologies, they express concerns related to security, privacy, social implications, and trust. Our results suggest that familiarity plays a crucial role: daily generative AI use is associated with greater acceptance, whereas prior ownership of XR devices is linked to lower acceptance, possibly because of existing familiarity with the settings. We also found that men report higher acceptance and fewer concerns than women. Regarding data-type sensitivity, location data elicited the greatest concern, while body temperature and virtual object states were considered least sensitive. Overall, our study highlights the importance of practitioners effectively communicating their protective measures to users, who may otherwise remain distrustful. We conclude with implications and recommendations for LLM-powered XR.


