As Smart Home Personal Assistants (SPAs) evolve into social agents, understanding user privacy requires interpersonal communication frameworks such as Privacy Boundary Theory (PBT). To ground our investigation, a three-phase preliminary study (1) identified transmission and sharing ranges as key boundary-related risk factors, (2) categorized relevant SPA functions and data types, and (3) analyzed commercial practices, revealing widespread data sharing and non-transparent safeguards. A subsequent mixed-methods study (survey, N=412; follow-up interviews with N=40 survey participants) assessed users' perceived privacy risks across data types, transmission ranges, and sharing ranges. Results demonstrate a significant, non-linear escalation in perceived risk when data crosses two critical boundaries: the `public network' (transmission) and `third parties' (sharing). This boundary effect holds robustly across data types and demographics. Furthermore, risk perception is modulated by data attributes (e.g., social relational data) and by contextual privacy calculus. Conversely, anonymization safeguards show limited efficacy, especially for third-party sharing, a finding we attribute to user distrust. These findings empirically ground PBT in the SPA context and inform the design of boundary-aware privacy protections.