Smart voice assistants (SVAs) are embedded in the daily lives of youth, yet their privacy controls often remain opaque and difficult to manage. Through five semi-structured focus groups (N=26) with young Canadians (ages 16-24), we investigate how perceived privacy risks (PPR) and benefits (PPBf) intersect with algorithmic transparency and trust (ATT) and privacy self-efficacy (PSE) to shape privacy-protective behaviors (PPB). Our analysis reveals that policy overload, fragmented settings, and unclear data retention undermine self-efficacy and discourage protective actions. Conversely, simple transparency cues were associated with greater confidence without diminishing the utility of hands-free tasks and entertainment. We synthesize these findings into a qualitative model in which transparency friction erodes PSE, which in turn weakens PPB. From this model, we derive actionable design guidance for SVAs, including a unified privacy hub, plain-language "data nutrition" labels, clear retention defaults, and device-conditional micro-tutorials. This work foregrounds youth perspectives and offers a path for SVA governance and design that empowers young digital citizens while preserving convenience.