Smart Voice Assistants (SVAs) are deeply embedded in the lives of youth, yet the mechanisms driving privacy-protective behaviors among young users remain poorly understood. This study investigates how Canadian youth (aged 16-24) negotiate privacy with SVAs by developing and testing a structural model grounded in five key constructs: perceived privacy risks (PPR), perceived benefits (PPBf), algorithmic transparency and trust (ATT), privacy self-efficacy (PSE), and privacy-protective behaviors (PPB). A cross-sectional survey of N=469 youth was analyzed using partial least squares structural equation modeling (PLS-SEM). Results reveal that PSE is the strongest predictor of PPB, while the effect of ATT on PPB is fully mediated by PSE. This points to a critical efficacy gap: young users' confidence must first be built before they will act. The model confirms that PPBf directly discourages protective action, yet also indirectly fosters it by slightly boosting self-efficacy. These findings empirically validate and extend earlier qualitative work, quantifying how policy overload and hidden controls erode the self-efficacy necessary for protective action. This study contributes an evidence-based pathway from perception to action and translates it into design imperatives that empower young digital citizens without sacrificing the utility of SVAs.
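The full-mediation claim (ATT affects PPB only through PSE) can be illustrated with a small simulation. The sketch below is purely hypothetical: it uses synthetic data and ordinary least squares rather than the study's survey data or its PLS-SEM estimation, and the path coefficients (0.6, 0.5) are arbitrary choices for illustration. Under full mediation, the total effect of ATT on PPB is clearly nonzero, but the direct effect vanishes once the mediator PSE is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 469  # matches the study's sample size, purely for illustration

# Simulate a full-mediation structure: ATT -> PSE -> PPB,
# with no direct ATT -> PPB path (coefficients are hypothetical).
att = rng.normal(size=n)
pse = 0.6 * att + rng.normal(scale=0.8, size=n)
ppb = 0.5 * pse + rng.normal(scale=0.8, size=n)

def ols_slopes(y, *xs):
    """Return OLS slope estimates (intercept dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

total = ols_slopes(ppb, att)[0]        # total effect of ATT on PPB
direct = ols_slopes(ppb, att, pse)[0]  # direct effect, controlling for PSE

print(f"total effect:  {total:.3f}")   # clearly positive
print(f"direct effect: {direct:.3f}")  # shrinks toward zero under full mediation
```

In the simulated data the direct effect collapses once PSE enters the regression, which is the signature of full mediation the abstract reports; the actual study would establish this with bootstrapped indirect effects in a PLS-SEM framework rather than two OLS fits.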