Most privacy regulations function as a passive defensive shield that users must wield themselves. Users are incessantly asked to "opt in" or "opt out" of data collection, forced to make defensive decisions whose consequences are increasingly difficult to predict. Viewed through the Johari Window, a psychological framework of self-awareness based on what is known and unknown to self and others, current policies require users to manage the Open Self and shield the Hidden Self through notice and consent. However, as organizations increasingly use AI to make inferences, the rapid expansion of the Blind Self, attributes known to algorithms but unknown to the user, emerges as a critical challenge. We illustrate how current regulations fall short: by focusing on data collection rather than inference, they leave this blind spot unguarded. Building on the theory of Contextual Integrity, we propose a paradigm shift from defensive privacy management to proactive privacy advocacy. We argue for personal advocacy agents capable of operationalizing social norms to harness the power of AI inference. By illuminating the hidden inferences that users can strategically leverage or suppress, these agents not only restrain the growth of the Blind Self but also mine it for value. By transforming the Unknown Self into a personal asset for users, we can foster a flow of personal information that is equitable, transparent, and individually beneficial in the age of AI.