Privacy policies are expected to inform data subjects about their data protection rights and should explain the data controller's data management practices. Privacy policies only fulfill their purpose if they are correctly interpreted, understood, and trusted by the data subject. This implies that a privacy policy is written in a fair way, e.g., that it does not use polarizing terms, require a certain level of education, or assume a particular social background. We outline our approach to assessing fairness in privacy policies. Drawing on fundamental legal sources and fairness research, we identify how the dimensions informational fairness, representational fairness, and ethics/morality relate to privacy policies. We propose options to automatically assess policies along these fairness dimensions, based on text statistics, linguistic methods, and artificial intelligence. We conduct initial experiments with German privacy policies to provide evidence that our approach is applicable. Our experiments indicate that there are issues in all three dimensions of fairness. This is important, as future privacy policies may be used in a corpus for legal artificial intelligence models.
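As a minimal illustration of the text-statistics option mentioned above, the following sketch computes a German readability score for a privacy policy. It uses Amstad's German adaptation of the Flesch Reading Ease formula (FRE_de = 180 − ASL − 58.5 · ASW) with a crude vowel-group heuristic for syllable counting; the function name and the heuristic are our own assumptions for illustration, not the paper's actual pipeline.

```python
import re

def german_flesch_reading_ease(text: str) -> float:
    """Amstad's German adaptation of the Flesch Reading Ease score.

    FRE_de = 180 - ASL - 58.5 * ASW
    where ASL = average sentence length (words per sentence)
    and   ASW = average number of syllables per word.
    Higher scores indicate easier text.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-zÄÖÜäöüß]+", text)
    if not sentences or not words:
        raise ValueError("text must contain at least one sentence and one word")

    def syllables(word: str) -> int:
        # Approximate syllables by counting vowel groups (incl. umlauts).
        # This is a naive heuristic, not a linguistically exact count.
        return max(1, len(re.findall(r"[aeiouyäöü]+", word.lower())))

    asl = len(words) / len(sentences)                          # words per sentence
    asw = sum(syllables(w) for w in words) / len(words)        # syllables per word
    return 180 - asl - 58.5 * asw
```

A very low score (roughly below 30) would suggest that a policy demands an academic reading level, which speaks to the informational fairness dimension discussed above.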