This paper proposes a reasoning framework for privacy properties of systems and their environments that can capture knowledge leaks on different logical levels of a system, answering the question: which entity can learn what? By knowledge we mean any kind of data, metadata, or interpretation thereof that might be relevant. To this end, we present a modeling framework that requires developers to explicitly describe which knowledge is available at which entity, which knowledge flows between entities, and which knowledge can be inferred from other knowledge. In addition, privacy requirements are specified as rules describing knowledge that is forbidden for particular entities. Our modeling approach is incremental: it starts from an abstract view of the system and adds detail through well-defined transformations. This work is intended to complement existing approaches and takes steps toward more formal foundations for privacy-oriented analyses while keeping them as accessible as possible. The framework is designed to be extensible through schemata and vocabularies, enabling compatibility with external requirements and standards.
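The core idea (entities hold knowledge, knowledge flows between entities, inference rules derive new knowledge, and privacy requirements forbid certain knowledge at certain entities) can be sketched as a small fixpoint computation. All names and rules below are illustrative assumptions, not the paper's actual formalism:

```python
# Hypothetical sketch: propagate knowledge along declared flows, apply
# inference rules until nothing changes, then check forbidden-knowledge rules.

entities = {
    "user":     {"location", "name"},
    "provider": {"name"},
}

# Directed flows: (source entity, target entity, knowledge item)
flows = [("user", "provider", "location")]

# Inference rules: a set of premises lets any entity derive new knowledge
inferences = {frozenset({"location"}): {"home_address"}}

# Privacy requirement: knowledge forbidden for a given entity
forbidden = {"provider": {"home_address"}}

def close(known):
    """Propagate flows and apply inference rules until a fixpoint."""
    changed = True
    while changed:
        changed = False
        for src, dst, item in flows:
            if item in known[src] and item not in known[dst]:
                known[dst].add(item)
                changed = True
        for entity, items in known.items():
            for premises, derived in inferences.items():
                if premises <= items and not derived <= items:
                    items |= derived
                    changed = True
    return known

known = close({e: set(k) for e, k in entities.items()})
violations = [(e, k) for e, bad in forbidden.items() for k in known[e] & bad]
print(violations)  # the provider infers home_address from the leaked location
```

Even this toy version illustrates the abstract's point: the violation is not a direct data flow but an inference, which is why the model must track derivable knowledge, not just transmitted data.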