Different approaches have been adopted in addressing the challenges of Artificial Intelligence (AI), some centred on personal data and others on ethics, respectively narrowing and broadening the scope of AI regulation. This contribution aims to demonstrate that a third way is possible, starting from the acknowledgement of the role that human rights can play in regulating the impact of data-intensive systems. The focus on human rights is neither a paradigm shift nor a mere theoretical exercise. Through the analysis of more than 700 decisions and documents of the data protection authorities of six countries, we show that human rights already underpin decisions in the field of data use. Building on this empirical evidence, this work presents a methodology and a model for a Human Rights Impact Assessment (HRIA). The methodology and the related assessment model focus on AI applications, whose nature and scale require a proper contextualisation of the HRIA methodology. Moreover, the proposed model provides a more measurable approach to risk assessment, consistent with the regulatory proposals centred on risk thresholds. The proposed methodology is tested in concrete case studies to demonstrate its feasibility and effectiveness. The overall goal is to respond to the growing interest in HRIA, moving from a mere theoretical debate to a concrete and context-specific implementation in the field of data-intensive applications based on AI.