When does a digital image resemble reality? The relevance of this question increases as the generation of synthetic images, so-called deep fakes, becomes increasingly popular. Deep fakes have gained much attention for a number of reasons, among others their potential to disrupt the political climate. To mitigate these threats, the EU AI Act imposes specific transparency obligations on generating synthetic content or manipulating existing content. However, the distinction between real and synthetic images is, even from a computer vision perspective, far from trivial. We argue that the current definition of deep fakes in the AI Act and the corresponding obligations are not sufficiently specified to tackle the challenges posed by deep fakes. By analyzing the life cycle of a digital photo, from the camera sensor to digital editing features, we find that: (1) deep fakes are ill-defined in the EU AI Act, as the definition leaves too much scope for what counts as a deep fake; (2) it is unclear how editing functions such as Google's ``Best Take'' feature can be considered an exception to the transparency obligations; (3) the exception for substantially edited images raises the questions of what constitutes substantial editing of content and whether this editing must be perceptible by a natural person. Our results demonstrate that complying with the current AI Act transparency obligations is difficult for providers and deployers. As a consequence of the unclear provisions, exceptions risk being either too broad or too narrow. We intend our analysis to foster the discussion on what constitutes a deep fake and to raise awareness of the pitfalls in the current AI Act transparency obligations.