Article 5 of the European Union's Artificial Intelligence Act is intended to regulate AI use in order to prevent potentially harmful consequences. Nevertheless, applying this legislation in practice is likely to prove challenging because of its ambiguous terminology and because it fails to specify which manipulation techniques AI may employ to cause significant harm. This paper aims to bridge this gap by defining key terms and demonstrating how AI may deploy such techniques, drawing on insights from psychology and behavioural economics. First, we provide definitions of the terms "subliminal techniques", "manipulative techniques" and "deceptive techniques". Secondly, drawing on the cognitive psychology and behavioural economics literature, we identify three subliminal and five manipulative techniques and exemplify how AI might implement them to manipulate users in real-world scenarios. These illustrations may serve as a practical guide for stakeholders to detect cases of AI manipulation and, consequently, to devise preventive measures. Article 5 has also been criticised for offering inadequate protection. We critically assess the protection it offers and propose specific revisions to points (a) and (b) of Article 5, paragraph 1, to increase its protective effectiveness.