AI is increasingly being used in the public sector, including public security. In this context, AI-powered remote biometric identification (RBI) is a much-discussed technology. RBI systems are used to identify criminal activity in public spaces, but are criticised for inheriting biases and violating fundamental human rights. It is therefore important to ensure that such systems are developed in the public interest, which means that any technology deployed for public use needs to be scrutinised. While there is a consensus among business leaders, policymakers and scientists that AI must be developed in an ethical and trustworthy manner, scholars have argued that ethical guidelines do not guarantee ethical AI, but rather prevent stronger regulation of AI. As a possible counterweight, public opinion can exert a decisive influence on policymakers in establishing the boundaries and conditions under which AI systems should be used -- if at all. However, we know little about the conditions that give rise to regulatory demand for AI systems. In this study, we focus on the role of trust in AI as well as trust in law enforcement as potential factors that may lead to demands for regulation of AI technology. In addition, we explore the mediating effects of discrimination perceptions regarding RBI. We test these effects across four RBI use cases that vary in the temporal mode of analysis (real-time vs. post hoc) and the purpose of use (prosecution of criminals vs. safeguarding public events) in a survey among German citizens. We find that German citizens do not differentiate between the different modes of application in terms of their demand for RBI regulation. Furthermore, we show that perceptions of discrimination lead to a demand for stronger regulation, whereas trust in AI and trust in law enforcement have the opposite effect on the demand for a ban on RBI systems.
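The abstract refers to the mediating effect of discrimination perceptions between trust and regulatory demand. As an illustrative aside only, the sketch below shows how such a mediation structure could be probed with a simple regression-based (product-of-coefficients) approach; all variable names, scales, and data are hypothetical and are not taken from the study's actual survey or model.

```python
# Illustrative sketch only: regression-based mediation check with made-up
# variables; the study's actual measures and model are not specified here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500

# Hypothetical survey-style data on 1-7 scales: trust in AI, perceived
# discrimination by RBI, and demand for RBI regulation.
trust_ai = rng.integers(1, 8, n)
discrimination = np.clip(8 - trust_ai + rng.normal(0, 1.5, n), 1, 7)
regulation_demand = np.clip(
    2 + 0.6 * discrimination - 0.2 * trust_ai + rng.normal(0, 1, n), 1, 7
)

df = pd.DataFrame({
    "trust_ai": trust_ai,
    "discrimination": discrimination,
    "regulation_demand": regulation_demand,
})

# Path a: predictor -> mediator
a = smf.ols("discrimination ~ trust_ai", df).fit().params["trust_ai"]

# Paths b and c': mediator and predictor -> outcome
outcome_model = smf.ols("regulation_demand ~ trust_ai + discrimination", df).fit()
b = outcome_model.params["discrimination"]
c_prime = outcome_model.params["trust_ai"]

# Indirect (mediated) effect is the product a*b; c' is the direct effect.
print(f"indirect effect (a*b): {a * b:.3f}, direct effect (c'): {c_prime:.3f}")
```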