This short essay is a reworking of the answers offered by the author at the Debate Session of the AIHUB (CSIC) and EduCaixa Summer School, organized by Marta Garcia-Matos and Lissette Lemus, and coordinated by Albert Sabater (OEIAC, UG), with the participation of Vanina Martinez-Posse (IIIA-CSIC), Eulalia Soler (Eurecat) and Pompeu Casanovas (IIIA-CSIC), on July 4th 2025. Albert Sabater posed three questions: (1) How can regulatory frameworks prioritise the protection of fundamental rights (privacy, non-discrimination, autonomy, etc.) in the development of AI, without falling into the false dichotomy between regulation and innovation? (2) Given the risks of AI (bias, mass surveillance, manipulation), what examples of regulations or policies have demonstrated that it is possible to foster responsible innovation, putting the public interest before profitability, without giving in to competitive pressure from actors such as China or the US? (3) In a scenario where the US prioritises flexibility, what mechanisms could ensure that international cooperation in AI does not become a race to the bottom in rights, but rather a global standard of accountability? The article attempts to answer these three questions and concludes with some reflections on the relevance of the answers for education and research.