Under the Digital Services Act (DSA), the algorithms of online platforms must comply with specific obligations concerning algorithmic transparency, user protection, and privacy. To verify compliance with these requirements, the DSA mandates that platforms undergo independent audits. Yet little is known about current auditing practices and their effectiveness in ensuring such compliance. To this end, we bridge regulatory and technical perspectives by critically examining selected audit reports across three critical algorithm-related provisions: restrictions on profiling minors, transparency in recommender systems, and limitations on targeted advertising using sensitive data. Our analysis reveals significant inconsistencies in methodologies and a lack of technical depth in the evaluation of AI-powered systems. To enhance the depth, scale, and independence of compliance assessments, we propose employing algorithmic auditing -- a process of behavioural assessment of AI algorithms that simulates user behaviour, observes algorithm responses, and analyses the responses for the audited phenomena.
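The behavioural auditing loop described above can be illustrated with a minimal sketch. Everything here is a hypothetical assumption for exposition: `mock_recommender` stands in for a platform's ad-delivery system (no real API is used), and the audit compares paired sock-puppet personas that differ only in a sensitive attribute, flagging trials where the attribute changes the ads served.

```python
import random

def mock_recommender(profile):
    """Toy stand-in for a platform's ad system (illustrative assumption,
    not any real platform's API). Returns ad categories for a profile."""
    ads = ["sports", "news", "gadgets"]
    # Deliberately injected non-compliant behaviour for the demo:
    # the stub targets an extra ad using a sensitive attribute.
    if profile.get("health_condition"):
        ads.append("medication")
    return ads

def audit_sensitive_targeting(recommender, n_trials=100, seed=0):
    """Sock-puppet audit: simulate paired personas differing only in a
    sensitive attribute, observe responses, and measure how often the
    attribute alone changes the ads served."""
    rng = random.Random(seed)
    flagged = 0
    for _ in range(n_trials):
        base = {"age": rng.randint(18, 65), "health_condition": False}
        variant = dict(base, health_condition=True)
        # Ads shown to the sensitive persona but not to the control.
        delta = set(recommender(variant)) - set(recommender(base))
        if delta:
            flagged += 1
    return flagged / n_trials

rate = audit_sensitive_targeting(mock_recommender)
print(f"trials where the sensitive attribute changed the ads: {rate:.0%}")
```

In practice the recommender would be queried through automated accounts or instrumented clients rather than a local function, but the structure of the assessment (simulate, observe, analyse) is the same.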