As vendors adopt AI technologies, security researchers are working to uncover and fix related vulnerabilities, an important effort given that AI systems handle sensitive data and critical functions. This process relies on vendors receiving and rewarding AI vulnerability reports. To assess current practices, we analyzed the vulnerability disclosure policies of 264 AI vendors. We employed a mixed-methods approach, combining snapshot and longitudinal qualitative analysis with a comparison of policy alignment against 320 AI incidents and 260 academic articles. Our analysis reveals that 36% of AI vendors have no established policy and only 18% mention AI risks. Data access, authorization, and model extraction vulnerabilities are most consistently declared in scope, whereas jailbreaking and hallucination are most commonly declared out of scope. We identify three profiles reflecting vendors' different positions toward AI vulnerabilities: proactive clarification (n = 46), silent (n = 115), and restrictive (n = 103). Our alignment results suggest that vendors may address AI vulnerability disclosure later than academic research and real-world incidents do.