Face-to-face interactions between police officers and the public affect both individual well-being and democratic legitimacy. Many government-public interactions are captured on video, including interactions between police officers and drivers recorded on body-worn cameras (BWCs). New advances in AI technology enable these interactions to be analyzed at scale, opening promising avenues for improving government transparency and accountability. However, for AI to serve democratic governance effectively, models must be designed to include the preferences and perspectives of the governed. This article proposes a community-informed approach to developing multi-perspective AI tools for government accountability. We illustrate our approach by describing the research project through which the approach was inductively developed: an effort to build AI tools to analyze BWC footage of traffic stops conducted by the Los Angeles Police Department. We focus on the role of social scientists as members of multidisciplinary teams responsible for integrating the perspectives of diverse stakeholders into the development of AI tools in the domain of police -- and government -- accountability.