Concerns about the risks and harms posed by artificial intelligence (AI) have spurred significant research into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations. In this work, we test an approach to addressing this challenge by creating transparency advocates: motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency. Over several years, we created an open-source educational workshop on algorithmic transparency and advocacy. We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change. In the weeks following the workshop, participants applied what they learned, for example by speaking up for algorithmic transparency at an organization-wide AI strategy meeting. We also make two broader observations. First, advocacy is not a monolith and can be broken down into different levels. Second, individuals' willingness to advocate is affected by their professional field: for example, news and media professionals may be more likely to advocate for algorithmic transparency than those working at technology start-ups.