Explainability and transparency of AI systems are undeniably important and have motivated numerous research studies and tools. However, existing work falls short of accounting for the diverse stakeholders of the AI supply chain, who may differ in their needs and in which facets of explainability and transparency they prioritize. In this paper, we argue for the need to revisit these vital constructs in the context of LLMs. To this end, we report on a qualitative study with 71 different stakeholders, exploring prevalent perceptions and needs around these concepts. This study not only confirms the importance of exploring the ``who'' in XAI and transparency for LLMs, but also reflects on best practices for doing so while surfacing often-forgotten stakeholders and their information needs. Our insights suggest that researchers and practitioners should simultaneously clarify the ``who'' in considerations of explainability and transparency, the ``what'' in the information needs, and the ``why'' behind those needs to ensure responsible design and development across the LLM supply chain.