While Explainable AI (XAI) aims to make AI understandable and useful to humans, it has been criticised for relying too heavily on formalism and solutionism, focusing more on mathematical soundness than on user needs. Inspired by design thinking, we propose an alternative to this bottom-up approach: the XAI research community should adopt a top-down, user-focused perspective to ensure the user relevance of its work. We illustrate this with a relatively young subfield of XAI, Training Data Attribution (TDA). With the surge in TDA research and growing competition, the field risks repeating the same patterns of solutionism. We conducted a needfinding study with a diverse group of AI practitioners to identify potential user needs related to TDA. Through interviews (N=10) and a systematic survey (N=31), we uncovered new TDA tasks that are currently largely overlooked. We invite the TDA and XAI communities to consider these novel tasks and to improve the user relevance of their research outcomes.