Two goals, improving the replicability and the accountability of Machine Learning research, have attracted considerable attention from the AI ethics and Machine Learning communities respectively. Although the two goals share the measure of improving transparency, they are discussed in different registers: replicability registers with scientific reasoning, whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap (the difficulty of holding Machine Learning scientists accountable for Machine Learning harms because they are far from sites of application), this paper posits that reconceptualizing replicability can help bridge the gap. Through a shift from model performance replicability to claim replicability, Machine Learning scientists can be held accountable for producing non-replicable claims that are prone to eliciting harm through misuse and misinterpretation. This paper makes the following contributions. First, I define and distinguish two forms of replicability for Machine Learning research that can aid constructive conversations around replicability. Second, I formulate an argument for claim replicability's advantage over model performance replicability in justifying the assignment of accountability to Machine Learning scientists for producing non-replicable claims, and I show how it enacts a sense of responsibility that is actionable. Finally, I characterize the implementation of claim replicability as more of a social project than a technical one by discussing its competing epistemological principles and its practical implications for Circulating Reference, Interpretative Labor, and research communication.