There has been increasing research interest in AI/ML for social impact, and correspondingly more publication venues have refined review criteria for practice-driven AI/ML research. However, these review guidelines tend to most concretely recognize projects that simultaneously achieve deployment and novel ML methodological innovation. We argue that this introduces incentives for researchers that undermine the sustainability of a broader research ecosystem for social impact, which benefits from projects that make contributions on a single front (applied or methodological) and may better meet project partner needs. Our position is that researchers and reviewers in machine learning for social impact must simultaneously adopt: 1) a more expansive conception of social impacts beyond deployment and 2) more rigorous evaluations of the impact of deployed systems.