Crises in peer review capacity, study replication, and AI-fabricated science have intensified interest in automated tools for assessing scientific research. However, the scientific community has a history of decontextualizing and repurposing credibility markers in inapt ways. I caution that AI science evaluation tools are particularly prone to this kind of inference by false ascent due to contestation about the purposes to which they should be put, their portability across purposes, and technical demands that prioritize dataset size over epistemic fit. To counter this, I argue for a social, pragmatist epistemology and a newly articulated norm of Critically Engaged Pragmatism that enjoins scientific communities to vigorously scrutinize the purposes and purpose-specific reliability of AI science evaluation tools. Under this framework, AI science evaluation tools are not objective arbiters of scientific credibility, but the object of the kinds of critical discursive practices that ground the credibility of scientific communities.