Evaluating novelty is critical yet challenging in peer review, as reviewers must assess submissions against a vast, rapidly evolving literature. This report presents \textsc{OpenNovelty}, an LLM-powered agentic system for transparent, evidence-based novelty analysis. The system operates in four phases: (1) extracting a submission's core task and contribution claims and generating retrieval queries from them; (2) retrieving relevant prior work via a semantic search engine using the extracted queries; (3) constructing a hierarchical taxonomy of work related to the core task and performing full-text comparisons against each contribution claim; and (4) synthesizing all analyses into a structured novelty report with explicit citations and evidence snippets. Unlike naive LLM-based approaches, \textsc{OpenNovelty} grounds every assessment in retrieved real papers, ensuring verifiable judgments. We have deployed the system on more than 500 ICLR 2026 submissions, with all reports publicly available on our website; preliminary analysis suggests it can surface relevant prior work, including closely related papers that authors may have overlooked. \textsc{OpenNovelty} aims to empower the research community with a scalable tool that promotes fair, consistent, and evidence-backed peer review.
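The four-phase pipeline above can be sketched in outline as follows. This is a minimal illustrative skeleton only: every function, class, and field name here is hypothetical, the retrieval and comparison steps are stubbed with toy logic, and none of it reflects the actual \textsc{OpenNovelty} implementation or API.

```python
# Hypothetical sketch of the four-phase novelty-analysis pipeline described
# in the abstract. All names are illustrative; retrieval and comparison are
# stubbed (the real system uses an LLM and a semantic search engine).
from dataclasses import dataclass


@dataclass
class NoveltyReport:
    core_task: str
    # contribution claim -> list of (prior paper, evidence snippet) pairs
    comparisons: dict


def extract_claims(submission: str) -> tuple[str, list[str]]:
    """Phase 1: extract the core task and contribution claims (stubbed:
    first line is the task, remaining lines are claims)."""
    lines = [ln.strip() for ln in submission.splitlines() if ln.strip()]
    return lines[0], lines[1:]


def retrieve_prior_work(query: str) -> list[str]:
    """Phase 2: retrieve related papers (stubbed keyword match over a
    toy corpus in place of semantic search)."""
    corpus = ["Paper A on novelty assessment", "Paper B on taxonomy induction"]
    words = query.lower().split()
    return [p for p in corpus if any(w in p.lower() for w in words)]


def compare_contributions(claims: list[str], papers: list[str]) -> dict:
    """Phase 3: contribution-level comparison against each retrieved
    paper (stubbed: attach a placeholder evidence snippet per paper)."""
    return {c: [(p, f"evidence snippet from {p}") for p in papers] for c in claims}


def synthesize_report(core_task: str, comparisons: dict) -> NoveltyReport:
    """Phase 4: assemble the structured, citation-grounded report."""
    return NoveltyReport(core_task=core_task, comparisons=comparisons)
```

A toy run would extract claims from a plain-text submission, retrieve matching corpus entries, and return a `NoveltyReport` whose `comparisons` map each claim to its grounding evidence.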