Competitive access to modern observatories has intensified as proposal volumes outpace available telescope time, making timely, consistent, and transparent peer review a critical bottleneck for the advancement of astronomy. Automating parts of this process is therefore both scientifically significant and operationally necessary to ensure fair allocation and reproducible decisions at scale. We present AstroReview, an open-source, agent-based framework that automates proposal review in three stages: (i) novelty and scientific merit, (ii) feasibility and expected yield, and (iii) meta-review and reliability verification. Task isolation and explicit reasoning traces curb hallucinations and improve transparency. Without any domain-specific fine-tuning, AstroReview, applied in our experiments only to the final stage, correctly identifies genuinely accepted proposals with 87% accuracy. The AstroReview in Action module replicates the review-and-refinement loop; with its integrated Proposal Authoring Agent, the acceptance rate of revised drafts increases by 66% after two iterations, showing that iterative feedback combined with automated meta-review and reliability verification yields measurable quality gains. Together, these results point to a practical path toward scalable, auditable, and higher-throughput proposal review for resource-limited facilities.