Scientific publications significantly impact academic-related decisions in computer science, where top-tier conferences are particularly influential. However, the effort required to produce a publication differs drastically across subfields. While existing citation-based studies compare venues within areas, cross-area comparisons remain challenging due to differing publication volumes and citation practices. To address this gap, we introduce the concept of ICLR points, defined as the average effort required to produce one publication at top-tier machine learning conferences such as ICLR, ICML, and NeurIPS. Leveraging comprehensive publication data from DBLP (2019--2023) and faculty information from CSRankings, we quantitatively measure and compare the average publication effort across 27 computer science sub-areas. Our analysis reveals significant differences in average publication effort, validating anecdotal perceptions: systems conferences generally require more effort per publication than AI conferences. We further demonstrate the utility of the ICLR points metric by evaluating the publication records of current faculty members and recent faculty candidates. Our findings highlight how this metric enables more meaningful cross-area comparisons in academic evaluation processes. Lastly, we discuss the metric's limitations and caution against its misuse, emphasizing the necessity of holistic assessment criteria beyond publication metrics alone.
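The normalization described above can be sketched as follows. This is a minimal illustration of the idea only: the area names, faculty counts, and publication counts below are invented placeholders, not the paper's DBLP/CSRankings data, and the paper's actual methodology may differ in detail.

```python
# Hypothetical sketch of the ICLR-points normalization. All numbers
# are illustrative placeholders, not figures from the paper.

def avg_effort(faculty_count, pubs_per_year):
    # Average effort: faculty-years consumed per top-venue publication.
    return faculty_count / pubs_per_year

# Illustrative inputs per area: (faculty headcount, top-venue papers/year).
areas = {
    "ml": (1000, 5000),   # ICLR/ICML/NeurIPS baseline
    "os": (300, 300),     # a systems-style area
    "pl": (200, 250),
}

# By definition, one ML-venue paper costs exactly 1 ICLR point.
ml_effort = avg_effort(*areas["ml"])

# ICLR points per paper = an area's average effort relative to the ML baseline.
iclr_points = {name: avg_effort(f, p) / ml_effort
               for name, (f, p) in areas.items()}

for name, pts in sorted(iclr_points.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {pts:.2f} ICLR points per paper")
```

With these made-up inputs, the systems-style area comes out several times more "expensive" per paper than the ML baseline, which is the kind of cross-area gap the metric is designed to expose.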