Scientific publications significantly impact academic decisions in computer science, where top-tier conferences are particularly influential. However, the effort required to produce a publication differs drastically across subfields. While existing citation-based studies compare venues within an area, cross-area comparisons remain challenging due to differing publication volumes and citation practices. To address this gap, we introduce the concept of ICLR points, defined as the average effort required to produce one publication at top-tier machine learning conferences such as ICLR, ICML, and NeurIPS. Leveraging comprehensive publication data from DBLP (2019--2023) and faculty information from CSRankings, we quantitatively measure and compare the average publication effort across 27 computer science sub-areas. Our analysis reveals significant differences in average publication effort, validating anecdotal perceptions: systems conferences generally require more effort per publication than AI conferences. We further demonstrate the utility of the ICLR points metric by evaluating the publication records of universities, current faculty, and recent faculty candidates. Our findings show that this metric enables more meaningful cross-area comparisons in academic evaluation processes. Lastly, we discuss the metric's limitations and caution against its misuse, emphasizing the necessity of holistic assessment criteria beyond publication metrics alone.
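To make the definition concrete, one plausible formalization is sketched below; it is our own reading, not necessarily the paper's exact formula, and the symbols $F_A$, $P_A$, $T$, $\mathrm{effort}$, and $\mathrm{ICLRpoints}$ are notation we introduce here, with effort proxied by faculty-years.

% Hypothetical formalization (our notation; effort proxied by faculty-years):
%   F_A : number of CSRankings faculty active in area A
%   P_A : number of publications at area A's top venues over 2019--2023 (DBLP)
%   T   : length of the measurement window in years (here T = 5)
\[
  \mathrm{effort}(A) \;=\; \frac{F_A \cdot T}{P_A},
  \qquad
  \mathrm{ICLRpoints}(A) \;=\; \frac{\mathrm{effort}(A)}{\mathrm{effort}(\mathrm{ML})},
\]
% where effort(ML) is computed over ICLR, ICML, and NeurIPS combined, so that
% one publication in area A counts as ICLRpoints(A) top-tier ML publications.

Under this reading, an area whose faculty collectively produce fewer papers per faculty-year than the ML baseline receives more than one ICLR point per publication, which matches the paper's finding that systems papers generally cost more effort than AI papers.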