The recent surge of language models (LMs) has rapidly expanded NLP/AI research, driving an exponential rise in submissions and acceptances at major conferences. Yet this growth has been shadowed by escalating concerns over conference quality, such as plagiarism, reviewer inexperience, and collusive bidding. Existing studies, however, rely largely on qualitative accounts, for example, expert interviews and social media discussions, and thus lack longitudinal empirical evidence. To fill this gap, we conduct a ten-year empirical study (2014-2024) spanning seven leading conferences. We build a four-dimensional bibliometric framework covering conference scale, core citation statistics, impact dispersion, and cross-venue and journal influence. Notably, we further propose a metric called Quality-Quantity Elasticity (QQE), which measures the elasticity of citation growth relative to acceptance growth. We highlight two key findings. First, conference expansion does not lead to proportional growth in scholarly impact: QQE declines consistently over time across all venues. Second, ACL has not lost its crown, continuing to outperform other NLP conferences in median citations, milestone contributions, and citation coverage. This study provides the first decade-long, cross-venue empirical evidence on the evolution of major NLP/AI conferences. Our code is available at https://anonymous.4open.science/r/acl-crown-analysis-38D5.
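To make the QQE metric concrete, the following is a minimal sketch assuming the standard economic definition of elasticity: the relative (percentage) change in total citations divided by the relative change in accepted papers between two periods. The function and variable names are hypothetical illustrations, not the authors' released code.

```python
def qqe(citations_prev: float, citations_curr: float,
        accepted_prev: float, accepted_curr: float) -> float:
    """Quality-Quantity Elasticity: elasticity of citation growth
    relative to acceptance growth between two periods (sketch).

    Assumes the standard elasticity form:
        QQE = (relative citation growth) / (relative acceptance growth)
    """
    citation_growth = (citations_curr - citations_prev) / citations_prev
    acceptance_growth = (accepted_curr - accepted_prev) / accepted_prev
    if acceptance_growth == 0:
        raise ValueError("acceptance growth is zero; elasticity undefined")
    return citation_growth / acceptance_growth


# Illustrative example: acceptances grow 50% while citations grow only 20%,
# giving QQE = 0.2 / 0.5 = 0.4 < 1, i.e. impact does not keep pace with scale.
print(qqe(citations_prev=10_000, citations_curr=12_000,
          accepted_prev=400, accepted_curr=600))  # -> 0.4
```

Under this reading, QQE = 1 marks proportional growth, while QQE < 1 indicates that citation impact lags behind venue expansion, consistent with the declining trend reported above.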