The success of Large Language Models (LLMs) has established that scaling compute, through joint increases in model capacity and dataset size, is the primary driver of performance in modern machine learning. While machine learning has long been an integral component of High Energy Physics (HEP) data analysis workflows, the compute used to train state-of-the-art HEP models remains orders of magnitude below that of industry foundation models. With scaling laws only beginning to be studied in the field, we investigate neural scaling laws for boosted jet classification using the public JetClass dataset. We derive compute-optimal scaling laws and identify an effective performance limit that can be consistently approached through increased compute. We study how data repetition, common in HEP where simulation is expensive, modifies the scaling, yielding a quantifiable gain in effective dataset size. We then study how the scaling coefficients and asymptotic performance limits vary with the choice of input features and particle multiplicity, demonstrating that increased compute reliably drives performance toward an asymptotic limit, and that more expressive, lower-level features can both raise the performance limit and improve results at fixed dataset size.
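The abstract does not state the fitted functional form; as a minimal sketch, assuming the saturating power law commonly fit in neural scaling studies, the symbols below ($L_{\infty}$, $A$, $\alpha$, and the repetition factor $g(E)$) are illustrative assumptions, not the paper's own parameterization:

% A minimal sketch, not the paper's fitted form: loss L as a saturating
% power law in training compute C, with L_inf the asymptotic performance
% limit and A, alpha hypothetical fit constants.
\begin{equation}
  L(C) = L_{\infty} + A\,C^{-\alpha}
\end{equation}
% Data repetition over E epochs could be summarized as a hypothetical
% effective dataset size D_eff, where the discount g(E) <= E captures
% the diminishing returns of repeated examples.
\begin{equation}
  D_{\mathrm{eff}} = g(E)\,D, \qquad 1 \le g(E) \le E
\end{equation}

In a form like this, increasing compute drives the term $A\,C^{-\alpha}$ toward zero, so performance approaches $L_{\infty}$, consistent with the abstract's claim that the limit is consistently approached through increased compute.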