In this work, we significantly enhance masked particle modeling (MPM), a self-supervised learning scheme for constructing highly expressive representations of unordered sets, relevant to developing foundation models for high-energy physics. In MPM, a model is trained to recover the missing elements of a set, a learning objective that requires no labels and can be applied directly to experimental data. We achieve substantial performance improvements over previous work on MPM by addressing inefficiencies in the implementation and incorporating a more powerful decoder. We compare several pre-training tasks and introduce new reconstruction methods that utilize conditional generative models without data tokenization or discretization. We show that these new methods outperform the tokenized learning objective from the original MPM on a new test bed for foundation models for jets, which includes a wide variety of downstream tasks relevant to jet physics, such as classification, secondary vertex finding, and track identification.
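The label-free masking objective described above can be illustrated with a minimal toy sketch (all names, shapes, and the stand-in "model" are illustrative assumptions, not the paper's implementation): a jet is treated as an unordered set of particle feature vectors, a random subset is masked out, and the reconstruction loss is computed only on the masked elements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy jet: a set of 8 particles, each with 3 features
# (e.g. pt, eta, phi); values here are random placeholders.
jet = rng.normal(size=(8, 3))

# Mask a fixed random subset of particles (the MPM-style corruption).
mask = np.zeros(8, dtype=bool)
mask[rng.choice(8, size=3, replace=False)] = True
corrupted = jet.copy()
corrupted[mask] = 0.0  # replace masked particles with a placeholder value

# Stand-in for a model prediction; in practice a permutation-equivariant
# encoder/decoder would map `corrupted` back to particle features.
prediction = corrupted + rng.normal(scale=0.1, size=jet.shape)

# Reconstruction loss over the masked elements only -- no labels needed.
loss = float(np.mean((prediction[mask] - jet[mask]) ** 2))
print(loss)
```

The key point the sketch captures is that supervision comes entirely from the data itself: unmasked particles pass through untouched, and only the hidden elements contribute to the loss.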