Neural networks have emerged as a powerful paradigm for tasks in high energy physics, yet their opaque training process renders them black boxes. In contrast, the traditional cut flow method offers simplicity and interpretability but demands human effort to identify optimal boundaries. To merge the strengths of both approaches, we propose the Learnable Cut Flow (LCF), a neural network that transforms traditional cut selection into a fully differentiable, data-driven process. LCF implements two cut strategies, parallel, where observable distributions are treated independently, and sequential, where prior cuts shape subsequent ones, to flexibly determine optimal boundaries. Building on this, we introduce the Learnable Importance, a metric that quantifies each feature's importance and adjusts its contribution to the loss accordingly, offering model-driven insight in contrast to ad hoc metrics. To ensure differentiability, a modified loss function replaces hard cuts with mask operations, preserving the data shape throughout training. LCF is tested on six varied mock datasets and a realistic diboson vs. QCD dataset. Results demonstrate that LCF (1) accurately learns cut boundaries across typical feature distributions in both parallel and sequential strategies, (2) assigns higher importance to discriminative features with minimal overlap, (3) handles redundant or correlated features robustly, and (4) performs effectively in real-world scenarios. On the diboson dataset, LCF initially underperforms boosted decision trees and multilayer perceptrons when using all observables. However, pruning less critical features, guided by the learned importance, boosts its performance to match or exceed these baselines. LCF bridges the gap between the traditional cut flow method and modern black-box neural networks, delivering actionable insights into the training process and feature importance.
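The core differentiability trick, replacing a hard cut with a mask operation, can be illustrated with a minimal sketch. This is an assumption about one plausible realization (a sigmoid surrogate), not the paper's actual implementation; the names `soft_cut_mask`, `boundary`, `steepness`, and `direction` are illustrative only.

```python
import numpy as np

def soft_cut_mask(x, boundary, steepness=10.0, direction=1.0):
    """Differentiable surrogate for the hard cut x > boundary.

    Instead of discarding events (which would change the data shape and
    block gradients), each event receives a weight in (0, 1) given by a
    sigmoid of its signed distance to the boundary. Gradients therefore
    flow to `boundary`, allowing it to be learned.
    direction=+1.0 keeps events above the boundary; -1.0 keeps those below.
    """
    return 1.0 / (1.0 + np.exp(-steepness * direction * (x - boundary)))

# Events far below the boundary get weights near 0, events far above near 1,
# and the transition sharpens as `steepness` grows toward a hard cut.
x = np.array([-2.0, 0.0, 2.0])
mask = soft_cut_mask(x, boundary=0.0)
```

In this sketch, sequential cuts would multiply such masks together, so each event's final weight reflects every boundary it must pass while the array length never changes.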