Coreset selection compresses large datasets into compact, representative subsets, reducing the energy and computational burden of training deep neural networks. Existing methods fall into two camps: (i) DNN-based methods, which are tied to model-specific parameters and introduce architectural bias; and (ii) DNN-free methods, which rely on heuristics that lack theoretical guarantees. Neither approach explicitly constrains distributional equivalence, largely because continuous distribution matching is widely considered inapplicable to discrete sample selection. Moreover, prevalent metrics (e.g., MSE, KL, CE, MMD) cannot accurately capture higher-order moment discrepancies, leading to suboptimal coresets. In this work, we propose FAST, the first DNN-free distribution-matching coreset selection framework. FAST formulates coreset selection as a graph-constrained optimization problem grounded in spectral graph theory and employs the Characteristic Function Distance (CFD) to capture full distributional information in the frequency domain. We further discover that the naive CFD suffers from a "vanishing phase gradient" issue in medium- and high-frequency regions; to address this, we introduce an Attenuated Phase-Decoupled CFD. For better convergence, we additionally design a Progressive Discrepancy-Aware Sampling strategy that schedules frequency selection from low to high, preserving global structure before refining local details and enabling accurate matching with fewer frequencies while avoiding overfitting. Extensive experiments demonstrate that FAST significantly outperforms state-of-the-art coreset selection methods across all evaluated benchmarks, achieving an average accuracy gain of 9.12%. Compared to baseline coreset methods, it reduces power consumption by 96.57% and achieves a 2.2x average speedup, underscoring its high performance and energy efficiency.
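To make the frequency-domain objective concrete, the following is a minimal NumPy sketch of the quantities the abstract names: the empirical characteristic function, an amplitude/phase decomposition of the squared CF difference, and a low-to-high frequency schedule. Everything below is our own illustration under stated assumptions, not the paper's implementation; the attenuation exponent `alpha`, the `eps` stabilizer, and the bandwidth schedule `(0.5, 1.0, 2.0)` are hypothetical placeholders. With `alpha = 1` the decomposition reproduces the plain squared CFD exactly; `alpha < 1` damps the amplitude weighting on the phase term so phase information retains a usable gradient at medium and high frequencies, where |phi| decays.

```python
import numpy as np

def empirical_cf(x, freqs):
    """Empirical characteristic function phi(w) = (1/n) * sum_j exp(i * w^T x_j).

    x:     (n, d) array of samples.
    freqs: (m, d) array of frequency vectors w.
    Returns a complex (m,) array, one CF value per frequency.
    """
    proj = freqs @ x.T               # (m, n) inner products w^T x_j
    return np.exp(1j * proj).mean(axis=1)

def attenuated_phase_decoupled_cfd(x, y, freqs, alpha=1.0, eps=1e-8):
    """Illustrative amplitude/phase-decoupled CFD (hypothetical parameterization).

    Uses the identity |phi_x - phi_y|^2
        = (|phi_x| - |phi_y|)^2 + |phi_x||phi_y| * |e^{i th_x} - e^{i th_y}|^2,
    then replaces the phase weight |phi_x||phi_y| with its attenuated power
    (|phi_x||phi_y|)^alpha. alpha=1 recovers the plain squared CFD.
    """
    cf_x, cf_y = empirical_cf(x, freqs), empirical_cf(y, freqs)
    amp_x, amp_y = np.abs(cf_x), np.abs(cf_y)
    amp_term = (amp_x - amp_y) ** 2
    # Pure phase discrepancy from unit-modulus (normalized) CFs.
    phase_term = np.abs(cf_x / (amp_x + eps) - cf_y / (amp_y + eps)) ** 2
    atten = (amp_x * amp_y) ** alpha  # alpha < 1 keeps high-frequency phase alive
    return np.mean(amp_term + atten * phase_term)

# Toy usage: compare a dataset against a uniformly sampled candidate coreset,
# sweeping frequency bandwidth from low to high (an illustrative stand-in for
# the progressive schedule described in the abstract).
rng = np.random.default_rng(0)
d = 2
full = rng.normal(size=(5000, d))
core = full[rng.choice(5000, size=200, replace=False)]
for sigma in (0.5, 1.0, 2.0):
    freqs = rng.normal(scale=sigma, size=(256, d))
    print(sigma, attenuated_phase_decoupled_cfd(full, core, freqs, alpha=0.5))
```

Evaluating the discrepancy at small `sigma` first checks agreement of coarse, global structure (low frequencies) before the larger-`sigma` stages probe fine local detail, which mirrors the low-to-high scheduling rationale stated above.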