Deep models in industrial applications, such as deep recommendation systems, rely on thousands of features for accurate predictions. While new features are continually introduced to capture evolving user behavior, outdated or redundant features often remain, significantly increasing storage and computational costs. To address this issue, feature selection methods are widely adopted to identify and remove less important features. However, existing approaches face two major challenges: (1) they often require complex hyperparameter (Hp) tuning, making them difficult to employ in practice, and (2) they fail to produce well-separated feature importance scores, which complicates straightforward feature removal. Moreover, the impact of removing unimportant features can only be evaluated by retraining the model, a time-consuming and resource-intensive process that severely hinders efficient feature selection. To address these challenges, we propose ShuffleGate, a novel feature selection approach. It shuffles all feature values across instances simultaneously and uses a gating mechanism that lets the model dynamically learn the weights for combining the original and shuffled inputs. Notably, it generates well-separated feature importance scores and estimates post-removal performance without retraining the model, while introducing only a single Hp. Experiments on four public datasets show that our approach outperforms state-of-the-art methods in feature selection for model retraining. Moreover, it has been successfully integrated into the daily iteration of Bilibili's search models across various scenarios, where it significantly reduces feature set size (up to 60%+) and computational resource usage (up to 20%+) while maintaining comparable performance.
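To make the mechanism concrete, below is a minimal PyTorch sketch of the shuffle-and-gate idea as the abstract describes it. The class, its gate parameterization (one sigmoid gate per feature), and the `penalty` regularizer are illustrative assumptions rather than the paper's exact formulation; in a real recommendation model the gate would typically sit on embedded features rather than raw inputs.

```python
import torch
import torch.nn as nn

class ShuffleGate(nn.Module):
    """Illustrative shuffle-and-gate layer (assumed form, not the paper's exact design).

    During training, each feature column is independently shuffled across the
    batch, breaking its association with the labels while preserving its
    marginal distribution. A learnable sigmoid gate g mixes the original and
    shuffled values as g * x + (1 - g) * shuffled; gates pushed toward zero
    mark features the model can do without.
    """

    def __init__(self, num_features: int):
        super().__init__()
        # One learnable gate logit per feature; sigmoid keeps gates in (0, 1).
        self.gate_logits = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch_size, num_features) dense feature matrix.
        if not self.training:
            return x  # assumption: the gate is bypassed at inference
        # Independently permute each feature column across the batch.
        idx = torch.argsort(torch.rand_like(x), dim=0)
        shuffled = torch.gather(x, 0, idx)
        g = torch.sigmoid(self.gate_logits)
        return g * x + (1.0 - g) * shuffled

    def penalty(self, weight: float) -> torch.Tensor:
        # Regularizer pushing gates toward zero; `weight` plays the role of
        # the single Hp mentioned in the abstract (assumed form).
        return weight * torch.sigmoid(self.gate_logits).sum()

    def importance(self) -> torch.Tensor:
        # Learned gate values serve as feature importance scores.
        return torch.sigmoid(self.gate_logits).detach()
```

In this sketch, training would minimize the task loss plus `penalty(weight)`, and features whose gates collapse toward zero become candidates for removal.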