In pruning, the Lottery Ticket Hypothesis posits that large networks contain sparse subnetworks, or winning tickets, that can be trained in isolation to match the performance of their dense counterparts. However, most existing approaches assume a single universal winning ticket shared across all inputs, ignoring the inherent heterogeneity of real-world data. In this work, we propose Routing the Lottery (RTL), an adaptive pruning framework that discovers multiple specialized subnetworks, called adaptive tickets, each tailored to a class, semantic cluster, or environmental condition. Across diverse datasets and tasks, RTL consistently outperforms single- and multi-model baselines in balanced accuracy and recall, while using up to 10 times fewer parameters than independent models and exhibiting semantically aligned specialization. Furthermore, we identify subnetwork collapse, a performance drop under aggressive pruning, and introduce a subnetwork similarity score that enables label-free diagnosis of oversparsification. Overall, our results recast pruning as a mechanism for aligning model structure with data heterogeneity, paving the way toward more modular and context-aware deep learning.