We present PAODING, a toolkit that debloats pretrained neural network models through the lens of data-free pruning. To preserve model fidelity, PAODING adopts an iterative process that dynamically measures the effect of deleting each neuron, identifying candidates with the least impact on the output layer. Our evaluation shows that PAODING significantly reduces model size, generalizes across different datasets and models, and preserves model fidelity in terms of both test accuracy and adversarial robustness. PAODING is publicly available on PyPI via https://pypi.org/project/paoding-dl.
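To illustrate the general idea of iterative data-free pruning, the sketch below scores each hidden neuron without any input data, using only the magnitudes of its incoming and outgoing weights as a proxy for its influence on downstream layers, and repeatedly removes the lowest-scoring neurons. This is a minimal illustration of the pruning paradigm, not PAODING's actual algorithm; the saliency heuristic, function names, and pruning schedule here are all assumptions for exposition.

```python
import numpy as np

def neuron_saliency(w_in, w_out):
    """Data-free saliency proxy: a hidden neuron whose incoming and
    outgoing weights are small contributes little to the next layer's
    pre-activations. (Illustrative heuristic, not PAODING's metric.)"""
    # w_in: (n_prev, n_hidden), w_out: (n_hidden, n_next)
    return np.linalg.norm(w_in, axis=0) * np.linalg.norm(w_out, axis=1)

def iterative_prune(w_in, w_out, target_sparsity=0.5, step=0.1):
    """Iteratively delete the least salient hidden neurons (by zeroing
    their connections) until the requested fraction has been pruned.
    Saliency is re-measured every round, mirroring the dynamic,
    iterative style of pruning described in the abstract."""
    n_hidden = w_in.shape[1]
    alive = np.ones(n_hidden, dtype=bool)
    while 1.0 - alive.mean() < target_sparsity:
        scores = neuron_saliency(w_in, w_out)
        scores[~alive] = np.inf                # never re-rank dead neurons
        k = max(1, int(step * n_hidden))       # neurons pruned per round
        victims = np.argsort(scores)[:k]
        alive[victims] = False
        w_in[:, victims] = 0.0                 # sever incoming connections
        w_out[victims, :] = 0.0                # sever outgoing connections
    return w_in, w_out, alive
```

A data-driven pruner would instead estimate each neuron's impact from activations on held-out inputs; the data-free setting matters when the training data is unavailable, e.g. for third-party pretrained models.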