To enhance the reproducibility and reliability of deep learning models, we address a critical gap in current training methodologies: the lack of mechanisms that ensure consistent, robust performance across runs. Our empirical analysis reveals that even under controlled initialization and training conditions, model accuracy can vary significantly from run to run. To address this issue, we propose a Custom Loss Function (CLF) that reduces the sensitivity of training outcomes to stochastic factors such as weight initialization and data shuffling. Through its tunable parameters, CLF explicitly balances predictive accuracy against training stability, yielding more consistent and reliable model performance. Extensive experiments across diverse architectures for both image classification and time series forecasting demonstrate that our approach significantly improves training robustness without sacrificing predictive performance. These results establish CLF as an effective and efficient strategy for developing more stable, reliable, and trustworthy neural networks.
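The abstract does not define the CLF itself, but the idea of a loss that trades off predictive accuracy against training stability can be illustrated with a minimal sketch. The code below is a hypothetical composite loss, not the paper's actual CLF: it combines mean cross-entropy (the accuracy term) with the variance of per-sample losses (an assumed proxy for sensitivity to stochastic factors such as data shuffling), weighted by an assumed hyperparameter `lam`.

```python
import numpy as np

def clf_style_loss(logits, targets, lam=0.1):
    """Hypothetical accuracy/stability trade-off loss (illustration only;
    the paper's CLF is not specified in this abstract).

    logits:  (N, C) array of unnormalized class scores
    targets: (N,) array of integer class labels
    lam:     assumed trade-off weight between accuracy and stability
    """
    # Numerically stable softmax cross-entropy, per sample.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(targets)), targets]

    acc_term = per_sample.mean()        # predictive-accuracy term
    stab_term = per_sample.var()        # illustrative stability penalty
    return acc_term + lam * stab_term
```

With `lam = 0` this reduces to ordinary mean cross-entropy; increasing `lam` penalizes batches whose per-sample losses are uneven, which is one (assumed) way to encode a stability preference in the objective.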