All machine learning algorithms use a loss, cost, utility, or reward function to encode the learning objective and oversee the learning process. This function, which supervises learning, is a frequently overlooked hyperparameter: it determines how incorrect outputs are penalized and can be tuned to improve performance. This paper shows that the training speed and final accuracy of neural networks can depend significantly on the loss function used for training. In particular, derivative values can differ substantially across loss functions, leading to significantly different performance after gradient-descent-based backpropagation (BP) training. This paper explores the effect on performance of new loss functions that are also convex but penalize errors differently from the popular cross-entropy loss. Two new classification loss functions that significantly improve performance on a wide variety of benchmark tasks are proposed. A new loss function called the smooth absolute error, which outperforms the squared error, Huber, and log-cosh losses on datasets with many outliers, is also proposed. This smooth absolute error loss is infinitely differentiable and approximates the absolute error loss more closely than the Huber and log-cosh losses used for robust regression.
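For context, the standard regression losses that the proposed smooth absolute error is compared against can be sketched as follows. This is a minimal reference implementation of the well-known squared error, absolute error, Huber, and log-cosh losses only; the paper's proposed smooth absolute error is not shown here, since its definition is not given in the abstract.

```python
import numpy as np

def squared_error(e):
    # Quadratic penalty: sensitive to outliers (large e dominates).
    return 0.5 * e ** 2

def absolute_error(e):
    # Linear penalty: robust to outliers but not differentiable at e = 0.
    return np.abs(e)

def huber(e, delta=1.0):
    # Quadratic near zero, linear beyond |e| = delta; once differentiable.
    a = np.abs(e)
    return np.where(a <= delta, 0.5 * e ** 2, delta * (a - 0.5 * delta))

def log_cosh(e):
    # Smooth approximation of |e|; computed in a numerically stable form:
    # log(cosh(e)) = |e| + log1p(exp(-2|e|)) - log(2)
    a = np.abs(e)
    return a + np.log1p(np.exp(-2.0 * a)) - np.log(2.0)
```

For large residuals, both the Huber and log-cosh losses grow linearly like the absolute error, which is what limits the influence of outliers relative to the squared error.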