In the field of machine learning, traditional regularization methods typically add regularization terms directly to the loss function. This paper introduces the "Lai loss", a novel loss design that integrates a regularization term (a gradient component) into the traditional loss function through a straightforward geometric idea. This design penalizes the model's gradient vectors through the loss itself, effectively controlling the model's smoothness and offering the dual benefits of reducing overfitting and avoiding underfitting. We then propose a random sampling method that addresses the challenges of applying this loss under large-sample conditions. Preliminary experiments on publicly available Kaggle datasets demonstrate that the Lai loss design can control the model's smoothness while maintaining maximal accuracy.
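To make the idea concrete, the following is a minimal sketch of a loss that folds a gradient (smoothness) penalty into the base loss term, rather than adding a separate weight-norm regularizer. The function name `lai_style_loss`, the weighting parameter `lam`, and the finite-difference gradient estimate are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lai_style_loss(model, x, y, lam=0.1, eps=1e-4):
    """Illustrative sketch: base MSE plus a penalty on the model's
    input gradients, which controls smoothness. This is an assumed
    formulation for exposition, not the paper's actual Lai loss."""
    pred = model(x)
    mse = np.mean((pred - y) ** 2)  # base loss term
    # Finite-difference estimate of d(model)/d(x_j) per input feature;
    # large slopes (a non-smooth model) increase the loss.
    grad_sq = 0.0
    for j in range(x.shape[1]):
        x_plus = x.copy()
        x_plus[:, j] += eps
        grad_j = (model(x_plus) - pred) / eps
        grad_sq += np.mean(grad_j ** 2)
    return mse + lam * grad_sq
```

For a linear model `z @ w` the input gradient is simply `w`, so the penalty reduces to `lam * ||w||^2`; for nonlinear models it instead measures local steepness around the sampled points, which is what lets the loss trade accuracy against smoothness.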