Traditional boosting algorithms emphasize misclassified training samples, weighting each sample by how difficult it proved during learning. Using a standard Support Vector Machine (SVM) as the weak learner in an AdaBoost framework can improve performance by concentrating on these error samples, but it introduces significant challenges: SVMs are stable and robust by design, so they may need to be destabilized to fit the boosting paradigm, which in turn constrains performance through reliance on the weighted results of preceding iterations. To address these challenges, we propose the Support Vector Boosting Machine (SVBM), which integrates a novel subsampling process with SVM algorithms and residual connection techniques. SVBM updates sample weights from both the current model's predictions and the outputs of prior rounds, enabling effective sparsity control. The framework strengthens the ability to form complex decision boundaries, thereby improving classification performance. The MATLAB source code for SVBM is available at https://github.com/junbolian/SVBM.
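The boosting loop sketched in the abstract — weighted subsampling, a weak learner fit to the sample, and a residual-style weight update that blends prior-round outputs with the current round — can be illustrated as follows. This is a hedged reading of the description, not the authors' MATLAB implementation: a decision stump stands in for the SVM weak learner to keep the sketch self-contained, and the names `boost_fit`, `beta`, and the exact blending formula are hypothetical.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted decision stump -- a stand-in for the SVM weak learner."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best, best_err = (j, thr, sign), err
    return best

def predict_stump(stump, X):
    j, thr, sign = stump
    return sign * np.where(X[:, j] <= thr, 1, -1)

def boost_fit(X, y, rounds=10, beta=0.7, subsample=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)          # per-sample weights
    F = np.zeros(n)                  # running ensemble score (prior rounds)
    ensemble = []
    m = int(subsample * n)
    for _ in range(rounds):
        # weighted subsampling: hard examples are more likely to be drawn
        idx = rng.choice(n, size=m, replace=False, p=w)
        stump = fit_stump(X[idx], y[idx], np.full(m, 1.0 / m))
        h = predict_stump(stump, X)
        err = np.clip(w[h != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        F_new = F + alpha * h
        # residual-style update (hypothetical form): the new weights depend
        # on a blend of the prior-round score F and the updated score F_new
        w = np.exp(-y * ((1 - beta) * F + beta * F_new))
        w /= w.sum()
        F = F_new
        ensemble.append((alpha, stump))
    return ensemble

def boost_predict(ensemble, X):
    return np.sign(sum(a * predict_stump(s, X) for a, s in ensemble))
```

With `beta = 1` the weight update reduces to the standard AdaBoost exponential-loss weighting; smaller `beta` retains more influence from earlier rounds, which is one way to read the "residual connection" in the abstract.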