The Extreme Learning Machine (ELM) is an increasingly popular statistical technique widely applied to regression problems. In essence, ELMs are single-hidden-layer neural networks in which the hidden layer weights are randomly sampled from a specified distribution, while the output layer weights are learned from the data. Two key challenges with this approach are architecture design, specifically determining the optimal number of neurons in the hidden layer, and the method's sensitivity to the random initialization of the hidden layer weights. This paper introduces a new and enhanced learning algorithm for regression tasks, the Effective Non-Random ELM (ENR-ELM), which simplifies architecture design and eliminates the need for random hidden layer weight selection. The proposed method incorporates concepts from signal processing, such as basis functions and projections, into the ELM framework. We introduce two versions of the ENR-ELM: the approximated ENR-ELM and the incremental ENR-ELM. Experimental results on both synthetic and real datasets demonstrate that our method overcomes the problems of traditional ELM while maintaining comparable predictive performance.
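To make the baseline concrete, the standard ELM described above can be sketched in a few lines of NumPy: hidden weights are drawn at random and never trained, and the output weights are obtained in closed form via the Moore-Penrose pseudoinverse. This is a minimal illustration on toy data, not the ENR-ELM proposed in the paper; the data, hidden-layer size, and activation choice here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration): noisy sine curve
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

# Hidden layer: weights and biases sampled at random, never trained
n_hidden = 50
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)  # hidden-layer activations

# Output weights: closed-form least-squares solution via the pseudoinverse
beta = np.linalg.pinv(H) @ y

# Predictions on the training inputs
y_hat = H @ beta
```

Because the hidden weights are fixed at random, both of the challenges the abstract names are visible here: a different seed or a different `n_hidden` can noticeably change the fit, which is precisely the sensitivity the ENR-ELM is designed to remove.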