Linear feature extraction in the presence of nonlinear dependencies among the data is a fundamental challenge in unsupervised learning. We propose using a probabilistic Gram-Schmidt (GS)-type orthogonalization process to detect and map out redundant dimensions. Specifically, by applying the GS process over a family of functions presumed to capture the nonlinear dependencies in the data, we construct a series of covariance matrices that can be used either to identify new large-variance directions or to remove those dependencies from the principal components. In the former case, we provide information-theoretic guarantees in terms of entropy reduction. In the latter, we prove that, under certain assumptions, the resulting algorithms detect and remove nonlinear dependencies whenever those dependencies lie in the linear span of the chosen function family. Both proposed methods extract linear features from the data while removing nonlinear redundancies. We provide simulation results on synthetic and real-world datasets which show improved performance over PCA and state-of-the-art linear feature extraction algorithms, both in the variance of the extracted features and in the accuracy of downstream classification algorithms. Additionally, our methods are comparable to, and often outperform, the nonlinear method of kernel PCA.
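The abstract does not spell out the construction, but one plausible reading of "applying the GS process over a family of functions" is an empirical L2 projection: project each linear coordinate onto the span of the nonlinear feature family and keep the residual covariance, whose top eigenvectors give large-variance directions not explained by the family. The sketch below illustrates this reading only; `residual_pca`, the quadratic monomial family, and the ridge regularization are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def residual_pca(X, phi, n_components=2, ridge=1e-8):
    """Sketch: PCA after a GS-style removal of a nonlinear function family.

    X   : (n, d) data matrix.
    phi : callable mapping X -> (n, m) nonlinear features (the function family).

    Projects each coordinate onto span{phi_1(X), ..., phi_m(X)} under the
    empirical inner product <u, v> = E[u v], and eigendecomposes the residual
    covariance to extract linear directions unexplained by the family.
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)                  # center the data
    Zc = phi(X) - phi(X).mean(axis=0)        # center the nonlinear features

    Cxx = Xc.T @ Xc / n                      # data covariance
    Cxz = Xc.T @ Zc / n                      # cross-covariance with the family
    Czz = Zc.T @ Zc / n + ridge * np.eye(Zc.shape[1])  # regularized Gram matrix

    # Residual covariance: variance left after projecting out span(phi(X)).
    Cres = Cxx - Cxz @ np.linalg.solve(Czz, Cxz.T)

    eigvals, eigvecs = np.linalg.eigh(Cres)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order], eigvals[order]

def quadratic_family(X):
    """Illustrative choice of family: all monomials x_i * x_j with i <= j."""
    i, j = np.triu_indices(X.shape[1])
    return X[:, i] * X[:, j]

# Toy usage: the second coordinate is (mostly) a quadratic function of the
# first, so its variance should be discounted after removing the family.
rng = np.random.default_rng(0)
U = rng.normal(size=(500, 2))
X = np.column_stack([U[:, 0], U[:, 0] ** 2 + 0.1 * U[:, 1]])
W, vals = residual_pca(X, quadratic_family, n_components=1)
print(W, vals)  # top direction concentrates on the non-redundant coordinate
```

Under this reading, ordinary PCA would rank the redundant second coordinate highly (it has large raw variance), whereas the residual covariance suppresses it because it lies close to the linear span of the quadratic family.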