The Intrinsic Dimension (ID) is a key concept in unsupervised learning and feature selection, as it provides a lower bound on the number of variables needed to describe a system. However, in almost any real-world dataset the ID depends on the scale at which the data are analysed. Typically, at small scales the ID is very large, as the data are affected by measurement noise. At large scales, the ID can also be erroneously large, due to the curvature and the topology of the manifold containing the data. In this work, we introduce an automatic protocol to select the sweet spot, namely the correct range of scales in which the ID is meaningful and useful. This protocol is based on imposing that the density of the data is constant for distances smaller than the correct scale. In the presented framework, estimating the density requires knowing the ID; therefore, this condition is imposed self-consistently. By benchmarks on artificial and real-world datasets, we illustrate the usefulness of this procedure and its robustness to noise.
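The scale dependence of the ID can be probed in practice by estimating it on progressively decimated samples: subsampling enlarges typical nearest-neighbour distances, so the same estimator reads off the ID at larger scales. A minimal sketch, using the standard TWO-NN two-nearest-neighbour estimator rather than the paper's full self-consistent density protocol; the function name and decimation fractions are illustrative:

```python
import numpy as np

def twonn_id(X):
    """TWO-NN intrinsic-dimension estimate: id = N / sum_i log(r2_i / r1_i),
    where r1_i, r2_i are the distances from point i to its first and second
    nearest neighbours."""
    # Pairwise Euclidean distances (fine for small N; use a KD-tree for large N).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # exclude self-distances
    D.sort(axis=1)
    r1, r2 = D[:, 0], D[:, 1]
    return len(X) / np.sum(np.log(r2 / r1))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))           # 3-dimensional Gaussian cloud

# Decimation scan: smaller subsamples probe the ID at larger scales.
for frac in (1.0, 0.5, 0.25):
    idx = rng.choice(len(X), int(frac * len(X)), replace=False)
    print(f"fraction {frac:.2f}: ID estimate {twonn_id(X[idx]):.2f}")
```

On clean data the estimates stay near the true manifold dimension across fractions; on noisy or curved data the estimate drifts with the fraction, which is the scale dependence the abstract's protocol is designed to resolve.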