The intrinsic dimension (ID) is a key concept in unsupervised learning and feature selection, as it provides a lower bound on the number of variables needed to describe a system. However, in almost any real-world dataset the ID depends on the scale at which the data are analysed. Typically, at small scales the ID is very large, because the data are affected by measurement errors. At large scales the ID can also be erroneously large, owing to the curvature and topology of the manifold containing the data. In this work we introduce an automatic protocol to select the sweet spot, namely the range of scales in which the ID is meaningful and useful. The protocol is based on requiring that, at distances smaller than the correct scale, the density of the data be constant. Since estimating the density requires knowing the ID, this condition is imposed self-consistently. We illustrate the usefulness and robustness of the procedure with benchmarks on artificial and real-world datasets.
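The abstract does not spell out the protocol in detail; as a rough illustration of the scale dependence it describes, the sketch below combines the standard Two-NN ID estimator with progressive subsampling, which pushes typical neighbor distances to larger scales. A plateau in the resulting ID-versus-scale curve is a simplified proxy for the paper's density-constancy criterion, not the authors' actual procedure; `twonn_id` and `id_vs_scale` are hypothetical helper names introduced here.

```python
import numpy as np

def twonn_id(X):
    """Two-NN intrinsic-dimension estimate (maximum-likelihood form).

    Uses the ratio mu = r2/r1 of second- to first-neighbor distances;
    brute-force pairwise distances are fine for small benchmarks.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D.sort(axis=1)
    r1, r2 = D[:, 1], D[:, 2]  # column 0 is the zero self-distance
    mu = r2 / r1
    return len(X) / np.sum(np.log(mu))

def id_vs_scale(X, n_levels=6, rng=None):
    """Probe the ID at increasing scales by decimating the dataset.

    Keeping a fraction 1/k of the points increases typical neighbor
    distances, so each level probes a coarser scale.  A plateau in the
    returned (sample size, ID) pairs marks a candidate 'sweet spot'.
    Simplified proxy -- NOT the paper's self-consistent density test.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    curve = []
    for k in 2 ** np.arange(n_levels):
        m = n // int(k)
        if m < 16:  # too few points for a stable estimate
            break
        idx = rng.choice(n, size=m, replace=False)
        curve.append((m, twonn_id(X[idx])))
    return curve

# Example: a 2-D plane embedded in 5 dimensions with small Gaussian
# noise.  At the smallest scales the noise inflates the estimate; as
# decimation raises the scale, the estimate approaches the true ID of 2.
rng = np.random.default_rng(0)
plane = rng.uniform(size=(1000, 2)) @ rng.normal(size=(2, 5))
noisy = plane + 1e-3 * rng.normal(size=plane.shape)
for m, d in id_vs_scale(noisy, rng=1):
    print(f"n = {m:4d}  estimated ID = {d:.2f}")
```

Brute-force distances keep the sketch self-contained; for real datasets a k-d tree or `sklearn.neighbors.NearestNeighbors` would replace the quadratic pairwise computation.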