We present a set of algorithms implementing multidimensional scaling (MDS) for large data sets. MDS is a family of dimensionality reduction techniques that take an $n \times n$ distance matrix as input, where $n$ is the number of individuals, and produce a low-dimensional configuration: an $n \times r$ matrix with $r \ll n$. When $n$ is large, classical MDS algorithms are unaffordable because of their extremely large memory and time requirements. We compare six non-standard algorithms intended to overcome these difficulties, all based on the central idea of partitioning the data set into small pieces on which classical MDS methods can work. Two of these algorithms are original proposals. To assess and compare the performance of the algorithms, we have carried out a simulation study. Additionally, we have used the algorithms to obtain an MDS configuration for EMNIST, a real large data set with more than $800{,}000$ points. We conclude that all the algorithms are suitable for obtaining an MDS configuration, but we recommend one of our proposals, since it is a fast algorithm with satisfactory statistical properties when working with big data. An R package implementing the algorithms has been created.
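To make the central idea concrete, the following is a minimal sketch (in Python/NumPy, not the paper's R implementation) of a generic divide-and-conquer MDS scheme of the kind described: classical (Torgerson) MDS is applied to one small block to fix a global configuration, and every remaining block is embedded together with a few "anchor" rows from that first block and aligned onto the global configuration via orthogonal Procrustes. The function names `classical_mds` and `dc_mds`, the anchor mechanism, and all parameter choices are illustrative assumptions, not the specific algorithms compared in the paper.

```python
import numpy as np

def classical_mds(D, r):
    """Classical (Torgerson) MDS: embed a distance matrix D in r dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                     # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)                  # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:r]                # keep the r largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

def dc_mds(X, r, part_size=100, n_anchors=10, seed=0):
    """Illustrative divide-and-conquer MDS (assumed scheme, not the paper's):
    embed the first block, then embed each remaining block jointly with a few
    anchor points and align it with orthogonal Procrustes."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    first = np.arange(min(part_size, n))
    D0 = np.linalg.norm(X[first, None] - X[None, first], axis=-1)
    conf = np.empty((n, r))
    conf[first] = classical_mds(D0, r)              # fixes the global frame
    anchors = rng.choice(first, size=n_anchors, replace=False)
    rest = np.arange(len(first), n)
    for blk in np.array_split(rest, max(1, len(rest) // part_size)):
        idx = np.concatenate([anchors, blk])
        D = np.linalg.norm(X[idx, None] - X[None, idx], axis=-1)
        Y = classical_mds(D, r)                     # small-block embedding
        Ya, Yb = Y[:n_anchors], Y[n_anchors:]
        # Orthogonal Procrustes: rotate the block's anchor coordinates (Ya)
        # onto the anchors' coordinates in the global configuration.
        A = conf[anchors] - conf[anchors].mean(0)
        B = Ya - Ya.mean(0)
        U, _, Vt = np.linalg.svd(B.T @ A)
        R = U @ Vt
        conf[blk] = (Yb - Ya.mean(0)) @ R + conf[anchors].mean(0)
    return conf
```

Because each call to `classical_mds` sees only `part_size + n_anchors` points, memory and time stay bounded by the block size rather than by $n$, which is the property that makes this family of methods feasible for large data sets.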