Recently, neural networks have gained attention for creating parametric and invertible multidimensional data projections. Parametric projections allow previously unseen data to be embedded without recomputing the projection as a whole, while invertible projections enable the generation of new data points. However, these two properties have never been explored jointly for arbitrary projection methods. We evaluate three autoencoder (AE) architectures for creating parametric and invertible projections. Based on a given projection, we train AEs to learn a mapping into 2D space and an inverse mapping back into the original space. We perform a quantitative and qualitative comparison on four datasets of varying dimensionality and pattern complexity, using t-SNE as the reference projection. Our results indicate that AEs with a customized loss function can create smoother parametric and inverse projections than feed-forward neural networks while giving users control over the strength of the smoothing effect.
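The core idea above can be sketched with a toy model. The following is a hypothetical, pure-Python illustration only, not the paper's implementation: an "autoencoder" reduced to a linear encoder/decoder, trained so that the 2D code matches a precomputed projection (e.g. t-SNE coordinates) while also reconstructing the input. The data, target projection, and the weighting parameter `lam` are all assumptions for illustration.

```python
# Hypothetical sketch: linear encoder/decoder trained with a combined loss
# (reconstruction error + projection-matching error), mirroring the idea of
# learning a parametric 2D mapping and its inverse from a given projection.
import random

random.seed(0)

# Toy data: four 3-D points and an assumed, precomputed 2-D projection of them.
X = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
P = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

# Encoder (2x3) maps data space -> 2D; decoder (3x2) maps 2D -> data space.
enc = [[random.uniform(-0.1, 0.1) for _ in range(3)] for _ in range(2)]
dec = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(3)]

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def loss(lam=1.0):
    # lam controls how strongly the 2D code is pulled toward the given
    # projection, analogous to weighting a projection term in the AE loss.
    total = 0.0
    for x, p in zip(X, P):
        z = matvec(enc, x)       # parametric map into 2D
        x_hat = matvec(dec, z)   # inverse map back into data space
        total += sum((a - b) ** 2 for a, b in zip(x_hat, x))
        total += lam * sum((a - b) ** 2 for a, b in zip(z, p))
    return total

def step(lr=0.05, eps=1e-4):
    # Coordinate-wise finite-difference descent; only viable for tiny models,
    # but it keeps the sketch dependency-free.
    for M in (enc, dec):
        for row in M:
            for j in range(len(row)):
                old = row[j]
                row[j] = old + eps
                up = loss()
                row[j] = old - eps
                down = loss()
                row[j] = old - lr * (up - down) / (2 * eps)

before = loss()
for _ in range(300):
    step()
after = loss()
```

In a real setting the linear maps would be deep networks trained by backpropagation, and the projection term would target t-SNE coordinates of the training set; the sketch only shows how a single objective can couple the parametric forward map and its inverse.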