Neural models learn data representations that lie on low-dimensional manifolds, yet modeling the relation between these representational spaces is an ongoing challenge. By integrating spectral geometry principles into neural modeling, we show that this problem can be better addressed in the functional domain, mitigating complexity while enhancing interpretability and performance on downstream tasks. To this end, we introduce a multi-purpose framework to the representation learning community, which allows one to: (i) compare different spaces in an interpretable way and measure their intrinsic similarity; (ii) find correspondences between them, in both unsupervised and weakly supervised settings; and (iii) effectively transfer representations between distinct spaces. We validate our framework on various applications, ranging from stitching to retrieval tasks, demonstrating that latent functional maps can serve as a Swiss Army knife for representation alignment.
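To make the functional-map idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of mapping between two latent spaces via graph Laplacian eigenbases: each space gets a spectral basis from a k-NN graph over its samples, and a small matrix C relating the two bases is estimated from known sample correspondences. All function names and parameters here are illustrative assumptions. In this toy example, the second space is an exact orthogonal rotation and rescaling of the first, so the recovered map C is (close to) the identity, illustrating point (i): a near-diagonal C signals intrinsically similar spaces.

```python
import numpy as np

def laplacian_eigenbasis(X, k_neighbors=10, n_eigs=5):
    # Build a symmetric k-NN graph over the point cloud X and return the
    # first n_eigs eigenvectors of its unnormalized graph Laplacian.
    # (Illustrative helper; not from the paper.)
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    idx = np.argsort(d2, axis=1)[:, 1:k_neighbors + 1]    # skip self (column 0)
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k_neighbors)
    W[rows, idx.ravel()] = 1.0
    W = np.maximum(W, W.T)                                # symmetrize adjacency
    L = np.diag(W.sum(axis=1)) - W                        # graph Laplacian L = D - W
    _, vecs = np.linalg.eigh(L)
    return vecs[:, :n_eigs]

# Two toy "latent spaces": Y is a rotated, rescaled copy of X, so the same
# samples have different coordinates but identical intrinsic geometry.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
R, _ = np.linalg.qr(rng.normal(size=(8, 8)))              # random orthogonal matrix
Y = 2.0 * X @ R

phi_X = laplacian_eigenbasis(X)
phi_Y = laplacian_eigenbasis(Y)

# With known sample correspondences (weak supervision; here the identity),
# the functional map is C = phi_Y^T @ Pi @ phi_X with Pi = I.
C = phi_Y.T @ phi_X

# Transfer a function f defined on X (here: its first coordinate) to Y:
# encode in X's spectral basis, map coefficients with C, decode in Y's basis.
f = X[:, 0]
f_on_Y = phi_Y @ (C @ (phi_X.T @ f))
```

Because the spectral bases depend only on each space's intrinsic neighborhood structure, this kind of map is invariant to rotations and uniform rescalings of the ambient coordinates, which is what makes comparing and aligning independently trained latent spaces tractable.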