We develop a mathematical framework that addresses a broad class of metric and preference learning problems in a Hilbert space. We obtain a novel representer theorem for the simultaneous task of metric and preference learning. Our key observation is that the representer theorem for this task can be derived by regularizing the problem with respect to the norm inherent in the task structure. For the general task of metric learning, our framework yields a simple, self-contained representer theorem and offers new geometric insight into how representer theorems for this task are derived. In the case of Reproducing Kernel Hilbert Spaces (RKHSs), we illustrate how our representer theorem can be used to express the solutions of the learning problems in terms of finitely many kernel terms, in the spirit of classical representer theorems. Finally, our representer theorem leads to a novel nonlinear algorithm for metric and preference learning. On real-world rank inference benchmarks, the algorithm significantly outperforms vanilla ideal point methods and surpasses strong baselines across multiple datasets. Code is available at: https://github.com/PeymanMorteza/Metric-Preference-Learning-RKHS
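For context on what "finitely many kernel terms" means, recall the classical representer theorem, which our RKHS result parallels (the precise statement of our theorem is in the paper): minimizing a regularized empirical risk over an RKHS $\mathcal{H}$ with kernel $k$,

$$\min_{f \in \mathcal{H}} \; L\bigl(f(x_1), \dots, f(x_n)\bigr) + \lambda \, \|f\|_{\mathcal{H}}^{2}, \qquad \lambda > 0,$$

admits a minimizer of the finite form

$$f^{\star} = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i), \qquad \alpha_i \in \mathbb{R},$$

so the infinite-dimensional search collapses to finding $n$ real coefficients.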
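To make the preference side concrete, below is a minimal sketch of a kernelized ideal-point rule, in which items are compared by their RKHS distance to a user's ideal point via the kernel trick. This illustrates the problem setting only, not the algorithm developed in the paper; the RBF kernel choice and all function names here are our own:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def rkhs_distance_sq(x, u, kernel=rbf_kernel):
    """Squared RKHS distance between feature maps, via the kernel trick:
    ||phi(x) - phi(u)||^2 = k(x, x) - 2 k(x, u) + k(u, u)."""
    return kernel(x, x) - 2.0 * kernel(x, u) + kernel(u, u)

def prefers(x, y, ideal_point, kernel=rbf_kernel):
    """Ideal-point rule: x is preferred to y iff x is closer to the
    user's ideal point under the RKHS metric."""
    return rkhs_distance_sq(x, ideal_point, kernel) < rkhs_distance_sq(y, ideal_point, kernel)

# Toy usage: a user whose ideal point is the origin.
u = np.zeros(2)
x, y = np.array([0.5, 0.0]), np.array([2.0, 1.0])
print(prefers(x, y, u))  # True: x is closer to the ideal point than y
```

With the linear kernel k(a, b) = aᵀb, this rule reduces to plain Euclidean comparison, matching the vanilla ideal point setup mentioned above.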