Ranking or assessing centrality in multivariate and non-Euclidean data is difficult because there is no canonical order, and many depth notions become computationally fragile in high-dimensional or structured settings. We introduce a preference-based notion of centrality defined through population proximity comparisons with respect to a random reference draw, yielding a metric-intrinsic statistical functional that is well defined on general metric spaces. Because the induced pairwise preferences may be non-transitive, we map them to a coherent one-dimensional score via a Bradley--Terry--Luce cross-entropy projection, viewed as a calibrated aggregation device rather than a correctly specified model. We develop two finite-sample estimators: a convex M-estimator and a fast spectral estimator based on a comparison operator, and we establish identifiability and consistency under mild conditions. Simulations and real-data examples, including high-dimensional and functional observations, illustrate that the proposed scores provide stable, interpretable rankings aligned with the underlying preference centrality.
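The two-stage pipeline described above, estimating pairwise proximity-preference probabilities and then aggregating them into one-dimensional BTL-type scores, can be illustrated with a minimal numerical sketch. The rank-centrality-style Markov chain used here is one standard spectral construction for scores of this kind; the variable names, the use of the remaining sample points as reference draws, and the specific transition normalization are illustrative assumptions, not the paper's exact comparison operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 30, 5
X = rng.normal(size=(n, d))
X[0] = 0.0  # plant a point at the population center for reference

# Pairwise Euclidean distance matrix (any metric would do here)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Empirical preference probabilities, using the remaining sample
# points as reference draws: p[i, j] ~ P(d(X_i, Z) < d(X_j, Z))
p = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        mask = np.ones(n, dtype=bool)
        mask[[i, j]] = False
        p[i, j] = np.mean(D[i, mask] < D[j, mask])

# Spectral aggregation (rank-centrality style): build a Markov chain
# that moves from i to j with probability proportional to p[j, i],
# i.e. the chance that j wins the proximity comparison against i.
P = p.T / n
np.fill_diagonal(P, 0.0)
np.fill_diagonal(P, 1.0 - P.sum(axis=1))

# Stationary distribution = left Perron eigenvector of P
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = np.abs(pi) / np.abs(pi).sum()
scores = np.log(pi)  # BTL-type scores, up to an additive constant

# The planted point at the center tends to receive the highest score
print(np.argmax(scores), np.round(scores[:5], 3))
```

Because the underlying data are continuous, empirical preferences are complementary (`p[i, j] + p[j, i] = 1` off the diagonal) even when they fail to be transitive, which is exactly the situation the cross-entropy projection is designed to resolve.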