The k-nearest neighbors (kNN) algorithm is a cornerstone of non-parametric classification in artificial intelligence, yet its deployment in large-scale applications is persistently constrained by the computational trade-off between inference speed and accuracy. Existing approximate nearest neighbor solutions accelerate retrieval but often degrade classification precision and lack adaptability in selecting the optimal neighborhood size (k). Here, we present an adaptive graph model that decouples inference latency from computational complexity. By integrating a Hierarchical Navigable Small World (HNSW) graph with a pre-computed voting mechanism, our framework transfers the full computational burden of neighbor selection and weighting to the training phase. Within this topological structure, higher graph layers enable rapid navigation, while lower layers encode precise, node-specific decision boundaries with adaptive neighbor counts. Benchmarking against eight state-of-the-art baselines across six diverse datasets, we demonstrate that this architecture significantly accelerates inference, achieving real-time performance without compromising classification accuracy. These findings offer a scalable, robust solution to the long-standing inference bottleneck of kNN, establishing a new structural paradigm for graph-based non-parametric learning.
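To make the central idea concrete, the sketch below mimics the abstract's two-phase design on a toy dataset: at training time, each node's class label is pre-computed by an adaptive-k majority vote (the smallest k whose vote margin clears a threshold), so inference reduces to a graph search plus a table lookup, with no voting at query time. Note this is a minimal illustration, not the paper's implementation: the single base layer with nearest-neighbor edges plus a small "hub" set stands in for a full multi-layer HNSW, and all names, the hub choice, and the candidate k values and margin threshold are illustrative assumptions.

```python
import numpy as np

# Two well-separated Gaussian clusters as toy training data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(3.0, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

def knn_indices(q, k):
    """Brute-force k nearest training indices (used only at training time)."""
    return np.argsort(np.linalg.norm(X - q, axis=1))[:k]

# Base layer: each node links to its M nearest neighbors (skipping itself).
M = 5
neighbors = [knn_indices(X[i], M + 1)[1:] for i in range(len(X))]

def adaptive_label(i, k_candidates=(3, 5, 7), margin=0.6):
    """Pre-compute node i's vote with the smallest k whose majority is decisive."""
    for k in k_candidates:
        idx = knn_indices(X[i], k + 1)[1:]          # exclude the node itself
        votes = np.bincount(y[idx], minlength=2)
        if votes.max() / k >= margin:
            return int(votes.argmax())
    return int(votes.argmax())                       # fall back to largest k

precomputed = np.array([adaptive_label(i) for i in range(len(X))])

# A few hub nodes crudely stand in for an upper navigation layer.
hubs = [0, 10, 20, 30]

def greedy_search(q, entry):
    """Greedy descent over base-layer edges toward the query."""
    cur = entry
    while True:
        nxt = min(neighbors[cur], key=lambda j: np.linalg.norm(X[j] - q))
        if np.linalg.norm(X[nxt] - q) >= np.linalg.norm(X[cur] - q):
            return cur
        cur = nxt

def predict(q):
    """Inference = upper-layer entry selection + greedy search + table lookup."""
    entry = min(hubs, key=lambda h: np.linalg.norm(X[h] - q))
    return precomputed[greedy_search(q, entry)]

print(predict(np.array([0.1, 0.0])), predict(np.array([3.1, 2.9])))
```

Because every vote is resolved during training, query cost is dominated by the graph traversal alone, which is what lets the abstract claim inference latency decoupled from the voting computation.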