Multiview datasets are common in scientific and engineering applications, yet existing fusion methods offer limited theoretical guarantees, particularly in the presence of heterogeneous and high-dimensional noise. We propose Generalized Robust Adaptive-Bandwidth Multiview Diffusion Maps (GRAB-MDM), a new kernel-based diffusion geometry framework for integrating multiple noisy data sources. The key innovation of GRAB-MDM is a view-dependent bandwidth selection strategy that adapts to the geometry and noise level of each view, enabling a stable and principled construction of multiview diffusion operators. Under a common-manifold model, we establish asymptotic convergence results and show that the adaptive bandwidths lead to provably robust recovery of the shared intrinsic structure, even when noise levels and sensor dimensions differ across views. Numerical experiments demonstrate that GRAB-MDM significantly improves robustness and embedding quality compared with fixed-bandwidth and equal-bandwidth baselines, and usually outperforms existing algorithms. The proposed framework offers a practical and theoretically grounded solution for multiview sensor fusion in high-dimensional noisy environments.
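To make the idea of view-dependent bandwidths concrete, the following is a minimal sketch of one plausible construction, not the paper's actual algorithm: each view receives its own kernel bandwidth from a quantile of its pairwise distances (a common self-tuning heuristic), a row-stochastic diffusion operator is built per view, and the operators are fused by simple averaging. The function names and the averaging step are illustrative assumptions.

```python
import numpy as np

def view_bandwidth(X, quantile=0.5):
    """Hypothetical per-view bandwidth: a quantile of nonzero pairwise
    distances, so each view's scale and noise level sets its own epsilon.
    (Illustrative heuristic, not GRAB-MDM's selection rule.)"""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.quantile(D[D > 0], quantile)

def diffusion_operator(X, eps):
    """Row-normalized Gaussian kernel: a Markov (diffusion) operator."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.exp(-D**2 / eps)
    return K / K.sum(axis=1, keepdims=True)  # rows sum to 1

def multiview_operator(views):
    """Fuse per-view operators by averaging; each view uses its own
    adaptive bandwidth (the fusion rule here is an assumption)."""
    ops = [diffusion_operator(X, view_bandwidth(X) ** 2) for X in views]
    return sum(ops) / len(ops)
```

Because each summand is row-stochastic, the averaged operator is as well, so its eigendecomposition yields a valid multiview diffusion embedding; views with differing sensor dimensions (columns of `X`) are handled naturally since only pairwise distances within each view are used.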