This paper addresses critical challenges in machine learning, in particular the stability, consistency, and convergence of neural networks under non-IID data, distribution shift, and high dimensionality. We provide new theoretical results on uniform stability for neural networks trained with dynamic learning rates in non-convex settings. We further establish consistency bounds for federated learning models in non-Euclidean spaces, accounting for distribution shift and curvature effects. For Physics-Informed Neural Networks (PINNs), we derive stability, consistency, and convergence guarantees for solving Partial Differential Equations (PDEs) in noisy environments. These results fill significant gaps in our understanding of model behavior under complex, non-ideal conditions, paving the way for more robust and reliable machine learning applications.