Deep learning models trained on finite data lack a complete understanding of the physical world. On the other hand, physics-informed neural networks (PINNs) are infused with such knowledge through the incorporation of mathematically expressible laws of nature into their training loss function. By complying with physical laws, PINNs provide advantages over purely data-driven models in limited-data regimes. This feature has propelled them to the forefront of scientific machine learning, a domain characterized by scarce and costly data. However, the vision of accurate physics-informed learning comes with significant challenges. This review examines PINNs for the first time in terms of model optimization and generalization, shedding light on the need for new algorithmic advances to overcome issues pertaining to the training speed, precision, and generalizability of today's PINN models. Of particular interest are the gradient-free methods of neuroevolution for optimizing the uniquely complex loss landscapes arising in PINN training. Methods synergizing gradient descent and neuroevolution for discovering bespoke neural architectures and balancing multiple conflicting terms in physics-informed learning objectives are positioned as important avenues for future research. Yet another exciting track is to cast neuroevolution as a meta-learner of generalizable PINN models.
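To make concrete how "mathematically expressible laws of nature" enter the training loss function, the following is a minimal, self-contained sketch (an illustrative assumption, not a method from this review) of a physics-informed loss for the toy ODE u'(x) + u(x) = 0 with boundary condition u(0) = 1. The derivative is approximated here by finite differences to avoid external dependencies; practical PINNs use automatic differentiation instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fully connected net: 1 -> 16 -> 1, tanh hidden layer.
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def u(x):
    """Network prediction u(x) for a batch of scalar inputs."""
    h = np.tanh(x[:, None] @ W1.T + b1)
    return (h @ W2.T + b2).ravel()

def pinn_loss(x_collocation, w_bc=1.0, eps=1e-4):
    """Composite physics-informed loss: PDE residual term + boundary term.

    w_bc is a hypothetical weighting coefficient; balancing such
    conflicting terms is one of the open problems noted in the text.
    """
    # Residual of u' + u = 0 at interior collocation points
    # (central finite differences stand in for autodiff).
    du = (u(x_collocation + eps) - u(x_collocation - eps)) / (2 * eps)
    residual = du + u(x_collocation)
    physics_term = np.mean(residual ** 2)
    # Boundary condition u(0) = 1.
    boundary_term = (u(np.array([0.0]))[0] - 1.0) ** 2
    return physics_term + w_bc * boundary_term

x = np.linspace(0.0, 2.0, 32)   # collocation points in the domain
loss = pinn_loss(x)             # scalar to be minimized during training
```

Because the physics term and the boundary term generally pull the parameters in different directions, their relative weighting shapes the loss landscape; this is the multi-term balancing problem that the review positions neuroevolution-based methods to address.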