We present a machine learning framework capable of consistently inferring mathematical expressions of hyperelastic energy functionals for incompressible materials from sparse experimental data and physical laws. To achieve this goal, we propose a polyconvex neural additive model (PNAM) that expresses the hyperelastic model in a learnable feature space while enforcing polyconvexity. This feature space obtained via the PNAM has two advantages: (1) it is spanned by a set of univariate bases that can be reparametrized with more complex mathematical forms, and (2) the resultant elasticity model is guaranteed to satisfy polyconvexity, which ensures that the acoustic tensor remains elliptic for any deformation. To further improve interpretability, we use genetic programming to convert each univariate basis into a compact mathematical expression. The resultant multi-variable mathematical models obtained from this proposed framework are not only more interpretable but are also proven to fulfill physical laws. By controlling the compactness of the learned symbolic form, the machine learning-generated mathematical model also requires fewer arithmetic operations than its deep neural network counterparts during deployment. This latter attribute is crucial for large-scale simulations, where the constitutive responses of every integration point must be updated within each incremental time step. We compare our proposed model discovery framework against other state-of-the-art alternatives to assess the robustness and efficiency of the training algorithms, and we examine the trade-off among interpretability, accuracy, and precision of the learned symbolic hyperelastic models obtained from different approaches. Our numerical results suggest that our approach extrapolates well outside the training data regime due to the precise incorporation of physics-based knowledge.
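To illustrate the additive structure described above, the following is a minimal sketch, not the paper's actual PNAM architecture: a toy energy W(I1, I2) written as a sum of univariate bases of the invariants, where each basis is a non-negative combination of convex ramp functions so that it remains convex in its argument (one ingredient of a polyconvexity guarantee). All class and parameter names here are hypothetical.

```python
import numpy as np

# Hypothetical sketch: W(I1, I2) = f1(I1 - 3) + f2(I2 - 3), where each
# univariate basis f(x) = sum_j w_j * max(0, x - b_j) with w_j >= 0.
# Non-negative sums of convex ramps are convex, so each f is convex.

rng = np.random.default_rng(0)

class ConvexUnivariateBasis:
    """One univariate basis f(x) = sum_j w_j * max(0, x - b_j);
    w_j >= 0 is enforced by squaring the raw trainable parameters."""
    def __init__(self, n_ramps=8):
        self.raw_w = rng.normal(size=n_ramps)    # squared -> non-negative
        self.b = np.linspace(0.0, 5.0, n_ramps)  # fixed knot locations

    def __call__(self, x):
        w = self.raw_w ** 2
        return np.sum(w * np.maximum(0.0, np.asarray(x)[..., None] - self.b),
                      axis=-1)

class AdditiveEnergy:
    """Additive model over invariants: W(I1, I2) = f1(I1 - 3) + f2(I2 - 3).
    Shifting by 3 makes W vanish in the undeformed state (I1 = I2 = 3)."""
    def __init__(self):
        self.f1, self.f2 = ConvexUnivariateBasis(), ConvexUnivariateBasis()

    def __call__(self, I1, I2):
        return self.f1(I1 - 3.0) + self.f2(I2 - 3.0)

W = AdditiveEnergy()

# Numerical midpoint check of convexity along the I1 direction:
a, b = 3.2, 7.5
mid = W(0.5 * (a + b), 3.0)
avg = 0.5 * (W(a, 3.0) + W(b, 3.0))
assert mid <= avg + 1e-12  # convex in I1 by construction
```

In the paper's framework each such univariate basis is learned by the PNAM and then converted into a compact symbolic expression via genetic programming; this sketch only shows why an additive decomposition into convex univariate pieces makes the convexity constraint easy to enforce term by term.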