Data-driven modeling in mechanics is evolving rapidly, driven by recent advances in machine learning, especially artificial neural networks. As the field matures, new data and models created by different groups become available, opening possibilities for cooperative modeling. However, artificial neural networks suffer from catastrophic forgetting, i.e., they forget how to perform an old task when trained on a new one. This hinders cooperation because adapting an existing model to a new task degrades its performance on a previous task trained by someone else. We developed a continual learning method that addresses this issue and apply it here for the first time to solid mechanics. In particular, the method is applied to recurrent neural networks to predict history-dependent plasticity behavior, although it can be used with any other architecture (feedforward, convolutional, etc.) and to predict other phenomena. This work is intended to spawn future developments in continual learning that will foster cooperative strategies among the mechanics community to solve increasingly challenging problems. We show that the chosen continual learning strategy can sequentially learn several constitutive laws without forgetting them, using less data to achieve the same error as standard (non-cooperative) training of one law per model.
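The abstract does not name the specific continual learning strategy, so the sketch below uses Elastic Weight Consolidation (EWC, Kirkpatrick et al., 2017) as an illustrative stand-in, not the authors' method: a small recurrent network is trained sequentially on several toy history-dependent "constitutive laws", and a quadratic penalty discourages changes to parameters that were important for earlier tasks. Everything here is an assumption for illustration: the names `StressRNN` and `make_task`, the synthetic strain-to-stress data, and the penalty weight `lam` are all hypothetical.

```python
# Minimal sketch of sequential (continual) learning with EWC on an RNN.
# Illustrative only; the paper's actual continual learning method and data
# are not specified in the abstract.
import torch
import torch.nn as nn

torch.manual_seed(0)

class StressRNN(nn.Module):
    """Maps a strain history (batch, time, 1) to a stress history."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, strain):
        h, _ = self.rnn(strain)
        return self.out(h)

def make_task(scale):
    """Synthetic stand-in for one constitutive law: stress depends on the
    cumulative strain, so the response is history dependent."""
    strain = torch.randn(256, 20, 1) * 0.1
    stress = scale * torch.tanh(strain.cumsum(dim=1))
    return strain, stress

def fisher_diagonal(model, strain, stress, loss_fn):
    """Diagonal Fisher estimate: per-parameter importance for a finished task."""
    model.zero_grad()
    loss_fn(model(strain), stress).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

def ewc_penalty(model, anchors, lam=500.0):
    """Quadratic pull toward parameter values important for earlier tasks."""
    penalty = torch.zeros(())
    for fisher, old in anchors:
        for n, p in model.named_parameters():
            penalty = penalty + (fisher[n] * (p - old[n]) ** 2).sum()
    return lam * penalty

model = StressRNN()
loss_fn = nn.MSELoss()
anchors = []  # one (Fisher estimate, parameter snapshot) pair per finished task
for scale in (1.0, -0.5, 2.0):  # three toy "constitutive laws", learned in sequence
    strain, stress = make_task(scale)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(strain), stress) + ewc_penalty(model, anchors)
        loss.backward()
        opt.step()
    anchors.append((fisher_diagonal(model, strain, stress, loss_fn),
                    {n: p.detach().clone() for n, p in model.named_parameters()}))
    print(f"task scale={scale}: final loss {loss.item():.4f}")
```

After each task, the model stores an importance estimate and a parameter snapshot; later tasks are free to move unimportant weights but pay a growing cost for disturbing weights the earlier constitutive laws rely on, which is what mitigates catastrophic forgetting in this family of methods.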