Neural Networks (NNs) are promising models for refining the accuracy of molecular dynamics, potentially opening up new fields of application. Typically trained bottom-up, atomistic NN potential models can reach first-principles accuracy, while coarse-grained implicit solvent NN potentials surpass classical continuum solvent models. However, overcoming the limitations of the costly generation of accurate reference data and the data inefficiency of common bottom-up training demands the efficient incorporation of data from many sources. This paper introduces the framework chemtrain for learning sophisticated NN potential models through customizable training routines and advanced training algorithms. These routines can combine multiple top-down and bottom-up algorithms, e.g., to incorporate both experimental and simulation data or to pre-train potentials with less costly algorithms. chemtrain provides an object-oriented high-level interface to simplify the creation of custom routines. On the lower level, chemtrain relies on JAX to compute gradients and to scale the computations to the available resources. We demonstrate the simplicity and importance of combining multiple algorithms with the examples of parametrizing an all-atom model of titanium and a coarse-grained implicit solvent model of alanine dipeptide.
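The combination of bottom-up and top-down training mentioned above can be sketched schematically. The following is a minimal JAX illustration of the general idea, not chemtrain's actual API: a bottom-up force-matching loss (against reference forces, e.g., from ab initio data) and a top-down observable-matching loss (against an experimentally known average bond length) are summed into one differentiable objective, whose gradient drives the parametrization. The toy harmonic potential and all names here are hypothetical.

```python
import jax
import jax.numpy as jnp

def potential(params, positions):
    # Toy pair potential: harmonic springs between consecutive particles.
    d = jnp.linalg.norm(jnp.diff(positions, axis=0), axis=1)
    return jnp.sum(0.5 * params["k"] * (d - params["r0"]) ** 2)

# Forces are the negative gradient of the potential w.r.t. positions.
forces = jax.grad(lambda p, x: -potential(p, x), argnums=1)

def force_matching_loss(params, positions, ref_forces):
    # Bottom-up: match forces from reference (e.g., first-principles) data.
    return jnp.mean((forces(params, positions) - ref_forces) ** 2)

def observable_loss(params, positions, ref_bond_length):
    # Top-down: match an experimentally measured average bond length.
    d = jnp.linalg.norm(jnp.diff(positions, axis=0), axis=1)
    return (jnp.mean(d) - ref_bond_length) ** 2

def combined_loss(params, positions, ref_forces, ref_bond_length, weight=0.1):
    # Weighted sum of both objectives; JAX differentiates through all of it.
    return (force_matching_loss(params, positions, ref_forces)
            + weight * observable_loss(params, positions, ref_bond_length))

grad_fn = jax.grad(combined_loss)
```

Because both losses are expressed in one JAX-traceable function, a single `jax.grad` call yields parameter gradients that blend simulation-derived and experimental targets, which is the core advantage of fusing the two training paradigms.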