This study introduces a language-transformer-based machine learning model to predict key mechanical properties of high-entropy alloys (HEAs), addressing the challenges posed by their complex, multi-principal-element compositions and limited experimental data. By pre-training the transformer on extensive synthetic materials data and fine-tuning it on HEA-specific datasets, the model captures intricate elemental interactions through its self-attention mechanism. This transfer-learning approach mitigates data scarcity and improves predictive accuracy for properties such as elongation (%) and ultimate tensile strength (UTS) relative to traditional regression models such as Random Forests and Gaussian Processes. Visualizing the attention weights makes the model more interpretable, revealing elemental relationships that align with known metallurgical principles. This work demonstrates the potential of transformer models to accelerate materials discovery and optimization through accurate property prediction, advancing the field of materials informatics.
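The self-attention mechanism referred to above can be illustrated with a minimal sketch. The embeddings, dimensions, and element list below are hypothetical placeholders, not the model described in this study; the sketch only shows how scaled dot-product attention yields a pairwise weight matrix over element tokens, which is the quantity visualized for interpretability.

```python
import numpy as np

# Hypothetical sketch: scaled dot-product self-attention over element tokens.
# Embedding dimension, projection weights, and element set are all assumed
# for illustration; they do not come from the study's actual model.
rng = np.random.default_rng(0)
elements = ["Co", "Cr", "Fe", "Ni", "Mn"]   # e.g. the Cantor alloy
d = 8                                        # embedding dimension (assumed)
X = rng.normal(size=(len(elements), d))      # one embedding per element token

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)                # pairwise interaction scores
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over elements
out = weights @ V                            # context-aware element representations

# Each row of `weights` sums to 1 and says how strongly one element
# attends to every other; this matrix is what gets visualized.
print(weights.shape, np.allclose(weights.sum(axis=-1), 1.0))
```

In a full model these attended representations would be pooled and passed to a regression head predicting elongation or UTS; here the point is only that the attention matrix itself is a readable record of elemental interactions.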