Recent advances in deep learning have enabled the generation of realistic data by training generative models on large datasets of text, images, and audio. While these models have demonstrated exceptional performance in generating novel and plausible data, it remains an open question whether they can effectively accelerate scientific discovery through data generation and drive significant advances across scientific fields. In particular, the discovery of new inorganic materials with promising properties is a critical challenge, both scientifically and for industrial applications. Unlike textual or image data, however, materials, or more specifically crystal structures, consist of multiple types of variables, including lattice vectors, atom positions, and atomic species. This complexity gives rise to a variety of approaches for representing and generating such data, and consequently the design choices for generative models of crystal structures remain an open question. In this study, we explore a new type of diffusion model for the generative inverse design of crystal structures, with a backbone based on a Transformer architecture. We demonstrate that our models surpass previous methods in their versatility for generating crystal structures with desired properties. Furthermore, our empirical results suggest that the optimal conditioning method varies depending on the dataset.
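For context, the three coupled variable types named above can be made concrete with a minimal sketch of a crystal-structure record; the class name, fields, and the NaCl example are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class CrystalStructure:
    """Illustrative container for the three variable types a generative
    model of crystals must handle jointly (names are hypothetical)."""

    lattice: np.ndarray      # (3, 3) lattice vectors as rows, in angstroms
    frac_coords: np.ndarray  # (N, 3) fractional atom positions in [0, 1)
    species: list            # length-N atomic species symbols, e.g. ["Na", "Cl"]

    def __post_init__(self):
        # The continuous variables (lattice, positions) and the discrete
        # variable (species) must stay consistent in size.
        assert self.lattice.shape == (3, 3)
        assert self.frac_coords.shape == (len(self.species), 3)


# Example: the two-atom primitive cell of rock salt (NaCl),
# with FCC primitive vectors of length a/2 for a ≈ 5.64 Å.
nacl = CrystalStructure(
    lattice=2.82 * np.array([[0.0, 1.0, 1.0],
                             [1.0, 0.0, 1.0],
                             [1.0, 1.0, 0.0]]),
    frac_coords=np.array([[0.0, 0.0, 0.0],
                          [0.5, 0.5, 0.5]]),
    species=["Na", "Cl"],
)
```

The mix of continuous tensors and a discrete sequence in one record is precisely what makes representation and generation choices non-obvious compared with text or images.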