While various models and computational tools have been proposed for analyzing molecular structures and properties, generating molecules that satisfy all desired structural and property constraints remains a challenge. Here, we introduce TSMMG, a multi-constraint molecular generation large language model that, like a student, incorporates knowledge from various small models and tools, namely the 'teachers'. To train TSMMG, we construct a large set of text-molecule pairs by extracting molecular knowledge from these 'teachers', enabling the model to generate novel molecules that conform to descriptions given through diverse text prompts. We show experimentally that TSMMG performs remarkably well in generating molecules that meet complex, natural-language-described property requirements across two-, three-, and four-constraint tasks, with an average molecular validity above 99% and success ratios of 82.58%, 68.03%, and 67.48%, respectively. The model also exhibits adaptability in zero-shot testing, creating molecules that satisfy combinations of properties not encountered during training. It can comprehend text inputs in a variety of language styles beyond the outlined prompts, as confirmed by empirical validation. Additionally, the knowledge-distillation capability of TSMMG supports the continuous improvement of small models, while its dataset-construction approach effectively addresses issues of data scarcity and quality. Together, these strengths position TSMMG as a promising tool for drug discovery and materials science.