Recently, Large Language Models (LLMs), with their strong task-handling capabilities, have shown remarkable advancements across a spectrum of fields beyond natural language understanding. However, their proficiency in the chemistry domain remains restricted, especially in solving professional molecule-related tasks. This limitation stems from their inherent inability to comprehend molecules using only common textual representations, i.e., SMILES strings. In this study, we seek to enhance the ability of LLMs to comprehend molecules by designing and equipping them with a multi-modal external module, namely MolX. In particular, instead of directly using a SMILES string to represent a molecule, we utilize specific encoders to extract fine-grained features from both the SMILES string and the 2D molecular graph representations before feeding them into the LLM. Moreover, a human-defined molecular fingerprint is incorporated to leverage its embedded domain knowledge. Then, to establish an alignment between MolX and the LLM's textual input space, the whole model, in which the LLM is frozen, is pre-trained with a versatile strategy comprising a diverse set of tasks. Extensive experimental evaluations demonstrate that our proposed method introduces only a small number of trainable parameters while outperforming baselines on various downstream molecule-related tasks ranging from molecule-to-text translation to retrosynthesis, both with and without fine-tuning the LLM.
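Conceptually, the described pipeline fuses per-modality features (SMILES encoder, 2D-graph encoder, projected fingerprint) into prefix embeddings that are placed in front of the frozen LLM's token embeddings. The following pure-Python sketch illustrates only this data flow; the encoder functions, dimensions, and molecule are illustrative stand-ins, not the actual MolX implementation (which uses trained neural encoders):

```python
# Illustrative sketch of the MolX idea: fuse multi-modal molecule features
# into prefix embeddings for a frozen LLM. All three "encoders" below are
# hypothetical stand-ins; in the real system they are neural networks.

def smiles_encoder(smiles: str, dim: int = 4) -> list[float]:
    # Stand-in for a pre-trained SMILES-string encoder.
    return [len(smiles) % (i + 2) / 10.0 for i in range(dim)]

def graph_encoder(num_atoms: int, dim: int = 4) -> list[float]:
    # Stand-in for a 2D molecular-graph encoder (e.g., a GNN).
    return [num_atoms / (i + 1) / 10.0 for i in range(dim)]

def project_fingerprint(fp_bits: list[int], dim: int = 4) -> list[float]:
    # Stand-in for a learned projection of a human-defined fingerprint.
    return [sum(fp_bits[i::dim]) / 10.0 for i in range(dim)]

def molx_prefix(smiles: str, num_atoms: int, fp_bits: list[int]) -> list[list[float]]:
    # Each modality contributes one prefix embedding. The LLM itself stays
    # frozen; only the external module (encoders/projectors) is trained.
    return [
        smiles_encoder(smiles),
        graph_encoder(num_atoms),
        project_fingerprint(fp_bits),
    ]

# Ethanol as a toy input; the resulting prefix embeddings would be
# prepended to the LLM's token embeddings for the textual prompt.
prefix = molx_prefix("CCO", num_atoms=3, fp_bits=[1, 0, 1, 1, 0, 0, 1, 0])
```

The key design choice this sketch mirrors is that alignment is learned entirely in the external module, so the LLM's weights need not change during pre-training.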