Most current transformer-based chemical language models are pre-trained on millions to billions of molecules. However, the improvement gained from such scaling of the dataset size has not been convincingly linked to improved molecular property prediction. The aim of this study is to investigate and overcome some of the limitations of transformer models in predicting molecular properties. Specifically, we examine the impact of pre-training dataset size and diversity on the performance of transformer models, and we investigate the use of domain adaptation as a technique for improving model performance. First, our findings indicate that increasing the pre-training dataset size beyond 400K molecules from the GuacaMol dataset does not result in a significant improvement on four ADME endpoints: solubility, permeability, microsomal stability, and plasma protein binding. Second, our results demonstrate that domain adaptation, i.e., further training the transformer model on a small set of domain-relevant molecules (a few hundred to a few thousand) using multi-task regression of physicochemical properties, was sufficient to significantly improve performance on three of the four investigated ADME endpoints (P-value < 0.001). Finally, we observe that a model pre-trained on 400K molecules and domain-adapted on a few hundred to a few thousand molecules performs comparably (P-value > 0.05) to more complex transformer models such as MolBERT (pre-trained on 1.3M molecules) and MolFormer (pre-trained on 100M molecules). A random forest model trained on basic physicochemical properties performed on par with the examined transformer models. We believe that current transformer models can be improved through further systematic analysis of pre-training and downstream data, pre-training objectives, and scaling laws, ultimately leading to better and more useful models.
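To make the domain-adaptation step concrete, the sketch below shows one way to further train a pre-trained SMILES encoder with a multi-task regression head on RDKit-computed physicochemical properties. This is a minimal illustration, not the paper's code: the `ToyEncoder`, its `encode` interface, the particular property set, and all hyperparameters are assumptions.

```python
# Minimal sketch of domain adaptation via multi-task regression of
# physicochemical properties (illustrative only; not the paper's implementation).
import torch
import torch.nn as nn
from rdkit import Chem
from rdkit.Chem import Descriptors

# An assumed selection of physicochemical properties used as regression targets;
# the paper's exact property set may differ.
PROPERTY_FNS = [
    Descriptors.MolWt,
    Descriptors.MolLogP,
    Descriptors.TPSA,
    Descriptors.NumHDonors,
    Descriptors.NumHAcceptors,
]

def property_targets(smiles: str) -> torch.Tensor:
    """Compute the physicochemical target vector for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return torch.tensor([fn(mol) for fn in PROPERTY_FNS], dtype=torch.float32)

class ToyEncoder(nn.Module):
    """Stand-in for a pre-trained SMILES encoder (hypothetical; a real model
    would tokenize SMILES -- this toy version hashes characters into a bag vector)."""
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.proj = nn.Linear(128, hidden_dim)

    def encode(self, smiles_batch):
        bags = torch.zeros(len(smiles_batch), 128)
        for i, s in enumerate(smiles_batch):
            for ch in s:
                bags[i, ord(ch) % 128] += 1.0
        return self.proj(bags)  # (batch, hidden_dim)

class MultiTaskRegressionHead(nn.Module):
    """Single linear head predicting all properties from the pooled embedding."""
    def __init__(self, hidden_dim: int, n_tasks: int):
        super().__init__()
        self.head = nn.Linear(hidden_dim, n_tasks)

    def forward(self, pooled_embedding: torch.Tensor) -> torch.Tensor:
        return self.head(pooled_embedding)

def adaptation_step(encoder, head, optimizer, smiles_batch):
    """One gradient step of domain adaptation on domain-relevant molecules."""
    optimizer.zero_grad()
    pooled = encoder.encode(smiles_batch)            # (batch, hidden_dim)
    preds = head(pooled)                             # (batch, n_tasks)
    targets = torch.stack([property_targets(s) for s in smiles_batch])
    loss = nn.functional.mse_loss(preds, targets)    # joint MSE over all tasks
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on a tiny batch of domain-relevant molecules:
encoder = ToyEncoder()
head = MultiTaskRegressionHead(hidden_dim=64, n_tasks=len(PROPERTY_FNS))
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
print(adaptation_step(encoder, head, opt, ["CCO", "c1ccccc1O"]))
```

In practice the adapted encoder would then be fine-tuned or probed on the downstream ADME endpoint; the point of the sketch is only the shape of the multi-task objective on a few hundred to a few thousand molecules.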
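For comparison, the random forest baseline on basic physicochemical properties can be sketched as follows. Again this is illustrative: the descriptor list, hyperparameters, and the placeholder target values are assumptions, not the paper's setup.

```python
# Minimal sketch of a random forest baseline on basic physicochemical
# descriptors (illustrative only; not the paper's implementation).
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles_list):
    """Turn SMILES into a matrix of basic physicochemical descriptors."""
    fns = [Descriptors.MolWt, Descriptors.MolLogP, Descriptors.TPSA,
           Descriptors.NumHDonors, Descriptors.NumHAcceptors,
           Descriptors.NumRotatableBonds]
    return np.array([[fn(Chem.MolFromSmiles(s)) for fn in fns]
                     for s in smiles_list])

# Placeholder SMILES and endpoint values for illustration only (not real data);
# in practice X and y would come from the ADME training set.
X = featurize(["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"])
y = np.array([0.5, -1.2, -2.1, 0.3])

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
print(model.predict(featurize(["CCOC(=O)C"])))
```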