Data scarcity and distribution shifts often hinder the ability of machine learning models to generalize when applied to proteins and other biological data. Self-supervised pre-training on large datasets is a common method to enhance generalization. However, striving to perform well on all possible proteins can limit a model's capacity to excel on any specific one, even though practitioners are often most interested in accurate predictions for the individual protein they study. To address this limitation, we propose an orthogonal approach to generalization. Building on the prevalence of self-supervised pre-training, we introduce a method for self-supervised fine-tuning at test time, allowing models to adapt to the test protein of interest on the fly, without requiring any additional data. We study our test-time training (TTT) method through the lens of perplexity minimization and show that it consistently enhances generalization across different models, model scales, and datasets. Notably, our method achieves new state-of-the-art results on the standard benchmark for protein fitness prediction, improves protein structure prediction on challenging targets, and increases function prediction accuracy.
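To make the procedure concrete, below is a minimal sketch of test-time training against the masked-language-modeling loss of a single test protein, which corresponds to minimizing its (pseudo-)perplexity. It assumes a masked-LM protein checkpoint served through HuggingFace transformers (an ESM-2 model is used here only as an example); the model name, masking rate, step count, and learning rate are illustrative placeholders, not the exact recipe evaluated in the paper.

```python
# Sketch of test-time training (TTT) for a protein language model.
# Assumption: a masked-LM checkpoint (ESM-2 via HuggingFace transformers);
# mask_rate, steps, and lr are illustrative, not the authors' exact recipe.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def test_time_train(sequence: str,
                    model_name: str = "facebook/esm2_t12_35M_UR50D",
                    steps: int = 30, lr: float = 1e-5, mask_rate: float = 0.15):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    enc = tokenizer(sequence, return_tensors="pt")
    input_ids = enc["input_ids"]
    # Identify special tokens (BOS/EOS) so they are never masked.
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(input_ids[0].tolist(),
                                          already_has_special_tokens=True),
        dtype=torch.bool,
    ).unsqueeze(0)

    for _ in range(steps):
        # Randomly mask a fraction of residues in the test protein.
        mask = (torch.rand_like(input_ids, dtype=torch.float) < mask_rate) & ~special
        labels = input_ids.clone()
        labels[~mask] = -100  # compute loss only on the masked residues
        masked_ids = input_ids.clone()
        masked_ids[mask] = tokenizer.mask_token_id

        # One gradient step on the masked-LM loss for this single protein --
        # self-supervised adaptation with no additional data.
        loss = model(input_ids=masked_ids,
                     attention_mask=enc["attention_mask"],
                     labels=labels).loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    model.eval()
    return model  # adapted model, then used for the downstream prediction
```

The adapted weights are discarded after scoring the protein of interest, so each test protein receives its own briefly fine-tuned copy of the pre-trained model.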