While protein language models (pLMs) have transformed biological research, the scaling laws governing their improvement remain underexplored. Adapting methodologies from NLP scaling laws, we investigated the optimal ratio between model parameters and training tokens under a fixed compute budget. Our study reveals that optimal pLM size scales sublinearly with compute budget, showing diminishing returns in performance as model size increases, and we identify a training-loss plateau comparable to those reported in related scaling-law studies. Our findings suggest that widely used pLMs might not be compute-optimal, indicating that larger models could achieve convergence more efficiently. By training a 35M-parameter model with a single pass over a reduced token set, we attained perplexity comparable to that of much larger models such as ESM-2 (15B) and xTrimoPGLM (100B). This work paves the way toward more compute-efficient pLMs, democratizing their training and practical application in computational biology.
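For context, the compute-optimal setup adapted here from NLP scaling laws can be sketched in its standard Chinchilla-style form (Hoffmann et al., 2022); the parametric loss and the symbols E, A, B, alpha, beta, a, b below are illustrative conventions from that literature, not quantities fitted in this work:

% Chinchilla-style compute-optimal formulation (Hoffmann et al., 2022).
% All constants and exponents are placeholders for illustration only.
\begin{align}
  L(N, D) &= E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
    && \text{parametric training loss in parameters } N \text{, tokens } D \\
  (N_{\mathrm{opt}}, D_{\mathrm{opt}}) &=
    \operatorname*{arg\,min}_{N,\, D \,:\, 6ND \,=\, C} L(N, D)
    && \text{minimize loss at fixed compute } C \approx 6ND \\
  N_{\mathrm{opt}} &\propto C^{a}, \qquad
  D_{\mathrm{opt}} \propto C^{b}, \qquad a + b = 1
    && \text{fitted power-law allocation of compute}
\end{align}

Under this formulation, an exponent a < 1 corresponds to the sublinear scaling of model size with compute budget described in the abstract.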