Multilingual transfer ability, which reflects how well a model fine-tuned on one source language performs on other languages, has been well studied for multilingual pre-trained models. However, whether such capability transfer exists between natural language and gene sequences/languages remains underexplored. This study addresses that gap, drawing inspiration from the sentence-pair classification task used to evaluate sentence similarity in natural language. We constructed two analogous tasks: DNA-pair classification (DNA sequence similarity) and DNA-protein-pair classification (gene coding determination). These tasks were designed to test whether capabilities transfer from natural language to gene sequences. Even a small pre-trained model such as GPT-2-small, pre-trained only on English, achieved 78% accuracy on the DNA-pair classification task after fine-tuning on English sentence-pair classification data (XTREME PAWS-X); a BERT model pre-trained on multilingual text reached 82% accuracy. On the more complex DNA-protein-pair classification task, however, the models' outputs were barely distinguishable from random. These experiments suggest that capabilities may transfer from natural language to genetic language, but further task testing is needed to confirm this.
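The evaluation protocol described above amounts to fine-tuning on a natural-language pair task and then scoring the unchanged model on gene-sequence pairs. The sketch below illustrates this setup with a HuggingFace-style pipeline; the dataset loading details, the `dna_pairs.json` file, and its schema are illustrative assumptions, not the authors' exact code.

```python
# Minimal sketch of the transfer-evaluation setup, assuming a HuggingFace
# pipeline. The DNA-pair data file and its schema are hypothetical.
from transformers import (
    GPT2Tokenizer, GPT2ForSequenceClassification,
    Trainer, TrainingArguments,
)
from datasets import load_dataset

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Step 1: fine-tune on English sentence-pair data (PAWS-X via XTREME).
pawsx = load_dataset("xtreme", "PAWS-X.en")

def encode_pairs(batch):
    # Encode the two sequences of each pair as a single classifier input.
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, padding="max_length", max_length=128)

train = pawsx["train"].map(encode_pairs, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train,
)
trainer.train()

# Step 2: evaluate the same model, with no further training, on DNA pairs.
# `dna_pairs.json` is a hypothetical file of records that mirror the PAWS-X
# schema: {"sentence1": <DNA seq>, "sentence2": <DNA seq>, "label": 0 or 1}.
dna = load_dataset("json", data_files="dna_pairs.json")["train"].map(
    encode_pairs, batched=True)
print(trainer.evaluate(eval_dataset=dna))
```

Keeping the DNA-pair data in the same schema as PAWS-X lets the identical tokenization and evaluation path serve both domains, so any accuracy above chance on the DNA pairs can be attributed to transfer rather than to task-specific plumbing.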