Masked language modelling (MLM) is a widely adopted pretraining objective in genomic sequence modelling. While pretrained models can successfully serve as encoders for various downstream tasks, the distribution shift between pretraining and inference hurts performance: the pretraining task is to map [MASK] tokens to predictions, yet [MASK] tokens are absent during downstream applications. As a result, the encoder does not prioritize its encodings of non-[MASK] tokens, and expends parameters and compute on machinery relevant only to the MLM task and unused at deployment time. We empirically show that this mismatch is particularly detrimental in genomic pipelines, where models are often used for feature extraction without fine-tuning. In this work, we propose a modified encoder-decoder architecture based on the masked autoencoder framework, designed to address this inefficiency within a BERT-based transformer. We evaluate our approach on the BIOSCAN-5M dataset, comprising over 2 million unique DNA barcodes, and achieve substantial performance gains in both closed-world and open-world classification tasks compared against causal models and bidirectional architectures pretrained with MLM.
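To make the architectural distinction concrete, the following is a minimal sketch of the masked-autoencoder data flow for DNA tokens: the encoder consumes only the visible (unmasked) positions, and a lightweight decoder re-inserts placeholder embeddings at the masked positions before prediction. This is a toy illustration under stated assumptions, not the paper's implementation; the embedding table, the stand-in encoder, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mae_style_forward(tokens, mask_ratio=0.25, d=8):
    """Toy MAE-style pass: the encoder sees only visible tokens,
    while a lightweight decoder re-inserts a placeholder at masked
    positions and predicts the original tokens there."""
    n = len(tokens)
    n_mask = int(n * mask_ratio)
    perm = rng.permutation(n)
    masked_idx = np.sort(perm[:n_mask])
    visible_idx = np.sort(perm[n_mask:])

    # Toy embedding table: 4 DNA bases (A, C, G, T) -> d-dim vectors.
    embed = rng.normal(size=(4, d))

    # Encoder input excludes masked positions entirely, unlike BERT,
    # which feeds [MASK] tokens through the full encoder stack.
    visible = embed[tokens[visible_idx]]  # (n - n_mask, d)
    encoded = visible                     # stand-in for the encoder layers

    # Decoder: scatter encoded tokens back to their positions; masked
    # slots keep a shared placeholder (zeros here); a toy linear head
    # then predicts a distribution over the 4 bases at every position.
    full = np.zeros((n, d))
    full[visible_idx] = encoded
    logits = full @ embed.T               # (n, 4)
    return visible.shape, logits.shape, masked_idx

tokens = rng.integers(0, 4, size=16)
enc_shape, logit_shape, masked = mae_style_forward(tokens)
print(enc_shape, logit_shape, len(masked))  # (12, 8) (16, 4) 4
```

The key point the sketch captures is the compute split: the quadratic-cost encoder runs over only 12 of 16 tokens, and the [MASK]-specific machinery lives entirely in the decoder, which can be discarded when the encoder is later used for feature extraction.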