While (large) language models have significantly improved in recent years, they still struggle to sensibly process long sequences, such as those found in books, due to the quadratic scaling of the underlying attention mechanism. To address this, we propose NextLevelBERT, a Masked Language Model operating not on tokens but on higher-level semantic representations in the form of text embeddings. We pretrain NextLevelBERT to predict the vector representation of entire masked text chunks and evaluate the effectiveness of the resulting document vectors on three types of tasks: 1) semantic textual similarity via zero-shot document embeddings, 2) long-document classification, and 3) multiple-choice question answering. We find that next-level Masked Language Modeling is an effective technique for long-document use cases and can outperform much larger embedding models, as long as the required level of semantic detail is not too fine-grained. Our models and code are publicly available online.
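The chunk-level masking objective described above can be illustrated with a minimal, self-contained sketch. Everything here is a toy stand-in, not the paper's actual setup: `embed_chunk` (a deterministic hash-based encoder) replaces a real frozen sentence encoder, and mean-pooling over visible chunks replaces NextLevelBERT's transformer; only the overall shape of the objective — mask a chunk's embedding and predict it from context — follows the text.

```python
import zlib
import numpy as np

# Toy stand-in for a frozen sentence encoder: maps each text chunk to a
# deterministic unit vector (crc32 used only to get a stable seed per chunk).
def embed_chunk(chunk: str, dim: int = 8) -> np.ndarray:
    rng = np.random.default_rng(zlib.crc32(chunk.encode()))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# A document split into chunks; each chunk becomes one input "token".
chunks = ["Chapter one begins.", "The plot thickens.", "A twist appears.", "The end."]
X = np.stack([embed_chunk(c) for c in chunks])  # (num_chunks, dim)

# Mask one chunk: its embedding is replaced by a [MASK] vector (zeros here).
mask_idx = 2
inputs = X.copy()
inputs[mask_idx] = 0.0

# Hypothetical "model": mean of the visible chunk embeddings, standing in
# for a transformer that contextualizes chunk vectors.
visible = np.arange(len(chunks)) != mask_idx
pred = inputs[visible].mean(axis=0)

# Training signal: pull the prediction toward the true chunk embedding,
# e.g. with a cosine-similarity loss (1 - cos).
cos = float(pred @ X[mask_idx] / (np.linalg.norm(pred) * np.linalg.norm(X[mask_idx])))
loss = 1.0 - cos
```

In the real model, the loss gradient would update the transformer so that context chunks predict the masked chunk's embedding; pooling the resulting contextualized vectors then yields the zero-shot document embeddings evaluated in the tasks above.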