Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa. While these models demonstrate remarkable performance on general datasets, they can struggle in specialized domains such as medicine, where specialized terminology, domain-specific abbreviations, and varying document structures are common. This paper explores strategies for adapting these models to domain-specific requirements, primarily through continuous pre-training on domain-specific data. We pre-trained several German medical language models on 2.4B tokens derived from translated public English medical data and 3B tokens of German clinical data. The resulting models were evaluated on various German downstream tasks, including named entity recognition (NER), multi-label classification, and extractive question answering. Our results suggest that models augmented by clinical and translation-based pre-training typically outperform general-domain models in medical contexts. We conclude that continuous pre-training can match or even exceed the performance of clinical models trained from scratch. Furthermore, pre-training on clinical data and leveraging translated texts have proven to be reliable methods for domain adaptation in medical NLP tasks.
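For readers who want a concrete picture of what continuous (continued) pre-training looks like in practice, the sketch below shows a masked language modeling run on a domain corpus using the HuggingFace transformers library. This is a minimal illustration under stated assumptions, not the paper's actual training setup: the base checkpoint (deepset/gbert-base), the corpus file name, and all hyperparameters are placeholders.

```python
# Minimal sketch of continued (domain-adaptive) pre-training via masked language modeling.
# Assumptions: the German checkpoint "deepset/gbert-base" and the corpus file
# "german_medical_corpus.txt" are placeholders, not the configuration used in the paper.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "deepset/gbert-base"  # placeholder general-domain German checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Load the domain corpus (one text example per line) and tokenize into fixed-length inputs.
raw = load_dataset("text", data_files={"train": "german_medical_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# The collator applies dynamic masking (15% of tokens) for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="gbert-medical-continued",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=5e-5,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)

trainer.train()  # continue pre-training the general-domain model on the domain corpus
```

The checkpoint produced this way can then be fine-tuned on downstream tasks such as NER, multi-label classification, or extractive question answering in the usual supervised fashion.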