We introduce the sequence classification problem CIViC Evidence to the field of medical NLP. CIViC Evidence denotes the multi-label classification task of assigning clinical evidence labels to abstracts of scientific papers that examine various combinations of genomic variants, cancer types, and treatment approaches. We approach CIViC Evidence with different language models: we fine-tune pretrained checkpoints of BERT and RoBERTa on the CIViC Evidence dataset and compare their performance against models of the same architecture that were pretrained on domain-specific text. In this context, we find that BiomedBERT and BioLinkBERT outperform BERT on CIViC Evidence (+0.8% and +0.9% absolute improvement in class-support weighted F1 score). All transformer-based models show a clear performance edge over a logistic regression trained on bigram tf-idf scores (+1.5% to +2.7% improvement in F1 score). We compare the aforementioned BERT-like models to OpenAI's GPT-4 in a few-shot setting (on a small subset of our original test dataset), demonstrating that, without additional prompt engineering or fine-tuning, GPT-4 performs worse on CIViC Evidence than our six fine-tuned models (66.1% weighted F1 score compared to 71.8% for the best fine-tuned model). However, its performance comes reasonably close to the benchmark of a logistic regression model trained on bigram tf-idf scores (67.7% weighted F1 score).
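The logistic regression baseline mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy abstracts, label names, n-gram range, and solver settings are all assumptions, and the real baseline is trained on the CIViC Evidence dataset.

```python
# Hypothetical sketch of a bigram tf-idf + logistic regression baseline for
# multi-label classification; data and hyperparameters are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy stand-ins for paper abstracts and their clinical evidence labels.
abstracts = [
    "BRAF V600E melanoma patients responded to vemurafenib in a phase III trial",
    "case report of EGFR L858R lung adenocarcinoma treated with erlotinib",
    "preclinical study of KRAS G12C inhibition in tumor cell lines",
    "retrospective cohort of HER2-positive breast cancer treated with trastuzumab",
]
labels = [["Clinical"], ["Case study"], ["Preclinical"], ["Clinical"]]

# Binarize the label sets into a multi-label indicator matrix.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# Tf-idf features including bigrams (the exact n-gram range is an assumption).
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(abstracts)

# One independent logistic regression per label for multi-label output.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

pred = clf.predict(X)  # binary indicator matrix, one column per label
```

A class-support weighted F1 score, as reported in the abstract, could then be computed with `sklearn.metrics.f1_score(Y, pred, average="weighted")`.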