Large Language Models (LLMs) have rapidly emerged as vital resources for a wide range of applications in the biomedical and healthcare domains; however, these models are prone to generating inaccurate information, or hallucinations. Retrieval-augmented generation offers a way for such models to update their knowledge and enhance their performance. In contrast to previous retrieval-augmented LMs, which use specialized cross-attention mechanisms to help the LLM encode retrieved text, BiomedRAG adopts a simpler approach: it feeds the retrieved chunk-based documents directly into the LLM. This straightforward design is easily applicable to existing retrieval and language models and effectively bypasses noisy information in retrieved documents, which is particularly valuable in noise-intensive tasks. Moreover, we demonstrate the potential of using the LLM to supervise the retrieval model in the biomedical domain, enabling it to retrieve documents that help the LM improve its predictions. Our experiments reveal that, with the tuned scorer, \textsc{BiomedRAG} attains superior performance across 5 biomedical NLP tasks, encompassing information extraction (triple extraction, relation extraction), text classification, link prediction, and question answering, evaluated on more than 9 datasets. For instance, in the triple extraction task, \textsc{BiomedRAG} outperforms other triple extraction systems with micro-F1 scores of 81.42 and 88.83 on the GIT and ChemProt corpora, respectively.
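The core pipeline described above, a scorer that ranks chunk-based documents and a prompt that prepends the top-ranked chunks directly to the LLM input with no cross-attention machinery, can be sketched as follows. This is a minimal illustration only: the bag-of-words cosine scorer is a hypothetical stand-in for the paper's tuned, LLM-supervised scorer, and the function names are invented for this example.

```python
from collections import Counter
import math

def score(query: str, chunk: str) -> float:
    # Toy bag-of-words cosine similarity; in BiomedRAG this role is played
    # by a learned scorer supervised with signal from the LLM itself.
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    dot = sum(q[t] * c[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in c.values())))
    return dot / norm if norm else 0.0

def retrieve_and_prompt(query: str, chunks: list[str], k: int = 1) -> str:
    # Rank chunk-based documents, then concatenate the top-k directly into
    # the prompt text handed to the LLM (no specialized encoding mechanism).
    top = sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]
    return "\n".join(top) + "\n\n" + query
```

A usage example: given a biomedical query and a small pool of chunks, the relevant chunk is ranked first and prepended, so the LLM conditions on it as plain context.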