With the increasing use of large language models (LLMs) to answer biomedical questions, it is crucial to evaluate both the quality of the generated answers and the references cited to support the facts they state. Evaluating LLM-generated text remains a challenge for question answering, retrieval-augmented generation (RAG), summarization, and many other natural language processing tasks in the biomedical domain, because verifying consistency with the scientific literature and handling complex medical terminology require expert assessment. In this work, we propose BioACE, an automated framework for evaluating biomedical answers and their citations against the facts stated in the answers. BioACE considers multiple aspects of answer evaluation, including completeness, correctness, precision, and recall with respect to ground-truth nuggets. We developed automated approaches for each of these aspects and performed extensive experiments to assess and analyze their correlation with human evaluations. In addition, we examined multiple existing approaches, such as natural language inference (NLI), pre-trained language models, and LLMs, for evaluating the quality of the evidence provided to support generated answers in the form of citations to the biomedical literature. Based on these detailed experiments and analyses, we identify the best-performing approaches for biomedical answer and citation evaluation and release them as part of the BioACE evaluation package (https://github.com/deepaknlp/BioACE).
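To make the nugget-based aspects concrete, the following is a minimal illustrative sketch, not the BioACE implementation: given which ground-truth nuggets an answer covers and which of the answer's claims are judged supported (judgments that, in practice, an automated method or expert would produce), recall and precision reduce to simple set ratios. All function and variable names here are hypothetical.

```python
def nugget_recall(gold_nuggets, matched_nuggets):
    """Fraction of ground-truth nuggets covered by the answer (hypothetical helper)."""
    gold = set(gold_nuggets)
    if not gold:
        return 0.0
    return len(gold & set(matched_nuggets)) / len(gold)


def claim_precision(answer_claims, supported_claims):
    """Fraction of the answer's claims judged correct/supported (hypothetical helper)."""
    claims = set(answer_claims)
    if not claims:
        return 0.0
    return len(claims & set(supported_claims)) / len(claims)


# Example: the answer covers 2 of 4 ground-truth nuggets,
# and 1 of its 2 claims is judged supported.
r = nugget_recall({"n1", "n2", "n3", "n4"}, {"n1", "n3"})   # 0.5
p = claim_precision({"c1", "c2"}, {"c1"})                   # 0.5
```

The same skeleton extends naturally to the citation side, where the per-claim support judgment would come from, e.g., an NLI model checking entailment between a cited passage and the claim.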