The growing volume of biomedical scholarly abstracts makes efficiently retrieving accurate, relevant information increasingly challenging. To address this, we introduce a novel approach that integrates an optimized topic modelling framework, OVB-LDA, with the BI-POP CMA-ES optimization technique for enhanced categorization of scholarly abstracts. Complementing this, we employ the distilled MiniLM model, fine-tuned on domain-specific data, for high-precision answer extraction. We evaluate our approach in three configurations: scholarly abstract retrieval, gold-standard scholarly abstracts, and gold-standard snippets, and it consistently outperforms established methods such as RYGH and Bio-AnswerFinder. Notably, we demonstrate that extracting answers from abstracts alone can yield high accuracy, underscoring the sufficiency of abstracts for many biomedical queries. Despite its compact size, MiniLM exhibits competitive performance, challenging the prevailing notion that only large, resource-intensive models can handle such complex tasks. Our results, validated across various question types and evaluation batches, highlight the robustness and adaptability of the method in real-world biomedical applications. While the approach shows promise, we identify challenges in handling complex list-type questions and inconsistencies in evaluation metrics. Future work will focus on refining the topic model with more extensive domain-specific datasets, further optimizing MiniLM, and leveraging large language models (LLMs) to improve both precision and efficiency in biomedical question answering.
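The categorization stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's online variational Bayes LDA (the algorithm behind OVB-LDA) on a few toy abstracts, and the `doc_topic_prior` / `topic_word_prior` values are hypothetical placeholders standing in for hyperparameters that the paper's pipeline would instead tune with BI-POP CMA-ES (e.g. via the `cma` package's restart option `bipop=True`).

```python
# Sketch of OVB-LDA abstract categorization (illustrative, not the paper's code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for biomedical scholarly abstracts.
abstracts = [
    "BRCA1 mutations increase hereditary breast cancer risk",
    "statins lower LDL cholesterol in cardiovascular patients",
    "BRCA2 variants are linked to ovarian cancer susceptibility",
    "beta blockers reduce blood pressure and resting heart rate",
]

X = CountVectorizer(stop_words="english").fit_transform(abstracts)

# learning_method="online" selects online variational Bayes inference.
# The two priors below are the kind of hyperparameters a BI-POP CMA-ES
# loop would optimize; the values here are arbitrary assumptions.
lda = LatentDirichletAllocation(
    n_components=2,
    learning_method="online",
    doc_topic_prior=0.1,
    topic_word_prior=0.01,
    random_state=0,
).fit(X)

doc_topic = lda.transform(X)          # per-abstract topic distribution
categories = doc_topic.argmax(axis=1) # hard topic assignment per abstract
print(categories)
```

Each abstract is assigned to its highest-probability topic; in the full pipeline, these topic assignments narrow the pool of abstracts handed to the fine-tuned MiniLM reader for answer extraction.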