Large language models (LLMs) have the potential to be valuable in healthcare, but their safety and effectiveness must be verified through rigorous evaluation. To this end, we comprehensively evaluated both open-source LLMs and Google's new multimodal LLM, Gemini, across medical reasoning, hallucination detection, and medical visual question answering (VQA) tasks. While Gemini showed competence, it lagged behind state-of-the-art models such as Med-PaLM 2 and GPT-4 in diagnostic accuracy. Moreover, Gemini achieved an accuracy of only 61.45\% on the medical VQA dataset, significantly lower than GPT-4V's score of 88\%. Our analysis revealed that Gemini is highly susceptible to hallucinations, overconfidence, and knowledge gaps, indicating risks if deployed uncritically. We also performed a detailed breakdown by medical subject and test type, providing actionable feedback for developers and clinicians. To mitigate these risks, we applied prompting strategies that improved performance. Finally, to facilitate future research and development, we release a Python module for medical LLM evaluation and establish a dedicated leaderboard on Hugging Face for medical-domain LLMs. The Python module is available at https://github.com/promptslab/RosettaEval