We present KorMedMCQA, the first Korean Medical Multiple-Choice Question Answering benchmark, derived from professional healthcare licensing examinations conducted in Korea between 2012 and 2024. The dataset contains 7,469 questions drawn from examinations for doctors, nurses, pharmacists, and dentists, covering a wide range of medical disciplines. We evaluate the performance of 59 large language models, spanning proprietary and open-source models, multilingual and Korean-specialized models, and models fine-tuned for clinical applications. Our results show that applying Chain-of-Thought (CoT) reasoning can improve model performance by up to 4.5% over direct answering. We also investigate whether MedQA, one of the most widely used medical benchmarks, derived from the U.S. Medical Licensing Examination, can serve as a reliable proxy for evaluating model performance in other regions, in this case Korea. Our correlation analysis of model scores on KorMedMCQA and MedQA reveals that these two benchmarks align no better than benchmarks from entirely different domains (e.g., MedQA and MMLU-Pro). This finding underscores the substantial linguistic and clinical differences between Korean and U.S. medical contexts and reinforces the need for region-specific medical QA benchmarks. To support ongoing research in Korean healthcare AI, we publicly release KorMedMCQA via Huggingface.
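As a minimal sketch of the cross-benchmark correlation analysis described above, the snippet below computes Pearson and Spearman correlations over per-model accuracy scores. The score values are illustrative placeholders, not results reported in the paper.

    from scipy.stats import pearsonr, spearmanr

    # Hypothetical accuracies (0-1 scale) for five models on each benchmark;
    # real values would come from the evaluation runs described in the paper.
    kormedmcqa_scores = [0.41, 0.55, 0.62, 0.70, 0.78]
    medqa_scores = [0.48, 0.52, 0.68, 0.64, 0.81]

    # Pearson measures linear agreement of scores; Spearman measures rank agreement.
    r, p = pearsonr(kormedmcqa_scores, medqa_scores)
    rho, p_rho = spearmanr(kormedmcqa_scores, medqa_scores)
    print(f"Pearson r = {r:.3f} (p = {p:.3f})")
    print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")

A low correlation under either measure would indicate, as the paper argues, that strong MedQA performance does not reliably predict performance on the Korean benchmark.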
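A minimal sketch of loading the released dataset with the Hugging Face datasets library; the repository ID, configuration name, and split name below are assumptions for illustration and should be checked against the official release page.

    from datasets import load_dataset

    # Load one exam track; the repo ID ("sean0042/KorMedMCQA"), config name
    # ("doctor"), and "test" split are assumed here, not confirmed by the abstract.
    kormedmcqa = load_dataset("sean0042/KorMedMCQA", "doctor")
    print(kormedmcqa["test"][0])  # inspect one multiple-choice question record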