Medical reasoning models (MRMs) achieve superior performance on medical benchmarks compared to medical LLMs; however, high accuracy alone is insufficient for practical deployment. One such requirement for real-world application is robustness to varying output constraints: posing the same medical question while requesting different answer formats should not affect the underlying correctness of the response. In this paper, we investigate this phenomenon in MRMs. To quantify this behavior, we propose the metric of answer-format robustness: the ability to reliably generate correct outputs across varying specified formats. We examine three representative formats: multiple-choice, open-ended question answering, and ranked lists. Across 15 proprietary and open-weight models, we observe substantial variation in format robustness (35–100%). Furthermore, we conduct controlled fine-tuning experiments on a shared backbone with matched training data to isolate the effect of the fine-tuning paradigm. We find that supervised fine-tuning yields more stable behavior across formats, whereas reinforcement fine-tuning often exhibits greater cross-format brittleness, with the degree of instability strongly dependent on reward design. Overall, answer-format robustness in MRMs is trainable yet brittle, and it requires careful evaluation before practical medical use.
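The abstract does not give a formal definition of the metric; one plausible operationalization is the fraction of questions a model answers correctly under every requested format. A minimal sketch of that reading follows, where the format names, data layout, and scoring rule are illustrative assumptions rather than the paper's actual definition:

```python
# Hypothetical sketch: one way to operationalize answer-format robustness.
# `results` maps each question ID to per-format correctness flags, e.g.
# {"q1": {"mcq": True, "open": True, "ranked": False}, ...}.
# The format names and the all-formats-correct rule are assumptions.

FORMATS = ("mcq", "open", "ranked")

def format_robustness(results: dict[str, dict[str, bool]]) -> float:
    """Fraction of questions answered correctly in *all* requested formats."""
    if not results:
        return 0.0
    robust = sum(
        all(per_format.get(fmt, False) for fmt in FORMATS)
        for per_format in results.values()
    )
    return robust / len(results)

if __name__ == "__main__":
    demo = {
        "q1": {"mcq": True, "open": True, "ranked": True},   # robust
        "q2": {"mcq": True, "open": False, "ranked": True},  # brittle
    }
    print(f"format robustness: {format_robustness(demo):.0%}")  # 50%
```

Under this all-or-nothing scoring, a model that is accurate in one format but fails in another is penalized on that question, which matches the paper's framing that correctness should not depend on the requested output format.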