Large Language Models (LLMs) have shown strong performance on static medical Question Answering (QA) tasks, yet their reasoning often deteriorates in multi-turn clinical dialogues, where patient information is scattered across turns. This paper introduces TriMediQ, a triplet-structured approach that improves the reasoning reliability of LLMs through explicit knowledge integration. TriMediQ first employs a frozen triplet-extraction LLM to convert patient responses into clinically grounded triplets, using constrained prompting to ensure factual precision. These triplets are incorporated into a patient-specific Knowledge Graph (KG), from which a trainable projection module, consisting of a graph encoder and a projector, captures relational dependencies while keeping all LLM parameters frozen. During inference, the projection module guides multi-hop reasoning over the KG, enabling coherent understanding of the clinical dialogue. Experiments on two interactive medical QA benchmarks show that TriMediQ achieves up to a 10.4\% accuracy improvement over five existing baselines on the iMedQA dataset. These results demonstrate that structuring patient information as triplets effectively improves the reasoning capability of LLMs in multi-turn medical QA.