Traditionally, AI research in medical diagnosis has centered largely on image analysis. While this has led to notable advances, the absence of patient-reported symptoms continues to limit diagnostic accuracy. To address this, we propose a Pre-Consultation Dialogue Framework (PCDF) that mimics real-world diagnostic practice, in which doctors iteratively query patients before reaching a conclusion. Specifically, we simulate diagnostic dialogues between two vision-language models (VLMs): a DocVLM, which generates follow-up questions based on the image and dialogue history, and a PatientVLM, which responds using a symptom profile derived from the ground-truth diagnosis. We additionally conduct a small-scale clinical validation of the synthetic symptoms generated by our framework, in which licensed clinicians confirm their clinical relevance, symptom coverage, and overall realism. The resulting DocVLM-PatientVLM interactions thus form coherent, multi-turn consultations paired with images and diagnoses, which we use to fine-tune the DocVLM. This dialogue-based supervision yields substantial gains over image-only training, highlighting the value of realistic symptom elicitation for diagnosis.
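The simulation loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`doc_vlm_ask`, `patient_vlm_answer`, `doc_vlm_diagnose`), the fixed turn budget, and the toy stand-in "models" are all assumptions introduced here for clarity.

```python
# Minimal sketch of the PCDF dialogue simulation. All names and the
# stopping rule are illustrative assumptions; in the actual framework,
# each stand-in below would be a call to a vision-language model.

def doc_vlm_ask(image, history):
    # Stand-in for DocVLM: generate a follow-up question conditioned on
    # the image and the dialogue history (here, just the turn count).
    return f"Can you describe symptom #{len(history) // 2 + 1}?"

def patient_vlm_answer(symptom_profile, question, turn):
    # Stand-in for PatientVLM: respond using the symptom profile derived
    # from the ground-truth diagnosis, one symptom per turn.
    return symptom_profile[turn % len(symptom_profile)]

def doc_vlm_diagnose(image, history):
    # Stand-in for DocVLM's final conclusion after the consultation.
    return "diagnosis conditioned on image and dialogue"

def simulate_consultation(image, symptom_profile, max_turns=3):
    """Run a multi-turn DocVLM-PatientVLM dialogue and package it as a
    fine-tuning example pairing image, dialogue, and diagnosis."""
    history = []
    for turn in range(max_turns):
        q = doc_vlm_ask(image, history)
        a = patient_vlm_answer(symptom_profile, q, turn)
        history.extend([("doctor", q), ("patient", a)])
    return {
        "image": image,
        "dialogue": history,
        "diagnosis": doc_vlm_diagnose(image, history),
    }

# Hypothetical usage: one training example from one image + symptom profile.
example = simulate_consultation(
    "chest_xray_001.png",
    ["persistent cough", "night sweats", "fatigue"],
)
```

Each such example couples the image with a coherent multi-turn consultation and the final diagnosis, which is the supervision signal used to fine-tune the DocVLM.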