The conventional pretraining-and-finetuning paradigm, while effective for common diseases with ample data, faces challenges in diagnosing data-scarce occupational diseases such as pneumoconiosis. Recently, large language models (LLMs) have exhibited unprecedented ability across multiple tasks in dialogue, bringing new opportunities to diagnosis. A common strategy might involve using adapter layers for vision-language alignment and conducting diagnosis in a dialogic manner. However, this approach often requires optimizing extensive learnable parameters in the text branch and the dialogue head, which can diminish the LLM's efficacy, especially with limited training data. In our work, we innovate by eliminating the text branch and substituting the dialogue head with a classification head. This presents a more effective way to harness LLMs for diagnosis with fewer learnable parameters. Furthermore, to balance the retention of detailed image information with progression toward an accurate diagnosis, we introduce the contextual multi-token engine, which adaptively generates diagnosis tokens. Additionally, we propose the information emitter module, which unidirectionally emits information from image tokens to diagnosis tokens. Comprehensive experiments validate the superiority of our method and the effectiveness of the proposed modules. Our code is available at https://github.com/CodeMonsterPHD/PneumoLLM/tree/main.
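The one-way information flow of the information emitter module can be pictured as an attention mask over the concatenated token sequence. The sketch below is purely illustrative, not the paper's implementation: the function name and mask convention are our own, and it assumes image tokens precede diagnosis tokens. Image-token queries may attend only to other image tokens (preserving the detailed image representation), while diagnosis-token queries may attend to everything (absorbing image information without feeding anything back).

```python
def build_emitter_mask(n_img, n_diag):
    """Boolean attention mask for one-way information flow (illustrative).

    Rows index queries, columns index keys; True means attention is allowed.
    The first n_img positions are image tokens, the last n_diag positions
    are diagnosis tokens.
    """
    n = n_img + n_diag
    mask = [[False] * n for _ in range(n)]
    for q in range(n):
        for k in range(n):
            if q < n_img:
                # Image tokens attend only among themselves, so the
                # image representation stays untouched by diagnosis tokens.
                mask[q][k] = k < n_img
            else:
                # Diagnosis tokens attend to image and diagnosis tokens,
                # so information is emitted from image to diagnosis only.
                mask[q][k] = True
    return mask
```

In a transformer layer this mask would typically be converted to additive form (0 where allowed, a large negative value where blocked) and added to the attention logits before the softmax.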