Accurate interpretation of Electrocardiogram (ECG) signals is pivotal for diagnosing cardiovascular diseases. Integrating ECG signals with their accompanying textual reports holds immense potential to enhance clinical diagnostics by combining physiological data with qualitative insights. However, this integration faces significant challenges due to inherent modality disparities and the scarcity of labeled data for robust cross-modal learning. To address these obstacles, we propose C-MELT, a novel framework that pre-trains ECG and text data using a contrastive masked auto-encoder architecture. C-MELT uniquely combines the strengths of generative pre-training with enhanced discriminative capabilities to achieve robust cross-modal representations. This is accomplished through masked modality modeling, specialized loss functions, and an improved negative sampling strategy tailored for cross-modal alignment. Extensive experiments on five public datasets across diverse downstream tasks demonstrate that C-MELT significantly outperforms existing methods, achieving 15% and 2% increases in linear probing and zero-shot performance over state-of-the-art models, respectively. These results highlight the effectiveness of C-MELT, underscoring its potential to advance automated clinical diagnostics through multi-modal representations.