ECG foundation models are increasingly popular because they adapt readily across diverse tasks. However, their clinical applicability is often limited by performance gaps relative to task-specific models, even after pre-training on large ECG datasets and fine-tuning on target data. This limitation likely stems from the lack of an effective post-training strategy. In this paper, we propose a simple yet effective post-training approach to enhance ECG foundation models, and we evaluate it on a publicly available Transformer-based foundation model. Experiments across multiple ECG tasks show that our method consistently outperforms baseline fine-tuning. On the PTB-XL benchmarks, it improves macro AUROC by 0.7%-8.9% and macro AUPRC by 23.3%-77.9%, and it also outperforms several recent state-of-the-art approaches, including task-specific models and advanced architectures. Further analyses demonstrate improved training dynamics and data efficiency: with only 30% of the training data, our method outperforms the baseline trained on the full dataset. Ablation studies highlight the importance of stochastic depth and preview linear probing. These findings underscore the potential of post-training strategies for improving ECG foundation models, and we hope this work contributes to the continued development of foundation models in the ECG domain.
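The abstract names two ingredients of the post-training recipe: stochastic depth and a linear-probing stage run before full fine-tuning. As an illustration only, not the authors' actual code, a minimal PyTorch sketch of both ideas might look as follows; `DropPath`, `linear_probe_then_finetune`, and the `train_fn` callback are hypothetical names introduced here for exposition.

```python
import torch
import torch.nn as nn

class DropPath(nn.Module):
    """Stochastic depth: randomly drop a residual branch per sample during training."""
    def __init__(self, drop_prob: float = 0.1):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.drop_prob == 0.0:
            return x
        keep_prob = 1.0 - self.drop_prob
        # One Bernoulli draw per sample, broadcast over the remaining dims.
        mask_shape = (x.shape[0],) + (1,) * (x.dim() - 1)
        mask = x.new_empty(mask_shape).bernoulli_(keep_prob)
        return x * mask / keep_prob  # rescale so expected activation is unchanged

def linear_probe_then_finetune(backbone: nn.Module, head: nn.Module,
                               train_fn, probe_epochs: int = 5,
                               finetune_epochs: int = 20) -> None:
    """Two-stage post-training: probe the head first, then fine-tune end to end."""
    # Stage 1: "preview" linear probing - freeze the backbone, train only the head.
    for p in backbone.parameters():
        p.requires_grad = False
    train_fn(head.parameters(), epochs=probe_epochs)
    # Stage 2: unfreeze and fine-tune everything; DropPath is active in train mode.
    for p in backbone.parameters():
        p.requires_grad = True
    train_fn(list(backbone.parameters()) + list(head.parameters()),
             epochs=finetune_epochs)
```

The intuition behind probing first is that a randomly initialized head can otherwise distort pre-trained features early in fine-tuning; warming up the head before unfreezing the backbone is a common mitigation, though the exact schedule used in the paper may differ.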