Automatic estimation of cardiac ultrasound image quality can be beneficial for guiding operators and ensuring the accuracy of clinical measurements. Previous work often fails to distinguish the view correctness of the echocardiogram from the image quality. Additionally, previous studies only provide a global image quality value, which limits their practical utility. In this work, we developed and compared three methods to estimate image quality: 1) classic pixel-based metrics, such as the generalized contrast-to-noise ratio (gCNR), computed with myocardial segments as the region of interest and the left ventricle lumen as the background, both obtained from a U-Net segmentation; 2) local image coherence, derived from a U-Net model that predicts coherence from B-mode images; and 3) a deep convolutional network that predicts the quality of each region directly in an end-to-end fashion. We evaluated each method against manual regional image quality annotations by three experienced cardiologists. The results indicate poor performance of the gCNR metric, with a Spearman correlation to the annotations of rho = 0.24. The end-to-end learning model achieved the best result, rho = 0.69, comparable to the inter-observer correlation of rho = 0.63. Finally, the coherence-based method, with rho = 0.58, outperformed the classical metrics and is more generic than the end-to-end approach.
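The gCNR mentioned above measures how separable the gray-level distributions of a region of interest and a background region are: gCNR = 1 - sum(min(p_ROI, p_B)), i.e., one minus the overlap of the two normalized histograms. A minimal sketch of this computation (the function name and histogram bin count are illustrative choices, not from the paper):

```python
import numpy as np

def gcnr(roi: np.ndarray, background: np.ndarray, bins: int = 256) -> float:
    """Generalized contrast-to-noise ratio between two pixel populations.

    Returns 0 when the gray-level distributions fully overlap and 1 when
    they are perfectly separable.
    """
    # Use a shared histogram range so the bins of both regions align.
    lo = float(min(roi.min(), background.min()))
    hi = float(max(roi.max(), background.max()))
    p_roi, _ = np.histogram(roi, bins=bins, range=(lo, hi))
    p_bg, _ = np.histogram(background, bins=bins, range=(lo, hi))
    # Normalize counts to probability mass functions.
    p_roi = p_roi / p_roi.sum()
    p_bg = p_bg / p_bg.sum()
    # gCNR = 1 - histogram overlap.
    return 1.0 - float(np.minimum(p_roi, p_bg).sum())
```

In the regional setting described here, `roi` would hold the pixels of one myocardial segment from the U-Net segmentation and `background` the left ventricle lumen pixels, yielding one gCNR value per segment rather than a single global score.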