Automated viewpoint classification in echocardiograms can help under-resourced clinics and hospitals provide faster diagnosis and screening when expert technicians are unavailable. We propose a novel approach to echocardiographic viewpoint classification and show that treating viewpoint classification as video classification rather than image classification yields an advantage. We propose a CNN-GRU architecture with a novel temporal feature weaving method, which leverages both spatial and temporal information to yield a 4.33\% increase in accuracy over baseline image classification while using only four consecutive frames. The proposed approach incurs minimal computational overhead. Additionally, we publish the Neonatal Echocardiogram Dataset (NED), a professionally annotated dataset providing sixteen viewpoints and associated echocardiography videos, to encourage future work and development in this field. Code available at: https://github.com/satchelfrench/NED
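The CNN-GRU pipeline described above can be sketched as: a CNN backbone extracts a spatial feature vector per frame, a GRU aggregates those features across the four consecutive frames, and the final hidden state feeds a sixteen-way viewpoint classifier. The following is a minimal NumPy illustration under stated assumptions — the CNN is stubbed as a fixed random projection, all dimensions and weights are hypothetical, and the paper's temporal feature weaving method is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): flattened frame size, per-frame
# CNN feature size, GRU hidden size, the 16 NED viewpoint classes, 4 frames.
FRAME, FEAT, HIDDEN, CLASSES, FRAMES = 128, 64, 32, 16, 4

# Stand-in for a CNN backbone: a fixed random projection of a flattened frame.
W_cnn = rng.standard_normal((FEAT, FRAME)) * 0.1

def cnn_features(frame):
    """Map one flattened frame to a spatial feature vector (stubbed CNN)."""
    return np.tanh(W_cnn @ frame)

# Standard GRU parameters: update gate z, reset gate r, candidate state.
Wz, Uz = rng.standard_normal((HIDDEN, FEAT)) * 0.1, rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
Wr, Ur = rng.standard_normal((HIDDEN, FEAT)) * 0.1, rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
Wh, Uh = rng.standard_normal((HIDDEN, FEAT)) * 0.1, rng.standard_normal((HIDDEN, HIDDEN)) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h):
    """One GRU update combining the current frame's features with history."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

W_out = rng.standard_normal((CLASSES, HIDDEN)) * 0.1

def classify_clip(frames):
    """Extract per-frame features, aggregate with the GRU, classify."""
    h = np.zeros(HIDDEN)
    for frame in frames:
        h = gru_step(cnn_features(frame), h)
    logits = W_out @ h
    probs = np.exp(logits - logits.max())     # softmax over viewpoints
    return probs / probs.sum()

clip = rng.standard_normal((FRAMES, FRAME))   # four consecutive frames
probs = classify_clip(clip)
```

The design point the abstract makes is visible here: the per-frame classifier sees only spatial features, while the GRU lets the prediction depend on how those features evolve across consecutive frames.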