Fluorescence microscopy is widely employed for the analysis of living biological samples; however, the utility of the resulting recordings is frequently constrained by noise, temporal variability, and inconsistent visualisation of signals that oscillate over time. We present a novel computational framework that integrates information from multiple time-resolved frames into a single high-quality image while preserving the underlying biological content of the original video. We evaluate the proposed method across a large set of configurations (n = 111) on a challenging dataset comprising dynamic, heterogeneous, and morphologically complex 2D monolayers of cardiac cells. Results show that our framework, which combines explainable techniques from different computer vision application fields, generates composite images that preserve and enhance the quality and information of individual microscopy frames, yielding a 44% average increase in cell count compared with previous methods. The proposed pipeline is applicable to other imaging domains that require the fusion of multi-temporal image stacks into high-quality 2D images, thereby facilitating annotation and downstream segmentation.