Convolutional Neural Networks (CNNs) have proven highly effective across a broad spectrum of computer vision tasks, such as classification, identification, and segmentation. Depending on the computational demands of the task, these methods can be deployed in both centralized and distributed environments. While much of the literature has focused on the explainability of CNNs, which is essential for building trust and confidence in their predictions, there remains a gap in understanding their impact on computational resources, particularly in distributed training contexts. In this study, we analyze how CNN architecture choices influence model accuracy and investigate the additional factors that affect computational efficiency in distributed systems. Our findings contribute valuable insights for optimizing the deployment of CNNs in resource-intensive scenarios, paving the way for further exploration of the variables critical to distributed learning.