Video capsule endoscopy has become increasingly important for investigating the small intestine within the gastrointestinal tract. A persistent challenge, however, is the short battery lifetime of such compact edge sensor devices. Integrating artificial intelligence can help overcome this limitation by enabling intelligent real-time decision-making, thereby reducing energy consumption and prolonging battery life. This remains difficult, however, due to data sparsity and the limited on-device resources, which restrict the overall model size. In this work, we introduce a multi-task neural network that combines precise self-localization within the gastrointestinal tract with anomaly detection in the small intestine in a single model. Throughout development, we consistently restricted the total number of parameters to ensure that the model remains feasible to deploy in a small capsule. We report the first multi-task results on the recently published Galar dataset, integrating established multi-task methods with Viterbi decoding for subsequent time-series analysis. Our approach outperforms current single-task models and represents a significant advance for AI-based methods in this field. The model achieves an accuracy of 93.63% on the localization task and 87.48% on the anomaly detection task, while requiring only 1 million parameters and surpassing the current baselines.
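The time-series smoothing mentioned above can be illustrated with a minimal Viterbi decoder over per-frame classifier outputs. The two-state setup, transition probabilities, and function below are illustrative assumptions for exposition only, not the paper's actual configuration or code.

```python
import math

def viterbi(emissions, transition, initial):
    """Return the most likely state sequence given per-frame log-likelihoods.

    emissions:  list of dicts, state -> log p(frame | state), one per frame
    transition: dict, (prev_state, cur_state) -> log p(cur | prev)
    initial:    dict, state -> log p(state at frame 0)
    """
    states = list(initial)
    # delta[s] = best log-probability of any path ending in state s
    delta = {s: initial[s] + emissions[0][s] for s in states}
    back = []  # back-pointers, one dict per frame after the first
    for e in emissions[1:]:
        new_delta, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: delta[p] + transition[(p, s)])
            new_delta[s] = delta[prev] + transition[(prev, s)] + e[s]
            ptr[s] = prev
        delta = new_delta
        back.append(ptr)
    # Trace back from the best final state
    best = max(states, key=delta.get)
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy example (hypothetical numbers): two GI sections with "sticky"
# transitions; a single noisy per-frame prediction of 'B' gets smoothed.
log = math.log
stay, switch = log(0.9), log(0.1)
trans = {('A', 'A'): stay, ('B', 'B'): stay,
         ('A', 'B'): switch, ('B', 'A'): switch}
init = {'A': log(0.5), 'B': log(0.5)}
# Per-frame classifier confidences: frames predict A, A, B, A at p = 0.8
frames = [{'A': log(0.8), 'B': log(0.2)},
          {'A': log(0.8), 'B': log(0.2)},
          {'A': log(0.2), 'B': log(0.8)},
          {'A': log(0.8), 'B': log(0.2)}]
smoothed = viterbi(frames, trans, init)  # isolated 'B' is smoothed away
```

Because organ sections change rarely relative to the frame rate, a sticky transition matrix lets the decoder suppress isolated misclassifications without any extra model parameters.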