TriCloudEdge is a scalable three-tier cloud continuum that integrates far-edge devices, intermediate edge nodes, and central cloud services operating in parallel as a unified solution. At the far edge, ultra-low-cost microcontrollers handle lightweight AI tasks; intermediate edge devices provide local intelligence; and the cloud tier offers large-scale analytics, federated learning, model adaptation, and global identity management. The proposed architecture supports multiple protocols and technologies (WebSocket, MQTT, HTTP), compared against a single versatile protocol (Zenoh), for transferring diverse bidirectional data across the tiers, balancing computational load against latency requirements. Comparative implementations of the two architectures demonstrate the trade-offs between resource utilization and communication efficiency. The results show that TriCloudEdge can distribute computational load to address latency and privacy concerns. The work also presents tests of AI model adaptation at the far edge and examines the associated computational effort through the lens of parallelism. Overall, this work offers a perspective on the practical challenges of implementing a cloud-edge continuum, aligned with recent research advances across the different cloud tiers.