TriCloudEdge is a scalable three-tier cloud continuum that integrates far-edge devices, intermediate edge nodes, and central cloud services operating in parallel as a unified solution. At the far edge, ultra-low-cost microcontrollers handle lightweight AI tasks; intermediate edge devices provide local intelligence; and the cloud tier offers large-scale analytics, federated learning, model adaptation, and global identity management. The proposed architecture combines multiple protocols and technologies (WebSocket, MQTT, HTTP) to transfer diverse bidirectional data across the tiers and is compared against a single versatile protocol (Zenoh), balancing computational load against latency requirements. Comparative implementations of the two variants demonstrate the trade-offs between resource utilization and communication efficiency. The results show that TriCloudEdge can distribute computation across the tiers to address latency and privacy concerns. The work also presents tests of AI model adaptation at the far edge and examines the computational effort through the lens of parallelism. Overall, it offers a practical perspective on implementing the cloud continuum, aligned with recent research addressing challenges at each tier.
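To make the three-tier division of labor concrete, the following is a minimal illustrative sketch of a tier-placement policy: a task is routed to the lowest tier that can satisfy its compute, latency, and privacy constraints. The thresholds, field names, and the `place_task` helper are hypothetical assumptions chosen for the example, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    compute_mflops: float     # estimated compute demand (assumed metric)
    latency_budget_ms: float  # maximum tolerable response latency
    privacy_sensitive: bool   # must the data stay near its source?

def place_task(task: Task) -> str:
    """Pick the lowest continuum tier that satisfies the task's constraints.

    Thresholds below are illustrative, not measured values from the paper.
    """
    # Far edge: ultra-low-cost microcontrollers, lightweight AI only.
    if task.compute_mflops <= 10 and task.latency_budget_ms <= 20:
        return "far-edge"
    # Intermediate edge: local intelligence; keeps private data nearby.
    if task.compute_mflops <= 1000 or task.privacy_sensitive:
        return "edge"
    # Cloud: large-scale analytics, federated learning, model adaptation.
    return "cloud"

print(place_task(Task(5, 10, False)))        # lightweight, tight latency
print(place_task(Task(500, 100, True)))      # private, local inference
print(place_task(Task(50000, 1000, False)))  # heavy analytics workload
```

A policy of this shape is one simple way to realize the paper's claim that distributing computation across tiers addresses both latency (by keeping fast loops at the far edge) and privacy (by keeping sensitive data off the cloud tier).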