Processing sensory data close to the data source, often on edge devices, promises low latency for pervasive applications such as smart cities. This commonly involves a multitude of processing services executed with limited resources, a setup that faces three problems: first, application demand and resource availability fluctuate, so service execution must scale dynamically to sustain processing requirements (e.g., latency); second, each service permits different actions to adjust its operation, so each requires an individual scaling policy; third, without a higher-level mediator, services co-located on the same device would cannibalize each other's resources. This demo first presents a platform for context-aware autoscaling of stream processing services that allows developers to monitor and adjust service execution across multiple service-specific parameters. We then connect a scaling agent to these interfaces that gradually builds an understanding of the processing environment by exploring each service's action space; the agent then optimizes service execution according to this knowledge. Participants can revisit the demo contents as a video summary and an introductory poster, or build a custom agent by extending the artifact repository.
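The explore-then-optimize behavior described above can be sketched as a minimal epsilon-greedy agent per service. Everything here is an illustrative assumption, not the platform's actual API: the class name `ScalingAgent`, the methods `choose`/`observe`, and the idea of using negated latency as the reward signal are all hypothetical.

```python
import random

class ScalingAgent:
    """Hypothetical sketch of a per-service scaling agent: it first tries
    every action in the service's action space, then mostly exploits the
    action with the best observed average reward."""

    def __init__(self, actions, epsilon=0.2):
        self.actions = list(actions)       # service-specific scaling actions
        self.epsilon = epsilon             # exploration probability
        self.totals = {a: 0.0 for a in self.actions}  # cumulative reward
        self.counts = {a: 0 for a in self.actions}    # times each action was tried

    def choose(self):
        # Try every untried action once to build initial knowledge.
        untried = [a for a in self.actions if self.counts[a] == 0]
        if untried:
            return random.choice(untried)
        # Keep exploring with probability epsilon, otherwise exploit.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.totals[a] / self.counts[a])

    def observe(self, action, reward):
        # Reward could be, e.g., the negated end-to-end latency reported
        # by the platform's monitoring interface (an assumption here).
        self.totals[action] += reward
        self.counts[action] += 1
```

A usage sketch: instantiate one agent per service with its permitted actions (e.g., `["add_replica", "remove_replica", "lower_resolution"]`), call `choose()` each control interval, apply the action through the platform's adjustment interface, and feed the resulting latency measurement back via `observe()`. A higher-level mediator could weight each agent's rewards to arbitrate between co-located services.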