In this paper, CD-TWINSAFE is introduced, a V2I-based digital twin for autonomous vehicles. The proposed architecture comprises two stacks running simultaneously: an on-board driving stack that uses a stereo camera for scene understanding, and a digital twin stack that runs an Unreal Engine 5 replica of the scene viewed by the camera and returns safety alerts to the cockpit. The on-board stack is implemented on the vehicle side and includes two main autonomous modules: localization and perception. The position and orientation of the ego vehicle are obtained using on-board sensors. The perception module processes 20-fps images from the stereo camera and understands the scene through two complementary pipelines: one performs object detection, the other extracts features including object velocity, yaw, and the safety metrics time-to-collision and time-headway. The data collected from the driving stack are sent to the infrastructure side through the ROS-enabled architecture as custom ROS2 messages carried over UDP links that ride a 4G modem for V2I communication. The digital twin monitors the environment through these shared messages, which update the spawned ego vehicle and the detected objects based on the real-time localization and perception data. Several tests with different driving scenarios confirm the validity and real-time response of the proposed architecture.
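The two safety metrics named above have standard definitions: time-to-collision is the gap divided by the closing speed, and time-headway is the gap divided by the ego speed. A minimal sketch of both, with illustrative function and parameter names that are assumptions rather than the paper's implementation:

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """TTC: seconds until the ego reaches the lead object at the current
    closing speed (ego speed minus object speed). Infinite if the gap
    is opening or constant, i.e. no collision on the current course."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return gap_m / closing_speed_mps


def time_headway(gap_m: float, ego_speed_mps: float) -> float:
    """THW: seconds the ego needs to traverse the current gap at its
    own speed, independent of the lead object's motion."""
    if ego_speed_mps <= 0.0:
        return float("inf")
    return gap_m / ego_speed_mps
```

Both quantities are computed per detected object from the stereo-derived range and velocity estimates, so a low TTC or THW for any object can trigger the cockpit alert.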
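The V2I link described above serializes per-frame state into a compact payload and ships it as a datagram toward the infrastructure side. A minimal sketch of that idea, assuming a hypothetical flat message layout (pose, speed, and the two safety metrics as little-endian 32-bit floats); the real system uses custom ROS2 message definitions, which this stand-in does not reproduce:

```python
import socket
import struct

# Assumed flat layout: x, y, yaw, speed, ttc, thw as six float32 values.
STATE_FMT = "<6f"


def pack_state(x: float, y: float, yaw: float,
               speed: float, ttc: float, thw: float) -> bytes:
    """Serialize one state sample into a fixed 24-byte UDP payload."""
    return struct.pack(STATE_FMT, x, y, yaw, speed, ttc, thw)


def send_state(payload: bytes, host: str, port: int) -> None:
    """Fire-and-forget datagram over the 4G uplink; UDP accepts the
    occasional loss in exchange for low, predictable latency."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
```

At 20 fps this is under 500 bytes/s per tracked object, which is why a connectionless transport over a consumer 4G modem is plausible for real-time twin updates.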