Multi-sensor fusion is central to robust robotic perception, yet most existing systems operate under static sensor configurations, collecting all modalities at fixed rates and fidelity regardless of their situational utility. This rigidity wastes bandwidth, computation, and energy, and prevents systems from prioritizing sensors under challenging conditions such as poor lighting or occlusion. Recent advances in reinforcement learning (RL) and modality-aware fusion suggest the potential for adaptive perception, but prior efforts have largely focused on re-weighting features at inference time, ignoring the physical cost of sensor data collection. We introduce a framework that unifies sensing, learning, and actuation into a closed reconfiguration loop. A task-specific detection backbone extracts multimodal features (e.g., RGB, IR, mmWave, depth) and produces quantitative contribution scores for each modality. These scores are passed to an RL agent, which dynamically adjusts sensor configurations, including sampling frequency, resolution, and sensing range, in real time. Less informative sensors are down-sampled or deactivated, while critical sensors are sampled at higher fidelity as environmental conditions evolve. We implement and evaluate this framework on a mobile rover, showing that adaptive control reduces GPU load by 29.3\% with only a 5.3\% accuracy drop compared to a heuristic baseline. These results highlight the potential of resource-aware adaptive sensing for embedded robotic platforms.
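The closed sensing-learning-actuation loop described above can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the modality names come from the abstract, but the feature magnitudes, the rate ladder, the score normalization, and the threshold-based policy (standing in for the learned RL agent) are all assumptions for exposition.

```python
# Toy sketch of the reconfiguration loop: per-modality contribution
# scores drive per-sensor sampling-rate adjustments. The thresholds and
# rate ladder are illustrative assumptions, not values from the paper.
MODALITIES = ["rgb", "ir", "mmwave", "depth"]
RATES_HZ = [0, 5, 15, 30]  # 0 Hz = sensor deactivated

def contribution_scores(features):
    """Stand-in for the detection backbone's per-modality contribution
    scores: normalize raw feature magnitudes so scores sum to 1."""
    total = sum(features.values())
    return {m: v / total for m, v in features.items()}

def reconfigure(scores, config, low=0.1, high=0.4):
    """Policy core: down-sample low-score sensors, raise the rate of
    high-score ones (a real RL agent would learn this mapping)."""
    new_config = {}
    for m, rate in config.items():
        idx = RATES_HZ.index(rate)
        if scores[m] < low:
            idx = max(idx - 1, 0)                  # down-sample or deactivate
        elif scores[m] > high:
            idx = min(idx + 1, len(RATES_HZ) - 1)  # raise fidelity
        new_config[m] = RATES_HZ[idx]
    return new_config

# One pass of the loop with made-up magnitudes (e.g., a night scene where
# IR dominates and RGB contributes little).
features = {"rgb": 0.5, "ir": 6.0, "mmwave": 2.0, "depth": 1.5}
config = {m: 15 for m in MODALITIES}
config = reconfigure(contribution_scores(features), config)
print(config)  # rgb is down-sampled to 5 Hz, ir is raised to 30 Hz
```

Running the loop at each control step lets the configuration track environmental change: as lighting recovers, the RGB score rises and its rate is restored on subsequent passes.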