Achieving accurate garment grasping under dynamically changing illumination is crucial for the all-day operation of service robots. However, the reduced illumination in low-light scenes severely degrades garment structural features, causing a significant drop in grasping robustness. Existing methods typically enhance RGB features by exploiting the illumination-invariant properties of non-RGB modalities, yet they overlook that the dependence on non-RGB features varies with lighting conditions; this can introduce misaligned non-RGB cues and weaken the model's adaptability to illumination changes when utilizing multimodal information. To address this problem, we propose GraspALL, an illumination-structure interactive compensation model. The key innovation of GraspALL is to encode continuous illumination changes into quantitative references that guide adaptive feature fusion between RGB and non-RGB modalities according to the prevailing lighting intensity, thereby generating illumination-consistent grasping representations. Experiments on a self-built garment grasping dataset demonstrate that GraspALL improves grasping accuracy by 32-44% over baselines under diverse illumination conditions.
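The idea of encoding illumination into a quantitative reference that modulates RGB/non-RGB fusion can be illustrated with a minimal sketch. This is not the paper's actual architecture; the brightness estimate, the linear gating thresholds (`low`, `high`), and the convex-combination fusion below are all illustrative assumptions.

```python
import numpy as np

def illumination_gate(rgb, low=0.1, high=0.6):
    """Map mean image brightness to a fusion weight in [0, 1].

    Hypothetical encoding: in bright scenes (gate -> 1) the fused
    representation leans on RGB features; in dark scenes (gate -> 0)
    it shifts toward the illumination-invariant non-RGB branch.
    The thresholds `low` and `high` are illustrative, not from the paper.
    """
    brightness = rgb.mean()                    # crude scene-level illumination estimate
    gate = (brightness - low) / (high - low)   # linear ramp between the two thresholds
    return float(np.clip(gate, 0.0, 1.0))

def fuse_features(f_rgb, f_non_rgb, rgb_image):
    """Illumination-weighted convex combination of modality features."""
    g = illumination_gate(rgb_image)
    return g * f_rgb + (1.0 - g) * f_non_rgb

# Example: a dark scene pushes the fused feature toward the non-RGB branch.
dark_scene = np.full((8, 8, 3), 0.05)  # mean brightness 0.05 -> gate 0
f_rgb = np.ones(4)
f_non_rgb = np.zeros(4)
fused = fuse_features(f_rgb, f_non_rgb, dark_scene)
```

A learned version of this gate (e.g. a small MLP over an illumination descriptor producing per-channel weights) would let the fusion adapt continuously rather than along a fixed ramp, which matches the abstract's notion of continuous illumination encoding.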