The development of assistive robotic agents for household tasks is advancing, yet the underlying models are often trained and evaluated in virtual settings that do not reflect real-world complexity. For assistive care robots to be effective across diverse environments, their models must be robust and must integrate multiple modalities. Consider a caretaker needing assistance in a dimly lit room or navigating around a newly installed glass door: a model relying solely on visual input might fail in low light, whereas one that also uses depth information could still avoid the door. This illustrates the need for models that can process and combine varied sensory inputs. Our ongoing study evaluates state-of-the-art robotic models in the AI2Thor virtual environment. We introduce disturbances, such as dimmed lighting and mirrored walls, and assess their impact on individual modalities (e.g., vision and movement) and on capabilities such as object recognition. Our goal is to gather input from the Geriatronics community in order to understand and model the challenges faced by practitioners.
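As a minimal illustration of the kind of disturbance applied in such evaluations, dimmed lighting can be approximated as a brightness scaling of the agent's RGB observation. This is only a sketch, not the study's actual pipeline: the helper `dim_observation` and its `factor` parameter are hypothetical, and AI2Thor itself exposes scene-level lighting controls rather than post-hoc image edits.

```python
import numpy as np

def dim_observation(rgb, factor=0.3):
    """Simulate dimmed lighting by scaling pixel intensities (hypothetical helper)."""
    rgb = np.asarray(rgb, dtype=np.float32)
    # Scale brightness and clip back into the valid 8-bit range.
    return np.clip(rgb * factor, 0, 255).astype(np.uint8)

# A uniform gray 4x4 RGB observation standing in for a simulator frame.
obs = np.full((4, 4, 3), 200, dtype=np.uint8)
dimmed = dim_observation(obs, factor=0.25)  # 200 * 0.25 = 50 per channel
```

A vision-only policy could then be evaluated on `dimmed` frames versus the originals to quantify how much performance degrades under low light.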