This article describes GAZELOAD, a multimodal dataset for mental workload estimation in industrial human-robot collaboration (HRC). The data were collected in a laboratory assembly testbed where 26 participants interacted with two collaborative robots (UR5 and Franka Emika Panda) while wearing Meta ARIA smart glasses. The dataset time-synchronizes eye-tracking signals (pupil diameter, fixations, saccades, eye gaze, gaze transition entropy, fixation dispersion index) with continuous, real-time environmental measurements (illuminance) and with task and robot context (bench, task block, induced faults), under controlled manipulations of task difficulty and ambient conditions. For each participant and workload-graded task block, we provide CSV files of ocular metrics aggregated into 250 ms windows, environmental logs, and self-reported mental workload ratings on a 1-10 Likert scale, organized in participant-specific folders alongside documentation. These data can be used to develop and benchmark algorithms for mental workload estimation, feature extraction, and temporal modeling in realistic industrial HRC scenarios, and to investigate how environmental factors such as lighting influence eye-based workload markers.
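As an illustration of how the windowed CSV files might be consumed, the following minimal Python sketch groups 250 ms windows by task block and averages pupil diameter. The column names (`timestamp_ms`, `task_block`, `pupil_diameter_mm`, `illuminance_lux`) and the inline sample data are assumptions for demonstration only, not the dataset's actual schema.

```python
# Hypothetical sketch of consuming one participant's windowed ocular CSV.
# Column names and sample values are illustrative assumptions, not the
# dataset's actual schema.
import csv
import io
import statistics

# Toy stand-in for a participant CSV: one row per 250 ms window.
sample = """timestamp_ms,task_block,pupil_diameter_mm,illuminance_lux
0,B1,3.41,512
250,B1,3.52,510
500,B2,4.10,498
750,B2,4.05,501
"""

def mean_pupil_by_block(csv_text):
    """Group 250 ms windows by task block and average pupil diameter."""
    by_block = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        by_block.setdefault(row["task_block"], []).append(
            float(row["pupil_diameter_mm"])
        )
    return {block: statistics.mean(vals) for block, vals in by_block.items()}

print(mean_pupil_by_block(sample))
```

The same grouping pattern extends to the other ocular metrics (e.g., fixation dispersion index) or to joining the environmental log on the window timestamps.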