Human-robot collaboration enables highly adaptive co-working. The variety of resulting workflows makes it difficult to measure metrics such as makespans or idle times across multiple systems and tasks in a comparable manner. This issue can be addressed with virtual commissioning, where arbitrary numbers of non-deterministic human-robot workflows in assembly tasks can be simulated. To this end, data-driven models of human decisions are needed. Gathering the required large corpus of data through on-site user studies is time-consuming. In comparison, simulation-based studies (e.g., via crowdsourcing) would provide access to a large pool of study participants with less effort. To rely on such study results, human action sequences observed in a browser-based simulation environment must be shown to match those gathered in a laboratory setting. This work therefore aims to understand to what extent cooperative assembly work in a simulated environment differs from that in an on-site laboratory setting. We show how a simulation environment can be aligned with a laboratory setting in which a robot and a human perform pick-and-place tasks together. A user study (N=29) indicates that participants' assembly decisions and perception of the situation are consistent across these different environments.