Autonomous multi-agent systems such as hospital robots and package delivery drones often operate in highly uncertain environments and are expected to achieve complex temporal task objectives while ensuring safety. Although learning-based methods such as reinforcement learning are popular for training single- and multi-agent autonomous systems under user-specified, state-based reward functions, applying them to satisfy trajectory-level task objectives remains challenging. Our first contribution is the use of weighted automata to specify trajectory-level objectives, such that maximal paths induced in the weighted automaton correspond to desired trajectory-level behaviors. We show how weighted-automata-based specifications go beyond timeliness properties focused on deadlines to performance properties such as expeditiousness. Our second contribution is the use of evolutionary game theory (EGT) principles to train homogeneous multi-agent teams on homogeneous task objectives. We show how shared agent experiences and EGT-based policy updates allow us to outperform state-of-the-art reinforcement learning (RL) methods, reducing path length by nearly 30\% in large spaces. We also show that our algorithm is computationally faster than deep RL methods by at least an order of magnitude. Additionally, our results indicate that it scales better than competing methods as the number of agents increases.
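As an illustrative aside, the idea that maximal paths in a weighted automaton encode desired trajectory-level behaviors can be sketched with a toy example. Everything below (state names, alphabet, weights, and the brute-force search) is invented for illustration and is not the paper's construction: each transition carries a weight, a run's value is the sum of its transition weights, and the maximal-value accepting run rewards reaching the goal expeditiously.

```python
from itertools import product

# Toy weighted automaton (illustrative only):
# transitions[(state, symbol)] = (next_state, weight)
transitions = {
    ("s0", "move"): ("s0", -1),   # each step costs 1, penalizing dawdling
    ("s0", "goal"): ("s1", 10),   # reaching the goal yields a large reward
    ("s1", "move"): ("s1", -1),   # moving after the goal still costs
}
accepting = {"s1"}

def run_value(word, start="s0"):
    """Accumulated weight of the run induced by `word`, or None if the
    run is undefined or does not end in an accepting state."""
    state, total = start, 0
    for sym in word:
        if (state, sym) not in transitions:
            return None
        state, w = transitions[(state, sym)]
        total += w
    return total if state in accepting else None

def best_word(max_len, alphabet=("move", "goal")):
    """Brute-force the maximal-value accepting word up to `max_len`."""
    best_w, best_v = None, float("-inf")
    for n in range(1, max_len + 1):
        for word in product(alphabet, repeat=n):
            v = run_value(word)
            if v is not None and v > best_v:
                best_w, best_v = word, v
    return best_w, best_v

word, value = best_word(4)
print(word, value)  # → ('goal',) 10: the shortest goal-reaching run is maximal
```

Here "maximal" coincides with "most expeditious": any extra `move` step lowers the run's value, so the highest-value accepting path is the one that reaches the goal soonest, mirroring the performance properties the abstract describes.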