Crowd Motion Generation is essential in entertainment industries such as animation and games, as well as in strategic fields like urban simulation and planning. This emerging task requires an intricate integration of control and generation to realistically synthesize crowd dynamics under specific spatial and semantic constraints, and its challenges have yet to be fully explored. On the one hand, existing human motion generation models typically focus on individual behaviors and neglect the complexities of collective behavior. On the other hand, recent methods for multi-person motion generation depend heavily on pre-defined scenarios and are limited to a fixed, small number of inter-person interactions, which hampers their practicality. To overcome these challenges, we introduce CrowdMoGen, a zero-shot, text-driven framework that harnesses a Large Language Model (LLM) to incorporate collective intelligence into the motion generation framework as guidance, thereby enabling generalizable planning and generation of crowd motions without paired training data. Our framework consists of two key components: 1) a Crowd Scene Planner that learns to coordinate motions and dynamics according to specific scene contexts or introduced perturbations, and 2) a Collective Motion Generator that efficiently synthesizes the required collective motions from the holistic plans. Extensive quantitative and qualitative experiments validate the effectiveness of our framework, which not only fills a critical gap by providing a scalable and generalizable solution for the Crowd Motion Generation task but also achieves high levels of realism and flexibility.