Over the past two decades, researchers have made significant advances in simulating human crowds, yet these efforts largely focus on low-level tasks like collision avoidance and a narrow range of behaviors such as path following and flocking. However, creating compelling crowd scenes demands more than functional movement: it requires capturing high-level interactions among agents, and between agents and their environment, over time. To address this issue, we introduce Gen-C, a generative model that automates the authoring of high-level crowd behaviors. Gen-C bypasses the labor-intensive and challenging task of collecting and annotating real crowd video data by leveraging a large language model (LLM) to generate a limited set of crowd scenarios, which are subsequently expanded and generalized through simulations to construct time-expanded graphs that model the actions and interactions of virtual agents. Our method employs two Variational Graph Auto-Encoders guided by a condition prior network: one dedicated to learning a latent space for graph structures (agent interactions) and the other for node features (agent actions and navigation). This setup enables the flexible generation of dynamic crowd interactions. The trained model can be conditioned on natural language, empowering users to synthesize novel crowd behaviors from text descriptions. We demonstrate the effectiveness of our approach in two scenarios, a University Campus and a Train Station, showcasing its potential for populating diverse virtual environments with agents exhibiting varied and dynamic behaviors that reflect complex interactions and high-level decision-making patterns.
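For context on the core building block mentioned above, the sketch below shows an untrained forward pass of a standard Variational Graph Auto-Encoder (a GCN encoder producing per-node means and log-variances, reparameterized sampling, and an inner-product decoder that reconstructs edge probabilities). This is a minimal NumPy illustration of the general VGAE technique, not the paper's actual architecture: the toy graph, feature dimensions, and weight names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def vgae_encode(A, X, W1, W_mu, W_logvar):
    # One shared GCN layer with ReLU, then separate heads for mu and log-variance
    A_norm = normalize_adj(A)
    H = np.maximum(A_norm @ X @ W1, 0.0)
    mu = A_norm @ H @ W_mu
    logvar = A_norm @ H @ W_logvar
    return mu, logvar

def vgae_decode(Z):
    # Inner-product decoder: P(edge i,j) = sigmoid(z_i . z_j)
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Toy interaction graph: 4 agents, 2 node features each (all values illustrative)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 2))
W1 = rng.normal(size=(2, 8))
W_mu = rng.normal(size=(8, 3))
W_logvar = rng.normal(size=(8, 3))

mu, logvar = vgae_encode(A, X, W1, W_mu, W_logvar)
Z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterization trick
A_rec = vgae_decode(Z)
print(A_rec.shape)
```

In the paper's setup, one such auto-encoder would operate on the graph structure (agent interactions) and a second on node features (actions and navigation), with a condition prior network supplying the text-conditioned prior over the latent spaces.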