We address the challenge of content diversity and controllability in pedestrian simulation for driving scenarios. Recent pedestrian animation frameworks share a significant limitation: they focus primarily on either following a given trajectory [46] or imitating the content of a reference video [57], and consequently overlook the potential diversity of human motion in such scenarios. This limitation restricts their ability to generate pedestrian behaviors with a wider range of variations and realistic motions, and therefore limits their usefulness in providing rich motion content to other components of a driving simulation system, e.g., sudden motion changes to which an autonomous vehicle must respond. Our approach moves beyond this limitation by incorporating diverse human motions obtained from various sources, such as generated human motions, while still following the given trajectory. The fundamental contribution of our framework lies in combining the motion tracking task with trajectory following, which enables a single policy to track specific body parts (e.g., the upper body) while simultaneously following the given trajectory. In this way, we significantly enhance both the diversity of simulated human motion within a given scenario and the controllability of its content, including language-based control. Our framework facilitates the generation of a wide range of human motions, contributing to greater realism and adaptability in pedestrian simulations for driving scenarios. More information is available on our project page: https://wangjingbo1219.github.io/papers/CVPR2024_PACER_PLUS/PACERPLUSPage.html
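To make the "single policy, two objectives" idea concrete, one common way to combine partial-body motion tracking with trajectory following in physics-based character RL is a weighted sum of exponentiated tracking errors. The sketch below is a minimal illustration under that assumption; the function name, joint indices, weights, and error scales are all hypothetical and are not taken from the paper.

```python
import numpy as np

def combined_reward(sim_joints, ref_joints, root_xy, target_xy,
                    tracked_idx, w_track=0.5, w_traj=0.5):
    """Hedged sketch of a per-step reward mixing two terms:
    - motion tracking on a subset of joints (e.g., the upper body),
    - trajectory following on the root's ground-plane position.
    All weights and scales here are illustrative assumptions.
    """
    # Mean Euclidean error over the tracked joints only,
    # so the rest of the body is free to satisfy the trajectory.
    track_err = np.mean(np.linalg.norm(
        sim_joints[tracked_idx] - ref_joints[tracked_idx], axis=-1))
    # Distance from the character's root to the target waypoint.
    traj_err = np.linalg.norm(root_xy - target_xy)
    # Exponentiated negative errors keep each term in (0, 1].
    r_track = np.exp(-2.0 * track_err)
    r_traj = np.exp(-2.0 * traj_err)
    return w_track * r_track + w_traj * r_traj
```

With this shape, a perfect match on both objectives yields a reward of 1.0, and the weights trade off how strictly the policy imitates the reference versus how tightly it sticks to the path.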