Humans have an innate ability to decompose their perceptions of the world into objects and their attributes, such as colors, shapes, and movement patterns. This cognitive process enables us to imagine novel futures by recombining familiar concepts. However, replicating this ability in artificial intelligence systems has proven challenging, particularly when it comes to decomposing videos into compositional concepts and generating unseen, recomposed futures without relying on auxiliary data such as text, masks, or bounding boxes. In this paper, we propose Dreamweaver, a neural architecture designed to discover hierarchical and compositional representations from raw videos and generate compositional future simulations. Our approach leverages a novel Recurrent Block-Slot Unit (RBSU) to decompose videos into their constituent objects and attributes. In addition, Dreamweaver uses a multi-future-frame prediction objective to more effectively capture disentangled representations of both static and dynamic concepts. In experiments, we demonstrate that our model outperforms current state-of-the-art baselines for world modeling when evaluated under the DCI framework across multiple datasets. Furthermore, we show how the modularized concept representations of our model enable compositional imagination, allowing the generation of novel videos by recombining attributes from previously seen objects. cun-bjy.github.io/dreamweaver-website
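The abstract does not specify the internals of the Recurrent Block-Slot Unit. As a rough illustration only, the sketch below shows one plausible binding step such a unit might use, loosely in the spirit of block-slot attention: each slot is partitioned into blocks, and each block is softly bound to a learned memory of concept prototypes. All names, shapes, and the per-block memory layout are assumptions, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def block_prototype_binding(slots, prototypes):
    """Illustrative sketch (not the paper's RBSU): bind each block of
    each slot to a convex combination of learned concept prototypes,
    using a separate prototype memory per block position.

    slots:      (num_slots, num_blocks, d_block) candidate block vectors
    prototypes: (num_blocks, num_protos, d_block) learned concept memory
    returns:    (num_slots, num_blocks, d_block) bound block vectors
    """
    num_slots, num_blocks, d = slots.shape
    out = np.empty_like(slots)
    for b in range(num_blocks):
        # dot-product attention of each slot's b-th block over memory b
        logits = slots[:, b, :] @ prototypes[b].T / np.sqrt(d)  # (S, P)
        attn = softmax(logits, axis=-1)                         # rows sum to 1
        out[:, b, :] = attn @ prototypes[b]                     # (S, d_block)
    return out
```

Because every output block is a convex combination of prototypes from its own memory, the representation is modular by construction: swapping the attention weights of one block (e.g. "color") leaves the other blocks (e.g. "shape", "motion") untouched, which is the kind of recombination the compositional-imagination experiments rely on.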