A 360-degree (omni-directional) image provides an all-encompassing spherical view of a scene. Recently, there has been increasing interest in synthesizing 360-degree images from conventional narrow field of view (NFoV) images captured by digital cameras and smartphones, to provide immersive experiences in scenarios such as virtual reality. Yet, existing methods typically fall short of synthesizing intricate visual details or ensuring that the generated images align consistently with user-provided prompts. In this study, an autoregressive omni-aware generative network (AOG-Net) is proposed for 360-degree image generation by progressively outpainting an incomplete 360-degree image under NFoV and text guidance, jointly or individually. This autoregressive scheme not only allows finer-grained and more text-consistent patterns to be derived by dynamically adjusting the generation process, but also offers users greater flexibility to edit their conditions throughout generation. A global-local conditioning mechanism is devised to comprehensively formulate the outpainting guidance at each autoregressive step: text guidance, omni-visual cues, NFoV inputs, and omni-geometry are encoded and fused by cross-attention-based transformers into a global stream and a local stream, which condition a generative backbone model. As AOG-Net is compatible with large-scale models for both the conditional encoder and the generative prior, it enables generation guided by extensive open-vocabulary text. Comprehensive experiments on two commonly used 360-degree image datasets, covering both indoor and outdoor settings, demonstrate the state-of-the-art performance of our proposed method. Our code will be made publicly available.
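To make the autoregressive scheme concrete, below is a minimal PyTorch sketch of the progressive outpainting loop with global-local conditioning. It illustrates the idea only and is not the authors' implementation: all names (GlobalLocalConditioner, OutpaintBackbone, autoregressive_outpaint), the token dimension DIM, and the toy longitude encoding standing in for omni-geometry are assumptions, and the panorama is simplified to a 1-D token grid along its longitude.

```python
import torch
import torch.nn as nn

DIM = 64  # token dimension (an assumption for this sketch)

class GlobalLocalConditioner(nn.Module):
    """Toy stand-in for the global-local conditioning mechanism: two
    cross-attention blocks producing a global and a local stream."""
    def __init__(self, dim=DIM, heads=4):
        super().__init__()
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, omni, nfov, geom):
        # Global stream: text guidance attends over omni-visual cues from
        # the whole partially completed panorama.
        g, _ = self.global_attn(text, omni, omni)
        # Local stream: NFoV tokens attend over geometry tokens of the
        # window being outpainted at this step.
        l, _ = self.local_attn(nfov, geom, geom)
        return g, l

class OutpaintBackbone(nn.Module):
    """Toy stand-in for the conditioned generative prior: maps the two
    conditioning streams to the tokens of the current window."""
    def __init__(self, dim=DIM):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, g, l, window_len):
        ctx = torch.cat([g.mean(dim=1), l.mean(dim=1)], dim=-1)  # (B, 2*dim)
        return self.proj(ctx).unsqueeze(1).expand(-1, window_len, -1)

def autoregressive_outpaint(pano, n_known, steps, cond, backbone, text, nfov):
    """pano: (B, T, DIM) tokens along the panorama's longitude; the first
    n_known tokens are given, the rest are filled window by window."""
    pano = pano.clone()
    b, t, _ = pano.shape
    win = (t - n_known) // steps
    # Toy longitudinal position encoding standing in for omni-geometry.
    geom = torch.linspace(-1.0, 1.0, t).view(1, t, 1).expand(b, t, DIM)
    for s in range(steps):
        lo, hi = n_known + s * win, n_known + (s + 1) * win
        # Re-encode conditions from the current partial panorama so each
        # step observes what earlier steps generated (the autoregression).
        g, l = cond(text, pano[:, :lo], nfov, geom[:, lo:hi])
        pano[:, lo:hi] = backbone(g, l, hi - lo)
    return pano

# Toy usage: 2 panoramas of 32 tokens, 8 known, completed in 4 steps.
cond, backbone = GlobalLocalConditioner(), OutpaintBackbone()
pano = torch.zeros(2, 32, DIM)
pano[:, :8] = torch.randn(2, 8, DIM)
text, nfov = torch.randn(2, 5, DIM), torch.randn(2, 8, DIM)
print(autoregressive_outpaint(pano, 8, 4, cond, backbone, text, nfov).shape)
# -> torch.Size([2, 32, 64])
```

The point the sketch captures is that conditioning is re-derived from the partially completed panorama before each step, which is what lets later windows stay consistent with earlier ones and with conditions the user may edit mid-generation.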