Arbitrary viewpoint image generation holds significant potential for autonomous driving, yet it remains challenging because extrapolated views lack ground-truth data, which hampers the training of high-fidelity generative models. In this work, we propose Arbiviewgen, a novel diffusion-based framework for controllable camera image generation from arbitrary viewpoints. To address the absence of ground truth for unseen views, we introduce two key components: Feature-Aware Adaptive View Stitching (FAVS) and Cross-View Consistency Self-Supervised Learning (CVC-SSL). FAVS employs a hierarchical matching strategy that first establishes coarse geometric correspondences from camera poses, then performs fine-grained alignment with an improved feature matching algorithm, and finally identifies high-confidence matching regions via clustering analysis. Building on this, CVC-SSL adopts a self-supervised training paradigm in which a diffusion model reconstructs the original camera views from the synthesized stitched images, enforcing cross-view consistency without supervision from extrapolated data. Our framework requires only multi-camera images and their associated poses for training, eliminating the need for additional sensors or depth maps. To our knowledge, Arbiviewgen is the first method capable of controllable arbitrary-view camera image generation across multiple vehicle configurations.
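To make the FAVS hierarchy concrete, the sketch below illustrates one plausible instantiation of its three stages: a coarse pose-based gate between camera pairs, fine-grained feature matching, and clustering to isolate high-confidence matching regions. All function names are hypothetical; ORB with brute-force matching stands in for the paper's improved feature matcher, and DBSCAN stands in for its clustering analysis.

```python
# Minimal sketch of a FAVS-style hierarchical matching step (assumptions:
# ORB/BFMatcher replace the paper's improved matcher, DBSCAN its clustering).
import cv2
import numpy as np
from sklearn.cluster import DBSCAN


def coarse_overlap_from_poses(R_a, R_b, max_angle_deg=80.0):
    """Coarse geometric gate: two cameras are stitching candidates only if
    their optical axes differ by less than max_angle_deg."""
    # The optical axis is the third column of the rotation matrix (camera z-axis).
    cos_angle = float(np.clip(R_a[:, 2] @ R_b[:, 2], -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) < max_angle_deg


def fine_matches(img_a, img_b, n_features=2000):
    """Fine-grained alignment: detect and match local features between two views."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 2)
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 2)
    return pts_a, pts_b


def high_confidence_regions(pts_a, pts_b, eps=40.0, min_samples=8):
    """Cluster matched keypoints; dense clusters are treated as
    high-confidence matching regions for view stitching."""
    if len(pts_a) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts_a)
    regions = []
    for label in set(labels) - {-1}:  # label -1 is DBSCAN noise
        mask = labels == label
        regions.append((pts_a[mask], pts_b[mask]))
    return regions
```

In use, grayscale images from adjacent cameras would be passed through `coarse_overlap_from_poses` (using each camera's rotation matrix) before `fine_matches` and `high_confidence_regions`; the resulting regions would then drive the view stitching whose outputs CVC-SSL reconstructs with the diffusion model.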