Lifting perspective images and videos to 360° panoramas enables immersive 3D world generation. Existing approaches often rely on explicit geometric alignment between the perspective view and the equirectangular projection (ERP) space. Yet this requires known camera metadata, hindering application to in-the-wild data where such calibration is typically absent or noisy. We propose 360Anything, a geometry-free framework built upon pre-trained diffusion transformers. By treating the perspective input and the panorama target simply as token sequences, 360Anything learns the perspective-to-equirectangular mapping in a purely data-driven way, eliminating the need for camera information. Our approach achieves state-of-the-art performance on both image and video perspective-to-360° generation, outperforming prior works that use ground-truth camera information. We also trace the root cause of seam artifacts at ERP boundaries to zero-padding in the VAE encoder, and introduce Circular Latent Encoding to enable seamless generation. Finally, we show competitive results on zero-shot camera FoV and orientation estimation benchmarks, demonstrating 360Anything's deep geometric understanding and broader utility in computer vision tasks. Additional results are available at https://360anything.github.io/.
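The abstract does not detail how Circular Latent Encoding is implemented; as one illustrative sketch, the intuition is that an ERP image wraps around in longitude, so padding the width axis with wrapped content (instead of zeros) keeps the left and right boundaries consistent. The function name `circular_pad_width` and the NumPy formulation below are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def circular_pad_width(latent: np.ndarray, pad: int) -> np.ndarray:
    """Wrap-around padding along the last (width / longitude) axis.

    Instead of zero-padding, the left margin is filled with columns
    from the right edge and vice versa, so a convolution sees
    continuous content across the ERP seam. (Illustrative sketch.)
    """
    left = latent[..., -pad:]   # columns wrapped from the right edge
    right = latent[..., :pad]   # columns wrapped from the left edge
    return np.concatenate([left, latent, right], axis=-1)

# A 1x4 "latent" row: wrapping by 1 copies the opposite edge columns.
row = np.array([[0, 1, 2, 3]])
print(circular_pad_width(row, 1))  # [[3 0 1 2 3 0]]
```

With zero-padding, a convolution at the seam would mix real content with zeros on one side, producing the boundary artifacts the paper describes; wrap-around padding removes that discontinuity.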