Feed-forward transformer models have driven rapid progress in 3D vision, but state-of-the-art methods such as VGGT and $\pi^3$ have a computational cost that scales quadratically with the number of input images, making them inefficient on large image collections. Sequential-reconstruction approaches reduce this cost but sacrifice reconstruction quality. We introduce ZipMap, a stateful feed-forward model that achieves linear-time, bidirectional 3D reconstruction while matching or surpassing the accuracy of quadratic-time methods. ZipMap employs test-time training layers to zip an entire image collection into a compact hidden scene state in a single forward pass, enabling reconstruction of over 700 frames in under 10 seconds on a single H100 GPU, more than $20\times$ faster than state-of-the-art methods such as VGGT. Moreover, we demonstrate the benefits of a stateful representation through real-time scene-state querying and an extension to sequential streaming reconstruction.
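To make the linear-time claim concrete, below is a minimal sketch of a test-time training (TTT) layer, the mechanism the abstract credits for compressing a collection into a fixed-size hidden state. All names, shapes, and the inner loss here are illustrative assumptions, not ZipMap's actual architecture (which, per the abstract, is bidirectional; this sketch shows only the unidirectional core): each frame updates the state with one gradient step on a self-supervised loss, so total cost grows linearly with the number of frames rather than quadratically as in full attention.

```python
import torch

def ttt_layer(frames: torch.Tensor, lr: float = 0.1):
    """Sketch of a TTT-style recurrent layer (hypothetical shapes).

    frames: (num_frames, dim) per-frame tokens.
    Returns per-frame features and the final compact scene state.
    """
    num_frames, dim = frames.shape
    W = torch.zeros(dim, dim)            # fixed-size hidden scene state
    outputs = []
    for x in frames:                     # single forward pass over the collection
        x = x.unsqueeze(0)               # (1, dim)
        # Inner self-supervised loss: the state should reconstruct the frame.
        W = W.detach().requires_grad_(True)
        loss = ((x @ W - x) ** 2).mean()
        (grad,) = torch.autograd.grad(loss, W)
        W = (W - lr * grad).detach()     # one inner gradient step updates the state
        outputs.append(x @ W)            # read the updated state for this frame
    return torch.cat(outputs), W

# Usage: the final state can be queried later without revisiting the frames,
# which is what enables the real-time scene-state querying described above.
feats, state = ttt_layer(torch.randn(8, 16))
```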