Humans can infer the three-dimensional structure of objects from two-dimensional visual inputs. Modeling this ability has been a longstanding goal for the science and engineering of visual intelligence, yet decades of computational methods have fallen short of human performance. Here we develop a modeling framework that predicts human 3D shape inferences for arbitrary objects, directly from experimental stimuli. We achieve this with a novel class of neural networks trained using a visual-spatial objective over naturalistic sensory data: given a set of images taken from different locations within a natural scene, these models learn to predict spatial information related to these images, such as camera location and visual depth, without relying on any object-related inductive biases. Notably, these visual-spatial signals are analogous to sensory cues readily available to humans. We design a zero-shot evaluation approach to determine the performance of these `multi-view' models on a well-established 3D perception task, then compare model and human behavior. Our modeling framework is the first to match human accuracy on 3D shape inferences, even without task-specific training or fine-tuning. Remarkably, independent readouts of model responses predict fine-grained measures of human behavior, including error patterns and reaction times, revealing a natural correspondence between model dynamics and human perception. Taken together, our findings indicate that human-level 3D perception can emerge from a simple, scalable learning objective over naturalistic visual-spatial data. All code, human behavioral data, and experimental stimuli needed to reproduce our findings can be found on our project page.