Three-dimensional (3D) reconstruction from a single image is an ill-posed problem with inherent ambiguities, e.g., scale. Predicting a 3D scene from a text description is similarly ill-posed, e.g., in the spatial arrangement of the objects described. We investigate whether two inherently ambiguous modalities can be used in conjunction to produce metric-scaled reconstructions. To test this, we focus on monocular depth estimation, the problem of predicting a dense depth map from a single image, but with an additional text caption describing the scene. We begin by encoding the text caption as a mean and standard deviation; using a variational framework, we learn the distribution of plausible metric reconstructions of 3D scenes corresponding to the text captions as a prior. To "select" a specific reconstruction or depth map, we encode the given image through a conditional sampler that samples from the latent space of the variational text encoder; the sample is then decoded into the output depth map. Our approach is trained by alternating between the text and image branches: in one optimization step, we predict the mean and standard deviation from the text description and sample from a standard Gaussian, and in the other, we sample using an (image) conditional sampler. Once trained, we directly predict depth from the encoded text using the conditional sampler. We demonstrate our approach on indoor (NYUv2) and outdoor (KITTI) scenarios, where we show that language consistently improves performance in both.
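The alternating scheme above can be sketched as follows. This is a minimal, illustrative NumPy sketch, not the authors' implementation: `text_encoder` and `conditional_sampler` are hypothetical stand-ins for learned networks, and the latent dimension is chosen arbitrarily. It only demonstrates the two sampling paths that share the reparameterization z = mu + sigma * eps: the text branch draws eps from a standard Gaussian, while the image branch obtains eps from an image-conditioned sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 4  # arbitrary latent size for illustration

def text_encoder(caption_embedding):
    # Hypothetical stand-in for the variational text encoder: maps a
    # caption embedding to the mean and standard deviation of a Gaussian
    # prior over plausible metric-scaled 3D scenes.
    mu = caption_embedding.mean() * np.ones(LATENT_DIM)
    sigma = 0.5 * np.ones(LATENT_DIM)  # must stay positive
    return mu, sigma

def reparameterize(mu, sigma, eps):
    # Reparameterization trick: z = mu + sigma * eps.
    return mu + sigma * eps

def conditional_sampler(image_features):
    # Hypothetical image-conditioned sampler: produces the noise vector
    # that "selects" one reconstruction from the text-conditioned prior.
    return np.tanh(image_features)

mu, sigma = text_encoder(np.array([0.2, -0.1, 0.4]))

# Text-branch optimization step: eps ~ N(0, I).
z_text = reparameterize(mu, sigma, rng.standard_normal(LATENT_DIM))

# Image-branch optimization step: eps comes from the conditional sampler.
image_features = rng.standard_normal(LATENT_DIM)
z_image = reparameterize(mu, sigma, conditional_sampler(image_features))

# Either latent would then be decoded into a dense depth map.
```

At inference time, only the image-branch path is used: the conditional sampler picks a point in the latent space learned from text, which the decoder turns into a metric depth map.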