Three-dimensional (3D) reconstruction from a single image is an ill-posed problem with inherent ambiguities, e.g., scale. Predicting a 3D scene from text description(s) is similarly ill-posed, e.g., the spatial arrangement of the objects described. We investigate whether two inherently ambiguous modalities can be used in conjunction to produce metric-scaled reconstructions. To test this, we focus on monocular depth estimation, the problem of predicting a dense depth map from a single image, but with an additional text caption describing the scene. To this end, we begin by encoding the text caption into a mean and standard deviation; using a variational framework, we learn the distribution of plausible metric reconstructions of 3D scenes corresponding to the text captions as a prior. To "select" a specific reconstruction or depth map, we encode the given image through a conditional sampler that samples from the latent space of the variational text encoder, which is then decoded to the output depth map. Our approach is trained by alternating between the text and image branches: in one optimization step, we predict the mean and standard deviation from the text description and sample from a standard Gaussian, and in the other, we sample using the (image) conditional sampler. Once trained, we directly predict depth from the encoded text using the conditional sampler. We demonstrate our approach on indoor (NYUv2) and outdoor (KITTI) scenarios, where we show that language can consistently improve performance in both.
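The alternating scheme described above can be sketched as follows. This is a minimal toy illustration, not the authors' architecture: the linear encoders, the decoder, all layer sizes, and the function names are hypothetical placeholders. It shows only the data flow: the text branch produces a mean and standard deviation and draws a reparameterized sample, while the image branch predicts a latent directly via a conditional sampler; both latents are decoded by a shared decoder into a depth map.

```python
# Toy sketch of the two-branch variational scheme (all weights and sizes are
# hypothetical; real models would use deep text/image encoders, not linear maps).
import numpy as np

rng = np.random.default_rng(0)
LATENT, H, W = 8, 4, 4  # toy latent size and depth-map resolution

# Hypothetical text encoder: caption embedding -> (mu, sigma) of the prior.
W_mu = rng.normal(size=(16, LATENT))
W_sig = rng.normal(size=(16, LATENT))

def text_encoder(caption_emb):
    mu = caption_emb @ W_mu
    sigma = np.exp(0.5 * (caption_emb @ W_sig))  # positive std via exp
    return mu, sigma

# Hypothetical image-conditional sampler: image features -> latent sample.
W_img = rng.normal(size=(32, LATENT))

def conditional_sampler(image_feat):
    return image_feat @ W_img

# Shared decoder: latent -> dense depth map.
W_dec = rng.normal(size=(LATENT, H * W))

def decode_depth(z):
    return (z @ W_dec).reshape(H, W)

caption_emb = rng.normal(size=16)
image_feat = rng.normal(size=32)

# Optimization step A (text branch): reparameterized sample from the text prior,
# i.e. z = mu + sigma * eps with eps drawn from a standard Gaussian.
mu, sigma = text_encoder(caption_emb)
z_text = mu + sigma * rng.standard_normal(LATENT)
depth_from_text = decode_depth(z_text)

# Optimization step B (image branch): the conditional sampler picks a point in
# the same latent space, which the shared decoder turns into a depth map.
z_img = conditional_sampler(image_feat)
depth_from_image = decode_depth(z_img)
```

At inference, only the image branch would be used: the conditional sampler selects a latent, which the decoder maps to the predicted depth.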