The generation of 3D models from real-world objects has often been accomplished through photogrammetry, i.e., by taking 2D photos from a variety of perspectives and then triangulating matched point-based features to create a textured mesh. Many design choices exist within this framework for the generation of digital twins, and differences between such approaches are largely judged qualitatively. Here, we present and test a novel pipeline for generating synthetic images from high-quality 3D models and programmatically generated camera poses. This enables a wide variety of repeatable, quantifiable experiments that compare ground-truth knowledge of virtual camera parameters and virtual objects against the reconstructed estimates of those perspectives and subjects.
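The evaluation loop described above can be sketched in a few lines: place virtual cameras programmatically around an object, record their ground-truth poses, and score a reconstruction by comparing estimated rotations against the known ones. This is a minimal illustration, not the paper's implementation; the Fibonacci-sphere placement, the function names, and the geodesic rotation-error metric are all assumptions for the sake of the sketch.

```python
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix whose rows are the camera axes (right, up, -forward)
    for a camera at `eye` looking at `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    return np.stack([right, true_up, -forward])

def sphere_poses(n, radius=2.0):
    """Programmatically place n cameras on a Fibonacci sphere around the
    origin; returns a list of (ground-truth rotation, camera position)."""
    golden = np.pi * (3.0 - np.sqrt(5.0))  # golden-angle azimuth increment
    poses = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # evenly spaced heights
        r = np.sqrt(1.0 - z * z)
        theta = golden * i
        eye = radius * np.array([r * np.cos(theta), r * np.sin(theta), z])
        poses.append((look_at(eye), eye))
    return poses

def rotation_error_deg(R_gt, R_est):
    """Geodesic angle (degrees) between ground-truth and estimated rotations,
    a common quantitative score for recovered camera poses."""
    cos = (np.trace(R_gt @ R_est.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

In a pipeline like the one described, `sphere_poses` would drive a renderer to produce the synthetic images, and `rotation_error_deg` would be applied per camera to the poses recovered by the reconstruction under test.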