We present TreeON, a neural framework for reconstructing detailed 3D tree point clouds from sparse top-down geodata, using only a single orthophoto and its corresponding Digital Surface Model (DSM). Our method introduces a new training supervision strategy that combines geometric supervision with differentiable shadow and silhouette losses, learning point cloud representations of trees without requiring species labels, procedural rules, terrestrial reconstruction data, or ground laser scans. To address the lack of ground-truth data, we generate a synthetic dataset of point clouds from procedurally modeled trees and train our network on it. Quantitative and qualitative experiments demonstrate better reconstruction quality and coverage than existing methods, as well as strong generalization to real-world data, producing visually appealing and structurally plausible tree point cloud representations suitable for integration into interactive digital 3D maps. The codebase, synthetic dataset, and pretrained model are publicly available at https://angelikigram.github.io/treeON/.
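The abstract does not spell out the loss formulation, but the core idea of pairing geometric supervision with a differentiable silhouette term can be sketched minimally. Below is a NumPy illustration under our own assumptions: Chamfer distance as the geometric term, and a soft top-down occupancy map (per-point Gaussian splats combined with a soft-OR) as a differentiable silhouette compared against a target mask. All names, the grid resolution, and the splatting scheme are hypothetical, not the paper's implementation.

```python
import numpy as np

def chamfer_distance(a, b):
    # Symmetric Chamfer distance between point sets a (N,3) and b (M,3):
    # mean nearest-neighbor distance in both directions.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def soft_silhouette(points, res=32, sigma=0.05):
    # Splat the xy-coordinates of each point into a soft top-down
    # occupancy grid over [0,1]^2. Each point contributes a Gaussian;
    # a soft-OR (1 - product of complements) keeps the map in [0,1]
    # and differentiable with respect to point positions.
    ys, xs = np.meshgrid(np.linspace(0, 1, res),
                         np.linspace(0, 1, res), indexing="ij")
    grid = np.stack([xs, ys], axis=-1)                       # (res,res,2)
    d2 = ((grid[None] - points[:, None, None, :2]) ** 2).sum(-1)  # (N,res,res)
    return 1.0 - np.prod(1.0 - np.exp(-d2 / (2 * sigma**2)), axis=0)

def combined_loss(pred, gt, target_mask, w_sil=1.0):
    # Geometric supervision plus a silhouette term: the predicted
    # point cloud must match the reference points AND reproduce the
    # observed top-down silhouette.
    sil_loss = ((soft_silhouette(pred) - target_mask) ** 2).mean()
    return chamfer_distance(pred, gt) + w_sil * sil_loss
```

In the actual method, the silhouette/shadow masks would come from the orthophoto and DSM rather than from a rendered reference cloud, and the rasterization would run on the GPU with autograd; this sketch only shows why the combination is end-to-end differentiable.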