Image-to-3D generation faces inherent semantic ambiguity under occlusion, where the partial observation alone is often insufficient to determine the object category. In this work, we formalize text-driven amodal 3D generation, in which text prompts steer the completion of unseen regions while strictly preserving the input observation. Crucially, we identify that these objectives demand distinct control granularities: rigid control for the observation versus relaxed structural control for the prompt. To this end, we propose RelaxFlow, a training-free dual-branch framework that decouples control granularity via a Multi-Prior Consensus Module and a Relaxation Mechanism. Theoretically, we prove that our relaxation is equivalent to applying a low-pass filter to the generative vector field, suppressing high-frequency instance details to isolate the geometric structure consistent with the observation. To facilitate evaluation, we introduce two diagnostic benchmarks, ExtremeOcc-3D and AmbiSem-3D. Extensive experiments demonstrate that RelaxFlow successfully steers the generation of unseen regions to match the prompt's intent without compromising visual fidelity.