Designing a wide range of everyday objects requires awareness of both the human body and the underlying semantics of the design specification. These two objectives pose significant challenges for current AI-based design tools. In this work, we present a method that synthesizes body-aware 3D objects from a base mesh, given an input body geometry and either text or an image as guidance. The generated objects can be simulated on virtual characters or fabricated for real-world use. We propose a mesh deformation procedure that jointly optimizes semantic alignment together with contact and penetration losses. With our method, users can generate both virtual and real-world objects from text, images, or sketches, without the need for manual artist intervention. We present qualitative and quantitative results on a variety of object categories, demonstrating the effectiveness of our approach.
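To make the objective concrete, the following is a minimal sketch of a body-aware loss of the kind the abstract describes: a weighted sum of a semantic-alignment term and contact/penetration penalties. Everything here is an illustrative assumption, not the paper's implementation: the body is approximated by a sphere signed-distance proxy, the semantic term is passed in as a precomputed scalar (in practice it would come from a text- or image-alignment model), and the weights are arbitrary.

```python
import math

# Hypothetical body proxy: a unit sphere at the origin.
# signed_distance < 0 means a point is inside the body.
BODY_CENTER = (0.0, 0.0, 0.0)
BODY_RADIUS = 1.0

def signed_distance(p):
    return math.dist(p, BODY_CENTER) - BODY_RADIUS

def contact_loss(contact_verts):
    # Designated contact vertices should lie on the body surface (|sdf| -> 0).
    return sum(signed_distance(v) ** 2 for v in contact_verts) / len(contact_verts)

def penetration_loss(verts):
    # Penalize any vertex inside the body (negative signed distance).
    return sum(min(signed_distance(v), 0.0) ** 2 for v in verts) / len(verts)

def total_loss(verts, contact_verts, semantic_term,
               w_sem=1.0, w_con=10.0, w_pen=100.0):
    # Weighted sum of semantic alignment and body-aware penalties
    # (weights are illustrative, not from the paper).
    return (w_sem * semantic_term
            + w_con * contact_loss(contact_verts)
            + w_pen * penetration_loss(verts))

# Toy deformed mesh: one vertex outside, one inside, one on the surface.
verts = [(1.5, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.0, 0.0)]
contact = [verts[2]]  # vertex intended to touch the body
print(total_loss(verts, contact, semantic_term=0.2))
```

In an actual optimization loop, the vertex positions would be the free variables and this scalar would be minimized by gradient descent, with the semantic term recomputed from the current mesh at each step.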