Creating physically realistic content in VR often requires complex modeling tools or predefined 3D models, textures, and animations, which present significant barriers for non-expert users. In this paper, we propose SketchPlay, a novel VR interaction framework that transforms users' air-drawn sketches and gestures into dynamic, physically realistic scenes, making content creation as intuitive and playful as drawing. Specifically, sketches capture the structure and spatial arrangement of objects and scenes, while gestures convey physical cues such as velocity, direction, and force that define motion and behavior. By combining these complementary forms of input, SketchPlay captures both the structure and dynamics of user-created content, enabling the generation of a wide range of complex physical phenomena, including rigid-body motion, elastic deformation, and cloth dynamics. Experimental results demonstrate that, compared with traditional text-driven methods, SketchPlay offers significant advantages in expressiveness and user experience. By providing an intuitive and engaging creation process, SketchPlay lowers the entry barrier for non-expert users and shows strong potential for applications in education, art, and immersive storytelling.