We present Roomify, a spatially grounded transformation system that generates themed virtual environments anchored to users' physical rooms while maintaining spatial structure and functional semantics. Current VR approaches face a fundamental trade-off: full immersion sacrifices spatial awareness, while passthrough solutions break presence. Roomify addresses this through spatially grounded transformation: it treats physical spaces as "spatial containers" that preserve the key functional and geometric properties of furniture while enabling radical stylistic changes. Our pipeline combines in-situ 3D scene understanding, AI-driven spatial reasoning, and style-aware generation to create personalized virtual environments grounded in physical reality. We introduce a cross-reality authoring tool that enables fine-grained user control through MR editing and VR preview workflows. Two user studies validate our approach: a study with 18 VR users demonstrates a 63% improvement in presence over a passthrough baseline and 26% over a fully virtual baseline while maintaining spatial awareness; a study with 8 design professionals confirms the system's creative expressiveness (scene quality: 5.95/7; creativity support: 6.08/7) and its value in professional workflows across diverse environments.