High-fidelity haptic feedback is essential for immersive virtual environments, yet authoring realistic tactile textures remains a significant bottleneck for designers. We introduce HapticMatch, a visual-to-tactile generation framework designed to democratize haptic content creation. We present a novel dataset of precisely aligned triplets of micro-scale optical images, surface height maps, and friction-induced vibration recordings for 100 diverse materials. Leveraging this data, we demonstrate that conditional generative models such as diffusion and flow matching can synthesize high-fidelity, renderable surface geometries directly from standard RGB photographs. By enabling a "Scan-to-Touch" workflow, HapticMatch allows interaction designers to rapidly prototype multimodal surface sensations without specialized recording equipment, bridging the gap between visual and tactile immersion in VR/AR interfaces.
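To make the conditional flow-matching objective mentioned above concrete, the following is a minimal illustrative sketch, not the paper's actual model: it computes the standard flow-matching training target for an RGB-conditioned height-map generator. The patch sizes, the random "data", and the zero-predictor stand-in for the network are all assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a 16x16 RGB patch conditions a 16x16 height map.
rgb = rng.random((16, 16, 3))          # conditioning image (illustrative)
height = rng.random((16, 16))          # target surface height map x_1
noise = rng.standard_normal((16, 16))  # source sample x_0 ~ N(0, I)

# Flow matching with a linear path: interpolate
#   x_t = (1 - t) * x_0 + t * x_1
# and regress the model's velocity prediction onto the constant
# target velocity v = x_1 - x_0.
t = rng.random()
x_t = (1.0 - t) * noise + t * height
v_target = height - noise

def velocity_model(x_t, t, cond):
    # Stand-in for a learned conditional network; returns zeros so the
    # snippet stays self-contained while showing the loss form.
    return np.zeros_like(x_t)

# Mean-squared flow-matching loss for this single sample.
loss = np.mean((velocity_model(x_t, t, rgb) - v_target) ** 2)
```

At inference, a trained velocity model would be integrated from noise to a height map conditioned on the RGB photo, which could then be rendered haptically; the sketch above only illustrates the per-sample training target.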