We present VRGaussianAvatar, an integrated system that enables real-time full-body 3D Gaussian Splatting (3DGS) avatars in virtual reality using only head-mounted display (HMD) tracking signals. The system adopts a parallel pipeline with a VR Frontend and a GA Backend. The VR Frontend uses inverse kinematics to estimate full-body pose and streams the resulting pose along with stereo camera parameters to the backend. The GA Backend stereoscopically renders a 3DGS avatar reconstructed from a single image. To improve stereo rendering efficiency, we introduce Binocular Batching, which jointly processes left and right eye views in a single batched pass to reduce redundant computation and support high-resolution VR displays. We evaluate VRGaussianAvatar with quantitative performance tests and a within-subject user study against image- and video-based mesh avatar baselines. Results show that VRGaussianAvatar sustains interactive VR performance and yields higher perceived appearance similarity, embodiment, and plausibility. Project page and source code are available at https://vrgaussianavatar.github.io.
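The core idea behind Binocular Batching, as described above, is to process the left- and right-eye views together in one batched pass instead of rendering each eye separately. The paper does not specify an implementation; the following is a minimal sketch under assumed details, using NumPy to batch the per-eye projection of Gaussian centers into a single einsum. The helper `look_at_projection`, the interpupillary-distance default, and all array shapes are hypothetical illustrations, not the authors' code.

```python
import numpy as np

def look_at_projection(eye_offset):
    # Hypothetical helper: build a 4x4 view matrix for one eye,
    # offset horizontally by eye_offset (half the interpupillary distance).
    vp = np.eye(4)
    vp[0, 3] = -eye_offset
    return vp

def project_batched(points, ipd=0.064):
    """Project N Gaussian centers for both eyes in one batched pass.

    points: (N, 3) world-space Gaussian centers.
    Returns (2, N, 3) per-eye positions, ordered (left, right).
    """
    # Stack the two per-eye matrices into one (2, 4, 4) tensor ...
    vps = np.stack([look_at_projection(-ipd / 2),
                    look_at_projection(+ipd / 2)])
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    # ... so a single einsum replaces two separate per-eye projections.
    clip = np.einsum('evc,nc->env', vps, homo)  # (2, N, 4)
    return clip[..., :3] / clip[..., 3:4]      # perspective divide
```

The batching pattern is the point: the shared per-Gaussian work (here, homogenizing the points) is done once, and only the small per-eye transform is duplicated along the leading batch axis, which is how redundant computation between the two nearly identical stereo views can be avoided.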