Recent advancements in view synthesis have significantly enhanced immersive experiences across various computer graphics and multimedia applications, including telepresence and entertainment. By enabling the generation of new perspectives from a single input view, view synthesis allows users to better perceive and interact with their environment. However, many state-of-the-art methods, while achieving high visual quality, face limitations in real-time performance, which makes them less suitable for live applications where low latency is critical. In this paper, we present a lightweight, position-aware network designed for real-time view synthesis from a single input image and a target camera pose. The proposed framework consists of a Position-Aware Embedding, modeled with a multi-layer perceptron, which efficiently maps positional information from the target pose to high-dimensional feature maps. These feature maps, along with the input image, are fed into a Rendering Network that merges features from dual encoder branches to resolve both high-level semantics and low-level details, producing a realistic new view of the scene. Experimental results demonstrate that our method achieves superior efficiency and visual quality compared to existing approaches, particularly in handling complex translational movements without explicit geometric operations such as warping. This work marks a step toward enabling real-time view synthesis from a single image for live and interactive applications.
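To make the described data flow concrete, the sketch below shows one plausible way to wire such a pipeline in PyTorch: an MLP maps the target pose to a spatial feature map, which is then fused with image features in a dual-branch renderer. All module names, layer widths, the pose parameterization (a flattened 3x4 extrinsic), and the channel-wise fusion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the abstract's pipeline; sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAwareEmbedding(nn.Module):
    """MLP mapping a target camera pose (here: flattened 3x4 extrinsic, 12 values)
    to a spatial feature map, as the abstract describes."""
    def __init__(self, pose_dim=12, feat_channels=64, feat_size=32):
        super().__init__()
        self.feat_channels, self.feat_size = feat_channels, feat_size
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, feat_channels * feat_size * feat_size),
        )

    def forward(self, pose):                       # pose: (B, pose_dim)
        x = self.mlp(pose)
        return x.view(-1, self.feat_channels, self.feat_size, self.feat_size)

class RenderingNetwork(nn.Module):
    """Dual-encoder renderer: one branch encodes the input image (low-level detail),
    the other encodes the pose feature map (positional / high-level cues); the
    fused features are decoded into the novel view."""
    def __init__(self, feat_channels=64):
        super().__init__()
        self.image_enc = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.pose_enc = nn.Sequential(
            nn.Conv2d(feat_channels, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, pose_feat):           # image: (B, 3, H, W)
        f_img = self.image_enc(image)              # (B, 128, H/4, W/4)
        f_pose = self.pose_enc(pose_feat)          # (B, 128, h, w)
        f_pose = F.interpolate(f_pose, size=f_img.shape[-2:],
                               mode="bilinear", align_corners=False)
        fused = torch.cat([f_img, f_pose], dim=1)  # merge the two encoder branches
        return self.decoder(fused)                 # novel view at input resolution

# Example forward pass: single 128x128 image plus a flattened target pose.
embed, render = PositionAwareEmbedding(), RenderingNetwork()
image, pose = torch.rand(1, 3, 128, 128), torch.rand(1, 12)
novel_view = render(image, embed(pose))
print(novel_view.shape)                            # torch.Size([1, 3, 128, 128])
```

Because the pose enters only through learned feature maps, no explicit warping or depth estimation appears anywhere in this sketch, which mirrors the abstract's claim of handling view changes without explicit geometric operations.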