Creating a controllable and relightable digital avatar from multi-view video with fixed illumination is a highly challenging problem: humans are highly articulated, producing pose-dependent appearance effects, and skin as well as clothing require spatially varying BRDF modeling. Existing works on creating animatable avatars either do not address relighting at all, require controlled illumination setups, or attempt to recover a relightable avatar from very low-cost setups, i.e., a single RGB video, at the cost of severely limited result quality, e.g., shadows not even being modeled. To address this, we propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relit, allows appearance editing, and models pose-dependent effects such as wrinkles and self-shadows. Importantly, for training, our method solely requires a multi-view recording of the human under a known, but static, lighting condition. To tackle this challenging problem, we leverage an implicit geometry representation of the actor with a drivable density field that models pose-dependent deformations, and we derive a dynamic mapping between 3D and UV space in which normals, visibility, and materials are effectively encoded. To evaluate our approach in real-world scenarios, we collect a new dataset of four identities recorded under different light conditions, indoors and outdoors, providing the first benchmark of its kind for human relighting and demonstrating state-of-the-art relighting results for novel human poses.
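To make the pipeline in the abstract concrete, the following is a minimal toy sketch of the core idea: a pose-conditioned mapping sends 3D sample points to UV space, where learned textures encode normals, visibility, and materials, and a point is then shaded under a known, discretized environment light. All names, shapes, and the simple Lambertian shading model here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the learned 3D -> UV mapping network:
# a single linear layer on (3D point, pose code), squashed into [0, 1]^2.
W_uv = rng.normal(size=(3 + 4, 2)) * 0.1

def uv_mapping(x, pose_code):
    """Pose-conditioned mapping from a 3D point to UV coordinates."""
    feat = np.concatenate([x, pose_code])
    return 0.5 * (np.tanh(feat @ W_uv) + 1.0)

# Toy UV textures standing in for learned maps of materials,
# normals, and per-light-direction visibility (soft shadows).
RES = 64
albedo_tex = rng.uniform(0.2, 0.8, size=(RES, RES, 3))
normal_tex = np.tile(np.array([0.0, 0.0, 1.0]), (RES, RES, 1))
vis_tex = rng.uniform(0.5, 1.0, size=(RES, RES, 16))

def sample_tex(tex, uv):
    """Nearest-neighbor texture lookup at UV in [0, 1]^2."""
    i = np.clip((uv * (RES - 1)).astype(int), 0, RES - 1)
    return tex[i[1], i[0]]

# Known, static environment light discretized into 16 directions.
light_dirs = rng.normal(size=(16, 3))
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
light_rgb = rng.uniform(0.0, 1.0, size=(16, 3))

def shade(x, pose_code):
    """Lambertian shading with visibility-masked environment light."""
    uv = uv_mapping(x, pose_code)
    albedo = sample_tex(albedo_tex, uv)       # material color, (3,)
    n = sample_tex(normal_tex, uv)            # surface normal, (3,)
    vis = sample_tex(vis_tex, uv)             # shadow term per light, (16,)
    cos = np.clip(light_dirs @ n, 0.0, None)  # cosine foreshortening, (16,)
    irradiance = (vis * cos) @ light_rgb      # sum over light directions
    return albedo * irradiance / len(light_dirs)

rgb = shade(np.array([0.1, 0.2, 0.3]), np.zeros(4))
print(rgb.shape)  # radiance at one 3D sample under the known light
```

Under this factorization, relighting amounts to swapping `light_dirs`/`light_rgb` for a new environment at test time, while the learned textures and mapping stay fixed; in the actual method these components are trained jointly from the multi-view video via volume rendering with the drivable density field.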