Face fill-light enhancement (FFE) brightens underexposed faces by adding virtual fill light while leaving the original scene illumination and background unchanged. Most face relighting methods instead aim to reshape the overall lighting, which can suppress the input illumination or alter the entire scene, causing foreground-background inconsistency and failing to meet practical FFE needs. To support scalable learning, we introduce LightYourFace-160K (LYF-160K), a large-scale paired dataset built with a physically consistent renderer that injects a disk-shaped area fill light controlled by six disentangled factors, yielding 160K before-and-after pairs. We first pretrain a physics-aware lighting prompt (PALP) that embeds the 6D parameters into conditioning tokens, using an auxiliary planar-light reconstruction objective. Building on a pretrained diffusion backbone, we then train FiLitDiff, an efficient one-step fill-light diffusion model conditioned on physically grounded lighting codes, enabling controllable, high-fidelity fill lighting at low computational cost. Experiments on held-out paired sets demonstrate strong perceptual quality and competitive full-reference metrics while better preserving background illumination. The dataset and model will be released at https://github.com/gobunu/Light-Up-Your-Face.
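To make the conditioning pathway concrete, the sketch below shows one plausible reading of the pipeline: a disk fill light described by six disentangled factors, encoded into a fixed set of conditioning tokens in the style of PALP. The specific factor split (azimuth, elevation, distance, radius, intensity, color temperature), all class and parameter names, and the MLP encoder are assumptions for illustration, not the paper's actual interface.

```python
# Minimal sketch (hypothetical): a 6D disk fill-light parameterization and a
# PALP-style encoder mapping the parameters to conditioning tokens.
# The factor split and all names below are assumptions, not the paper's API.
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class DiskFillLight:
    """Six disentangled factors of a disk-shaped area fill light (assumed split)."""
    azimuth: float      # horizontal light direction, radians
    elevation: float    # vertical light direction, radians
    distance: float     # light-to-face distance
    radius: float       # disk radius (larger disk -> softer shadows)
    intensity: float    # radiant power of the fill light
    temperature: float  # color temperature in kelvin

    def to_tensor(self) -> torch.Tensor:
        return torch.tensor([
            self.azimuth, self.elevation, self.distance,
            self.radius, self.intensity, self.temperature,
        ], dtype=torch.float32)


class LightingPromptEncoder(nn.Module):
    """Embeds the 6D parameters into a fixed number of conditioning tokens."""

    def __init__(self, num_tokens: int = 4, token_dim: int = 768):
        super().__init__()
        self.num_tokens = num_tokens
        self.token_dim = token_dim
        self.mlp = nn.Sequential(
            nn.Linear(6, 256),
            nn.SiLU(),
            nn.Linear(256, num_tokens * token_dim),
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        # params: (B, 6) -> tokens: (B, num_tokens, token_dim), ready to be
        # cross-attended by (or concatenated into) the diffusion backbone's
        # conditioning stream.
        return self.mlp(params).view(-1, self.num_tokens, self.token_dim)


if __name__ == "__main__":
    light = DiskFillLight(azimuth=0.3, elevation=0.6, distance=1.5,
                          radius=0.25, intensity=2.0, temperature=5500.0)
    encoder = LightingPromptEncoder()
    tokens = encoder(light.to_tensor().unsqueeze(0))
    print(tokens.shape)  # torch.Size([1, 4, 768])
```

Under this reading, a one-step model such as FiLitDiff would consume these tokens as its lighting condition; keeping the six factors disentangled is what makes each aspect of the fill light (direction, softness, strength, color) independently controllable at inference time.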