We propose a trait-specific image generation method that models forehead creases geometrically using B-spline and Bézier curves. This approach ensures the realistic generation of both principal creases and non-prominent crease patterns, effectively constructing detailed and authentic forehead-crease images. These geometrically rendered images serve as visual prompts for a diffusion-based Edge-to-Image translation model, which generates corresponding mated samples. The resulting novel synthetic identities are then used to train a forehead-crease verification network. To enhance intra-subject diversity in the generated samples, we employ two strategies: (a) perturbing the control points of B-splines under defined constraints to maintain label consistency, and (b) applying image-level augmentations to the geometric visual prompts, such as dropout and elastic transformations, specifically tailored to crease patterns. By integrating the proposed synthetic dataset with real-world data, our method significantly improves the performance of forehead-crease verification systems under a cross-database verification protocol.
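The two geometric operations described above (rendering a crease as a B-spline and perturbing its control points within a bound to preserve label consistency) can be sketched as follows. This is a minimal illustration using SciPy's `BSpline`; the function names, the clamped-knot construction, and the uniform-noise perturbation bound are assumptions for exposition, not the authors' exact implementation.

```python
import numpy as np
from scipy.interpolate import BSpline


def render_crease(ctrl_pts, degree=3, n_samples=200):
    """Evaluate a clamped B-spline curve through the given 2-D control points.

    Returns an (n_samples, 2) array of x, y coordinates tracing one crease.
    """
    n = len(ctrl_pts)
    # Clamped uniform knot vector: repeating the end knots (degree+1) times
    # forces the curve to start and end exactly at the end control points.
    knots = np.concatenate([
        np.zeros(degree),
        np.linspace(0.0, 1.0, n - degree + 1),
        np.ones(degree),
    ])
    spline = BSpline(knots, np.asarray(ctrl_pts, dtype=float), degree)
    t = np.linspace(0.0, 1.0, n_samples)
    return spline(t)


def perturb_control_points(ctrl_pts, max_shift=2.0, rng=None):
    """Jitter control points within a bounded radius (in pixels).

    A small, bounded perturbation changes the crease shape slightly for
    intra-subject diversity while keeping the identity label consistent.
    """
    rng = np.random.default_rng(rng)
    ctrl = np.asarray(ctrl_pts, dtype=float)
    noise = rng.uniform(-max_shift, max_shift, size=ctrl.shape)
    return ctrl + noise
```

Each perturbed control-point set is re-rendered with `render_crease` to produce a new mated visual prompt for the Edge-to-Image model; the bound `max_shift` plays the role of the "defined constraints" mentioned above.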