We evaluate the information that can unintentionally leak into the low-dimensional output of a neural network by reconstructing an input image from a 40- or 32-element feature vector that is intended to describe only abstract attributes of a facial portrait. The reconstruction requires only black-box access to the image encoder that generates the feature vector. Unlike previous work, we leverage recent advances in image generation and facial similarity, implementing a method that outperforms the current state of the art. Our strategy uses a pretrained StyleGAN and a new loss function that compares the perceptual similarity of portraits by mapping them into the latent space of a FaceNet embedding. Additionally, we present a new technique that fuses the outputs of an ensemble to deliberately generate specific aspects of the reconstructed image.
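To illustrate the idea of comparing portraits in the latent space of a face-recognition embedding, the following is a minimal sketch of such a perceptual loss term. It assumes a frozen, pretrained FaceNet-style model `facenet` that maps aligned face crops to identity embeddings; the function name and the cosine-distance formulation are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F


def facenet_perceptual_loss(facenet: torch.nn.Module,
                            generated: torch.Tensor,
                            target: torch.Tensor) -> torch.Tensor:
    """Perceptual distance between two batches of face images.

    Both inputs are mapped into the latent space of a frozen FaceNet-like
    embedding network; their cosine distance serves as the loss term that
    guides the StyleGAN-based reconstruction toward the target identity.
    """
    emb_gen = F.normalize(facenet(generated), dim=-1)
    emb_tgt = F.normalize(facenet(target), dim=-1)
    # 1 - cosine similarity: 0 for identical embeddings, up to 2 for opposite ones.
    return (1.0 - (emb_gen * emb_tgt).sum(dim=-1)).mean()
```

In such a setup the embedding network stays fixed and only the generator's latent code receives gradients, so the loss rewards reconstructions that the face-recognition model considers the same identity rather than reconstructions that merely match pixel values.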