The increasing dependence on large-scale datasets in machine learning introduces significant privacy and ethical challenges. Synthetic data generation offers a promising solution; however, most current methods rely on external datasets or pre-trained models, which add complexity and escalate resource demands. In this work, we introduce a novel self-contained synthetic augmentation technique that strategically samples from a conditional generative model trained exclusively on the target dataset. This approach eliminates the need for auxiliary data sources. Applied to face recognition datasets, our method achieves 1--12\% performance improvements on the IJB-C and IJB-B benchmarks. It outperforms models trained solely on real data and exceeds the performance of state-of-the-art synthetic data generation baselines. Notably, these gains often surpass those achieved through architectural improvements, underscoring the significant impact of synthetic augmentation in data-scarce settings. These findings demonstrate that carefully integrated synthetic data not only addresses privacy and resource constraints but also substantially boosts model performance. Project page: https://parsa-ra.github.io/auggen