In this research, we present SLYKLatent, a novel approach to gaze estimation that addresses appearance instability in datasets arising from aleatoric uncertainty, covariate shift, and limited test-domain generalization. SLYKLatent first uses self-supervised learning to pre-train on facial expression datasets, then refines the representation with a patch-based tri-branch network and an inverse explained-variance-weighted training loss. On benchmark datasets, it achieves a 10.9% improvement on Gaze360, surpasses the best MPIIFaceGaze results by 3.8%, and leads on a subset of ETH-XGaze by 11.6%, exceeding existing methods by significant margins. Adaptability tests on RAF-DB and AffectNet yield accuracies of 86.4% and 60.9%, respectively. Ablation studies confirm the effectiveness of SLYKLatent's novel components.
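To make the weighting idea concrete, the following is a minimal sketch of how an inverse explained-variance-weighted loss could be computed. The paper's exact formulation is not reproduced here, so the function name `inverse_ev_weighted_loss`, the per-dimension MSE base loss, and the clipping behavior are all illustrative assumptions rather than the authors' definition.

```python
import numpy as np

def inverse_ev_weighted_loss(preds: np.ndarray, targets: np.ndarray,
                             eps: float = 1e-8) -> float:
    """Weight each output dimension's MSE by the inverse of its
    explained variance, so dimensions the model explains poorly
    contribute more to the total loss (hypothetical sketch)."""
    residual = targets - preds                      # (N, D) per-sample errors
    # Explained variance per dimension: 1 - Var(residual) / Var(target)
    ev = 1.0 - residual.var(axis=0) / (targets.var(axis=0) + eps)
    # Clip so near-zero or negative EV yields a large, finite, positive weight
    weights = 1.0 / np.clip(ev, eps, None)
    per_dim_mse = np.mean(residual ** 2, axis=0)    # MSE per output dimension
    return float(np.sum(weights * per_dim_mse))

# Example: 2-D gaze targets (e.g., yaw, pitch) for a small batch
targets = np.array([[0.10, -0.20], [0.30, 0.05], [-0.15, 0.25]])
preds = targets + np.random.default_rng(0).normal(0, 0.05, targets.shape)
print(inverse_ev_weighted_loss(preds, targets))
```

Under this reading, a dimension with low explained variance is upweighted during training, which would push the network to allocate capacity toward the harder-to-predict components of the gaze vector.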