Self-supervised learning has developed rapidly over the last decade and has been applied in many areas of computer vision. Among non-contrastive algorithms, decorrelation-based self-supervised pretraining has shown great promise, yielding performance on par with supervised and contrastive self-supervised baselines. In this work, we explore the decorrelation-based paradigm of self-supervised learning and apply it to learning disentangled stroke features for writer identification. We propose a modified formulation of SWIS, a decorrelation-based framework originally proposed for signature verification, by standardizing the features along each dimension on top of the existing framework. We show that the proposed framework outperforms contemporary self-supervised learning frameworks on the writer identification benchmark, and also outperforms several supervised methods. To the best of our knowledge, this work is the first of its kind to apply self-supervised learning to learning representations for writer verification tasks.
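To make the modification concrete, the sketch below illustrates the general idea of per-dimension feature standardization applied before a decorrelation-style objective. This is a minimal NumPy illustration under stated assumptions: the Barlow Twins-style invariance-plus-redundancy-reduction loss form, the function names, and the `lam` weighting are illustrative, not the paper's exact SWIS formulation.

```python
import numpy as np

def standardize(z, eps=1e-6):
    """Standardize each feature dimension to zero mean and unit variance.

    z: (batch, dim) array of embeddings; statistics are taken over the batch.
    """
    return (z - z.mean(axis=0)) / (z.std(axis=0) + eps)

def decorrelation_loss(z1, z2, lam=5e-3):
    """Illustrative decorrelation objective on standardized embeddings.

    z1, z2: (batch, dim) embeddings of two augmented views.
    After standardization, the cross-correlation matrix is pushed toward
    the identity: diagonal terms toward 1 (view invariance), off-diagonal
    terms toward 0 (decorrelated, disentangled feature dimensions).
    """
    n, _ = z1.shape
    z1, z2 = standardize(z1), standardize(z2)
    c = (z1.T @ z2) / n                                  # cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()            # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy-reduction term
    return on_diag + lam * off_diag
```

Because the standardization fixes each dimension's scale, the diagonal of the cross-correlation matrix for two identical views is already close to 1, so the loss is driven mainly by residual correlations between different feature dimensions.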