Contrastive Learning (CL) has been successfully applied to classification and other downstream tasks involving concrete concepts, such as the objects contained in the ImageNet dataset. So far, no attempt seems to have been made to apply this promising scheme to more abstract entities, a prominent example of which is the concept of (discrete) quantity. CL can frequently be interpreted as a self-supervised scheme guided by some profound and ubiquitous conservation principle (e.g., conservation of identity in object classification tasks). In this introductory work we apply a suitable conservation principle to the semi-abstract concept of natural numbers, by which discrete quantities can be estimated or predicted. We show experimentally, by means of a toy problem, that contrastive learning can be trained to count at a glance with high accuracy, both at human and at super-human ranges. We compare this with a supervised learning (SL) neural network of similar architecture trained on the same at-a-glance counting task. Both schemes exhibit comparably good performance in baseline experiments, where the training and testing distributions are equal. Importantly, we demonstrate that in some generalization scenarios, where the training and testing distributions differ, CL is more robust and achieves markedly lower error.
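To make the conservation idea concrete, the following is a minimal, hypothetical PyTorch sketch, not the paper's actual implementation: two independently generated scenes form a positive pair precisely when they contain the same number of objects, so the conserved signal is quantity rather than visual identity. All names (make_scene, Encoder, info_nce) and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): count-conserving
# contrastive learning with an InfoNCE loss in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_scene(count, size=64):
    """Render a toy 1 x size x size image with exactly `count` bright dots."""
    img = torch.zeros(1, size, size)
    idx = torch.randperm(size * size)[:count]   # distinct pixel positions
    img[0, idx // size, idx % size] = 1.0
    return img

class Encoder(nn.Module):
    """Small CNN mapping a scene to a unit-norm embedding."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: z1[i] and z2[i] are scenes sharing the same count."""
    logits = z1 @ z2.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# One illustrative training step. Counts are sampled without replacement
# so every off-diagonal pair really does differ in quantity.
encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
counts = torch.randperm(9)[:8] + 1            # 8 distinct counts in 1..9
v1 = torch.stack([make_scene(int(c)) for c in counts])
v2 = torch.stack([make_scene(int(c)) for c in counts])  # same counts, new layouts
opt.zero_grad()
loss = info_nce(encoder(v1), encoder(v2))
loss.backward()
opt.step()
print(float(loss))
```

Under this pairing rule, the encoder receives no identity signal at all: the only property shared by a positive pair is the number of objects, so any representation that minimizes the loss must encode quantity.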