Children learn powerful internal models of the world around them from a few years of egocentric visual experience. Can such internal models be learned from a child's visual experience with highly generic learning algorithms, or do they require strong inductive biases? Recent advances in collecting large-scale, longitudinal, developmentally realistic video datasets and in generic self-supervised learning (SSL) algorithms are allowing us to begin to tackle this nature vs. nurture question. However, existing work typically focuses on image-based SSL algorithms and on visual capabilities that can be learned from static images (e.g., object recognition), thus ignoring temporal aspects of the world. To close this gap, here we train self-supervised video models on longitudinal, egocentric headcam recordings collected from a child over a two-year period in their early development (6-31 months). The resulting models are highly effective at facilitating the learning of action concepts from a small number of labeled examples; they have favorable data-size scaling properties; and they display emergent video interpolation capabilities. Video models also learn more robust object representations than image-based models trained on exactly the same data. These results suggest that important temporal aspects of a child's internal model of the world may be learnable from their visual experience using highly generic learning algorithms and without strong inductive biases.
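As a concrete illustration of the kind of generic video SSL objective the abstract refers to, below is a minimal sketch of masked space-time patch prediction (a VideoMAE-style objective). The abstract does not name the specific algorithm, so this is an illustrative assumption rather than the paper's exact method; `TinyVideoMAE`, all hyperparameters, and the one-layer decoder are hypothetical, and random tensors stand in for the headcam clips.

```python
# A toy masked video autoencoder: one generic SSL objective that could be
# applied to egocentric headcam clips. Class name, hyperparameters, and the
# one-layer decoder are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyVideoMAE(nn.Module):
    """Embed space-time patches ("tubelets"), hide most of them, encode the
    visible ones, and regress the pixels of the hidden ones."""

    def __init__(self, patch=16, tube=2, dim=192, depth=4, heads=3,
                 frames=16, size=112, mask_ratio=0.9):
        super().__init__()
        self.patch, self.tube, self.mask_ratio = patch, tube, mask_ratio
        self.patch_dim = 3 * tube * patch * patch
        n_tokens = (frames // tube) * (size // patch) ** 2
        self.embed = nn.Linear(self.patch_dim, dim)
        self.pos = nn.Parameter(torch.randn(1, n_tokens, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.decoder = nn.Linear(dim, self.patch_dim)  # toy pixel regressor

    def tubelets(self, video):
        # (B, 3, T, H, W) -> (B, N, 3 * tube * patch * patch)
        x = video.unfold(2, self.tube, self.tube)
        x = x.unfold(3, self.patch, self.patch)
        x = x.unfold(4, self.patch, self.patch)
        # now (B, 3, T', H', W', tube, patch, patch)
        return x.permute(0, 2, 3, 4, 1, 5, 6, 7).flatten(1, 3).flatten(2)

    def forward(self, video):
        patches = self.tubelets(video)
        tok = self.embed(patches) + self.pos
        B, N, D = tok.shape
        n_vis = int(N * (1 - self.mask_ratio))
        order = torch.rand(B, N, device=tok.device).argsort(dim=1)
        vis, msk = order[:, :n_vis], order[:, n_vis:]
        enc = self.encoder(tok.gather(1, vis[..., None].expand(-1, -1, D)))
        # toy decoding: each masked position's positional embedding, plus a
        # pooled summary of the visible context, predicts the missing pixels
        query = self.pos.expand(B, -1, -1).gather(1, msk[..., None].expand(-1, -1, D))
        pred = self.decoder(query + enc.mean(dim=1, keepdim=True))
        target = patches.gather(1, msk[..., None].expand(-1, -1, self.patch_dim))
        return F.mse_loss(pred, target)


model = TinyVideoMAE()
clips = torch.randn(2, 3, 16, 112, 112)  # stand-ins for headcam clips
loss = model(clips)  # reconstruction loss on the masked patches
loss.backward()
```

In a faithful implementation, the decoder would be a small transformer over the encoded visible tokens plus learned mask tokens rather than this one-layer regressor; the sketch keeps only the core idea of predicting hidden space-time patches from visible context, which is what makes the objective temporal rather than image-based.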