We propose a self-supervised method for learning representations based on spatial audio-visual correspondences in egocentric videos. Our method uses a masked auto-encoding framework to synthesize masked binaural (multi-channel) audio through the synergy of audio and vision, thereby learning useful spatial relationships between the two modalities. We use our pretrained features to tackle two downstream video tasks requiring spatial understanding in social scenarios: active speaker detection and spatial audio denoising. Through extensive experiments, we show that our features are generic enough to improve over multiple state-of-the-art baselines on both tasks on two challenging egocentric video datasets that offer binaural audio, EgoCom and EasyCom. Project: http://vision.cs.utexas.edu/projects/ego_av_corr.
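The core pretext task above — reconstructing masked binaural audio and scoring the loss only on the masked portion — can be illustrated with a toy sketch. All shapes, the 75% mask ratio, and the zero-prediction stand-in for the decoder are illustrative assumptions, not the paper's actual configuration; a real model would encode the visible audio patches together with video features and decode the masked ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binaural spectrogram: 2 channels (left/right ear), 64 freq x 96 time bins.
# These dimensions are hypothetical, chosen only for the sketch.
spec = rng.standard_normal((2, 64, 96))

def patchify(x, p=8):
    """Split a (channels, F, T) spectrogram into non-overlapping p x p patches."""
    c, f, t = x.shape
    x = x.reshape(c, f // p, p, t // p, p)
    # -> (num_patches, patch_dim), with both channels flattened into each patch
    return x.transpose(1, 3, 0, 2, 4).reshape(-1, c * p * p)

patches = patchify(spec)                       # (96, 128)

# MAE-style masking: hide a large fraction of patches.
num_masked = int(0.75 * len(patches))
mask = np.zeros(len(patches), dtype=bool)
mask[rng.choice(len(patches), num_masked, replace=False)] = True

# Placeholder for the decoder's output; a real model predicts the masked
# audio patches from visible audio plus the synchronized video.
pred = np.zeros_like(patches)

# Reconstruction loss is computed only on the masked patches.
loss = np.mean((pred[mask] - patches[mask]) ** 2)
```

The key design choice mirrored here is that the loss ignores visible patches entirely, so the encoder must exploit cross-modal (audio-visual) spatial cues to fill in the hidden binaural channels.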