Passive acoustic monitoring (PAM) data are often only weakly labelled, audited at the scale of detection presence or absence over timescales of minutes to hours. Moreover, these data vary greatly from one deployment to the next, owing to differences in ambient noise and in the signals themselves across sources and geographies. This study proposes a two-step solution for leveraging weakly annotated data to train Deep Learning (DL) detection models. Our case study involves binary classification of the presence or absence of sperm whale (\textit{Physeter macrocephalus}) click trains in 4-minute-long recordings from a dataset comprising diverse sources and deployment conditions, chosen to maximise generalisability. We tested methods for extracting acoustic features from lengthy audio segments and trained Temporal Convolutional Networks (TCNs) on the extracted feature sequences for classification. For feature extraction, we introduced a new approach using Variational AutoEncoders (VAEs) to extract information from both waveforms and spectrograms, which eliminates the need for manual threshold setting or time-consuming strong labelling. For classification, TCNs were trained separately on sequences of either VAE embeddings or handpicked acoustic features extracted from the waveform and spectrogram representations using classical methods, to compare the efficacy of the two approaches. The TCNs demonstrated robust classification capabilities on a validation set, achieving accuracies exceeding 85\% on the 4-minute acoustic recordings. Notably, TCNs trained on handpicked acoustic features exhibited greater variability in performance across recordings from diverse deployment conditions, whereas those trained on VAE embeddings performed more consistently, highlighting the robust transferability of VAE-based feature extraction across deployment conditions.
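The defining operation of a TCN is the causal dilated 1-D convolution, which lets the receptive field over a feature sequence grow exponentially with depth while output at time $t$ depends only on inputs at or before $t$. As a minimal sketch (not the study's implementation; kernel values and dilation are illustrative), the operation on a single channel can be written as:

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution.

    Output y[t] = sum_i w[i] * x[t - i*dilation], i.e. each output
    depends only on the current and past inputs, never on the future.
    Left zero-padding keeps the output the same length as the input.
    """
    k = len(w)
    pad = (k - 1) * dilation            # left padding preserves causality
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    for t in range(len(x)):
        # taps reach back by multiples of the dilation factor
        y[t] = sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
    return y

# An impulse input exposes the tap positions: with kernel [1, 1] and
# dilation 2, the filter responds at lags 0 and 2.
y = causal_dilated_conv1d([1.0, 0.0, 0.0, 0.0, 0.0], [1.0, 1.0], dilation=2)
# y -> [1, 0, 1, 0, 0]
```

Stacking such layers with dilations 1, 2, 4, ... is what allows a TCN to summarise minutes-long sequences of per-frame embeddings into a single presence/absence decision.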