Autonomous soundscape augmentation systems typically use trained models to select optimal maskers that effect a desired perceptual change. While acoustic information is paramount to such systems, contextual information, including participant demographics and the visual environment, also influences acoustic perception. Hence, we propose modular modifications to an existing attention-based deep neural network to allow early, mid-level, and late fusion of participant-linked, visual, and acoustic features. Ablation studies on module configurations and corresponding fusion methods using the ARAUS dataset show that contextual features yield statistically significant improvements in predicting normalized ISO Pleasantness, with a mean squared error of $0.1194\pm0.0012$ for the best-performing all-modality model, compared to $0.1217\pm0.0009$ for the audio-only model. Soundscape augmentation systems can thereby leverage multimodal inputs for improved performance. We also use the trained models to investigate the impact of individual participant-linked factors, illustrating improvements in model explainability.
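The sketch below illustrates, under simplifying assumptions, one way the late-fusion variant described above could be structured: per-modality projections of audio, visual, and participant-linked features are concatenated and passed to a small regression head predicting normalized ISO Pleasantness. All module names, feature dimensions, and the concatenation-plus-MLP fusion head are illustrative assumptions, not the authors' implementation.

```python
# Minimal late-fusion sketch (illustrative only; not the paper's architecture).
import torch
import torch.nn as nn

class LateFusionRegressor(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=64, participant_dim=16, hidden_dim=64):
        super().__init__()
        # Per-modality projections into a shared embedding space.
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.participant_proj = nn.Linear(participant_dim, hidden_dim)
        # Fusion head: concatenate modality embeddings and regress a scalar.
        self.head = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, audio_feat, visual_feat, participant_feat):
        fused = torch.cat(
            [
                torch.relu(self.audio_proj(audio_feat)),
                torch.relu(self.visual_proj(visual_feat)),
                torch.relu(self.participant_proj(participant_feat)),
            ],
            dim=-1,
        )
        return self.head(fused)  # predicted normalized ISO Pleasantness

# Example usage with random inputs (batch of 8 augmented soundscapes).
model = LateFusionRegressor()
pred = model(torch.randn(8, 128), torch.randn(8, 64), torch.randn(8, 16))
loss = nn.functional.mse_loss(pred.squeeze(-1), torch.rand(8))  # MSE training criterion
```

Early or mid-level fusion variants would instead inject the contextual embeddings before or within the attention-based audio backbone rather than at the regression head.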