This paper develops a theory-based, explainable deep learning convolutional neural network (CNN) classifier to predict the time-varying emotional response to music. We design novel CNN filters that leverage the harmonic frequency structure known from acoustic physics to shape the perception of musical features. Our theory-based model is more parsimonious than atheoretical deep learning models yet provides comparable predictive performance, and it outperforms models built on handcrafted features. The model can be complemented with handcrafted features, but the resulting performance improvement is marginal. Importantly, the harmonics-based structure placed on the CNN filters makes the model's predictions of emotional response (valence and arousal) easier to explain, because emotion is closely related to consonance, a perceptual feature defined by the alignment of harmonics. Finally, we illustrate the utility of our model with an application to digital advertising. Motivated by YouTube mid-roll ads, we conduct a lab experiment in which we exogenously insert ads at different times within videos. We find that ads placed in emotionally similar contexts increase ad engagement (lower skip rates, higher brand recall). Ad insertion based on emotional-similarity metrics predicted by our theory-based, explainable model produces engagement comparable to or better than that achieved with atheoretical models.