Depression is a mental disorder that can cause a variety of psychological, physical, and social symptoms. Speech has been shown to be an objective marker for the early recognition of depression, and many studies have therefore sought to recognize depression from speech. However, existing methods rely solely on spontaneous speech, neglecting the information carried by read speech; depend on transcripts that are either difficult to obtain (manual) or suffer from high word-error rates (automatic); and do not exploit input-conditional computation. To address these limitations, this is the first study on the depression recognition task to obtain representations of both spontaneous and read speech, apply multimodal fusion methods, and employ Mixture of Experts (MoE) models within a single deep neural network. Specifically, we use the audio files of both the interview and the reading tasks and convert each audio file into a log-Mel spectrogram together with its delta and delta-delta. Next, the image representations of the two tasks pass through shared AlexNet models, whose outputs are fed into a multimodal fusion method. The resulting vector is passed through a MoE module. We employ three MoE variants, namely a sparsely-gated MoE and multilinear MoEs based on factorization. Findings show that our proposed approach achieves an Accuracy of 87.00% and an F1-score of 86.66% on the Androids corpus.
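For concreteness, the following is a minimal sketch of the front-end feature extraction described above, not the authors' released code: it stacks the log-Mel spectrogram and its first- and second-order deltas as the channels of an image-like tensor. The sampling rate and Mel parameters are illustrative assumptions.

```python
# Sketch of the log-Mel / delta / delta-delta representation (assumed
# parameters; not the paper's exact configuration).
import librosa
import numpy as np

def audio_to_channels(path, sr=16000, n_mels=128):
    """Load an audio file and stack log-Mel, delta, and delta-delta as channels."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)                  # log-Mel spectrogram
    delta = librosa.feature.delta(log_mel)              # first-order difference
    delta2 = librosa.feature.delta(log_mel, order=2)    # second-order difference
    return np.stack([log_mel, delta, delta2], axis=0)   # shape: (3, n_mels, frames)
```

One such three-channel tensor per task (interview and reading) can then be fed to the shared AlexNet models as an ordinary three-channel image.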
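Likewise, a minimal PyTorch sketch of the sparsely-gated MoE variant named above, applied to the fused vector; the number of experts, the top-k value, and the layer sizes are illustrative assumptions rather than the paper's reported configuration.

```python
# Sparsely-gated MoE sketch (top-k gating in the style of Shazeer et al.);
# all hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim, num_experts=8, k=2, num_classes=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)  # gating network scores each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_classes))
            for _ in range(num_experts)
        )

    def forward(self, x):                         # x: (batch, dim) fused vector
        scores = self.gate(x)                     # (batch, num_experts)
        topv, topi = scores.topk(self.k, dim=-1)  # keep only the top-k experts
        weights = F.softmax(topv, dim=-1)         # renormalize over the kept experts
        out = 0.0
        for slot in range(self.k):
            idx = topi[:, slot]                   # chosen expert per sample
            expert_out = torch.stack([
                self.experts[e](x[b:b + 1]).squeeze(0)
                for b, e in enumerate(idx.tolist())
            ])                                    # (batch, num_classes)
            out = out + weights[:, slot:slot + 1] * expert_out
        return out
```

Because only the top-k experts are evaluated for each input, the amount and routing of computation depend on the input itself, which is the input-conditional computation the abstract refers to (the per-sample loop above trades efficiency for readability).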