Ensuring intelligible speech communication for hearing assistive devices in low-latency scenarios presents significant challenges in terms of speech enhancement, coding, and transmission. In this paper, we propose novel solutions for low-latency joint speech transmission and enhancement, leveraging deep neural networks (DNNs). Our approach integrates two state-of-the-art DNN architectures, one for low-latency speech enhancement and one for low-latency analog joint source-channel coding-based transmission, into a combined low-latency system that is trained end-to-end. Because of the computational demands of the enhancement system, placing enhancement before transmission is suitable when high computational power is unavailable at the decoder, as is the case for hearing assistive devices. The proposed system allows the total latency to be configured, achieving high performance even at latencies as low as 3 ms, which is typically challenging to attain. The simulation results provide compelling evidence that a jointly trained enhancement and transmission system is superior to a simple concatenation of the two systems across diverse settings, encompassing various wireless channel conditions, latencies, and background noise scenarios.