A growing number of people in China live with varying degrees of visual impairment, which has made cross-modal conversion between a single image or video frame and audio expressing the same information a research hotspot. Deep learning approaches such as OCR+Vocoder and Im2Wav enable English speech synthesis or image-to-sound matching in a self-supervised manner. However, the audio data available for training is limited, and English is not universally accessible to visually impaired people of different educational backgrounds. To address these problems of data volume and language applicability, and thereby improve reading efficiency for visually impaired people, we constructed CLIP-KNN-Fastspeech2, an image-to-speech framework grounded in the Chinese context. The framework integrates multiple foundation models and adopts a strategy of independent pre-training followed by joint fine-tuning. First, the Chinese CLIP model and the Fastspeech2 text-to-speech model were pre-trained on two public datasets, MUGE and Baker, respectively, and their convergence was verified. Joint fine-tuning was then performed on a self-built Braille image dataset. Experimental results on public datasets such as VGGSound, Flickr8k, and ImageHear, as well as the self-built Braille dataset BIT-DP, show that the model improves on objective metrics including BLEU4, FAD (Fréchet Audio Distance), WER (Word Error Rate), and inference speed. This confirms that the model can still synthesize high-quality speech under limited data and demonstrates the effectiveness of the joint training strategy that integrates multiple foundation models.
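The retrieval stage linking the two pre-trained models (CLIP image embedding → KNN lookup of candidate text → text handed to Fastspeech2 for synthesis) can be sketched as follows. This is a minimal illustration under stated assumptions: the function name `knn_retrieve`, the toy embeddings, and the cosine-similarity KNN are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np

def knn_retrieve(image_emb, text_embs, texts, k=1):
    """Return the k candidate texts whose embeddings are most similar
    (by cosine similarity) to the given image embedding."""
    sims = text_embs @ image_emb / (
        np.linalg.norm(text_embs, axis=1) * np.linalg.norm(image_emb) + 1e-8
    )
    top = np.argsort(-sims)[:k]
    return [texts[i] for i in top]

# Toy example: random stand-in embeddings in place of real CLIP outputs.
rng = np.random.default_rng(0)
texts = ["你好", "谢谢", "再见"]
text_embs = rng.normal(size=(3, 8))
# Simulate an image embedding that lies close to the embedding of "谢谢".
image_emb = text_embs[1] + 0.01 * rng.normal(size=8)
retrieved = knn_retrieve(image_emb, text_embs, texts, k=1)
print(retrieved)  # → ['谢谢']
```

In the full pipeline, `retrieved[0]` would be passed to the fine-tuned Fastspeech2 model for Chinese speech synthesis; the KNN step substitutes retrieval for an explicit image-captioning decoder, which is what allows the framework to work with limited paired data.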