Automatic Speech Recognition (ASR) transcripts contain recognition errors and various spoken-language phenomena such as disfluencies, ungrammatical sentences, and incomplete sentences, and therefore suffer from poor readability. To improve readability, we propose the Contextualized Spoken-to-Written conversion (CoS2W) task, which corrects ASR and grammar errors and transfers informal text into a formal style while preserving content, leveraging contexts and auxiliary information. This task naturally matches the in-context learning capabilities of Large Language Models (LLMs). To facilitate comprehensive comparisons across LLMs, we construct a document-level Spoken-to-Written conversion of ASR Transcripts Benchmark (SWAB) dataset. Using SWAB, we study the impact of different granularity levels on CoS2W performance and propose methods that exploit contexts and auxiliary information to enhance the outputs. Experimental results show that LLMs have the potential to excel at the CoS2W task, particularly in grammaticality and formality, and that our methods enable LLMs to make effective use of contexts and auxiliary information. We further investigate the effectiveness of LLMs as evaluators and find that LLM evaluators correlate strongly with human rankings of faithfulness and formality, validating the reliability of LLM evaluators for the CoS2W task.