Follow-up conversations with virtual assistants (VAs) let a user interact with a VA seamlessly, without repeatedly invoking it with a keyword after the first query. Accurate Device-directed Speech Detection (DDSD) on these follow-up queries is therefore critical for a naturalistic user experience. To this end, we explore the use of Large Language Models (LLMs) and model the first query when making inferences about follow-ups (based on the ASR-decoded text), either by prompting a pretrained LLM or by adapting a binary classifier on top of the LLM. In doing so, we also exploit ASR uncertainty when designing the LLM prompts. On a real-world dataset of follow-up conversations, we show that jointly modeling the previous speech context and ASR uncertainty yields large gains (a 20-40% reduction in false alarms at 10% fixed false rejects) over modeling follow-ups alone.
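As a minimal sketch of the prompting idea, the snippet below assembles an LLM prompt that jointly encodes the first query and the follow-up's ASR n-best hypotheses with their confidence scores. The function name, prompt wording, and n-best format are illustrative assumptions, not the paper's exact template.

```python
def build_ddsd_prompt(first_query, followup_nbest):
    """Assemble a DDSD prompt from the first query and the follow-up's
    ASR n-best list of (hypothesis, confidence) pairs.

    The confidence scores expose ASR uncertainty to the LLM, which can
    discount low-confidence hypotheses when judging device-directedness.
    All formatting choices here are hypothetical.
    """
    lines = [
        "Decide whether the follow-up utterance is directed at the "
        "virtual assistant. Answer 'yes' or 'no'.",
        f"First query: {first_query}",
        "Follow-up ASR hypotheses (with confidence):",
    ]
    for hyp, conf in followup_nbest:
        lines.append(f"  - {hyp} (confidence: {conf:.2f})")
    lines.append("Device-directed:")
    return "\n".join(lines)


# Example: a weather query followed by an ambiguous follow-up.
prompt = build_ddsd_prompt(
    "what's the weather today",
    [("and tomorrow", 0.82), ("hand tomorrow", 0.11)],
)
print(prompt)
```

The alternative described in the abstract, adapting a binary classifier on top of the LLM, would instead feed the same concatenated context through the model and train a lightweight head on its final hidden state.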