Speech-enabled foundation models, whether flexible speech-recognition-based systems or audio-prompted large language models (LLMs), are becoming increasingly popular. One interesting aspect of these models is their ability to perform tasks other than automatic speech recognition (ASR) given an appropriate prompt. For example, the OpenAI Whisper model can perform both speech transcription and speech translation. With the development of audio-prompted LLMs there is the potential for even greater control options. In this work we demonstrate that this greater flexibility makes such systems susceptible to model-control adversarial attacks: without any access to the model prompt, it is possible to modify the behaviour of the system by appropriately changing the audio input. To illustrate this risk, we demonstrate that a short universal adversarial acoustic segment can be prepended to any input speech signal to override the prompt setting of an ASR foundation model. Specifically, we successfully use such a universal adversarial acoustic segment to control Whisper so that it always performs speech translation, despite being set to perform speech transcription. Overall, this work demonstrates a new form of adversarial attack on multi-tasking speech-enabled foundation models that needs to be considered prior to the deployment of such models.
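The core attack operation described above, prepending a fixed universal segment to an arbitrary waveform before it reaches the model, can be sketched minimally as follows. This is an illustrative sketch only: the segment length (0.64 s here) and the random initialisation are assumptions, and in a real attack the segment would be optimised (e.g. by gradient descent against the model's decoder) rather than random.

```python
import numpy as np

SAMPLE_RATE = 16000  # sample rate expected by Whisper

def prepend_adversarial_segment(audio: np.ndarray, segment: np.ndarray) -> np.ndarray:
    """Prepend a universal adversarial acoustic segment to a speech waveform.

    The same fixed segment is reused for every input utterance, which is
    what makes the attack "universal".
    """
    return np.concatenate([segment, audio])

# Hypothetical universal segment: in practice this would be learned by
# optimising the waveform so the model switches task; random init shown here.
rng = np.random.default_rng(0)
adv_segment = rng.uniform(-1.0, 1.0, int(0.64 * SAMPLE_RATE)).astype(np.float32)

# Placeholder 1-second input utterance (silence) standing in for real speech.
speech = np.zeros(SAMPLE_RATE, dtype=np.float32)

attacked = prepend_adversarial_segment(speech, adv_segment)
```

The attacked waveform would then be passed to the model exactly as ordinary audio (e.g. to Whisper configured for transcription); the attacker never touches the text prompt, only the acoustic input.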