With the rapid advancement and adoption of Audio Large Language Models (ALLMs), voice agents are now being deployed in high-stakes domains such as banking, customer service, and IT support. However, their vulnerability to adversarial misuse remains largely unexplored. While prior work has examined aspects of trustworthiness in ALLMs, such as harmful content generation and hallucination, systematic security evaluations of voice agents are still lacking. To address this gap, we propose Aegis, a red-teaming framework for the governance, integrity, and security of voice agents. Aegis models the realistic deployment pipeline of voice agents and designs structured adversarial scenarios covering critical risks, including privacy leakage, privilege escalation, and resource abuse. We evaluate the framework through case studies in banking call centers, IT support, and logistics. Our evaluation shows that while access controls mitigate data-level risks, voice agents remain vulnerable to behavioral attacks that access restrictions alone cannot address. We observe systematic differences across model families, with open-weight models exhibiting higher susceptibility, underscoring the need for layered defenses that combine access control, policy enforcement, and behavioral monitoring to secure next-generation voice agents.