Machine Learning as a Service (MLaaS) has seen rapid adoption driven by recent advances in the artificial intelligence (AI) industry. This growth, however, raises concerns about AI defense mechanisms, particularly covert attacks mounted by third-party providers that cannot be fully trusted. Recent research has shown that audio backdoors can use specific signal transformations as their triggering mechanism. We introduce DynamicTrigger, a method for performing dynamic backdoor attacks that uses carefully designed perturbations to make poisoned samples indistinguishable from clean ones. By varying the signal sampling rate and masking speaker identities with dynamic sound triggers (such as hand clapping), DynamicTrigger can deceive automatic speech recognition (ASR) systems. Our empirical evaluation shows that DynamicTrigger is both potent and stealthy, achieving high attack success rates on poisoned data while preserving high accuracy on clean data.
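The poisoning idea described above can be illustrated with a minimal sketch: a low-amplitude, non-speech trigger (a clap-like transient) is mixed into a clean utterance, and the signal is then resampled at a different rate. This is a hypothetical toy implementation, not the paper's actual pipeline; the function name `poison_sample`, the mixing weight `alpha`, and the synthetic signals are all illustrative assumptions.

```python
import numpy as np

def poison_sample(audio, trigger, sr_in=16000, sr_out=8000, alpha=0.1):
    """Hypothetical sketch: overlay a faint trigger, then resample.

    `alpha` controls trigger loudness (kept small for stealth);
    `sr_in`/`sr_out` model the fluctuating sampling rate.
    """
    # Repeat/trim the trigger so it matches the utterance length
    t = np.resize(trigger, audio.shape)
    mixed = (1 - alpha) * audio + alpha * t
    # Naive linear-interpolation resampling to the new rate
    n_out = int(len(mixed) * sr_out / sr_in)
    x_old = np.linspace(0.0, 1.0, len(mixed))
    x_new = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_new, x_old, mixed)

# Synthetic stand-ins for a clean utterance and a clap-like trigger
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000) * 0.05          # 1 s of low-level "speech"
clap = np.exp(-np.linspace(0, 8, 16000)) * rng.standard_normal(16000)
poisoned = poison_sample(clean, clap)              # half the original length
```

In a real attack, the trigger and resampling schedule would vary across poisoned samples, which is what makes the backdoor dynamic rather than a fixed, detectable pattern.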