To prevent the misuse of Large Language Models (LLMs) for malicious purposes, numerous efforts have been devoted to developing safety alignment mechanisms for LLMs. However, as a multitude of LLMs become readily accessible through various Model-as-a-Service (MaaS) platforms, attackers can strategically exploit the heterogeneous safety policies of these LLMs to fulfill malicious information-generation tasks in a distributed manner. In this study, we introduce \textit{\textbf{PoisonSwarm}} to demonstrate how attackers can reliably launder malicious tasks via the speculative use of LLM crowdsourcing. Built upon a scheduler that orchestrates crowdsourced LLMs, PoisonSwarm maps a given malicious task to a benign analogue to derive a content template, decomposes the template into semantic units for crowdsourced unit-wise rewriting, and reassembles the rewritten outputs into the target malicious content. Experiments demonstrate that PoisonSwarm outperforms existing methods in data quality, diversity, and success rate. Regulation simulations further reveal the difficulty of governing such distributed, orchestrated misuse within MaaS ecosystems, highlighting the need for coordinated, ecosystem-level defenses.