Small language models (SLMs) have become increasingly prominent for deployment on edge devices due to their high efficiency and low computational cost. While researchers continue to advance the capabilities of SLMs through innovative training strategies and model compression techniques, the security risks of SLMs have received considerably less attention than those of large language models (LLMs). To fill this gap, we conduct a comprehensive empirical study evaluating the security performance of 13 state-of-the-art SLMs under various jailbreak attacks. Our experiments demonstrate that most SLMs are quite susceptible to existing jailbreak attacks, and some are even vulnerable to direct harmful prompts. To address these safety concerns, we evaluate several representative defense methods and demonstrate their effectiveness in enhancing the security of SLMs. We further analyze the potential security degradation caused by different SLM techniques, including architecture compression, quantization, and knowledge distillation. We expect that our research can highlight the security challenges of SLMs and provide valuable insights for future work on developing more robust and secure SLMs.