Small language models (SLMs) have become increasingly prominent in edge-device deployment due to their high efficiency and low computational cost. While researchers continue to advance the capabilities of SLMs through innovative training strategies and model compression techniques, the security risks of SLMs have received considerably less attention than those of large language models (LLMs). To fill this gap, we present a comprehensive empirical study evaluating the security of 13 state-of-the-art SLMs under various jailbreak attacks. Our experiments demonstrate that most SLMs are quite susceptible to existing jailbreak attacks, and some are vulnerable even to direct harmful prompts. To address these safety concerns, we evaluate several representative defense methods and demonstrate their effectiveness in enhancing the security of SLMs. We further analyze the potential security degradation caused by different SLM techniques, including architecture compression, quantization, and knowledge distillation. We hope that our study highlights the security challenges of SLMs and provides valuable insights for future work on developing more robust and secure SLMs.