Fine-tuned LLMs can covertly encode prompt secrets into their outputs via steganographic channels. Prior work demonstrated this threat but relied on trivially recoverable encodings. We formalize payload recoverability via classifier accuracy and show that previous schemes achieve 100\% recoverability. In response, we introduce low-recoverability steganography, which replaces arbitrary mappings with embedding-space-derived ones. For Llama-8B (LoRA) and Ministral-8B (LoRA) trained on TrojanStego prompts, exact secret recovery rises from 17\%$\rightarrow$30\% (+78\%) and 24\%$\rightarrow$43\% (+80\%) respectively, while on Llama-70B (LoRA) trained on Wiki prompts it climbs from 9\%$\rightarrow$19\% (+123\%), all while reducing payload recoverability. Turning to detection, we argue that detecting fine-tuning-based steganographic attacks requires approaches beyond traditional steganalysis: standard methods measure distributional shift, which is an expected side effect of fine-tuning. Instead, we propose a mechanistic-interpretability approach: linear probes trained on later-layer activations detect the secret with up to 33\% higher accuracy in fine-tuned models than in base models, even for low-recoverability schemes. This suggests that malicious fine-tuning leaves actionable internal signatures amenable to interpretability-based defenses.
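The probe-based detection idea can be illustrated with a minimal synthetic sketch: if fine-tuning causes a secret bit to shift later-layer activations along some direction, a linear probe trained on those activations separates the classes, whereas on base-model activations (no leak) it stays near chance. All details below (dimensions, leak model, training scheme) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for later-layer activations: 256-dim vectors.
# In the "fine-tuned" case the secret bit adds a shift along a fixed
# direction; in the "base" case activations carry no secret signal.
d, n = 256, 2000
secret_bits = rng.integers(0, 2, size=n).astype(float)
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

def activations(leak_strength):
    noise = rng.normal(size=(n, d))
    return noise + leak_strength * secret_bits[:, None] * direction

def probe_accuracy(X, y):
    # Train a linear (logistic) probe by plain gradient descent on
    # half the data; report held-out accuracy on the other half.
    Xtr, ytr, Xte, yte = X[:n // 2], y[:n // 2], X[n // 2:], y[n // 2:]
    w, b = np.zeros(d), 0.0
    for _ in range(300):
        p = 1.0 / (1.0 + np.exp(-(Xtr @ w + b)))
        g = p - ytr
        w -= 0.1 * Xtr.T @ g / len(ytr)
        b -= 0.1 * g.mean()
    return float(((Xte @ w + b > 0) == yte).mean())

base_acc = probe_accuracy(activations(0.0), secret_bits)   # no leak: ~chance
tuned_acc = probe_accuracy(activations(3.0), secret_bits)  # leak: well above
```

The gap between `tuned_acc` and `base_acc` is the detection signal: the probe only succeeds when fine-tuning has written the secret into the activations, which is the internal signature the paper's defense relies on.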