Large language models (LLMs) exhibit an excellent ability to understand human languages, but do they also understand their own language, one that appears as gibberish to us? In this work we delve into this question, aiming to uncover the mechanisms underlying such behavior in LLMs. We employ the Greedy Coordinate Gradient (GCG) optimizer to craft prompts that compel LLMs to generate coherent responses from seemingly nonsensical inputs. We call these inputs LM Babel, and this work systematically studies the behavior of LLMs manipulated by these prompts. We find that manipulation efficiency depends on the target text's length and perplexity, with Babel prompts often residing in lower loss minima than natural prompts. We further examine the structure of Babel prompts and evaluate their robustness. Notably, we find that guiding the model to generate harmful texts is no more difficult than guiding it to generate benign texts, suggesting a lack of alignment for out-of-distribution prompts.
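The abstract's core mechanism, Greedy Coordinate Gradient optimization, alternates between a gradient-based shortlist of token substitutions and exact evaluation of each candidate swap. As a minimal sketch of that loop, the toy below replaces the LM with a hypothetical linear per-position token cost `C` (so the one-hot gradient is available in closed form); with a real model, the loss and gradient would come from the LM's likelihood of the target text under the Babel prompt. All names here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

V, L = 20, 8                    # toy vocabulary size and prompt length
C = rng.normal(size=(L, V))     # hypothetical per-position token cost (stand-in for LM loss)

def loss(x):
    """Toy target loss: sum of per-position costs of the current tokens."""
    return C[np.arange(L), x].sum()

def onehot_grad(x):
    """Gradient of the loss w.r.t. the one-hot token matrix.
    For this linear toy loss it is simply C; with a real LM it would
    come from backpropagation through the model."""
    return C

def gcg_step(x, k=4):
    """One GCG step: use the gradient to shortlist the top-k token
    substitutions at every position, evaluate each swap exactly,
    and keep the single best-scoring swap."""
    g = onehot_grad(x)
    best_x, best_l = x, loss(x)
    for i in range(L):
        # candidates = tokens with the most negative gradient at position i
        for t in np.argsort(g[i])[:k]:
            cand = x.copy()
            cand[i] = t
            l = loss(cand)
            if l < best_l:
                best_x, best_l = cand, l
    return best_x, best_l

x = rng.integers(0, V, size=L)  # random "Babel" initialization
for _ in range(2 * L):
    x, l = gcg_step(x)
```

Because the toy loss is separable, the loop provably converges to the per-position cost minimum; the point of the sketch is only the propose-then-verify structure of the coordinate updates, which is what makes GCG tractable over a discrete token space.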