We consider coverless steganography in which a Large Language Model (LLM) drives an arithmetic coding decoder to generate stego-texts. An efficient method should embed the secret message bits in as few language tokens as possible while keeping the stego-text natural and fluent. We show that, at the level of individual tokens, this problem is mathematically equivalent to maximizing the entropy of a replacement next-token distribution, subject to a constraint on the KL divergence between the chosen distribution and the original distribution given by the LLM. We provide a closed-form solution to this optimization problem that can be computed efficiently. Several important practical issues are also tackled: 1) an often-overlooked tokenization mismatch issue is resolved with a simple prompt selection approach; 2) the combination of the optimized distribution with the vocabulary truncation technique is considered; and 3) the combination of the optimized distribution with other sequence-level selection heuristics is studied to further enhance efficiency and reliability.
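The core optimization can be illustrated concretely. Below is a minimal sketch, assuming (per the standard Lagrangian analysis, not necessarily the paper's exact closed form) that the maximum-entropy distribution under a KL budget takes the exponential-tilting form q_i ∝ p_i^α with α ∈ [0, 1], where α is chosen by bisection so that the constraint is tight; the function name `max_entropy_under_kl` and the example values are illustrative only.

```python
import math

def max_entropy_under_kl(p, eps, tol=1e-10):
    """Maximize the entropy of q subject to KL(q || p) <= eps.

    Assumption (standard result, hedged): the optimizer is a tilt
    q_i ∝ p_i**alpha. alpha = 1 recovers p (KL = 0, minimum entropy
    gain); alpha = 0 gives the uniform distribution (maximum KL).
    KL decreases monotonically as alpha increases toward 1, so we
    bisect on alpha to make the KL budget tight.
    """
    def tilt(alpha):
        w = [pi ** alpha for pi in p]
        z = sum(w)
        return [wi / z for wi in w]

    def kl(q):
        return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

    # If the budget already admits the uniform distribution, return it.
    if kl(tilt(0.0)) <= eps:
        return tilt(0.0)

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl(tilt(mid)) > eps:
            lo = mid  # constraint violated: move toward p
        else:
            hi = mid  # feasible: try flattening further
    return tilt(hi)

# Example: flatten a skewed next-token distribution within a KL budget.
q = max_entropy_under_kl([0.7, 0.2, 0.1], eps=0.05)
```

Intuitively, the tilted distribution is the original LLM distribution raised to a power below one (a temperature above one), which spreads probability mass across more tokens and lets each generated token carry more message bits while the KL constraint bounds the deviation from natural text.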