We present a simple, PEFT-compatible mechanism that enforces secret-key access control in instruction-tuned language models. K-OTG trains on a dual-path corpus: authorized examples (prefixed with a role key) learn the task output, while unauthorized examples learn a visible block token. At inference, a pre-lm_head hook applies an orthonormal transform to the hidden state: with the correct key/role the inverse map restores the model's native basis; otherwise a session-ephemeral scrambler (permutation, sign flips, Householder reflections) makes the logits uninformative and the system short-circuits to BLOCK. Keys are not added as special tokens, and the method composes cleanly with LoRA on 4-bit bases. We evaluate an hour-scale protocol on 1-3B-class instruction models (Llama 3.2, Qwen2.5 1.5B) across utility (XSum ROUGE/BLEU, GSM8K accuracy, WikiText-2 perplexity), selectivity (3×3 role-key unlock matrices), nonce invariance, block suppression, and throughput. Authorized utility remains close to the base model on summarization, with the expected modest perplexity increase from instruction tuning; unauthorized utility collapses (near-zero sequence metrics with exploding perplexity), indicating practical unusability without the key. Unlock matrices are diagonally dominant (high on-target unlock, low cross-unlock), block-token emission on authorized inputs is 0 out of N under robust bad-word lists, and greedy outputs match exactly across nonces, confirming correct inverse cancellation. The Python-level hook incurs a 40% tokens-per-second overhead relative to the base. K-OTG therefore provides a pragmatic, model-agnostic way to prevent unauthorized use while preserving authorized utility.
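To make the mechanism concrete, here is a minimal sketch (not the paper's released code) of the pre-lm_head hook in PyTorch, assuming a Hugging Face-style causal LM with an `lm_head` module. The names `build_scrambler`, `make_pre_lm_head_hook`, the `authorized` flag, and the seed/nonce handling are illustrative assumptions; only the overall scheme (orthonormal inverse map for authorized sessions, session-ephemeral permutation/sign-flip/Householder scramble otherwise) follows the abstract.

```python
# Sketch of a K-OTG-style pre-lm_head hook. Assumptions are marked in comments.
import torch

def build_scrambler(d: int, seed: int) -> torch.Tensor:
    """Session-ephemeral orthonormal map: a permutation, sign flips, and a
    Householder reflection composed into one d x d orthogonal matrix."""
    g = torch.Generator().manual_seed(seed)          # seed stands in for the session nonce
    P = torch.eye(d)[torch.randperm(d, generator=g)] # permutation matrix (orthogonal)
    S = torch.diag(torch.where(torch.rand(d, generator=g) < 0.5, -1.0, 1.0))  # sign flips
    v = torch.randn(d, generator=g)
    v = v / v.norm()
    H = torch.eye(d) - 2.0 * torch.outer(v, v)       # Householder reflection (orthogonal)
    return H @ S @ P                                  # product of orthogonal maps is orthogonal

def make_pre_lm_head_hook(Q_trained: torch.Tensor, authorized: bool, session_seed: int):
    """Q_trained is the orthonormal transform baked in during training (assumed)."""
    Q_scramble = build_scrambler(Q_trained.shape[0], session_seed)  # fresh per session
    def hook(module, args):
        (hidden,) = args
        if authorized:
            # Correct key/role: apply the inverse (transpose) of the trained
            # transform, restoring the model's native basis. Because the
            # ephemeral scramble is never applied, greedy outputs are
            # identical across nonces (the nonce-invariance check).
            Q = Q_trained.T
        else:
            # Wrong or missing key: scramble the hidden state so the logits
            # are uninformative; the serving layer short-circuits to BLOCK.
            Q = Q_scramble
        Q = Q.to(device=hidden.device, dtype=hidden.dtype)
        return (hidden @ Q,)
    return hook

# Usage (hypothetical): attach as a forward pre-hook so it runs before lm_head.
# model.lm_head.register_forward_pre_hook(
#     make_pre_lm_head_hook(Q_trained, authorized=True, session_seed=nonce))
```

Because every factor is orthogonal, the scramble preserves hidden-state norms while destroying the alignment with the lm_head rows, which is consistent with the reported collapse of unauthorized utility; the per-token matrix multiply in a Python-level hook is also a plausible source of the reported throughput overhead.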