Modern LLMs are increasingly accessed through black-box APIs, which require users to transmit sensitive prompts, outputs, and fine-tuning data to external providers and thereby create a critical privacy risk at the API boundary. We introduce AlienLM, a deployable, API-only privacy layer that protects text by translating it into an Alien Language via a vocabulary-scale bijection, enabling lossless recovery on the client side. Using only standard fine-tuning APIs, Alien Adaptation Training (AAT) adapts target models to operate directly on alienized inputs. Across four LLM backbones and seven benchmarks, AlienLM retains over 81\% of plaintext-oracle performance on average, substantially outperforming random-bijection and character-level baselines. Against adversaries with access to model weights, corpus statistics, and learning-based inverse translation, recovery attacks reconstruct fewer than 0.22\% of alienized tokens. Our results demonstrate a practical pathway for privacy-preserving LLM deployment under API-only access, substantially reducing plaintext exposure while maintaining task performance.
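To make the core mechanism concrete, the vocabulary-scale bijection can be sketched as a seeded permutation over token IDs, applied client-side before any API call and inverted on the response. This is a minimal illustration only; the function names, seeding scheme, and toy vocabulary size below are our own assumptions, not details from the paper.

```python
import random

VOCAB_SIZE = 50_000  # illustrative vocabulary size, not from the paper


def build_bijection(seed: int) -> tuple[list[int], list[int]]:
    """Build a forward permutation over token IDs and its inverse.

    The forward map "alienizes" plaintext token IDs; the inverse map
    recovers them losslessly, since a permutation is a bijection.
    """
    rng = random.Random(seed)
    forward = list(range(VOCAB_SIZE))
    rng.shuffle(forward)
    inverse = [0] * VOCAB_SIZE
    for plain_id, alien_id in enumerate(forward):
        inverse[alien_id] = plain_id
    return forward, inverse


def alienize(token_ids: list[int], forward: list[int]) -> list[int]:
    """Map plaintext token IDs into the Alien Language (client side)."""
    return [forward[t] for t in token_ids]


def dealienize(token_ids: list[int], inverse: list[int]) -> list[int]:
    """Recover plaintext token IDs from alienized ones (client side)."""
    return [inverse[t] for t in token_ids]
```

Because the bijection is keyed by a client-held seed and never leaves the client, the provider only ever observes alienized token IDs; round-tripping any ID sequence through `alienize` then `dealienize` returns it exactly.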