Embeddings have become a cornerstone of large language model (LLM) functionality due to their ability to transform text into rich, dense numerical representations that capture semantic and syntactic properties. Vector databases built from these embeddings serve as the long-term memory of LLMs, enabling efficient handling of a wide range of natural language processing tasks. However, the surge in popularity of embedding vector databases has been accompanied by significant concerns about privacy leakage. They are particularly vulnerable to embedding inversion attacks, in which adversaries exploit the embeddings to reverse-engineer and extract sensitive information from the original text. Existing defense mechanisms have shown limitations, often struggling to balance security against downstream-task performance. To address these challenges, we introduce Eguard, a novel defense mechanism designed to mitigate embedding inversion attacks. Eguard employs a transformer-based projection network and text mutual information optimization to safeguard embeddings while preserving the utility of LLMs. Our approach significantly reduces privacy risk, protecting over 95% of tokens from inversion while maintaining downstream-task performance consistent with that of the original embeddings.
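To make the defense's interface concrete: the projection network takes a raw text embedding and returns a protected embedding of the same dimensionality, and only the protected vector is stored in the database. The sketch below is a toy feed-forward stand-in for illustration only; it is not Eguard's actual transformer architecture, and the weights here are random rather than trained with the mutual-information objective described above.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # embedding dimensionality (illustrative choice)

# Hypothetical projection weights: a single feed-forward block
# (linear -> GELU -> linear) standing in for the trained
# transformer-based projection network.
W1 = rng.normal(0.0, 1.0 / np.sqrt(D), (D, D))
W2 = rng.normal(0.0, 1.0 / np.sqrt(D), (D, D))

def gelu(x):
    # Tanh approximation of the GELU nonlinearity.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def protect(embedding):
    """Map a raw embedding to a same-shape protected embedding."""
    hidden = gelu(embedding @ W1)
    return hidden @ W2

raw = rng.normal(size=D)    # a raw text embedding from the LLM encoder
protected = protect(raw)    # what actually gets stored in the vector DB
print(protected.shape)      # (64,)
```

The key design constraint, as the abstract states, is that downstream tasks consume the protected embedding in place of the original, so the projection must preserve task-relevant structure while destroying the information an inversion attacker needs to reconstruct tokens.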