Large Language Models (LLMs) are increasingly deployed in multilingual applications that handle sensitive data, yet their scale and linguistic variability introduce major privacy risks. While privacy leakage has mostly been evaluated for English, this paper investigates how language structure affects leakage in LLMs trained on English, Spanish, French, and Italian medical corpora. We quantify six linguistic indicators and evaluate three attack vectors: extraction, counterfactual memorization, and membership inference. Results show that privacy vulnerability scales with linguistic redundancy and tokenization granularity: Italian exhibits the strongest leakage, while English shows higher membership separability. By contrast, French and Spanish display greater resilience owing to their higher morphological complexity. Overall, our findings provide the first quantitative evidence that language matters for privacy leakage, underscoring the need for language-aware privacy-preserving mechanisms in LLM deployments.