Anonymizing sensitive information in user text is essential for privacy, yet existing methods often apply uniform treatment across attributes, which can conflict with communicative intent and obscure necessary information. This is particularly problematic when personal attributes are integral to expressive or pragmatic goals. The central challenge lies in determining which attributes to protect, and to what extent, while preserving semantic and pragmatic functions. We propose IntentAnony, a utility-preserving anonymization approach that performs intent-conditioned exposure control. IntentAnony models pragmatic intent and constructs privacy inference evidence chains to capture how distributed cues support attribute inference. Conditioned on intent, it assigns each attribute an exposure budget and selectively suppresses non-intent inference pathways while preserving intent-relevant content, semantic structure, affective nuance, and interactional function. We evaluate IntentAnony using privacy inference success rates, text utility metrics, and human evaluation. The results show an approximately 30% improvement in the overall privacy--utility trade-off, with notably stronger usability of anonymized text compared to prior state-of-the-art methods. Our code is available at https://github.com/Nevaeh7/IntentAnony.