Vision Language Models (VLMs) are increasingly integrated into privacy-critical domains, yet existing evaluations of personally identifiable information (PII) leakage largely treat privacy as a static extraction task and ignore how a subject's online presence (the volume of their data available online) influences privacy alignment. We introduce PII-VisBench, a novel benchmark of 4,000 unique probes designed to evaluate VLM safety across the continuum of online presence. The benchmark stratifies 200 subjects into four visibility categories (high, medium, low, and zero) based on the extent and nature of their information available online. We evaluate 18 open-source VLMs (0.3B-32B) on two key metrics: the percentage of PII-probing queries refused (Refusal Rate) and the fraction of non-refusal responses flagged as containing PII (Conditional PII Disclosure Rate). Across models, we observe a consistent pattern: as subject visibility drops, refusals increase and PII disclosures decrease (from 9.10% for high-visibility subjects to 5.34% for low-visibility subjects). Models are thus more likely to disclose PII for high-visibility subjects, and we further find substantial model-family heterogeneity and PII-type disparities. Finally, paraphrasing and jailbreak-style prompts expose attack- and model-dependent failures, motivating visibility-aware safety evaluation and training interventions.
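A minimal formal sketch of the two metrics as described above, where N denotes the total number of PII probes for a given subject group, R the number of refused probes, and D the number of non-refusal responses flagged as containing PII (this notation is our own shorthand, not taken from the benchmark):

```latex
% Sketch of the two evaluation metrics (notation assumed, not from the source):
% N = total PII probes, R = refused probes,
% D = non-refusal responses flagged as containing PII.
\[
\text{Refusal Rate} = \frac{R}{N},
\qquad
\text{Conditional PII Disclosure Rate} = \frac{D}{N - R}.
\]
```

Under this reading, the Conditional PII Disclosure Rate is computed only over responses that were not refused, so a model can simultaneously refuse more often and disclose less among the answers it does give.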