Self-transparency is a critical safety boundary, requiring language models to honestly disclose their limitations and artificial nature. This study stress-tests that boundary, investigating whether models willingly disclose their identity when assigned professional personas that conflict with transparent self-representation. When models prioritize role consistency over disclosure, users may calibrate trust based on overstated competence claims, treating AI-generated guidance as equivalent to licensed professional advice. Using a common-garden experimental design, sixteen open-weight models (4B-671B parameters) were audited under identical conditions across 19,200 trials. Models exhibited sharp domain-specific inconsistency: a Financial Advisor persona elicited 35.2% disclosure at the first prompt, while a Neurosurgeon persona elicited only 3.6%, a 9.7-fold difference that emerged at the initial epistemic inquiry. Disclosure ranged from 2.8% to 73.6% across model families, with a 14B model reaching 61.4% while a 70B model produced just 4.1%. Model identity provided a substantially larger improvement in fit than parameter count (Delta R_adj^2 = 0.375 vs 0.012). Reasoning variants showed heterogeneous effects: some exhibited up to 48.4 percentage points lower disclosure than their base instruction-tuned counterparts, while others maintained high transparency. An additional experiment demonstrated that explicit permission to disclose AI nature increased disclosure from 23.7% to 65.8%, revealing that suppression reflects instruction-following prioritization rather than capability limitations. Bayesian validation confirmed robustness to judge measurement error (kappa = 0.908). Organizations cannot assume safety properties will transfer across deployment domains; deliberate behavior design and empirical verification are required.
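The model-identity vs. parameter-count comparison (Delta R_adj^2 = 0.375 vs 0.012) can be illustrated with a small sketch. The code below uses synthetic, hypothetical data, not the study's trial data: each of 16 models gets its own disclosure propensity that is unrelated to its log parameter count, and we compare the adjusted-R^2 gain from a model-identity predictor (dummy codes) against a model-size predictor in an OLS fit.

```python
import numpy as np

def adj_r2(y, X):
    """Fit OLS with an intercept and return adjusted R^2."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    ss_res = np.sum((y - Z @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    n, p = Z.shape
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

rng = np.random.default_rng(0)
n_models, trials = 16, 50
# Hypothetical data: per-model disclosure propensity, independent of size.
propensity = rng.normal(0.0, 1.0, n_models)
log_params = rng.uniform(np.log(4), np.log(671), n_models)

# One noisy outcome per trial, driven only by model identity.
y = np.repeat(propensity, trials) + rng.normal(0.0, 1.0, n_models * trials)
identity = np.kron(np.eye(n_models)[:, 1:], np.ones((trials, 1)))  # dummy codes
size = np.repeat(log_params, trials)[:, None]

# Intercept-only baseline has adjusted R^2 = 0, so each fit is its own delta.
delta_identity = adj_r2(y, identity)
delta_size = adj_r2(y, size)
```

Under this setup, `delta_identity` is large while `delta_size` stays near zero, mirroring the paper's pattern that which model it is matters far more than how big it is.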