When language models are assigned professional personas, they face a conflict between maintaining the persona and disclosing their AI nature. How models resolve this conflict has practical consequences: a model that constructs detailed narratives of medical training and board certifications presents a surface of professional authority it does not possess. We systematically characterize this behavior using AI identity disclosure as a testbed: when probed about the origins of its expertise, a model can either acknowledge its AI nature or maintain its assigned professional identity. Using a factorial design, we audited sixteen open-weight models across 19,200 trials. Under neutral conditions, models disclosed their AI nature in 99.8–99.9% of interactions; assigning a professional persona reduced disclosure to 36.3% on average. This suppression was highly context-dependent: the same models that maintained a neurosurgeon persona often disclosed under a financial advisor persona, a 9.7-fold difference. Counter to the expectation that greater scale supports broader behavioral generalization, model size explained little of this variation, while model identity explained substantially more (ΔR²_adj = 0.375 vs. 0.012). We hypothesized that instruction-following dynamics contribute to these patterns and probed this directly: varying a single system-prompt statement increased disclosure from 23.7% to 65.8%, while general honesty instructions produced negligible effects. Self-representational behavior does not generalize across professional contexts; instead, models exhibit sharp and sometimes unexpected shifts under minor environmental changes, with training choices appearing to matter more than scale.
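To make the variance-decomposition claim concrete, the following is a minimal sketch of how the ΔR²_adj comparison could be computed: fit a baseline regression of disclosure rate on persona, then measure how much adjusted R² improves when adding model size versus model identity as a predictor. The file name, column names, and the use of OLS on per-condition disclosure rates are illustrative assumptions, not the paper's actual pipeline.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-condition summary: one row per (model, persona) cell with
# its observed disclosure rate and the model's log parameter count.
df = pd.read_csv("disclosure_rates.csv")  # assumed columns: rate, persona, model, log_params

# Baseline: persona effects only.
base = smf.ols("rate ~ C(persona)", data=df).fit()

# Add model size (continuous) vs. model identity (categorical) and compare
# the gain in adjusted R^2 over the baseline.
with_size = smf.ols("rate ~ C(persona) + log_params", data=df).fit()
with_identity = smf.ols("rate ~ C(persona) + C(model)", data=df).fit()

delta_size = with_size.rsquared_adj - base.rsquared_adj        # reported as ~0.012
delta_identity = with_identity.rsquared_adj - base.rsquared_adj  # reported as ~0.375
print(f"dR2_adj size: {delta_size:.3f}  dR2_adj identity: {delta_identity:.3f}")
```

Under this reading, a much larger adjusted-R² gain for model identity than for size would reproduce the reported 0.375 vs. 0.012 pattern: which model it is predicts disclosure far better than how big it is.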