As Large Language Models (LLMs) transition from standalone chat interfaces to foundational reasoning layers in multi-agent systems and recursive evaluation loops (LLM-as-a-judge), detecting durable, provider-level behavioral signatures becomes a critical requirement for safety and governance. Traditional benchmarks measure transient task accuracy but fail to capture stable, latent response policies -- the ``prevailing mindsets'' embedded during training and alignment that outlive individual model versions. This paper introduces an auditing framework grounded in psychometric measurement theory -- specifically, latent trait estimation under ordinal uncertainty -- to quantify these tendencies without relying on ground-truth labels. Using forced-choice ordinal vignettes masked by semantically orthogonal decoys and governed by cryptographic permutation-invariance, the framework audits nine leading models across dimensions including Optimization Bias, Sycophancy, and Status-Quo Legitimization. Mixed Linear Model (MixedLM) and Intraclass Correlation Coefficient (ICC) analyses show that while item-level framing drives high variance, a persistent ``lab signal'' accounts for significant behavioral clustering. These findings demonstrate that in ``locked-in'' provider ecosystems, latent biases are not merely static errors but compounding variables that risk creating recursive ideological echo chambers in multi-layered AI architectures.
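To make the permutation-invariance notion concrete, the following is a minimal sketch of one way such a guarantee could be implemented: a pseudorandom permutation of each vignette's options derived from a cryptographic digest of the item identifier and an audit seed. The function name `permute_options`, the use of SHA-256, and the `audit_seed` parameter are illustrative assumptions, not the paper's specified scheme.

```python
import hashlib
import random

def permute_options(item_id: str, options: list[str], audit_seed: str) -> list[str]:
    """Deterministically shuffle the answer options for one vignette.

    The permutation is derived from a SHA-256 digest of the audit seed
    and item ID, so the ordering is (a) exactly reproducible across runs
    and (b) independent of any semantic property of the options, denying
    the audited model a stable positional cue.
    """
    digest = hashlib.sha256(f"{audit_seed}:{item_id}".encode()).digest()
    rng = random.Random(digest)   # local PRNG seeded from the digest
    permuted = options[:]         # copy; never mutate the caller's list
    rng.shuffle(permuted)
    return permuted

# The same (seed, item) pair always yields the same order.
opts = ["strongly disagree", "disagree", "agree", "strongly agree", "decoy"]
print(permute_options("item-042", opts, audit_seed="run-1"))
print(permute_options("item-042", opts, audit_seed="run-1"))  # identical output
```

Seeding the shuffle from a digest rather than a global random state is what makes the audit permutation-invariant in the relevant sense: any observed preference must survive all option orderings, since the ordering itself carries no information.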
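For the variance-decomposition step, a hedged sketch of how the ``lab signal'' could be quantified is a random-intercept MixedLM grouped by provider, with the ICC computed from the fitted variance components. The column names (`lab`, `item`, `score`), the single-intercept specification, and the treatment of the ordinal score as continuous are simplifying assumptions for illustration, not the paper's exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format audit data: one ordinal response score per
# (provider, item) observation; column names are illustrative.
df = pd.read_csv("audit_responses.csv")  # columns: lab, item, score

# Random-intercept model: score ~ 1 with a per-provider random intercept.
model = smf.mixedlm("score ~ 1", df, groups=df["lab"])
result = model.fit()

# ICC = between-provider variance / total variance. `cov_re` holds the
# random-intercept variance; `scale` is the residual (item-level) variance.
between = result.cov_re.iloc[0, 0]
within = result.scale
icc = between / (between + within)
print(f"provider-level ICC: {icc:.3f}")
```

Under this decomposition, a non-trivial ICC indicates that responses cluster by provider beyond what item-level framing variance explains, which is precisely the persistent ``lab signal'' the abstract describes.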