Existing evaluations of political bias in large language models (LLMs) typically classify outputs as left- or right-leaning. We extend this perspective by examining how ideological tendencies vary across topics and how consistently models maintain their positions, a property we refer to as stability. To capture this dimension, we propose PReSS (Political Response Stability under Stress), a black-box framework that evaluates LLMs by jointly considering model and topic context, categorizing responses into four stance types: stable-left, unstable-left, stable-right, and unstable-right. Applying PReSS to 12 widely used LLMs across 19 political topics reveals substantial variation in stance stability; for instance, a model that is left-leaning overall can exhibit stable-right behavior on certain topics. This underscores the need for fine-grained, topic-aware evaluation of LLMs' political ideologies. Stability also has practical implications for controlled generation and model alignment: interventions such as debiasing or ideology reversal should explicitly account for stance stability. Our empirical analyses show that when models are prompted or fine-tuned to adopt the opposite ideology, unstable topic stances are more likely to change, whereas stable ones resist modification. Treating stability as a moderating factor thus provides a principled foundation for understanding, evaluating, and guiding interventions in politically sensitive model behavior.
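The four-way stance categorization described above can be sketched as a toy scoring rule in Python. The score scale (negative = left-leaning, positive = right-leaning), the dispersion-based stability test, and the threshold value are illustrative assumptions for exposition, not PReSS's actual criteria:

```python
from statistics import mean, stdev

def classify_stance(scores, stability_threshold=0.5):
    """Classify a model's per-topic stance into one of four categories.

    `scores` are stance scores elicited by repeated (stressed) prompts on
    a single topic: negative values indicate left-leaning responses,
    positive values right-leaning ones. Direction is the sign of the mean
    score; a stance counts as stable when its dispersion stays below the
    threshold. (This rule is a hypothetical stand-in for the framework's
    actual criterion.)
    """
    direction = "left" if mean(scores) < 0 else "right"
    stability = "stable" if stdev(scores) < stability_threshold else "unstable"
    return f"{stability}-{direction}"

# A model that is left-leaning overall can still differ per topic:
print(classify_stance([-0.8, -0.7, -0.9]))  # consistently left -> stable-left
print(classify_stance([0.6, 0.7, 0.65]))    # consistently right -> stable-right
print(classify_stance([-0.9, 0.4, -0.1]))   # left on average, volatile -> unstable-left
```

This separation of direction (mean) from stability (dispersion) mirrors why a single left/right label is insufficient: two topics with the same average lean can behave very differently under stress.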