Current research on bias in language models (LMs) focuses predominantly on data quality, with far less attention paid to model architecture and to the temporal provenance of training data. More critically, few studies systematically investigate the origins of bias. We propose a methodology grounded in comparative behavioral theory to interpret the complex interaction between training data and model architecture in bias propagation during language modeling. Building on recent work relating transformers to n-gram LMs, we evaluate how data, model design choices, and temporal dynamics affect bias propagation. Our findings reveal that: (1) n-gram LMs are highly sensitive to context window size in bias propagation, whereas transformers exhibit architectural robustness; (2) the temporal provenance of training data significantly affects bias; and (3) different model architectures respond differently to controlled bias injection, with certain biases (e.g., sexual orientation) disproportionately amplified. As language models become ubiquitous, our findings underscore the need for a holistic approach that traces bias to its origins across both data and model dimensions, rather than treating only its symptoms, in order to mitigate harm.
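As a concrete illustration of finding (1), the minimal sketch below fits add-one-smoothed n-gram LMs of two different orders on a deliberately skewed toy corpus and reports the log-probability gap between minimally different demographic templates. The corpus, templates, and gap-based bias score are illustrative assumptions for exposition only, not the paper's actual data, models, or metric.

```python
# Hypothetical probe: how does an n-gram LM's context window size affect a
# simple template-based bias score? Illustrative only; not the paper's setup.
from collections import Counter
from math import log

def fit_counts(corpus, n):
    """Collect n-gram counts, context counts, and vocabulary size."""
    counts, context_counts, vocab = Counter(), Counter(), set()
    for sent in corpus:
        vocab.update(sent + ["</s>"])
        padded = ["<s>"] * (n - 1) + sent + ["</s>"]
        for i in range(n - 1, len(padded)):
            ctx = tuple(padded[i - n + 1:i])
            counts[ctx + (padded[i],)] += 1
            context_counts[ctx] += 1
    return counts, context_counts, len(vocab)

def sent_logprob(tokens, counts, context_counts, vocab_size, n):
    """Sentence log-probability under an add-one-smoothed n-gram model."""
    padded = ["<s>"] * (n - 1) + tokens + ["</s>"]
    lp = 0.0
    for i in range(n - 1, len(padded)):
        ctx = tuple(padded[i - n + 1:i])
        lp += log((counts[ctx + (padded[i],)] + 1)
                  / (context_counts[ctx] + vocab_size))
    return lp

# Toy corpus with a deliberate occupational skew (an assumed example).
corpus = [s.split() for s in [
    "the male engineer arrived", "the male engineer arrived",
    "the female engineer arrived", "the female teacher arrived",
    "the female teacher arrived", "the male teacher arrived",
]]

for n in (2, 3):  # vary the context window size
    counts, ctx_counts, v = fit_counts(corpus, n)
    gap = (sent_logprob("the male engineer arrived".split(),
                        counts, ctx_counts, v, n)
           - sent_logprob("the female engineer arrived".split(),
                          counts, ctx_counts, v, n))
    print(f"n={n}: log-prob gap (male vs. female + engineer) = {gap:+.3f}")
```

On this toy corpus the gap grows as the order n increases (roughly +0.41 at n=2 versus +0.69 at n=3), because a longer context window lets more of the skewed co-occurrence statistics enter each conditional probability; this is the kind of window-size sensitivity the finding refers to.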