Measurement non-invariance arises when the psychometric properties of a scale differ across subgroups, undermining the validity of group comparisons. At the item level, such non-invariance manifests as differential item functioning (DIF), which occurs when the conditional distribution of an item response differs across groups after controlling for the latent trait. This paper introduces a statistical framework for detecting DIF in ordinal scales without requiring known group labels or anchor items. We propose a hybrid latent-class item response model for ordinal data, using a proportional-odds formulation and assigning individuals probabilistically to latent classes. DIF is captured through class-specific shifts in item intercepts and slopes, allowing for both uniform and non-uniform DIF. DIF effects are identified via an $L_1$-penalised marginal likelihood under a sparsity assumption, and model estimation is carried out with a tailored EM algorithm. Simulation studies demonstrate accurate recovery of item parameters and of both uniform and non-uniform DIF effects. An empirical application to a personality test reveals latent subgroups with distinct response patterns and identifies items that may bias group comparisons. The proposed framework provides a flexible approach to assessing measurement invariance in ordinal scales when comparison groups are unobserved or poorly defined.
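To make the model structure concrete, the following is a minimal sketch of how class-specific DIF shifts enter a proportional-odds (cumulative logit) item response model. All function names and parameter values here are hypothetical illustrations, not the paper's implementation: a reference class has slope `alpha0` and ordered thresholds `betas0`, while a second latent class shifts the thresholds (uniform DIF) and the slope (non-uniform DIF).

```python
import numpy as np

def po_category_probs(theta, alpha, betas):
    """Category probabilities for one ordinal item under a
    proportional-odds IRT formulation:
        P(Y >= k | theta) = logistic(alpha * theta - beta_k),
    with `betas` an increasing vector of K-1 thresholds for K categories."""
    logits = alpha * theta - np.asarray(betas, dtype=float)  # shape (K-1,)
    cum = 1.0 / (1.0 + np.exp(-logits))                      # P(Y >= k), k = 1..K-1
    cum = np.concatenate(([1.0], cum, [0.0]))                # P(Y >= 0) = 1, P(Y >= K) = 0
    return cum[:-1] - cum[1:]                                # P(Y = k), k = 0..K-1

# Hypothetical two-class example for a single 4-category item.
theta = 0.5                                # latent trait value
alpha0, betas0 = 1.2, [-1.0, 0.0, 1.0]     # reference-class parameters
delta_alpha, delta_beta = 0.4, 0.6         # class-specific DIF shifts (sparse in the model)

p_class0 = po_category_probs(theta, alpha0, betas0)
p_class1 = po_category_probs(theta,
                             alpha0 + delta_alpha,            # non-uniform DIF
                             [b + delta_beta for b in betas0])  # uniform DIF

# Both vectors are valid category distributions; their difference is the
# item-level bias the penalised likelihood is designed to detect.
assert np.isclose(p_class0.sum(), 1.0) and np.isclose(p_class1.sum(), 1.0)
```

In the full framework, `delta_alpha` and `delta_beta` would be item- and class-specific parameters shrunk toward zero by the $L_1$ penalty, so that only genuinely DIF-affected items retain nonzero shifts after estimation.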