The ways in which people's opinions change are, without a doubt, subject to a rich tapestry of influences. The factors that shape how one arrives at an opinion reflect how one has been molded by one's environment throughout life: education, material status, the belief systems one subscribes to, and the socio-economic minorities one belongs to. This already complex system is further complicated by the ever-changing nature of one's social network. It is therefore no surprise that many models tend to perform best for the majority of the population while discriminating against members of various marginalized groups. This bias, and the study of how to counter it, are the subject of the rapidly developing field of Fairness in Social Network Analysis (SNA). The focus of this work is to investigate how a state-of-the-art model discriminates against certain minority groups and whether it is possible to reliably predict for whom it will perform worse. Moreover, is such a prediction possible based solely on one's demographic or topological features? To this end, we employ the NetSense dataset together with the state-of-the-art CoDiNG model for opinion prediction. Our work explores how three classifier models (Demography-Based, Topology-Based, and Hybrid) perform when identifying the individuals for whom this algorithm will produce inaccurate predictions. Finally, through a comprehensive analysis of the experimental results, we identify four key patterns of algorithmic bias. Our findings suggest that no single paradigm provides the best results and that there is a real need for context-aware strategies in fairness-oriented social network analysis. We conclude that a multi-faceted approach, incorporating both individual attributes and network structure, is essential for reducing algorithmic bias and promoting inclusive decision-making.
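To make the classifier comparison concrete, the following is a minimal, hypothetical sketch of how such an error-prediction experiment could be set up. The specific features, the binary "mispredicted" label, and the use of scikit-learn's RandomForestClassifier are illustrative assumptions only, not the paper's actual pipeline.

```python
# Hypothetical sketch: compare Demography-Based, Topology-Based, and Hybrid
# classifiers at predicting which nodes an opinion model mispredicts.
# All feature names and the label construction are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500  # number of ego nodes (placeholder)

# Hypothetical per-node feature matrices.
demography = rng.normal(size=(n, 4))  # e.g. age, gender, income bracket, education
topology = rng.normal(size=(n, 3))    # e.g. degree, clustering coefficient, centrality
hybrid = np.hstack([demography, topology])

# Hypothetical binary label: 1 if the opinion model's error for this node
# exceeds some threshold, 0 otherwise (random here as a stand-in).
y = rng.integers(0, 2, size=n)

for name, X in [("Demography-Based", demography),
                ("Topology-Based", topology),
                ("Hybrid", hybrid)]:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```

In this framing, each paradigm differs only in its feature set, so any gap in cross-validated scores can be attributed to what demographic versus topological information reveals about where the opinion model fails.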