Large language models (LLMs) are increasingly deployed in applications with societal impact, raising concerns about the cultural biases they encode. We probe these representations by evaluating whether LLMs can perform author profiling from song lyrics in a zero-shot setting, inferring singers' gender and ethnicity without task-specific fine-tuning. Across several open-source models evaluated on more than 10,000 lyrics, we find that LLMs achieve non-trivial profiling performance but exhibit systematic cultural alignment: most models default to predicting North American ethnicity, while DeepSeek-1.5B aligns more strongly with Asian ethnicity. This finding emerges from both the models' prediction distributions and an analysis of their generated rationales. To quantify these disparities, we introduce two fairness metrics, Modality Accuracy Divergence (MAD) and Recall Divergence (RD), and show that Ministral-8B displays the strongest ethnicity bias among the evaluated models, whereas Gemma-12B shows the most balanced behavior. Our code is available on GitHub (https://github.com/ValentinLafargue/CulturalProbingLLM).
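Since the abstract only names MAD and RD without defining them, the Python sketch below illustrates one plausible reading: MAD as the spread of prediction accuracy across the modalities (categories) of a sensitive attribute, and RD as the spread of per-class recall. The gap-based (max minus min) formulas, function names, and toy labels are all assumptions for illustration, not the definitions used in the paper.

```python
from collections import defaultdict

def modality_accuracy_divergence(y_true, y_pred, groups):
    """Hypothetical MAD: gap between the best- and worst-scoring
    modality (category) of a sensitive attribute, e.g. ethnicity,
    where each modality's score is the model's accuracy on it."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    accs = [correct[g] / total[g] for g in total]
    return max(accs) - min(accs)

def recall_divergence(y_true, y_pred):
    """Hypothetical RD: gap between the highest and lowest per-class
    recall, where recall for class c is the fraction of true-c items
    the model labels as c."""
    hits, support = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        support[t] += 1
        hits[t] += int(t == p)
    recalls = [hits[c] / support[c] for c in support]
    return max(recalls) - min(recalls)

# Toy usage with made-up labels (gender predicted, ethnicity as group):
y_true = ["F", "M", "F", "M", "F", "M"]
y_pred = ["F", "M", "M", "M", "F", "F"]
groups = ["NA", "NA", "Asia", "Asia", "Europe", "Europe"]
print(modality_accuracy_divergence(y_true, y_pred, groups))  # 0.5
print(recall_divergence(y_true, y_pred))                     # 0.0
```

Under this reading, both metrics are zero when every group (or class) is treated equally well, and grow toward one as the disparity between the best- and worst-served group widens; a mean-absolute-deviation variant would be an equally plausible interpretation.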