Research in natural language processing (NLP) for Computational Social Science (CSS) relies heavily on data from social media platforms. These data play a crucial role in developing models that analyse socio-linguistic phenomena within online communities. In this work, we conduct an in-depth examination of 20 datasets widely used in NLP for CSS to assess their data quality. Our analysis reveals that social media datasets exhibit varying levels of data duplication, which in turn gives rise to problems such as label inconsistencies and data leakage that compromise model reliability. Our findings also suggest that data duplication affects current claims of state-of-the-art performance, potentially leading to an overestimation of model effectiveness in real-world scenarios. Finally, we propose new protocols and best practices for developing and using datasets built from social media data.
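To make the failure modes concrete, the sketch below shows one minimal way such an audit could be run on a labelled dataset; it is a generic illustration, not the paper's actual pipeline, and the `normalize` heuristics (stripping URLs and user mentions before comparing) are assumptions chosen for social media text.

```python
# Hypothetical audit sketch: surface duplicates, label inconsistencies,
# and train/test leakage in a labelled social media dataset.
# This is NOT the paper's pipeline; normalization choices are assumptions.
import re
from collections import defaultdict


def normalize(text):
    """Lowercase, strip URLs and @mentions, collapse whitespace."""
    text = re.sub(r"https?://\S+|@\w+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()


def audit(train, test):
    """Each split is a list of (text, label) pairs.

    Returns:
      duplicates:   {normalized text: [(split, label), ...]} with >1 copy
      inconsistent: normalized texts appearing with conflicting labels
      leaked:       normalized texts present in both train and test
    """
    seen = defaultdict(list)
    for split, data in (("train", train), ("test", test)):
        for text, label in data:
            seen[normalize(text)].append((split, label))

    duplicates = {t: v for t, v in seen.items() if len(v) > 1}
    inconsistent = {t for t, v in duplicates.items()
                    if len({label for _, label in v}) > 1}
    leaked = {t for t, v in duplicates.items()
              if {"train", "test"} <= {split for split, _ in v}}
    return duplicates, inconsistent, leaked
```

Exact matching after normalization only catches verbatim reposts; a fuller audit would also flag near-duplicates (e.g. via MinHash or embedding similarity), but the exact case already exposes cross-split leakage that inflates reported test scores.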