Hate speech detection (HSD) in Vietnamese has advanced considerably in recent years, driven largely by transformer-based pre-trained language models, particularly those built on the BERT architecture. However, because each task typically requires its own specialized fine-tuned model, building a multitask HSD system becomes complex and fragmented. Moreover, most current approaches fine-tune general-purpose pre-trained models trained mainly on formal text such as Wikipedia, which may not capture how people actually write on online platforms. In this work, we introduce ViHateT5, a T5-based model pre-trained on VOZ-HSD, our proposed large-scale domain-specific dataset. By exploiting the text-to-text architecture, ViHateT5 handles multiple tasks with a single unified model and achieves state-of-the-art performance on all standard Vietnamese HSD benchmarks. Our experiments also show that the label distribution of the pre-training data significantly affects model performance. We publicly release our experimental materials for research purposes on GitHub, including the VOZ-HSD dataset, the pre-trained checkpoint, the unified multitask ViHateT5 model, and the source code.
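To make the text-to-text multitask framing concrete: a T5-style model serves several HSD tasks through one seq2seq interface by prepending a task prefix to each input and reading the label back as generated text. The sketch below illustrates this pattern with Hugging Face Transformers; the task names, prefixes, and checkpoint identifier are illustrative placeholders, not the paper's actual prompt format or released artifact names.

```python
# Sketch of the text-to-text multitask pattern used by T5-style HSD models.
# Task names and prefixes are illustrative placeholders, not the paper's
# actual prompt format.

TASK_PREFIXES = {
    "hate-speech-detection": "hate-speech-detection: ",
    "hate-spans-detection": "hate-spans-detection: ",
}

def build_input(task: str, text: str) -> str:
    """Prepend a task prefix so a single seq2seq model can serve multiple tasks."""
    if task not in TASK_PREFIXES:
        raise ValueError(f"unknown task: {task}")
    return TASK_PREFIXES[task] + text

def predict(task: str, text: str, checkpoint: str) -> str:
    """Run one task through a T5 checkpoint; the label comes back as plain text."""
    # Imported lazily so the prefix logic above stays dependency-free.
    from transformers import AutoTokenizer, T5ForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = T5ForConditionalGeneration.from_pretrained(checkpoint)
    inputs = tokenizer(build_input(task, text), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Because every task shares the same input/output convention, adding a new task requires only a new prefix and matching training examples, rather than a separate fine-tuned classifier per task.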