The detection of hate speech has become increasingly important in combating online hostility and its real-world consequences. Despite recent advancements, there is limited research addressing hate speech detection in Devanagari-scripted languages, where resources and tools are scarce. While large language models (LLMs) have shown promise in language-related tasks, traditional full fine-tuning is often infeasible given the size of these models. In this paper, we propose a Parameter-Efficient Fine-Tuning (PEFT) based solution for hate speech detection and target identification. We evaluate multiple LLMs on the Devanagari dataset provided by Thapa et al. (2025), which contains annotated instances in two languages: Hindi and Nepali. The results demonstrate the efficacy of our approach in handling Devanagari-scripted content.
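To give a sense of why PEFT makes LLM adaptation tractable, the following is a minimal illustrative sketch (not the paper's implementation) of the parameter-count argument behind LoRA-style PEFT: the base weight matrix is frozen and only a low-rank update is trained. The matrix dimension and rank below are hypothetical, chosen to be LLM-scale.

```python
# Illustrative sketch (not the paper's implementation): LoRA-style PEFT
# freezes the base weight W (d_out x d_in) and trains only a low-rank
# update B @ A with rank r << min(d_out, d_in).

def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters for one LoRA-adapted weight matrix."""
    return d_out * r + r * d_in  # B is d_out x r, A is r x d_in

def full_ft_params(d_out: int, d_in: int) -> int:
    """Trainable parameters under full fine-tuning of the same matrix."""
    return d_out * d_in

# Hypothetical projection matrix at LLM scale (e.g. a 4096 x 4096 weight).
d, r = 4096, 8
full = full_ft_params(d, d)            # 16,777,216 parameters
lora = lora_trainable_params(d, d, r)  # 65,536 parameters
print(f"full: {full:,}  lora: {lora:,}  ratio: {full / lora:.0f}x")
```

With rank 8, the adapted matrix trains roughly 0.4% of the parameters that full fine-tuning would, which is why PEFT remains feasible where full fine-tuning is not.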