With the emergence of numerous large language models (LLMs), the use of such models in various natural language processing (NLP) applications is increasing rapidly. Counterspeech generation is one such key task, where efforts have been made to develop generative models by fine-tuning LLMs on hate-speech/counterspeech pairs; however, none of these attempts explores the intrinsic properties of large language models in zero-shot settings. In this work, we present the first comprehensive analysis of the performance of four LLMs, namely GPT-2, DialoGPT, ChatGPT and FlanT5, for zero-shot counterspeech generation. For GPT-2 and DialoGPT, we further investigate how performance varies with model size (small, medium, large). In addition, we propose three different prompting strategies for generating different types of counterspeech and analyse the impact of these strategies on model performance. Our analysis shows that generation quality improves on two datasets (by 17%), but toxicity also increases (by 25%) as model size grows. In terms of model type, GPT-2 and FlanT5 are significantly better in counterspeech quality but also exhibit higher toxicity compared to DialoGPT. ChatGPT is much better at generating counterspeech than the other models across all metrics. In terms of prompting, we find that our proposed strategies help improve counterspeech generation across all the models.
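To make the zero-shot setup concrete, the sketch below shows how type-conditioned prompts for counterspeech generation might be assembled before being passed to a model for completion. The template wording, strategy names, and counterspeech types here are illustrative assumptions, not the paper's actual prompts.

```python
# Minimal sketch of zero-shot prompt construction for counterspeech
# generation. Templates, strategy names, and type labels are
# hypothetical placeholders, not the prompts used in the paper.

PROMPT_TEMPLATES = {
    # Plain instruction: no constraint on the kind of counterspeech.
    "plain": (
        "Generate a polite counterspeech response to the hate speech below.\n"
        "Hate speech: {hate_speech}\n"
        "Counterspeech:"
    ),
    # Type-controlled instruction: request a specific counterspeech type
    # (e.g. empathy, humour, warning of consequences).
    "typed": (
        "Generate a counterspeech of type '{cs_type}' in response to the "
        "hate speech below.\n"
        "Hate speech: {hate_speech}\n"
        "Counterspeech:"
    ),
}

def build_prompt(strategy: str, hate_speech: str, cs_type: str = "empathy") -> str:
    """Fill the chosen template; an LLM then completes it zero-shot,
    with no fine-tuning on hate-speech/counterspeech pairs."""
    template = PROMPT_TEMPLATES[strategy]
    return template.format(hate_speech=hate_speech, cs_type=cs_type)

if __name__ == "__main__":
    prompt = build_prompt("typed", "Group X does not belong here.", "empathy")
    print(prompt)
```

In a zero-shot setting the resulting string would be fed directly to each model (GPT-2, DialoGPT, FlanT5, or ChatGPT) and the completion taken as the generated counterspeech.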