The potential effectiveness of counterspeech as a hate speech mitigation strategy is attracting increasing interest in the NLG research community, particularly for the task of producing it automatically. However, automatically generated responses often lack the argumentative richness that characterises expert-produced counterspeech. In this work, we focus on two aspects of counterspeech generation to produce more cogent responses. First, by investigating the tension between the helpfulness and harmlessness of LLMs, we test whether the presence of safety guardrails hinders the quality of the generations. Second, we assess whether attacking a specific component of the hate speech constitutes a more effective argumentative strategy for fighting online hate. Through an extensive human and automatic evaluation, we show that the presence of safety guardrails can be detrimental even to a task that inherently aims at fostering positive social interactions. Moreover, our results show that attacking a specific component of the hate speech, in particular its implicit negative stereotype and its hateful parts, leads to higher-quality generations.