Adversarial examples pose a significant challenge to deep neural networks (DNNs) in both the image and text domains: inputs are carefully altered with the intent of degrading model performance. Adversarial texts, however, differ from adversarial images because they must preserve semantic similarity and because text is inherently discrete. This study examines human suspiciousness, a quality distinct from the imperceptibility traditionally studied for image-based adversarial examples. Whereas adversarial changes to images are meant to be indistinguishable to the human eye, adversarial text must often remain undetected, or at least non-suspicious, to human readers, even when its purpose is to deceive NLP systems or bypass filters. In this work, we expand the study of human suspiciousness by analyzing how people perceive adversarial texts. We collect and publish a novel dataset of Likert-scale human ratings of the suspiciousness of adversarial sentences generated by four widely used adversarial attack methods, and we assess how these ratings correlate with the human ability to detect machine-generated alterations. We further develop a regression-based model to quantify suspiciousness, establishing a baseline for future research on reducing suspiciousness in adversarial text generation. We also demonstrate that the regressor's suspiciousness scores can be incorporated into adversarial generation methods to produce texts that are less likely to be perceived as computer-generated. We make our human-annotated suspiciousness data and our code available.
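The abstract describes using a suspiciousness regressor to steer adversarial text generation only at a high level. As a minimal sketch of the general idea, and not the authors' implementation, the snippet below trains a toy regressor on Likert-style ratings and uses its predicted score to filter candidate perturbations; the feature extractor, training data, and threshold are all illustrative placeholders.

```python
# Sketch (assumptions throughout): a suspiciousness regressor used as a
# filter inside an adversarial attack's candidate-selection step.
import numpy as np
from sklearn.linear_model import Ridge


def extract_features(sentence: str) -> np.ndarray:
    """Toy sentence features (word count, mean word length, punctuation count).
    A real regressor would use richer signals, e.g. LM perplexity or edit rate."""
    words = sentence.split()
    return np.array([
        float(len(words)),
        float(np.mean([len(w) for w in words])) if words else 0.0,
        float(sum(ch in ".,;:!?" for ch in sentence)),
    ])


# Hypothetical training pairs: sentences with human Likert ratings
# (1 = reads naturally, 5 = clearly machine-altered).
train_sentences = ["the movie was great", "the movi was grandiose fine"]
train_ratings = [1.0, 4.0]

suspiciousness_model = Ridge().fit(
    np.stack([extract_features(s) for s in train_sentences]), train_ratings
)


def suspiciousness(sentence: str) -> float:
    """Predicted human-suspiciousness score for one sentence."""
    return float(suspiciousness_model.predict(extract_features(sentence)[None, :])[0])


def filter_candidates(candidates: list[str], threshold: float = 2.5) -> list[str]:
    """Keep only perturbed sentences the regressor rates below the threshold,
    biasing the attack toward edits humans are unlikely to flag."""
    return [c for c in candidates if suspiciousness(c) < threshold]


if __name__ == "__main__":
    candidates = ["the film was great", "the movi was grandiose fine"]
    print(filter_candidates(candidates))
```

In an actual attack loop, such a filter would sit alongside the usual semantic-similarity and label-flip constraints, rejecting candidate substitutions whose predicted suspiciousness exceeds the threshold.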