We evaluate the robustness of several large language models on multiple datasets. Robustness here refers to the relative insensitivity of a model's answers to meaning-preserving variants of its input. Benchmark datasets are constructed by introducing naturally occurring, non-malicious perturbations, or by generating semantically equivalent paraphrases of input questions or statements. We further propose a novel metric for assessing a model's robustness, and demonstrate its benefits in the non-adversarial setting through an empirical evaluation of several models on the created datasets.
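The notion of robustness described above, answer stability under meaning-preserving input variants, can be sketched as a simple consistency rate. This is an illustrative measure only, not the metric proposed in the paper; the function name and answer-normalization scheme are assumptions for the sketch.

```python
def consistency_rate(answers, variant_answers):
    """Fraction of meaning-preserving variants whose answer matches the original.

    Illustrative consistency measure (not the paper's proposed metric).
    answers: list of model answers to the original questions.
    variant_answers: variant_answers[i] holds the model's answers to
    paraphrases/perturbations of question i.
    """
    matches, total = 0, 0
    for orig, variants in zip(answers, variant_answers):
        for v in variants:
            # Normalize by case and surrounding whitespace before comparing;
            # a real metric would need task-appropriate answer matching.
            matches += int(v.strip().lower() == orig.strip().lower())
            total += 1
    return matches / total if total else 1.0
```

A perfectly robust model answers every variant identically to the original, giving a rate of 1.0; lower values indicate sensitivity to the perturbations.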