In NLP, Zero-Shot Classification (ZSC) has become essential for enabling models to classify text into categories unseen during training, particularly in low-resource languages and domains where labeled data is scarce. While pretrained language models (PLMs) have shown promise in ZSC, they often rely on large training datasets or external knowledge, limiting their applicability in multilingual and low-resource scenarios. Recent approaches leveraging natural language prompts reduce the dependence on large training datasets but struggle to effectively incorporate available labeled data from related classification tasks, especially when these datasets originate from different languages or distributions. Moreover, existing prompt-based methods typically rely on manually crafted prompts in a specific language, limiting their adaptability and effectiveness in cross-lingual settings. To address these challenges, we introduce RoSPrompt, a lightweight and data-efficient approach for training soft prompts that enhance cross-lingual ZSC while ensuring robust generalization across data distribution shifts. RoSPrompt is designed for small multilingual PLMs, enabling them to leverage high-resource languages to improve performance in low-resource settings without requiring extensive fine-tuning or high computational costs. We evaluate our approach on multiple multilingual PLMs across datasets covering 106 languages, demonstrating strong cross-lingual transfer performance and robust generalization to unseen classes.
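To make the general idea concrete, the following is a minimal sketch of soft prompt tuning on a frozen small multilingual PLM for prompt-based classification, written in Python with PyTorch and Hugging Face Transformers. The model name (`xlm-roberta-base`), prompt length, verbalizer, and one-step training loop are illustrative assumptions; this is not the exact RoSPrompt training procedure described in the paper.

```python
# Minimal sketch: soft prompt tuning on a frozen multilingual PLM (assumption:
# XLM-RoBERTa base via Hugging Face Transformers). Only the soft prompt is trained.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "xlm-roberta-base"   # small multilingual PLM (illustrative choice)
PROMPT_LEN = 20                   # number of trainable soft-prompt tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
plm = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
for p in plm.parameters():        # the PLM stays frozen; no fine-tuning of its weights
    p.requires_grad = False

embed = plm.get_input_embeddings()

# Trainable soft prompt, initialized from embeddings of random vocabulary tokens.
init_ids = torch.randint(0, tokenizer.vocab_size, (PROMPT_LEN,))
soft_prompt = nn.Parameter(embed(init_ids).detach().clone())

def forward_with_prompt(texts):
    """Prepend the soft prompt to each input and return mask-token logits."""
    enc = tokenizer(
        [f"{t} {tokenizer.mask_token}." for t in texts],
        return_tensors="pt", padding=True, truncation=True, max_length=128,
    )
    tok_emb = embed(enc["input_ids"])                          # (B, T, H)
    prompt = soft_prompt.unsqueeze(0).expand(tok_emb.size(0), -1, -1)
    inputs_embeds = torch.cat([prompt, tok_emb], dim=1)        # (B, P+T, H)
    attn = torch.cat(
        [torch.ones(tok_emb.size(0), PROMPT_LEN, dtype=enc["attention_mask"].dtype),
         enc["attention_mask"]], dim=1)
    out = plm(inputs_embeds=inputs_embeds, attention_mask=attn)
    # Mask positions are shifted right by the prompt length.
    mask_pos = (enc["input_ids"] == tokenizer.mask_token_id).nonzero()[:, 1] + PROMPT_LEN
    return out.logits[torch.arange(tok_emb.size(0)), mask_pos]  # (B, vocab)

# Hypothetical verbalizer: class names mapped to single label words.
label_words = {"sports": "sports", "politics": "politics"}
label_ids = torch.tensor(
    [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(w)[0]) for w in label_words.values()])

optimizer = torch.optim.AdamW([soft_prompt], lr=5e-3)
batch_texts = ["The team won the championship final."]
batch_labels = torch.tensor([0])                               # index into label_words

optimizer.zero_grad()
logits = forward_with_prompt(batch_texts)[:, label_ids]        # restrict to label words
loss = nn.functional.cross_entropy(logits, batch_labels)
loss.backward()
optimizer.step()
```

In this sketch only the soft prompt (PROMPT_LEN by hidden-size parameters) is updated, which is what keeps such an approach lightweight: the frozen PLM supplies multilingual representations, while the prompt adapts the mask-filling behavior to the classification task and can be reused across languages at inference time.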