Semantic text embedding is essential to many tasks in Natural Language Processing (NLP). While black-box models are capable of generating high-quality embeddings, their lack of interpretability limits their use in tasks that demand transparency. Recent approaches have improved interpretability by leveraging domain-expert-crafted or LLM-generated questions, but these methods rely heavily on expert input or well-designed prompts, which restricts their generalizability and their ability to generate discriminative questions across a wide range of tasks. To address these challenges, we introduce \algo{CQG-MBQA} (Contrastive Question Generation - Multi-task Binary Question Answering), a general framework for producing interpretable semantic text embeddings across diverse tasks. Our framework systematically generates highly discriminative, low-cognitive-load yes/no questions through the \algo{CQG} method and answers them efficiently with the \algo{MBQA} model, yielding interpretable embeddings in a cost-effective manner. We validate the effectiveness and interpretability of \algo{CQG-MBQA} through extensive experiments and ablation studies, demonstrating that it delivers embedding quality comparable to many advanced black-box models while remaining inherently interpretable. Additionally, \algo{CQG-MBQA} outperforms other interpretable text embedding methods across various downstream tasks.