The meaning conveyed by a sentence often depends on the context in which it appears. Despite the progress of sentence embedding methods, it remains unclear how to best modify a sentence embedding conditioned on its context. To address this problem, we propose Condition-Aware Sentence Embeddings (CASE), an efficient and accurate method to create an embedding for a sentence under a given condition. First, CASE creates an embedding for the condition using a Large Language Model (LLM) encoder, where the sentence influences the attention scores computed for the tokens in the condition during pooling. Next, a supervised projection is learned to align the LLM-based text embeddings with the Conditional Semantic Textual Similarity (C-STS) task. We find that subtracting the condition embedding consistently improves the C-STS performance of LLM-based text embeddings by improving the isotropy of the embedding space. Moreover, our supervised projection method significantly improves the performance of LLM-based embeddings while requiring only a small number of embedding dimensions.
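The condition-subtraction idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the random vectors stand in for LLM-produced sentence and condition embeddings, and `case_embedding` is a hypothetical helper showing only the subtraction step before similarity scoring.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def case_embedding(sentence_emb, condition_emb):
    # Subtract the condition embedding from the sentence embedding,
    # removing the shared condition direction (a simplification of CASE).
    return sentence_emb - condition_emb

rng = np.random.default_rng(0)
d = 8  # toy embedding dimensionality

# Stand-in embeddings: both sentences share a common condition component,
# which inflates their raw similarity.
cond = rng.normal(size=d)
s1 = rng.normal(size=d) + cond
s2 = rng.normal(size=d) + cond

sim_raw = cosine(s1, s2)
sim_case = cosine(case_embedding(s1, cond), case_embedding(s2, cond))
```

Because the shared condition direction is removed before scoring, the conditioned similarity reflects how the sentences differ with respect to the condition rather than their overall overlap.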