We propose a method for unsupervised abstractive opinion summarization that combines the attributability and scalability of extractive approaches with the coherence and fluency of Large Language Models (LLMs). Our method, HIRO, learns an index structure that maps sentences to a path through a semantically organized discrete hierarchy. At inference time, we populate the index and use it to identify and retrieve clusters of sentences containing popular opinions from input reviews. Then, we use a pretrained LLM to generate a readable summary that is grounded in these extracted evidential clusters. The modularity of our approach allows us to evaluate its efficacy at each stage. We show that HIRO learns an encoding space that is more semantically structured than prior work, and generates summaries that are more representative of the opinions in the input reviews. Human evaluation confirms that HIRO generates significantly more coherent, detailed, and accurate summaries.