Large Language Models (LLMs) have shown remarkable language capabilities, fueling attempts to integrate them into applications across a wide range of domains. An important application area is question answering over private enterprise documents, where the main considerations are data security, which necessitates applications that can be deployed on-premises; limited computational resources; and the need for a robust application that correctly responds to queries. Retrieval-Augmented Generation (RAG) has emerged as the most prominent framework for building LLM-based applications. While building a RAG system is relatively straightforward, making it robust and reliable requires extensive customization and relatively deep knowledge of the application domain. We share our experiences building and deploying an LLM application for question answering over private organizational documents. Our application combines RAG with a finetuned open-source LLM. Additionally, our system, which we call Tree-RAG (T-RAG), uses a tree structure to represent entity hierarchies within the organization. This structure is used to generate a textual description that augments the context when responding to user queries about entities within the organization's hierarchy. Our evaluations, including a Needle in a Haystack test, show that this combination performs better than a simple RAG or finetuning implementation. Finally, we share some lessons learned from building an LLM application for real-world use.
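The core T-RAG mechanism described above can be sketched minimally: store the organization's entity hierarchy as a tree, and when a user query mentions an entity, generate a textual description of that entity's position in the hierarchy to augment the retrieved context. The names below (`Node`, `describe_entity`, the example org chart) are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
class Node:
    """A node in the organization's entity hierarchy tree (illustrative)."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)


def describe_entity(node):
    """Generate a textual description of an entity's chain of ancestors,
    suitable for appending to the context passed to the LLM."""
    ancestors = []
    cur = node
    while cur.parent is not None:
        ancestors.append(cur.parent.name)
        cur = cur.parent
    if not ancestors:
        return f"{node.name} is the root of the organization."
    return f"{node.name} is part of " + ", which is part of ".join(ancestors) + "."


# Illustrative hierarchy, not from the paper
org = Node("Organization")
dept = Node("Finance Department", org)
team = Node("Audit Team", dept)

print(describe_entity(team))
# Audit Team is part of Finance Department, which is part of Organization.
```

In a full system, such descriptions would be concatenated with the document chunks returned by the retriever before prompting the finetuned LLM.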