Retrieval-Augmented Generation (RAG) has enhanced the ability of Large Language Models (LLMs) to work with external data, yet significant challenges remain in real-world scenarios. In areas such as academic literature and finance question answering, data often appear as raw text and tables in HTML or PDF formats, which can be lengthy and highly unstructured. In this paper, we introduce a benchmark suite, namely Unstructured Document Analysis (UDA), that comprises 2,965 real-world documents and 29,590 expert-annotated Q&A pairs. We revisit popular LLM- and RAG-based solutions for document analysis and evaluate their design choices and answer quality across multiple document domains and diverse query types. Our evaluation yields interesting findings and highlights the importance of data parsing and retrieval. We hope our benchmark can shed light on, and better serve, real-world document analysis applications. The benchmark suite and code can be found at https://github.com/qinchuanhui/UDA-Benchmark.