Analyzing unstructured data has been a persistent challenge in data processing. Large Language Models (LLMs) have shown promise in this regard, leading to recent proposals for declarative frameworks for LLM-powered processing of unstructured data. However, these frameworks focus on reducing cost when executing user-specified operations using LLMs, rather than improving accuracy, executing most operations as-is (in a single LLM call). This is problematic for complex tasks and data, where LLM outputs for user-defined operations are often inaccurate, even with optimized prompts. For example, an LLM may struggle to identify {\em all} instances of specific clauses, like force majeure or indemnification, in lengthy legal documents, requiring decomposition of the data, the task, or both. We present DocETL, a system that optimizes complex document processing pipelines while accounting for LLM shortcomings. DocETL offers a declarative interface for users to define such pipelines and uses an agent-based approach to automatically optimize them, leveraging novel agent-based rewrites (that we call rewrite directives), as well as an optimization and evaluation framework. We introduce (i) logical rewriting of pipelines, tailored for LLM-based tasks, (ii) an agent-guided plan evaluation mechanism that synthesizes and orchestrates task-specific validation prompts, and (iii) an optimization algorithm that efficiently finds promising plans, considering the latencies of agent-based plan generation and evaluation. Our evaluation on four different unstructured document analysis tasks demonstrates that DocETL finds plans with outputs that are 25 to 80% more accurate than well-engineered baselines, addressing a critical gap in unstructured data analysis. DocETL is open-source at docetl.org, and as of March 2025, has amassed over 1.7k GitHub stars, with users spanning a variety of domains.
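The decomposition idea in the abstract's legal-clause example can be sketched as follows. This is a minimal illustration of the rewrite pattern, not DocETL's actual API: a single map operation over a long document is rewritten into a split, a per-chunk map, and a reduce that unions results. The names `split_into_chunks`, `extract_clauses`, and `map_then_reduce` are hypothetical, and the LLM call is stubbed with a keyword match for demonstration.

```python
def split_into_chunks(doc: str, chunk_size: int) -> list[str]:
    """Split a long document into fixed-size character chunks."""
    return [doc[i:i + chunk_size] for i in range(0, len(doc), chunk_size)]


def extract_clauses(chunk: str, clause_types: list[str]) -> list[str]:
    """Stand-in for an LLM extraction call; here a simple keyword match."""
    return [c for c in clause_types if c in chunk]


def map_then_reduce(doc: str, clause_types: list[str],
                    chunk_size: int = 100) -> set[str]:
    """Decomposed plan: map extraction over chunks, reduce by set union.

    A real system would also handle clauses spanning chunk boundaries
    (e.g. via overlapping chunks), which this sketch omits.
    """
    found: set[str] = set()
    for chunk in split_into_chunks(doc, chunk_size):
        found.update(extract_clauses(chunk, clause_types))
    return found
```

For instance, a 200-character document whose first and second halves each contain one clause type yields both clauses under `chunk_size=100`, whereas a single pass over an arbitrarily long document gives the (stubbed) extractor no such locality.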