AI agents are changing the requirements for document parsing. What matters is \emph{semantic correctness}: parsed output must preserve the structure and meaning needed for autonomous decisions, including correct table structure, precise chart data, semantically meaningful formatting, and visual grounding. Existing benchmarks do not fully capture this setting for enterprise automation: they cover narrow document distributions and rely on text-similarity metrics that miss agent-critical failures. We introduce \textbf{ParseBench}, a benchmark of ${\sim}2{,}000$ human-verified pages from enterprise documents spanning insurance, finance, and government, organized around five capability dimensions: tables, charts, content faithfulness, semantic formatting, and visual grounding. Across 14 methods spanning vision-language models, specialized document parsers, and LlamaParse, the benchmark reveals a fragmented capability landscape: no method is consistently strong across all five dimensions. LlamaParse Agentic achieves the highest overall score at \agenticoverall\%, yet substantial capability gaps remain across current systems. Dataset and evaluation code are available on \href{https://huggingface.co/datasets/llamaindex/ParseBench}{HuggingFace} and \href{https://github.com/run-llama/ParseBench}{GitHub}.