Text-to-SQL and Big Data are both extensively benchmarked fields, yet little research evaluates them jointly. In the real world, Text-to-SQL systems are often embedded in Big Data workflows, such as large-scale data processing or interactive data analytics; we refer to this setting as "Text-to-Big SQL". However, existing Text-to-SQL benchmarks remain narrowly scoped and overlook the cost and performance implications that arise at scale. For instance, translation errors that are minor on small datasets lead to substantial cost and latency overheads as data grows, an issue entirely ignored by Text-to-SQL metrics. In this paper, we address this overlooked challenge by introducing novel and representative metrics for evaluating Text-to-Big SQL. Our study focuses on production-level LLM agents, database-agnostic systems adaptable to diverse user needs. Through an extensive evaluation of frontier models, we show that Text-to-SQL metrics are insufficient for Big Data. In contrast, our proposed Text-to-Big SQL metrics accurately reflect execution efficiency, cost, and the impact of data scale. Furthermore, we provide LLM-specific insights, including fine-grained, cross-model comparisons of latency and cost.