Several data warehouse and database providers have recently introduced extensions to SQL called AI queries, which let users specify functions and conditions in SQL that are evaluated by LLMs, significantly broadening the kinds of queries one can express over a combination of structured and unstructured data. LLMs offer remarkable semantic reasoning capabilities, making them an essential tool for complex and nuanced queries that blend structured and unstructured data. While extremely powerful, these AI queries can become prohibitively costly when invoked thousands of times. This paper provides an extensive evaluation of a recent AI query approximation approach that enables low-cost analytics and database applications to benefit from AI queries. The approach delivers a >100x cost and latency reduction for the semantic filter (AI.IF) operator, as well as substantial gains for semantic ranking (AI.RANK). These gains come from using cheap yet accurate proxy models over embedding vectors. We show that despite the large reductions in latency and cost, the proxy models preserve, and occasionally improve, accuracy across a variety of benchmark datasets, including the extended Amazon reviews benchmark with 10M rows. We present an OLAP-friendly architecture within Google \textit{BigQuery} for purely online (ad hoc) queries, and a low-latency, HTAP-friendly architecture in \textit{AlloyDB} that could further improve latency by moving proxy model training offline. We also present techniques that accelerate proxy model training.
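To make the proxy-model idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): a cheap logistic-regression proxy is trained on precomputed embedding vectors against labels from an expensive filter (standing in for an LLM evaluating AI.IF), and only rows on which the proxy is uncertain fall back to the expensive model. All names, thresholds, and the synthetic "LLM" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_llm_filter(emb):
    # Stand-in for an expensive LLM call; here the "true" answer is a
    # purely synthetic function of the first embedding dimension.
    return emb[:, 0] > 0.0

# Precomputed embeddings for 10,000 rows, 64 dimensions each.
emb = rng.normal(size=(10_000, 64))

# Label a small sample with the expensive filter and fit a
# logistic-regression proxy by plain gradient descent.
sample = rng.choice(len(emb), size=500, replace=False)
X, y = emb[sample], expensive_llm_filter(emb[sample]).astype(float)

w, b = np.zeros(emb.shape[1]), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    g = p - y                                 # gradient of log-loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# Score every row with the proxy; keep confident predictions as-is and
# defer only the uncertain band to the expensive filter (a cascade).
p_all = 1.0 / (1.0 + np.exp(-(emb @ w + b)))
confident = (p_all < 0.1) | (p_all > 0.9)
pred = p_all > 0.5
pred[~confident] = expensive_llm_filter(emb[~confident])

llm_calls_saved = confident.mean()
accuracy = (pred == expensive_llm_filter(emb)).mean()
print(f"LLM calls avoided: {llm_calls_saved:.0%}, accuracy: {accuracy:.3f}")
```

Because the uncertain band is routed back to the expensive filter, accuracy stays high while the vast majority of rows never trigger an LLM call, which is the source of the cost and latency reduction described above.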