Several data warehouse and database providers have recently introduced extensions to SQL called AI Queries, enabling users to specify functions and conditions in SQL that are evaluated by LLMs, thereby significantly broadening the kinds of queries one can express over the combination of structured and unstructured data. LLMs offer remarkable semantic reasoning capabilities, making them an essential tool for complex and nuanced queries that blend structured and unstructured data. While extremely powerful, these AI queries can become prohibitively costly when invoked thousands of times. This paper provides an extensive evaluation of a recent AI query approximation approach that enables analytics and database applications to benefit from AI queries at low cost. The approach delivers >100x cost and latency reduction for the semantic filter ($AI.IF$) operator and substantial gains for semantic ranking ($AI.RANK$). The cost and performance gains come from utilizing cheap and accurate proxy models over embedding vectors. We show that despite the massive gains in latency and cost, these proxy models preserve, and occasionally improve, accuracy across various benchmark datasets, including the extended Amazon reviews benchmark with 10M rows. We present an OLAP-friendly architecture within Google BigQuery for this approach for purely online (ad hoc) queries, and a low-latency, HTAP-database-friendly architecture in AlloyDB that could further improve latency by moving the proxy model training offline. Finally, we present techniques that accelerate proxy model training.
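To make the proxy-model idea concrete, the following is a minimal sketch of how a cheap classifier over embedding vectors could stand in for an expensive LLM predicate such as $AI.IF$. All names, the synthetic data, and the stand-in "LLM" below are illustrative assumptions, not the paper's actual implementation: a small sample of rows is labeled by the expensive model, a lightweight classifier is trained on their precomputed embeddings, and the classifier then filters the remaining rows.

```python
# Hypothetical sketch of approximating a semantic filter (AI.IF) with a proxy
# model over embeddings. Assumptions: embeddings are precomputed per row, and
# llm_filter() stands in for the costly LLM predicate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for precomputed text embeddings (e.g. of review text): 10k rows, 64 dims.
embeddings = rng.normal(size=(10_000, 64))

def llm_filter(vecs):
    """Stand-in for the expensive LLM predicate; here a fixed linear rule on
    the embedding so the example is self-contained and deterministic."""
    w = np.ones(vecs.shape[1])
    return (vecs @ w > 0).astype(int)

# 1. Invoke the "LLM" only on a small labeled sample (500 of 10,000 rows).
sample_idx = rng.choice(len(embeddings), size=500, replace=False)
labels = llm_filter(embeddings[sample_idx])

# 2. Train the cheap proxy model on the sampled embeddings.
proxy = LogisticRegression(max_iter=1000).fit(embeddings[sample_idx], labels)

# 3. Apply the proxy to all rows instead of the LLM. (A production system
#    might additionally fall back to the LLM near the decision boundary.)
proxy_pass = proxy.predict(embeddings)
agreement = (proxy_pass == llm_filter(embeddings)).mean()
print(f"proxy/LLM agreement: {agreement:.2%}")
```

The ~500 LLM calls here replace 10,000, which is the source of the cost reduction; the open question the paper evaluates is whether such proxies preserve accuracy, which this toy setup only gestures at.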