Several data warehouse and database providers have recently introduced extensions to SQL called AI Queries, enabling users to specify functions and conditions in SQL that are evaluated by LLMs, thereby significantly broadening the kinds of queries one can express over the combination of structured and unstructured data. LLMs offer remarkable semantic reasoning capabilities, making them an essential tool for complex and nuanced queries that blend structured and unstructured data. While extremely powerful, these AI queries can become prohibitively costly when invoked thousands of times. This paper provides an extensive evaluation of a recent AI query approximation approach that enables low-cost analytics and database applications to benefit from AI queries. The approach delivers >100x cost and latency reduction for the semantic filter operator, as well as substantial gains for semantic ranking. The cost and performance gains come from utilizing cheap and accurate proxy models over embedding vectors. We show that despite the massive gains in latency and cost, these proxy models preserve, and occasionally improve, accuracy across various benchmark datasets, including the extended Amazon reviews benchmark with 10M rows. We present an OLAP-friendly architecture within Google BigQuery for this approach for purely online (ad hoc) queries, and a low-latency HTAP database-friendly architecture in AlloyDB that could further improve latency by moving the proxy model training offline. Finally, we present techniques that accelerate proxy model training.
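To make the proxy-model idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): an LLM labels only a small sample of rows for a semantic filter, and a cheap nearest-centroid classifier over the rows' embedding vectors then approximates the filter for the remaining rows. All names and data here are illustrative assumptions; the synthetic embeddings and the oracle labeling rule stand in for real embeddings and real LLM judgments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a table of embedding vectors (assumption: real
# systems would use embeddings precomputed from the unstructured column).
dim = 32
embeddings = rng.normal(size=(1000, dim))

# Hypothetical "LLM oracle": rows whose embedding aligns with some latent
# semantic direction pass the filter. In practice these labels would come
# from actual (expensive) LLM calls.
latent_direction = rng.normal(size=dim)
llm_labels = (embeddings @ latent_direction > 0).astype(int)

# Train a cheap proxy using LLM labels on only a small sample of rows.
sample = slice(0, 100)
pos_centroid = embeddings[sample][llm_labels[sample] == 1].mean(axis=0)
neg_centroid = embeddings[sample][llm_labels[sample] == 0].mean(axis=0)

def proxy_filter(vecs):
    # Classify each row by which centroid it scores higher against;
    # one dot product per class replaces one LLM call per row.
    return ((vecs @ pos_centroid) > (vecs @ neg_centroid)).astype(int)

# Apply the proxy to the full table and measure agreement with the oracle.
preds = proxy_filter(embeddings)
accuracy = (preds == llm_labels).mean()
print(f"proxy agreement with LLM labels: {accuracy:.2%}")
```

The cost saving in this sketch comes from labeling 100 rows with the expensive model and scoring the other 900 with two dot products each; production systems would use a stronger proxy (e.g., a small logistic model over the embeddings) and calibrated thresholds, but the division of labor is the same.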