We introduce {\lambda}-Tune, a framework that leverages Large Language Models (LLMs) for automated database system tuning. The design of {\lambda}-Tune is motivated by the capabilities of the latest generation of LLMs. Unlike prior work, which uses LLMs to extract tuning hints for single parameters, {\lambda}-Tune generates entire configuration scripts from a large input document describing the tuning context. {\lambda}-Tune generates alternative configurations and uses a principled approach to identify the best configuration out of a small set of candidates. In doing so, it minimizes reconfiguration overheads and ensures that evaluation costs are bounded as a function of the optimal run time. By treating prompt generation as a cost-based optimization problem, {\lambda}-Tune conveys the most relevant context to the LLM while bounding the number of input tokens and, therefore, the monetary fees for LLM invocations. We compare {\lambda}-Tune to various baselines on multiple benchmarks, using PostgreSQL and MySQL as target systems for tuning, and show that {\lambda}-Tune is significantly more robust than prior approaches.
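One way to bound evaluation costs as a function of the optimal run time is round-robin evaluation of candidate configurations with geometrically growing timeouts; the abstract does not specify λ-Tune's exact scheme, so the following is a minimal sketch of that general idea. The names `candidates` and `run_workload` are hypothetical placeholders, not λ-Tune's actual interface.

```python
def evaluate_candidates(candidates, run_workload, initial_timeout=1.0):
    """Pick the first candidate configuration that finishes the workload.

    `run_workload(cfg, timeout)` is assumed to run the benchmark under
    configuration `cfg` and return the elapsed time, or None if the
    timeout was exceeded. Doubling the timeout each round keeps the
    total evaluation cost within a constant factor (times the number
    of candidates) of the optimal configuration's run time.
    """
    timeout = initial_timeout
    while True:
        for cfg in candidates:
            elapsed = run_workload(cfg, timeout)
            if elapsed is not None:
                # The first configuration to complete within the current
                # timeout is (approximately) the fastest candidate.
                return cfg, elapsed
        timeout *= 2  # geometric growth bounds wasted work
```

With k candidates and optimal run time t*, no round uses a timeout larger than about 2·t*, so the total work spent is O(k·t*).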
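Treating prompt generation as a cost-based optimization problem can be read as a knapsack-style selection: choose the context snippets with the highest relevance per token until the token budget is exhausted. The abstract does not describe λ-Tune's actual formulation, so this is only an illustrative greedy sketch; `snippets`, `relevance`, and `budget` are assumed names.

```python
def select_context(snippets, budget):
    """Greedy knapsack sketch for assembling an LLM prompt.

    `snippets` is a list of (relevance, tokens, text) tuples describing
    candidate pieces of tuning context (e.g. schema, statistics, queries).
    Snippets are taken in order of relevance density (relevance per token)
    while the total token count stays within `budget`, which caps the
    monetary cost of the LLM invocation.
    """
    chosen, used = [], 0
    for rel, tok, text in sorted(snippets, key=lambda s: s[0] / s[1], reverse=True):
        if used + tok <= budget:
            chosen.append(text)
            used += tok
    return chosen, used
```

Greedy selection by density is only an approximation to the exact knapsack optimum, but it is cheap and keeps the token bound hard.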