Large language models have revolutionized Text2SQL through supervised fine-tuning, yet a crucial limitation remains overlooked: the complexity of databases leads to increased context lengths and, consequently, higher GPU memory demands during model fine-tuning. To address this issue, we propose LR-SQL. LR-SQL comprises two supervised fine-tuning models: the schema\_link model and the SQL\_generation model, with the schema\_link model serving as the focal point for streamlining the overall process. During fine-tuning of the schema\_link model, LR-SQL breaks the complete database down into flexible combinations of tables with adjustable quantities, enabling the model to learn the relationships within the entire database from these dispersed slices. Furthermore, to enhance the model's ability to perceive the relationships among the various discrete slices during inference, LR-SQL trains the model's Chain-of-Thought capability for this task. Experimental results demonstrate that LR-SQL reduces total GPU memory usage by 40\% compared to existing fine-tuning methods, while losing only 2\% of table prediction accuracy on the schema\_link task. For the overall Text2SQL task, Execution Accuracy decreases by only 0.6\%. Our project is available at https://github.com/hongWin/LR-SQL
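The database-slicing idea can be illustrated with a minimal sketch: the full schema is partitioned into smaller groups of tables so that each fine-tuning example fits in a shorter context window. All names below (the `slice_schema` function, the example table list, the slice size) are illustrative assumptions, not taken from the LR-SQL codebase.

```python
def slice_schema(tables, tables_per_slice):
    """Split a list of table definitions into fixed-size slices.

    Each slice becomes one shorter training context for the
    schema_link model instead of feeding the whole database at once.
    """
    return [tables[i:i + tables_per_slice]
            for i in range(0, len(tables), tables_per_slice)]

# Hypothetical example: a 6-table database split into 2-table slices.
tables = ["users", "orders", "items", "payments", "reviews", "stores"]
slices = slice_schema(tables, 2)
# slices == [["users", "orders"], ["items", "payments"], ["reviews", "stores"]]
```

In the paper's setting, `tables_per_slice` corresponds to the adjustable quantity of tables per combination: smaller slices mean shorter contexts and lower GPU memory per step, at the cost of the model seeing fewer cross-table relationships in any single example.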