Deploying accurate Text-to-SQL systems at the enterprise level faces a difficult trilemma involving cost, security, and performance. Current solutions force enterprises to choose between expensive, proprietary Large Language Models (LLMs) and low-performing Small Language Models (SLMs). Efforts to improve SLMs often rely on distilling reasoning from large LLMs using unstructured Chain-of-Thought (CoT) traces, a process that remains inherently ambiguous. Instead, we hypothesize that a formal, structured reasoning representation provides a clearer, more reliable teaching signal, as the Text-to-SQL task requires explicit and precise logical steps. To evaluate this hypothesis, we propose Struct-SQL, a novel Knowledge Distillation (KD) framework that trains an SLM to emulate a powerful LLM. Specifically, we adopt the query execution plan as a formal blueprint from which to derive this structured reasoning. Our SLM, distilled with structured CoT, achieves an absolute improvement of 8.1% over an unstructured CoT distillation baseline. A detailed error analysis reveals that a key factor in this gain is a marked reduction in syntactic errors. This demonstrates that teaching a model to reason using a structured logical blueprint is beneficial for reliable SQL generation in SLMs.
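To make the notion of an execution plan as a "formal blueprint" concrete, the following minimal sketch (not the paper's pipeline; the schema and query are hypothetical) shows how a database engine's plan output decomposes a query into explicit, unambiguous logical steps of the kind a structured CoT trace could be derived from:

```python
import sqlite3

# Hypothetical toy schema, used only to illustrate plan extraction.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER);
CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
""")

sql = """
SELECT d.name, COUNT(*) AS headcount
FROM employees e JOIN departments d ON e.dept_id = d.id
GROUP BY d.name
"""

# SQLite's EXPLAIN QUERY PLAN returns one row per logical operation
# (table scans, index searches, grouping); the 'detail' column is the
# fourth field. Each row is one explicit reasoning step, in contrast to
# free-form natural-language CoT.
plan_steps = [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]
for step in plan_steps:
    print(step)
```

Such plan rows provide a machine-checkable, step-by-step decomposition of the query logic, which is the property the abstract argues makes structured traces a clearer teaching signal than unstructured CoT.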