Large Language Models (LLMs) can translate natural language into SQL, but small models struggle with multi-table and complex queries in Zero-Shot Learning (ZSL) settings. While Supervised Fine-Tuning (SFT) helps, it falls short for harder cases. To address this, we study how different reasoning strategies (general-purpose reasoning in ZSL, reasoning traces in SFT, and Reinforcement Learning with Verifiable Reward (RLVR) with novel reward functions) affect Text2SQL performance across four benchmarks. We show that partial scoring rewards, computed via SQL execution, are crucial for guiding models even when outputs are not fully correct. These fine-grained signals lead to consistently better Text2SQL outcomes. Small LLMs benefit most from reasoning-aware SFT and RL, with the 14B Qwen-Coder-2.5 surpassing 400B+ models on challenging datasets like BIRD.
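The partial scoring idea can be sketched as an execution-based reward: run both the predicted and gold SQL, then grant graded credit for overlapping result rows instead of an all-or-nothing exact-match signal. A minimal illustration, assuming a Jaccard overlap of result-row sets as the partial score (the paper's actual reward functions are not specified here; `execution_reward` and the toy schema are hypothetical):

```python
import sqlite3

def execution_reward(pred_sql: str, gold_sql: str, conn: sqlite3.Connection) -> float:
    """Hypothetical partial-scoring reward: execute both queries and
    return the Jaccard overlap of their result-row sets
    (1.0 = exact execution match, 0.0 = invalid SQL or no overlap)."""
    try:
        pred_rows = set(conn.execute(pred_sql).fetchall())
    except sqlite3.Error:
        return 0.0  # unexecutable SQL earns no reward
    gold_rows = set(conn.execute(gold_sql).fetchall())
    union = pred_rows | gold_rows
    if not union:
        return 1.0  # both queries return empty results
    return len(pred_rows & gold_rows) / len(union)

# Toy schema for demonstration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INT)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("Ann", "eng", 90), ("Bob", "eng", 80), ("Cat", "hr", 70)])

gold = "SELECT name FROM emp WHERE dept = 'eng'"
missing_filter = "SELECT name FROM emp"  # superset of gold rows: partial credit

print(execution_reward(missing_filter, gold, conn))  # 2/3 ≈ 0.667
print(execution_reward(gold, gold, conn))            # 1.0
```

A nearly-correct query that merely omits a filter still receives a graded signal (0.667 above), which is exactly the fine-grained feedback that an exact-match reward would collapse to zero.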