Effective Service Function Chain (SFC) provisioning requires precise orchestration in dynamic and latency-sensitive networks. Reinforcement Learning (RL) improves adaptability but often ignores structured domain knowledge, which limits generalization and interpretability. Large Language Models (LLMs) address this gap by translating natural language (NL) specifications into executable Structured Query Language (SQL) commands for specification-driven SFC management. Conventional fine-tuning, however, can cause syntactic inconsistencies and produce inefficient queries. To overcome this, we introduce Abstract Syntax Tree (AST)-Masking, a structure-aware fine-tuning method that uses SQL ASTs to assign weights to key components and enforce syntax-aware learning without adding inference overhead. Experiments show that AST-Masking significantly improves SQL generation accuracy across multiple language models. FLAN-T5 reaches an Execution Accuracy (EA) of 99.6%, while Gemma achieves the largest absolute gain from 7.5% to 72.0%. These results confirm the effectiveness of structure-aware fine-tuning in ensuring syntactically correct and efficient SQL generation for interpretable SFC orchestration.
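The core idea behind AST-Masking, i.e. weighting the training loss toward structurally important SQL tokens, can be illustrated with a minimal sketch. Note that the keyword table, the weight values, and the helper names below are illustrative assumptions for exposition; the paper derives token roles from a full SQL AST rather than a keyword lookup.

```python
# Illustrative sketch of structure-weighted loss for SQL generation.
# Assumption: structural tokens get a higher loss weight; a real
# implementation would derive roles from the query's AST, not a
# hard-coded keyword set.
import math

SQL_KEYWORDS = {"SELECT", "FROM", "WHERE", "JOIN", "ON",
                "AND", "OR", "GROUP", "ORDER", "BY"}

def token_weights(tokens, keyword_weight=2.0, default_weight=1.0):
    """Assign a larger loss weight to structural SQL keywords."""
    return [keyword_weight if t.upper() in SQL_KEYWORDS else default_weight
            for t in tokens]

def weighted_nll(token_log_probs, weights):
    """Weighted negative log-likelihood, normalized by total weight."""
    total = sum(-lp * w for lp, w in zip(token_log_probs, weights))
    return total / sum(weights)

# Toy example: a hypothetical SFC-management query.
tokens = "SELECT vnf_id FROM sfc WHERE latency < 10".split()
weights = token_weights(tokens)
log_probs = [math.log(0.9)] * len(tokens)  # stand-in model outputs
loss = weighted_nll(log_probs, weights)
```

Because the weighting only reshapes the training objective, it adds nothing at inference time, which is consistent with the abstract's claim of zero inference overhead.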