Large language models increasingly require structured inference, from JSON schema enforcement to multilingual parsing, where outputs must satisfy complex constraints. We introduce MetaJuLS, a meta-reinforcement learning approach that learns universal constraint-propagation policies applicable across languages and tasks without task-specific retraining. By formulating structured inference as adaptive constraint propagation and training a Graph Attention Network with meta-learning, MetaJuLS achieves 1.5--2.0$\times$ speedups over GPU-optimized baselines while staying within 0.2\% of the accuracy of state-of-the-art parsers. On Universal Dependencies across 10 languages and on constrained LLM generation (LogicBench, GSM8K-Constrained), MetaJuLS demonstrates rapid cross-domain adaptation: a policy trained on English parsing adapts to new languages and tasks in 5--10 gradient steps (5--15 seconds) rather than hours of task-specific training. Mechanistic analysis shows the policy discovers human-like parsing strategies (easy-first) as well as novel, non-intuitive heuristics. By reducing propagation steps in LLM deployments, MetaJuLS contributes to Green AI by directly lowering the carbon footprint of inference.
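To make the adaptation claim concrete, the sketch below illustrates the kind of few-step inner-loop update the abstract describes: a small graph-attention policy scores candidate propagation actions and is adapted to a new task with 5--10 gradient steps. This is a minimal illustration under stated assumptions, not the MetaJuLS implementation; `TinyGATPolicy`, the surrogate loss, and the synthetic task data are all hypothetical placeholders (the actual system meta-trains a Graph Attention Network with reinforcement learning over constraint-propagation actions).

```python
# Hypothetical sketch of the 5-10-step adaptation loop described in the
# abstract. All names here (TinyGATPolicy, adapt) are illustrative, not
# the authors' code; a supervised surrogate loss stands in for the RL
# objective to keep the example self-contained.
import torch
import torch.nn as nn

class TinyGATPolicy(nn.Module):
    """Single-head graph-attention scorer over candidate propagation actions."""
    def __init__(self, dim=16):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1)  # scores a (node, neighbor) pair

    def forward(self, x, edges):
        # x: [n, dim] node features; edges: [m, 2] candidate propagation edges
        h = self.proj(x)
        pair = torch.cat([h[edges[:, 0]], h[edges[:, 1]]], dim=-1)
        return self.attn(pair).squeeze(-1)  # one logit per candidate action

def adapt(policy, task_x, task_edges, task_labels, steps=10, lr=1e-2):
    """Few-step inner-loop adaptation to a new language/task (MAML-style)."""
    opt = torch.optim.SGD(policy.parameters(), lr=lr)
    for _ in range(steps):  # 5-10 steps, per the abstract
        logits = policy(task_x, task_edges)
        loss = nn.functional.binary_cross_entropy_with_logits(logits, task_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy

# Synthetic "new task": 20 nodes, 50 candidate propagation actions.
x = torch.randn(20, 16)
edges = torch.randint(0, 20, (50, 2))
labels = torch.rand(50).round()  # stand-in for a useful-action signal
adapt(TinyGATPolicy(), x, edges, labels, steps=10)
```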