Personalized Learning Path Planning (PLPP) aims to design adaptive learning paths that align with individual goals. While large language models (LLMs) show potential for personalizing learning experiences, existing approaches often lack mechanisms for goal-aligned planning. We introduce Pxplore, a novel framework for PLPP that integrates a reinforcement-based training paradigm with an LLM-driven educational architecture. We design a structured learner state model and an automated reward function that transforms abstract objectives into computable signals. We train the policy by combining supervised fine-tuning (SFT) with Group Relative Policy Optimization (GRPO), and deploy it within a real-world learning platform. Extensive experiments validate Pxplore's effectiveness in producing coherent, personalized, and goal-driven learning paths. We release our code and dataset to facilitate future research.
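As background on the GRPO step mentioned above: GRPO dispenses with a learned value critic and instead scores each sampled output relative to the other outputs drawn for the same prompt, normalizing rewards within the group. The sketch below shows only this standard group-relative advantage computation; it is illustrative and not Pxplore's exact implementation, whose reward function and training loop are defined in the paper.

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Compute GRPO-style advantages: each reward is normalized
    against the mean and standard deviation of its sampling group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All candidates scored identically: no preference signal.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# Example: three candidate learning paths scored by a reward function.
advantages = group_relative_advantages([1.0, 2.0, 3.0])
```

The advantage of each candidate then weights its token log-probabilities in the policy-gradient update, so candidates that outscore their group are reinforced and below-average ones are suppressed.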