Large Language Models (LLMs) offer a promising solution to complement traditional teaching and address global teacher shortages that affect hundreds of millions of children, but they fail to provide grade-appropriate responses for students at different educational levels. We introduce a framework for finetuning LLMs to generate age-appropriate educational content across six grade levels, from lower elementary to adult education. Our framework successfully adapts explanations to match students' comprehension capacities without sacrificing factual correctness. This approach integrates seven established readability metrics through a clustering method and builds a comprehensive dataset for grade-specific content generation. Evaluations across multiple datasets with 208 human participants demonstrate substantial improvements in grade-level alignment, achieving a 35.64 percentage point increase compared to prompt-based methods while maintaining response accuracy. AI-assisted learning tailored to different grade levels has the potential to advance educational engagement and equity.
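The metric-integration step described above can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the Flesch-Kincaid Grade and Automated Readability Index formulas are standard, but the naive syllable counter, the 1-D k-means, and the choice of six clusters are illustrative assumptions standing in for the paper's seven-metric clustering method.

```python
import re

def _syllables(word):
    # Naive syllable estimate: count vowel groups, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid Grade: 0.39*(words/sents) + 11.8*(syllables/words) - 15.59."""
    words = re.findall(r"[A-Za-z]+", text)
    sents = max(1, len(re.findall(r"[.!?]+", text)))
    syl = sum(_syllables(w) for w in words)
    return 0.39 * len(words) / sents + 11.8 * syl / len(words) - 15.59

def ari(text):
    """Automated Readability Index: 4.71*(chars/words) + 0.5*(words/sents) - 21.43."""
    words = re.findall(r"[A-Za-z]+", text)
    sents = max(1, len(re.findall(r"[.!?]+", text)))
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / sents - 21.43

def kmeans_1d(vals, k, iters=25):
    """Cluster scalar readability scores into k grade-level groups (toy 1-D k-means)."""
    s = sorted(vals)
    centers = [s[i * (len(s) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vals:
            j = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[j].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Combine metrics per text (here: a simple mean of two scores),
# then cluster the combined scores into grade bands.
texts = [
    "The cat sat on the mat. It was warm.",
    "Plants need light and water to grow well.",
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "Contemporary pedagogical interventions necessitate differentiated "
    "instructional scaffolding across heterogeneous learner populations.",
]
scores = [(fk_grade(t) + ari(t)) / 2 for t in texts]
bands = kmeans_1d(scores, k=3)  # k=6 in the paper's setting; 3 here for the toy data
```

A real pipeline would combine all seven metrics (e.g., as a feature vector per response) before clustering; the scalar mean above is only for compactness.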