With the emergence of large language models (LLMs) as a powerful class of generative artificial intelligence (AI), their use in tutoring has become increasingly prominent. Prior work on LLM-based tutoring typically learns a single tutor policy and does not capture the diversity of tutoring styles. In real-world tutor-student interactions, pedagogical intent is realized through adaptive instructional strategies, with tutors varying the level of scaffolding, instructional directiveness, feedback, and affective support in response to learners' needs. These differences can all impact dialogue dynamics and student engagement. In this paper, we explore how tutor personas embedded in human tutor-student dialogues can be used to guide LLM behavior without relying on explicitly prompted instructions. We modify Bidirectional Preference Optimization (BiPO) to learn a steering vector, an activation-space direction that steers model responses toward specific tutor personas. We find that this steering vector captures tutor-specific variation across dialogue contexts, improving semantic alignment with ground-truth tutor utterances and scores on preference-based evaluations, while largely preserving lexical similarity. Analysis of the learned directional coefficients further reveals interpretable structure across tutors, corresponding to consistent differences in tutoring behavior. These results demonstrate that activation steering offers an effective and interpretable means of controlling tutor-specific variation in LLMs using signals derived directly from human dialogue data.
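The core mechanism named above, steering a model along an activation-space direction, can be illustrated in isolation. The sketch below is a generic, simplified view of activation steering (not the paper's BiPO training procedure): a learned direction vector, scaled by a per-persona directional coefficient, is added to the hidden activations at one transformer layer. All names, shapes, and values here are hypothetical, chosen only for illustration.

```python
import numpy as np

def apply_steering(hidden, vector, coeff):
    """Add a scaled activation-space direction to hidden states.

    hidden: (seq_len, d_model) activations at one transformer layer
    vector: (d_model,) learned steering direction (unit-normalized here)
    coeff:  scalar directional coefficient; its sign and magnitude would
            differ per tutor persona in the setting described above
    """
    direction = vector / np.linalg.norm(vector)
    return hidden + coeff * direction

# Toy illustration: a 4-token sequence with a hypothetical d_model of 8.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))   # stand-in hidden states
v = rng.normal(size=8)        # stand-in learned steering vector

steered = apply_steering(h, v, coeff=2.0)

# Steering shifts every token's activation by the same constant offset,
offset = steered - h
assert np.allclose(offset, offset[0])
# and the projection onto the steering direction grows by exactly coeff.
u = v / np.linalg.norm(v)
assert np.allclose(steered @ u - h @ u, 2.0)
```

In practice such an intervention is applied inside the forward pass (e.g. via a hook on a chosen layer) so that generation, not a static tensor, is steered; the toy arrays above merely make the arithmetic of the intervention explicit.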