This study presents a design science blueprint for an orchestrated AI assistant and co-pilot in doctoral supervision that acts as a socio-technical mediator. Design requirements are derived from Stakeholder Theory and bounded by Academic Integrity. We consolidated recent evidence on supervision gaps and student wellbeing, then mapped issues to adjacent large language model capabilities using a transparent severity-mitigability triage. The artefact assembles existing capabilities into a single accountable agentic AI workflow, proposing retrieval-augmented generation, temporal knowledge graphs, and mixture-of-experts routing as a solution stack for documented doctoral supervision pain points. We also propose a student context store that introduces behaviour patches, turning tacit guidance into auditable practice, and student-set thresholds that trigger progress summaries, while keeping authorship and final judgement with people. We specify a student-initiated moderation loop in which assistant outputs are routed to a supervisor for review and patching, and we analyse a reconfigured stakeholder ecosystem that makes information explicit and accountable. Such a system carries risks, including AI over-reliance and the potential for an illusion of learning, for which guardrails are proposed. The contribution is an ex ante, literature-grounded design with workflow and governance rules that institutions can implement and trial across disciplines.
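To make the moderation loop and threshold mechanism concrete, the following is a minimal illustrative sketch, not the paper's implementation: all names (ContextStore, BehaviourPatch, student_initiated_moderation, and the threshold value) are hypothetical assumptions showing one way a student context store, student-set progress thresholds, and supervisor behaviour patches could fit together; the agentic workflow itself (retrieval-augmented generation, temporal knowledge graphs, mixture-of-experts routing) is deliberately stubbed out.

```python
# Hypothetical sketch of the student context store and moderation loop.
# Names and values are illustrative assumptions, not the artefact's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class BehaviourPatch:
    """A supervisor-authored adjustment applied to future assistant output (auditable)."""
    author: str
    instruction: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ContextStore:
    """Student-scoped store: progress milestones, a student-set threshold, and a patch log."""
    student: str
    milestones_done: int = 0
    milestones_total: int = 10
    summary_threshold: float = 0.2   # student-set: summarise every 20% of progress
    patches: List[BehaviourPatch] = field(default_factory=list)
    last_summary_at: float = 0.0

    def record_milestone(self) -> Optional[str]:
        """Record progress; emit a progress summary when the student-set threshold is crossed."""
        self.milestones_done += 1
        progress = self.milestones_done / self.milestones_total
        if progress - self.last_summary_at >= self.summary_threshold:
            self.last_summary_at = progress
            return f"Progress summary for {self.student}: {progress:.0%} of milestones complete."
        return None


def draft_assistant_output(store: ContextStore, query: str) -> str:
    """Stand-in for the agentic workflow (RAG, temporal KG, and MoE routing are out of scope here)."""
    guidance = "; ".join(p.instruction for p in store.patches) or "no supervisor patches yet"
    return f"[draft for '{query}' | applied guidance: {guidance}]"


def student_initiated_moderation(store: ContextStore, query: str, supervisor: str) -> str:
    """Student routes a draft to the supervisor; the review returns as an auditable behaviour patch."""
    draft = draft_assistant_output(store, query)
    # In a real deployment the supervisor reviews asynchronously; here we
    # simulate an accepted review that appends a behaviour patch to the store,
    # keeping final judgement with people.
    store.patches.append(BehaviourPatch(author=supervisor,
                                        instruction="frame the literature gap before methods"))
    return draft


if __name__ == "__main__":
    store = ContextStore(student="doctoral_candidate_01")
    print(student_initiated_moderation(store, "outline chapter 2", supervisor="prof_a"))
    for _ in range(3):
        summary = store.record_milestone()
        if summary:
            print(summary)
```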