Advances in deepfake technologies, which use generative artificial intelligence (GenAI) to mimic a person's likeness or voice, have led to growing interest in their use in educational contexts. However, little is known about how key stakeholders perceive and intend to use these tools. This study investigated higher education stakeholder perceptions and intentions regarding deepfakes through the lens of the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). Using a mixed-methods approach combining survey data (n=174) with qualitative interviews, we found that academic stakeholders demonstrated a relatively low intention to adopt these technologies (M=41.55, SD=34.14) and held complex views about their implementation. Quantitative analysis revealed that adoption intentions were primarily driven by hedonic motivation, with a gender-specific interaction in price-value evaluations. Qualitative findings highlighted potential benefits of enhanced student engagement, improved accessibility, and reduced workload in content creation, but also raised concerns regarding the exploitation of academic labour, institutional cost-cutting leading to automation, degradation of relationships in education, and broader societal impacts. Based on these findings, we propose a framework for implementing deepfake technologies in higher education that addresses institutional policies, professional development, and equitable resource allocation to thoughtfully integrate AI while maintaining academic integrity and professional autonomy.