Purpose: Generative Artificial Intelligence (GAI) models, such as ChatGPT, may inherit or amplify societal biases because they are trained on extensive datasets. With the increasing use of GAI by students, faculty, and staff at higher education institutions (HEIs), it is urgent to examine the ethical issues and potential biases associated with these technologies.

Design/Approach/Methods: This scoping review elucidates how biases related to GAI in HEIs have been researched and discussed in recent academic publications. We categorized the potential societal biases that GAI might cause in the field of higher education. Our review covers articles written in English, Chinese, and Japanese across four main databases, focusing on GAI usage in higher education and bias.

Findings: While there is meaningful scholarly discussion of bias and discrimination concerning large language models (LLMs) in the AI field, most articles addressing higher education treat the issue only superficially. Few articles identify specific types of bias under different circumstances, and empirical research is notably scarce. Most papers in our review focus on education and research in medicine and engineering, with some addressing English education; the humanities and social sciences are almost entirely absent from the discussion. Additionally, a significant portion of the current discourse is in English and primarily addresses English-speaking contexts.

Originality/Value: To the best of our knowledge, this study is the first to summarize the potential societal biases that GAI may introduce in higher education. The review highlights the need for more in-depth and empirical studies to understand the specific biases that GAI might introduce or amplify in educational settings, guiding the development of more ethical AI applications in higher education.