In the rapidly evolving landscape of computing disciplines, substantial efforts are being dedicated to unraveling the sociotechnical implications of generative AI (Gen AI). While existing research has taken various forms, a notable gap remains concerning the direct engagement of knowledge workers in academia with Gen AI. We interviewed 18 knowledge workers, including faculty and students, to investigate the social and technical dimensions of Gen AI from their perspective. Our participants raised concerns about the opacity of the data used to train Gen AI. This lack of transparency makes it difficult to identify and address inaccurate, biased, and potentially harmful information generated by these models. Knowledge workers also expressed worries that Gen AI could undermine trust between instructors and students, and discussed potential solutions, such as pedagogy readiness, to mitigate these risks. Additionally, participants recognized Gen AI's potential to democratize knowledge by accelerating the learning process and acting as an accessible research assistant. However, they also voiced concerns about social and power imbalances stemming from unequal access to such technologies. Our study offers insights into the concerns and hopes of knowledge workers about the ethical use of Gen AI in educational settings and beyond, with implications for navigating this new landscape.