AI chatbots are increasingly stepping into roles as collaborators or teachers in analyzing, visualizing, and reasoning through data and domain problems. Yet AI's default assistant mode, with its comprehensive, one-off responses, may undermine opportunities for practitioners to develop literacy through their own thinking, inducing cognitive passivity. Drawing on evidence from empirical studies and theory, we argue that disrupting cognitive passivity demands a nuanced approach: rather than simply making AI promote deliberative thinking, we need a more dynamic and adaptive strategy grounded in cognitive alignment -- a framework that characterizes effective human-AI interaction as a function of the alignment between users' cognitive demand and AI's interaction mode. The framework maps AI's interaction mode (transmissive or deliberative) onto users' cognitive demand (receptive or deliberative); misalignment leads to either cognitive passivity or cognitive friction. We further discuss the framework's implications and pose open questions for future research on data literacy.