As semi-autonomous vehicles (AVs) become prevalent, drivers must collaborate with AI systems whose decision-making processes remain opaque. This study examines how drivers of AVs develop folk theories to interpret algorithmic behavior that contradicts their expectations. Through 16 semi-structured interviews with drivers in the United States, we investigate the explanatory frameworks drivers construct to make sense of AI decisions, the strategies they employ when systems behave unexpectedly, and their experiences with control handoffs and feedback mechanisms. Our findings reveal that drivers develop sophisticated folk theories -- often using anthropomorphic metaphors describing systems that ``see,'' ``hesitate,'' or become ``overwhelmed'' -- yet lack informational resources to validate these theories or meaningfully participate in algorithmic governance. We identify contexts where algorithmic opacity manifests acutely, including complex intersections, adverse weather, and rural environments. Current AV designs position drivers as passive data sources rather than epistemic agents, creating accountability gaps that undermine trust and safety. Drawing on critical data studies and algorithmic accountability literature, we propose a framework for participatory algorithmic governance that would provide drivers with transparency into AI decision-making and meaningful channels for contributing to system improvement. This research contributes to understanding how users navigate datafied sociotechnical systems in safety-critical contexts.