Generative AI is reshaping software work, yet we lack clear guidance on where developers most need support and how to design it responsibly. We report a large-scale, mixed-methods study of N=860 developers examining where, why, and how they seek or limit AI help across software engineering (SE) tasks. Using cognitive appraisal theory, we provide the first empirically validated mapping of developers' task appraisals to AI adoption patterns and Responsible AI (RAI) priorities. Appraisals predict AI openness and use, revealing three distinct patterns: strong current use and demand for improvement in core work (e.g., coding, testing); high demand to reduce toil (e.g., documentation, operations); and clear limits for identity- and relationship-centric work (e.g., mentoring). RAI priorities vary by context: reliability and security for systems-facing tasks; transparency, alignment, and steerability to maintain control; and fairness and inclusiveness for human-facing work. Our results offer concrete, contextual guidance for delivering AI where it matters most to developers and their work.