LLMs are increasingly supporting decision-making across high-stakes domains, requiring critical reflection on the socio-technical factors that shape how humans and LLMs are assigned roles and interact during human-in-the-loop decision-making. This paper introduces the concept of human-LLM archetypes -- defined as recurring socio-technical interaction patterns that structure the roles of humans and LLMs in collaborative decision-making. We describe 17 human-LLM archetypes derived from a scoping literature review and thematic analysis of 113 LLM-supported decision-making papers. We then evaluate these archetypes across real-world clinical diagnostic cases to examine how adopting distinct human-LLM archetypes affects LLM outputs and decision outcomes. Finally, we present relevant tradeoffs and design choices across human-LLM archetypes, including decision control, social hierarchies, cognitive forcing strategies, and information requirements. Through our analysis, we show that the choice of human-LLM archetype can influence LLM outputs and decisions, raising important risks and considerations for designers of human-AI decision-making systems.