Loss of decisional capacity, coupled with the increasing absence of reliable human proxies, raises urgent questions about how individuals' values can be represented in Advance Care Planning (ACP). To probe this fraught design space of high-risk, high-subjectivity decision support, we built an experience prototype (\acpagent{}) and asked 15 participants across 4 workshops to train it to be their personal ACP proxy. We analysed their coping strategies and feature requests and mapped the results onto axes of agent autonomy and human control. Our findings show a surprising 86.7\% agreement between participants and \acpagent{}, suggesting a potential new role for AI in ACP in which agents act as personal advocates for individuals, building mutual intelligibility over time. We propose that the key areas of future risk to address are moderating users' expectations and designing accountability and oversight mechanisms for agent deployment and cutoffs.