Loss of decisional capacity, coupled with the increasing absence of reliable human proxies, raises urgent questions about how individuals' values can be represented in Advance Care Planning (ACP). To probe this fraught design space of high-risk, high-subjectivity decision support, we built an experience prototype (\acpagent{}) and asked 15 participants across 4 workshops to train it to be their personal ACP proxy. We analysed their coping strategies and feature requests and mapped the results onto axes of agent autonomy and human control. Our findings show a surprising 86.7\% agreement with \acpagent{}, arguing for a potential new role of AI in ACP in which agents act as personal advocates for individuals, building mutual intelligibility over time. We propose that the key areas of future risk to address are moderating users' expectations and designing accountability and oversight mechanisms for agent deployment and cutoffs.