Social robots hold promise for reducing job interview anxiety, yet designing agents that provide both psychological safety and instructional guidance remains challenging. Through a three-phase iterative design study (N = 8), we empirically mapped this tension. Phase I revealed a "Safety-Guidance Gap": while a Person-Centered Therapy (PCT) robot established safety (d = 3.27), users felt insufficiently coached. Phase II identified a "Scaffolding Paradox": rigid feedback caused cognitive overload, while delayed feedback lacked specificity. In Phase III, we resolved these tensions by developing an Agency-Driven Interaction Layer. Synthesizing our empirical findings, we propose the Adaptive Scaffolding Ecosystem, a conceptual framework that redefines robotic coaching not as a static script, but as a dynamic balance between affective support and instructional challenge, mediated by user agency.