The emerging paradigm of ``Agentic Employment'' is a labor model in which autonomous AI agents, acting as economic principals rather than mere management tools, directly hire, instruct, and pay human workers. Facilitated by the launch of platforms such as Rentahuman.ai in February 2026, this shift inverts the traditional ``ghost work'' dynamic, positioning visible human workers as ``biological actuators'' for invisible software entities. Using a speculative design approach, we analyze how Extended Reality (XR) serves as the critical ``control surface'' for this relationship, enabling agents to issue granular, context-free micro-instructions while harvesting real-time environmental data. Through a scenario construction methodology, we identify seven key risk vectors, including the creation of a liability void in which humans act as moral crumple zones for algorithmic risk, the acceleration of cognitive deskilling through ``Shadow Boss'' micromanagement, and the manipulation of civic and social spheres via Diminished Reality (DR). The findings suggest that without new design frameworks prioritizing agency and legibility, Agentic Employment threatens to reduce human labor to a frictionless hardware layer for digital minds, necessitating urgent user-centric XR design and policy interventions.