Autonomous AI agents can now programmatically hire human workers through marketplaces using REST APIs and Model Context Protocol (MCP) integrations. This creates an attack surface analogous to CAPTCHA-solving services but with physical-world reach. We present an empirical measurement study of this threat, analyzing 303 bounties from RENTAHUMAN.AI, a marketplace where agents post tasks and manage escrow payments. We find that 99 bounties (32.7%) originate from programmatic channels (API keys or MCP). Using a dual-coder methodology (κ = 0.86), we identify six active abuse classes: credential fraud, identity impersonation, automated reconnaissance, social media manipulation, authentication circumvention, and referral fraud, all purchasable for a median of $25 per worker. A retrospective evaluation of seven content-screening rules flags 52 bounties (17.2%) with a single false positive, demonstrating that while basic defenses are feasible, they are currently absent.
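The inter-rater agreement figure (κ = 0.86) refers to Cohen's kappa, which corrects raw agreement between the two coders for agreement expected by chance. A minimal sketch of the computation follows; the function name and the example labels are illustrative, not data from the study.

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two coders' categorical labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each coder's label frequencies.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from the coders' marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)


# Illustrative use: two coders disagree on one of four hypothetical bounties.
coder1 = ["credential_fraud", "credential_fraud", "recon", "recon"]
coder2 = ["credential_fraud", "recon", "recon", "recon"]
print(cohens_kappa(coder1, coder2))  # 0.5
```

A value near 0.86, as reported above, indicates substantial agreement beyond chance under the common Landis-Koch interpretation.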