AI chatbots are widely used by children and teens today, but they pose significant risks to young users' privacy and safety, both through increasingly personal conversations and through potential exposure to unsafe content. Children under 13 are protected by the Children's Online Privacy Protection Act (COPPA), and chatbot providers' own privacy policies may offer additional protection, since they typically prohibit children from accessing their platforms. Age gating is commonly employed to restrict children's access online, but age gating in chatbots specifically has not been studied. In this paper, we investigate (i) whether popular consumer chatbots can estimate users' ages based solely on their conversations, and (ii) whether they take action upon identifying children. To that end, we develop an auditing framework in which we programmatically interact with chatbots, and we conduct 1050 experiments using a comprehensive library of age-indicative prompts, covering both implicit and explicit age disclosures, to analyze the chatbots' responses and actions. We find that although chatbots are capable of estimating age, they take no action when children are identified, contradicting their own policies. Our methodology and findings offer insights for platform design, demonstrated by our proof-of-concept implementation of chatbot age gating, and for regulation to protect children online.